

Even if Illinois were feasible, I don’t think I’d want that. I’d rather fix the system. And quit dancing around the issue of Puerto Rico statehood.


I think it depends on what’s on your phone. I don’t use mine for email or banking; it’s 2FA, phone calls, and a map. I’m using a Galaxy S8 that I purchased in the summer of 2017, and I don’t get any updates any more.
If I had bank account information or access to other sensitive data I’d be a lot more concerned.
My biggest problem is apps that stop working. My carrier doesn’t support my phone with their voicemail app, for example.


I’ve been running mail servers for about thirty years; my personal ones and production for 100K+ users.
The personal one is a pain for the reasons you mentioned. I use sendmail instead of postfix, but I was able to use some rules to push certain messages through other relays.
I signed up for Amazon SES and have so far stayed in their free tier. Mail coming from one of my addresses always goes through SES, and mail from any address to certain domains (aol.com, gmail.com, etc.) goes through SES as well.
It lets me ensure delivery for my important mail while leaving things up to chance for less important messages.
It’s the best solution I’ve been able to come up with for a really annoying situation. Big Tech ruined it all.
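For anyone curious what that per-domain routing looks like, sendmail’s mailertable feature handles it; this is a sketch, assuming FEATURE(`mailertable') is enabled in the .mc file and SES SMTP credentials are configured separately via authinfo (the SES endpoint shown is an example, not my actual configuration):

```
# /etc/mail/mailertable — force big-provider domains through Amazon SES
gmail.com	relay:[email-smtp.us-east-1.amazonaws.com]
aol.com	relay:[email-smtp.us-east-1.amazonaws.com]
yahoo.com	relay:[email-smtp.us-east-1.amazonaws.com]
```

After editing, rebuild the database with `makemap hash /etc/mail/mailertable < /etc/mail/mailertable`. The square brackets tell sendmail to skip the MX lookup and connect to that host directly.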


One is that I can keep family email (everyone on the server) in the same ecosystem, so private information sent between family members isn’t as likely to leak.
Another is also privacy – my mail isn’t being used to build a profile about me.
I also like the control and the ability to look at logs. If I don’t get an email, I can look at the server and figure out why it didn’t show up. It just provides more information for me.


I’ve been using my own cloud-hosted SMTP relay and Zimbra server for over a decade now, and I love it.
There can be a bit of a learning curve, and in some cases sites won’t accept mail from cloud-hosted domains. I add those domains to a rule in sendmail that routes their mail through Amazon SES, and then it gets accepted.
If you do go this route, just make sure that your recovery emails or 2FA for things like your registrar go somewhere else. If your cloud provider pulls the plug on you or something you don’t want to be stuck waiting for an email that can’t arrive.
I love the level of control that I have over my email and wouldn’t have it any other way.
tl;dr: steep learning curve, but worth it in the long run. Keep gmail as a recovery/2FA account or something, though.


To me, I feel like this is a problem perpetuated by management. I see it on the system administration side as well – they don’t care if people understand why a tool works; they just want someone who can run it. If there’s no free thought the people are interchangeable and easily replaced.
I often see it farmed out to vendors when actual thought is required, and it’s maddening.


So he’s saying people wouldn’t sacrifice much if they were to leave Meta-backed services?


TechDirt is a larger, well-known site.
I’ve had similar things happen to my much less popular site and it took a long time to get it resolved (this wasn’t with Cloudflare, though).
I’m curious what the process would look like for a small startup or something.


I feel an urge to go play Horizon Zero Dawn now.


I’m still using my Galaxy S8 with only one problem: Verizon’s voicemail app won’t run on something this old. Every other app is fine. It figures that the only app that encourages me to upgrade is from the phone company.


Technically, each time it is viewed it is a republication from a copyright perspective. It’s a digital copy that is redistributed; the original copy that was made doesn’t go away when someone views it. There’s not just one copy that people pass around like a library book.


Again, isn’t that the site’s prerogative?
I think there should at least be a recognized way to opt out that archive.org actually follows. For years they told people to put
User-agent: ia_archiver
Disallow: /
in robots.txt, but they still archived content from those sites. They refuse to publish what IP addresses they pull content down from, but that would be a trivial thing to do. They refuse to use a User-Agent header that you can filter on.
If you want to be a library, be open and honest about it. There’s no need to sneak around.


Like I said, I have no problems with individuals archiving it and not republishing it.
If I take a newspaper article and republish it on my site I guarantee you I will get a takedown notice. That will be especially true if I start linking to my copy as the canonical source from places like Wikipedia.
It’s a fine line. Is archive.org a library (wasn’t there a court case about this recently…) or are they republishing?
Either way, it doesn’t matter for me any more. The pages are gone from the archive, and they won’t archive any more.


Shouldn’t that be the content creator’s prerogative? What if the content had a significant error? What if they removed the page because someone living in the EU requested it under their laws? What if the page was edited because someone accidentally made their address and phone number public in a forum post?


I’m thinking about it from the perspective of an artist or creator under existing copyright law. You can’t just take someone’s work and republish it.
It’s not allowed with books, it’s not allowed with music, and it’s not even allowed with public sculpture. If a sculpture shows up in a movie scene, they need the artist’s permission and may have to pay a licensing fee.
Why should the creation of text on the internet have lesser protections?
But copyright law is deeply rooted in damages, and if advertising revenue is lost that’s a very real example.
And I have recourse; I used it. I used current law (DMCA) to remove over 1,000,000 pages because it was my legal right to remove infringing content. If it had been legal, they wouldn’t have had to remove it.


How do you expect an archive to happen if they are not allowed to archive while it is still up?
I don’t want them publishing their archive while it’s up. If they archive but don’t republish while the site exists then there’s less damage.
I support the concept of archiving and screenshotting. I have my own linkwarden server set up and I use it all the time.
But I don’t republish anything that I archive because that dilutes the value of the original creator.


Yes, some wikipedia editors are submitting the pages to archive.org and then linking to that instead of to the actual source.
So when you go to the Wikipedia page it takes you straight to archive.org – that is their first stop.


It’s user-driven. Nothing would get archived in this case. And what if the content changes but the page remains up? What then? Fairly sure this is why Wikipedia uses archives.
That’s a good point.
Pretty sure mainstream ad blockers won’t block a custom in-house banner. And if it has no tracking, then it doesn’t matter whether it’s on Archive or not, you’re getting paid the same, no?
Some of them do block those kinds of ads – I’ve tried it out with a few. If it’s at archive.org I lose the ability to report back to the sponsor that their ad was viewed ‘n’ times (unless, ironically, I put a tracker in). It also means that if sponsorship changes, the main drivers of traffic like Wikipedia may not see that. It makes getting new sponsors more difficult because they want something timely for seasonal ads. Imagine sponsoring a page, but Wikipedia only links to the archived one. Your ad for gardening tools isn’t reflected by one of the larger drivers of traffic until December, and nobody wants to buy gardening tools in December.
Yes, I could submit pages to archive.org as sponsorship changes if this model continues.
It was a much bigger deal when we used Google ads a decade ago, but we stopped in early 2018 because tracking was getting out of hand.
If I were submitting pages myself I’d be all for it, because I could control when it happened. But there have been times when I’ve edited a page and totally screwed it up, and archive.org just happened to grab it at the moment when the formatting was all weird or the wrong picture was loaded. I usually fix the page and forget about it until I see it on archive.org later.
I asked for pages like that to be removed, but archive.org was unresponsive until I used a DMCA takedown notice.


I don’t think you know what SEO is. I think you know what bad SEO is.
Anyhow, Wikipedia is always free to link somewhere else if they can find better content.
/rant on I think Cloudflare is the direct result of the enshittification of development work.
People write an insecure app in Express/Flask/whatever, deploy it to the internet, then bolt on Cloudflare as a WAF and add Datadog because they have no idea what’s happening under the hood or limited themselves with their up-front choices.
This is marketed as progress. /rant off
But there are valid use cases like you mentioned. And it’s the enshittified sites that fund that free tier.
There’s some irony about the Fediverse going through a centralized service, but I don’t know of a better free answer. A cheap answer might be a VPS with Caddy and automatic Let’s Encrypt, but it’s not turnkey.
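For what it’s worth, the Caddy setup is about as minimal as hosting gets; this is a sketch with a placeholder domain and upstream port:

```
# Caddyfile — serve example.com over HTTPS with an automatically
# obtained and renewed Let's Encrypt certificate, proxying requests
# to a local app listening on port 3000
example.com {
	reverse_proxy localhost:3000
}
```

Run `caddy run` in the directory containing the Caddyfile; certificate issuance and renewal happen without any further configuration, provided DNS for the domain points at the VPS and ports 80/443 are reachable.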