jrockway 3 hours ago

I think that if we didn't do TLS, every ISP would be injecting ads into websites these days. Making it difficult for middle-of-the-road interlopers is a good thing. ISPs don't want the customer service burden of proxy configurations and custom certs (god knows your IT department hates the support aspect of this tampering), so TLS keeps us free of excessive advertising. (Of course, they do like to tamper with DNS, which is why we have to do DNS-over-HTTPS. If you make it easy to tamper with your traffic, your ISP has a good business case to tamper with your traffic. Sad but true.)

I'm not as convinced as the author is that nation states can easily tamper with certificates these days. I am not sure how much CT checking we do before each page load, but either nation states are compelling the issuance of certs that aren't in the CT logs, or the certs are in the logs and you can just get a list of who the nation states are spying on. Seems like less of a problem than it was a decade ago.

The author seems to miss the one guarantee that certificates do provide: "the same people that controlled this site on $ISSUANCE_DATE control the site right now". That can be a useful guarantee.

  • pavel_lishin 3 hours ago

    We had this happen to us at work, once.

    We were working on some feature for a client's website, and suddenly things started breaking. We eventually tracked it down to some shoddy HTML + Javascript being on our page that we certainly didn't put there, and further investigation revealed that our ISP - whom we were paying for a business connection - was just slapping a fucking banner ad on most of the pages that were being served.

    This was around ... 2008? I wonder if they were injecting it into AJAX responses, too.

    My boss called them up and chewed them several new assholes, and the banner was gone by afternoon.

    • brightball 3 hours ago

      I don't know the official name for the phenomenon where you widely experience a huge problem; the market reacts and fixes it almost completely; people who never experienced the problem, because the world before them solved it, begin to complain about the solution; people defending the solution are mocked by people who have no context; the solution is rolled back, and everyone who tore it down enjoys their win for a brief moment; and then the original problem comes back in force, except all the walls put up to tear down the original solution now make it 1000x harder to fix.

      I feel like there needs to be a name for this. For now, "Those who do not learn from history are doomed to repeat it." is the most apt I think.

      Happens constantly when you're essentially born on 3rd base. Maybe that's the proper name. Born on 3rd Base Syndrome.

    • benjiro 3 hours ago

      > suddenly things started breaking. We eventually tracked it down

      Amateur level ... Around 2006, we had some clients complaining about information in our CMS being duplicated.

      No matter what we did, there was no duplication on our end. So we started to trace the actions from the client (incl. browser, IP, etc.). And lo and behold, we got one action coming from the client, and another from a different IP source.

      After tracing back the IP, it was an anti-virus company. We installed the software on a test system, and ... yep, the assh** duplicated every action, incl. browser settings, session, you name it.

      A total and complete mimic, beyond the IP. So any action the user did, plus the information on the page, was sent to their servers for "analyzing".

      Little issue ... This was not on the public part of our CMS but on the HTTPS-protected admin pages!

      Sure, our fault for not validating the session with extra IP checks, but we did not expect the (admin-only) session to leak out of an HTTPS connection.

      So we tried to see if they reacted to login attempts at several bank pages. Oh yes, they sent the freaking passwords etc. We tried an unused bank account, and oh look, it was duplicating bank actions (again, the bank at fault for not properly checking the session / IP).

      It only failed on a bank transfer because the token for authorization was different on their side, vs our request.

      You can imagine that we had a rather, how to say, less than polite conversation with the software team behind that anti-virus. They "fixed it" in a new release. Did they remove the whole tracking? Nope, they just removed the session-stealing code for when the connection was secure.

      Oh, and the answer to why they did it: "it's a bug" (yeah, right, you mimic total user behavior, and it's a "bug"). Translation: legal got up their behinds for that crap, and they wanted to avoid legal issues with what they did.

      Remember folks, if it's free, you're the product. And when it's paid, you are often STILL the product. And yes, that was a paid anti-virus "online protection". And people question why I never run any anti-virus software beyond an off-line scan from time to time, and keep Windows "online" protections disabled.

      Companies just can not stop themselves from being greedy. Same reason why I NEVER use Windows 11... You'd expect that if you paid for Windows, Office or whatever, you would not be the product, but hey ...

    • sehugg 3 hours ago

      Haha, yeah this kind of stuff made HTTP long polling requests over mobile pretty fun. IIRC, we ran HTTP over IMAP and POP3 ports for cases where port 80 was unreliable.

    • jabroni_salad 3 hours ago

      My ISP (Mediacom) appears to have a deal with certain websites to display service messages. The only two I've encountered it on are Amazon and Facebook, but they are somehow able to insert a maintenance banner at the top of those two when downtime is anticipated or when I am near the end of my bandwidth quota. I haven't gotten any ads this way, but they have the technology.

      • akerl_ an hour ago

        The only ways I can think of where this would be possible is if:

        1. You're somehow connecting to Facebook and Amazon over HTTP, not HTTPS

        2. Your browser has an extension from your ISP installed that's interfering with content

        3. You've trusted a root CA from your ISP in your browser's trust store
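
        One quick way to tell these cases apart (a sketch; the hostname and flags are just illustrative) is to look at who issued the certificate your connection actually presents:

          openssl s_client -connect www.facebook.com:443 -servername www.facebook.com </dev/null 2>/dev/null \
            | openssl x509 -noout -issuer -dates

        An issuer naming the ISP or router vendor points at case 3; a normal public CA means the banner got in some other way, like case 1 or 2.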

      • Philip-J-Fry 3 hours ago

        Pretty sure this is done on your router. They terminate TLS on your router, inject their malware and then re-encrypt.

        • notpushkin 3 hours ago

          > and then re-encrypt

          How?

          • tialaramex 3 hours ago

            One thing you can arrange is "Oh, you need to trust our router's security thing", so you add a new private root CA to your trust store, and then they "just" issue certs from the CA they've arranged for you to trust. This is commonly how corporate and institutional systems are set up; it's a terrible idea, but it's very common.

            One thing that helps drive it away at work is that we're a university, and essentially all the world's universities share a common authenticated WiFi, eduroam, because students and, perhaps more importantly, academics just travel from one institution to another and expect stuff to work; if you got a degree in the last 20 or so years, you likely used it. But obviously the universities don't trust each other on this stuff, so their sites all use the Web PKI, the same public trust as everybody else. Internal stuff might not, but the moment you're asking some History professor to manually install a certificate, you might as well assign them a dedicated IT person. So everything facing ordinary users has public certs from, of course, Let's Encrypt.

            Edited to name eduroam specifically.

            • notpushkin an hour ago

              > This is commonly how corporate and institutional systems are set up, it's a terrible idea but it's very common.

              Tbh it kinda makes sense for those systems, when used only with internal tools and on company devices... but yeah, I'd just use (of course) Let's Encrypt if I were setting it up for a client.

        • jabroni_salad 3 hours ago

          I use an older ubiquiti edgerouter X so that would be pretty impressive.

    • navigate8310 3 hours ago

      This was common back in the early 2010s with Indian ISPs as well, particularly the state-controlled BSNL.

    • dunham 3 hours ago

      Our app reports all of the runtime exceptions to the server. We had one years ago (maybe before 2008) that was caused by somebody's "toolbar" replacing a method like Element.appendChild with one that sometimes crashed.

      This inspired me to add a list of all script tags to error reports.

    • nurettin 3 hours ago

      The modern version of that is Brave or uBlock or screen-reader extensions or spyware inserting JS or data attributes, which leads to user complaints. We don't need ISPs hacking lines; people do it to themselves when they sign up for shady SMS services on download sites.

  • nasretdinov 3 hours ago

    I remember when Wi-Fi was first introduced in the Moscow metro system (underground trains) in 2014; this is exactly what happened: most sites were HTTP, which allowed ads to be injected, essentially as a form of payment for the Wi-Fi service. Almost immediately, most Russian web sites switched to HTTPS, because the ads often broke CSS layouts and caused other issues in general.

    • stop_nazi 3 hours ago

      And who uses the metro wifi «от оленевода» ("from the reindeer herder") now? No one.

  • bilekas 3 hours ago

    > I think that if we didn't do TLS, every ISP would be injecting ads into websites these days.

    That's the least of the problems: they (anyone with basic access to your network, actually) could easily overwrite every cookie or session on your machine to use their referral links, e.g. Honey & PayPal's fraud [0], without you having any idea. Now maybe you don't care, but it's stealing other people's potential earnings.

    [0] https://www.theverge.com/24343913/paypal-honey-megalag-coupo...

  • bgwalter 3 hours ago

    ISPs should be regulated like common carriers. Modifying the data in transit should be illegal. ISP supercookies should be illegal.

    • Avamander 3 hours ago

      We can do that and also use HTTPS to be more certain of it.

  • throw_a_grenade 3 hours ago

    > I'm not as convinced as the author is that nation states can easily tamper with certificates these days. I am not sure how much CT checking we do before each page load [...]

    They can MITM the connection between the host and LE (or any other CA, ACME or non-ACME, it doesn't matter). This was demonstrated by the attack against jabber.ru, at the time hosted at OVH. I recommend reading the writeup by the admin (second link from the top in TFA).

    This worked because no one checked CT.

    • 1718627440 3 hours ago

      They can also just tell some CA to sign a certificate.

      • throw_a_grenade 2 hours ago

        I don't believe this happens. Should something like this happen, the CA would be immediately distrusted by browsers, not as punishment but to deter state actors. It gives CAs an argument: "we won't do it, because it means the end of our business". And compelling a company to do something that destroys it is illegal in many jurisdictions, under the laws that prescribe what the state can and cannot order a company's employees to do.

        • 1718627440 27 minutes ago

          They don't really need to order employees of the company; they can just do it, either by completely owning a CA or by just going in and doing it. If it needs to stay hidden, they can do it as part of an unrelated warrant.

          > the CA would be immediately distrusted by browsers, not as punishment but to deter state actors.

          Do you think browsers operate outside of states?

          > Compelling by the state to do something that destroys a company is illegal in many jurisdictions

          How would it destroy the company? It might affect reputation, but as long as it wasn't the company doing it on its own, they can just claim to be the victim (which they are). It will only affect the company if it becomes public knowledge, which the state actor doesn't want anyway. I don't think a reputation for not responding to legal warrants is protected by the law. Also, the USA, for example, is famous for installing malware on other countries' heads of state.

          Honestly, this is the kind of law enforcement which is fair in my opinion. It is much preferable to mandated scanning (EU Chat Control), to making the knowledge or sale of math illegal, or to sabotaging public encryption standards. No general security is undermined; it's just classic breaking into some system and intercepting. Granted, I think states shouldn't do it outside their jurisdiction, but that is basically intelligence services fighting with each other.

          • throw_a_grenade 15 minutes ago

            > How would it destroy the company?

            If you're in the business of selling X.509 certs trusted by browsers, then not being trusted by browsers kinda limits the marketability of your product.

            I don't believe the browsers could be coerced to not distrust such a CA. In every root program I know there's a clause that membership to the program is at browser's pleasure. (Those that have public terms, i.e. not msft, but I'd assume those have similar language.)

            Re: they can just do it, well, I think they'd be distrusted the same.

            In Symantecgate, one of the reasons for distrust was that they signed the FPKI bridge, so I think no CA in the future will sign a sub-CA that will sign FPKI certs.

            > Also for example the USA is famous for installing malware on other countries head of state.

            Yeah, exactly. I think they have more targeted ways that risk less detection and less collateral damage.

            • 1718627440 2 minutes ago

              Well, what destroys the company is not the generation of a certificate but its publication. I think the state would compel the company not to disclose it, so they would be coercing the company into not destroying itself.

              Do you think Google or Apple are going to care? They bowed down to China; I think the state they have their headquarters in has even more leverage. As for Mozilla Firefox on Linux, maybe, but I wouldn't trust that too much either.

              > I think they have more targeted ways that risk less detection and less collateral damage.

              I think they don't really need to care about this; it's quite clear that no other state is publicly doing anything against it.

    • fragmede 3 hours ago

      LE checks from multiple places, so you'd have to MITM all of them, which makes it seem rather challenging to actually pull off.

      • Ajedi32 3 hours ago

        AFAIK that's not a required feature of the DV process, and even if it were it wouldn't help if the MITM was happening between the website and the wider internet.

        That said, I don't think there's a way to stop a nation state from seizing control of a domain they control the TLD name servers for without something like Namecoin where the whole DNS system is redesigned to be self-sovereign.

        • tialaramex 2 hours ago

          Multi-perspective validation is, or will be (I didn't pay attention to the timeline), required by the Baseline Requirements, which are effectively the rules for how Web PKI certs work.

          The system is tamper-evident, not tamper-proof. A nation state adversary can indeed impersonate my web site and obtain a new certificate, but the web browser doesn't trust that certificate without seeing proof it was in the CT logs. So now the nation state adversary needs proof it was logged.

          Whoever issued them the proof has 24 hours to include that dodgy certificate in their public logs for everyone to see. If they lie and don't actually log it, the proof will be worthless, and if it's shown to a trust root, this bad proof will result in distrust of the log's operator. That's likely a six or seven figure investment thrown away, each time this happens.

          On the other hand, if they do log it, everybody can see what was issued and when, which is inconvenient if you'd prefer to be subtle like the NSA and, to some extent, Mossad. If you're happy to advertise that you're the bad guys, like the Russians and North Koreans, you have the small problem that of course nobody trusts you, so you can't expect any co-operation from the other actors...

          • Ajedi32 an hour ago

            Yes, CT makes any sort of certificate issuance attack relatively "loud", but as you seem to be aware that doesn't actually stop the attack from happening in the first place unless the attacker cares about keeping it a secret.

            This isn't like a misissuance where you can blame the CA and remove them from the root stores; they'd just be following the normal domain validation processes prescribed in the BRs.

            • tialaramex an hour ago

              The loudness means that when people yell "The government are doing X" you can go see for yourself, are they doing X? No? So what was the yelling about?

              Going to Portland to check whether it's on fire would be a lot of effort - so to some extent I must take it on trust that it's not actually on fire despite Donald Trump's statement - whereas visiting crt.sh to check for the extra certificates somebody claims the US government issued is trivial.
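
              For example, something like this lists every issuer that has logged a certificate for a domain (a sketch; crt.sh's JSON endpoint is real, but its exact fields may change):

                curl -s 'https://crt.sh/?q=example.com&output=json' | jq -r '.[].issuer_name' | sort -u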

              • Ajedi32 36 minutes ago

                You wouldn't necessarily know whether the certificates were obtained by the US government or another random attacker. They have the CA's name on them and the website name, not the attacker's name.

                I'm not saying there's no value in being able to detect when you're compromised. I'm just saying it would be better if the compromise wasn't possible to begin with.

      • throw_a_grenade 2 hours ago

        They just MITMed the link between the victim and its immediate next hop, most likely by coercing the ISP (OVH). (See the writeup, where the admin discusses TTL values.) No amount of multi-perspective validation is sufficient if you control the uplink. Both DNS resolution and IP routing worked fine, and the IP packets were intercepted in an attacker-controlled environment (an on-path MITM box).

        What would somewhat help would be a CAA record specifying the ACME account key. The attackers would then have to alter the DNS record, which would be harder, as you describe. (Or pull the key from the VM disk image, which would cross another line.)
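
        Roughly like this (a sketch; RFC 8657 defines the accounturi parameter, and the account URI shown is purely illustrative):

          # zone entry pinning issuance to a single ACME account:
          #   example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
          dig +short CAA example.com   # confirm what resolvers actually see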

  • octoberfranklin 3 hours ago

    Your comment is disingenuous -- the article isn't arguing against TLS. It is arguing against WebPKI.

    You can stop ISP ad injection with solutions much less complex than WebPKI.

    Simply using TOFU-certificates (Trust On First Use) would achieve this. It also gives you the "people who controlled this website the first time I visited it still control it" guarantee you mention in your last paragraph.

    TOFU isn't ideal, but it's an easy counterexample to your claims.

    • iamnothere 3 hours ago

      TOFU would allow the ISP to MITM every connection and then serve you ads. The ISP could simply provide their own cert to you.

    • blenderob 3 hours ago

      > Simply using TOFU-certificates (Trust On First Use) would achieve this.

      As a user how would I know if I should trust the website's public key on first use?

      • akerl_ 3 hours ago

        I guess we could organize regional parties where site operators and users meet up and exchange key material. I'm sure that will scale and won't have any problems of its own.

      • 1718627440 3 hours ago

        The same way I know which real person is serving me the website: I don't. I merely know that the owner doesn't change randomly.

      • octoberfranklin 3 hours ago

        The same way you know if you should trust the WebPKI Rube-Goldberg-contraption: you don't.

        It's a counterexample, not a recommendation.

        If you need this guarantee, use self-certifying hostnames like Tor *.onion sites do, where the URL carries the public key. More examples of this: https://codeberg.org/amjoseph/not-your-keys-not-your-name

        • akerl_ 3 hours ago

          I trust the WebPKI infra quite a bit. Cert issuance is publicly logged, and CAs that do nefarious things get booted from browser trust stores.

          I can set which CAs can sign certs for my domains, and monitor if any are issued that I didn't expect.
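
          The first half of that is just DNS records (a sketch; the reporting address is illustrative):

            #   example.com. IN CAA 0 issue "letsencrypt.org"
            #   example.com. IN CAA 0 iodef "mailto:security@example.com"   # where CAs report refused requests
            dig +short CAA example.com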

  • stop_nazi 3 hours ago

    1. using http-only for decades, never seen “injections”

    2. just change ISP

    • bilekas 3 hours ago

      > using http-only for decades, never seen “injections”

      This has to be a rage bait comment, but anyway, how do you expect 'injections' to show up on 'http-only' ?

      "Don't mind us, we're just sitting in the middle of your traffic here and recording your logins in plaintext"

      • stop_nazi 3 hours ago

        I'm not talking about logins; those are supposed to be encrypted. If I go to read news that is open to an unlimited number of people, there is no need for encryption: the information is open.

        • Avamander 3 hours ago

          You assume that you will _never_ read something that is out in the open where, at the same time, the fact that you're reading it is the thing that needs protecting? A public invitation to a protest against your autocratic government, for example?

          • pKropotkin 2 hours ago

            A public invitation to protest against my authoritarian government should not turn on total paranoia mode and encipher the opening hours of the local bakery. It's unnecessary. I'd also like to remind you that the vast majority of e-mails are still unencrypted.

            • tialaramex 2 hours ago

              > vast majority of e-mails are still unencrypted

              Kinda sorta. In transit, most email is encrypted: the big mail providers all both speak and expect TLS when moving mail. Almost everybody configures TLS-encrypted IMAP if they use a client, or reads email over HTTPS.

              > A public invitation to protest against my authoritarian government should not turn on total paranoia mode

              The expectations ordinary people have for how the web works are not met by the basic HTTP protocol. They need HTTPS to deliver those basic assumptions. Who decides the hours of the local bakery? Is it Jeff Bezos? HTTP says that seems fine, but HTTPS says no, the bakery gets to decide, not Jeff.

            • Avamander 2 hours ago

              Can you say that for everyone, though? That all they ever look up is the opening hours of their local bakery? There are more cases than that, where something being public does not mean that someone should see you looking at it.

              While the situation with email is worse, that does not mean it should stay like that.

    • Avamander 3 hours ago

      > 2. just change ISP

      Not a viable option in a lot of places. Nor does anyone really want to have to consider the possibility of their ISP being able to MITM anything in the first place.

      • stop_nazi 3 hours ago

        If a provider does not provide data transmission, that provider is not competent. Period

    • perching_aix 3 hours ago

      > just change ISPs

      I sure love when decisions reduce themselves to single points of consideration by virtue of them being discussed in a heated internet forum thread

    • stop_nazi 3 hours ago

      The problem with the horrible injections on the page can be solved very simply. If the information on a page is open, just serve the page openly and pass a checksum of the page in a header. To prevent that sum from being tampered with, the server signs it: not the whole page, just the sum. You save a lot of CPU time on the server and on the client, reduce CO₂ and so on.
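
      Concretely, the idea amounts to something like this (a sketch with placeholder filenames, not an existing protocol):

        openssl dgst -sha256 -sign server_key.pem -out page.sig index.html           # server signs the page digest
        openssl dgst -sha256 -verify server_pub.pem -signature page.sig index.html   # client verifies it

      Distributing server_pub.pem in a way clients can trust is then the remaining problem, which is the part certificates already handle.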

      • Avamander 2 hours ago

        So TLS with some "eNULL" ciphersuite. People have been there, tried that. There's very very little practical value in that over just doing proper encryption as well.

woodruffw 3 hours ago

> Not this time. The technical problems are easy to solve. For decades, users of SSH have had a system (save the certificate permanently the first time you connect, and warn if it ever changes) that is optimal in a sense: it works at least as well as any other solution. It's trivial to implement, is completely free, involves no third parties, and lasts forever. To the surprise of absolutely no one, web browsers don't support it.

This is completely backwards: TOFU schemes aren't acceptable for the public web because the average user (1) isn't equipped to compare certificate fingerprints for their bank, and (2) shouldn't be exposed to any MITM risk because they forget to. The entire point of a public key infrastructure like the Web PKI is to ensure that technical and non-technical people alike get transport security.

(The author appears to unwittingly concede this point with the SSH comparison -- asking my grandparents to learn SSH's host pinning behavior to manage their bank accounts would be elder abuse. It works great for nerds, and terribly for everyone else.)

  • ericbarrett 3 hours ago

    Does it even work great for nerds? I have seen a distressing amount of turning host key warnings off, or ignoring the warnings forever, or replacing a host key without any curiosity or investigation. Seems even worse in the cloud, where systems change a lot.

    • woodruffw 3 hours ago

      > Does it even work great for nerds?

      No, but I was extending a charitable amount of credulousness :-)

    • evilduck 3 hours ago

      Even amongst nerds I've seen a significant amount of key pair re-use in my time, both 1:n::dev:servers and sometimes even 1:n::organization:devs. The transport security is moot when the user(s) discard all precautions and best practices on either end.

      • Avamander 3 hours ago

        Even in such cases it's not really moot if a forward-secure scheme is used, and by now only old legacy implementations might not use one. So the key being shared between machines does not usually compromise the security of individual sessions, especially not retroactively.

    • ghusto 2 hours ago

      Please let's not break something that works really well just to cater to those who don't know how to use the tools of their trade.

    • MrDarcy 3 hours ago

      The platform engineering team at my big-corp job simply disabled host key checking in the cloud tool Python script they wrote for all of us to log into our bastion hosts.

      For prod.

      ssh -o UserKnownHostsFile=/dev/null

      • 20after4 an hour ago

        Wow, that is a level of DGAF I haven't encountered before in production. No wonder data breaches are so common, with that kind of YOLO security practice.

        • MrDarcy 25 minutes ago

          To be fair, it’s all EC2 so provenance of the host is well established.

    • Spivak 3 hours ago

      I think it's pretty reasonable to turn off the "yes, I would like to accept this key" prompt on first connect. Just scream if it ever changes. I get that they're expecting me to compare it to something out of band, but nobody does that.

      • jeroenhd 3 hours ago

        Depends on the server. A VM you just installed on your own machine? A lab machine on the proxmox cluster? Probably.

        A new cloud VM running in another city? I would trust it by default, since you don't have a lot of choice in many corporate environments.

        Funnily enough, there is a solution to this: SSH has a certificate authority system that will let your SSH clients trust the identity of a server if the hostkey is signed and matches the domain the SSH CA provided.

        Like with HTTPS, this sort of works if you're deploying stuff internally. No need to check fingerprints or anything, as long as whatever automation configured your new VM gets the generated host key signed. Essentially, you get DV certificates for SSH, except you can't easily automate them with Let's Encrypt/ACME, because SSH doesn't have tooling like that.
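
        A minimal sketch of that flow (key paths and hostnames are placeholders):

          ssh-keygen -t ed25519 -f host_ca                          # create the CA key pair
          ssh-keygen -s host_ca -I web01 -h -n web01.example.com \
              /etc/ssh/ssh_host_ed25519_key.pub                     # issue the host certificate
          # sshd serves it via:  HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
          # clients trust the CA once, via a known_hosts line like:
          #   @cert-authority *.example.com <contents of host_ca.pub>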

      • blenderob 3 hours ago

        > I think it's pretty reasonable to turn off the "yes i would like to accept this key" on first connect.

        Why is it reasonable to trust the key on first use? What if the first use itself has a man-in-the-middle that presents you the middle-man's key? Why should I trust it on first use? How do I tell if the key belongs to the real website or to a middle-man website?

        • 1718627440 3 hours ago

          What is the "real website"? You do not know this in the general case, it is just some rando on the internet, which is indistinguishable from a middle-man.

  • blenderob 3 hours ago

    > TOFU schemes aren't acceptable for the public web because the average user (1) isn't equipped to compare certificate fingerprints for their bank

    This! Forget about the average user. Even as a technical user, I don't know how I would compare fingerprints every single time without making a mistake. I could install software or write my own to do this on a desktop, but what would I do on cell phones?

    And TOFU requires "trust" on first use. How do I make sure that if I should be trusting the website public key on first use? It doesn't seem like any easier to solve than PKI.

    • akerl_ 3 hours ago

      This is the sleight of hand being employed when folks suggest TOFU mechanisms. The problem with any communication boils down to trust. The modern web PKI has a bunch of complexity and plenty of rough edges in how it handles resolving that trust. TOFU is then proposed as a simpler solution with none of those pesky rough edges, but it doesn't have the rough edges because it leaves all the hard parts as an exercise for the reader.

      It's a bit like suggesting that AES-GCM has risks so we ought to just switch to one-time-pads.

    • Avamander 3 hours ago

      > How do I make sure that if I should be trusting the website public key on first use? It doesn't seem like any easier to solve than PKI.

      Usually such questions get replied to with a recommendation of implementing DNSSEC. Which is also obviously PKI and in many ways worse than WebPKI.

    • perching_aix 3 hours ago

      It's the usual hilarious flow of "HTTPS is dogshit, so here's the SSH fingerprint you should trust instead, served over HTTPS of course".

      • arielcostas 3 hours ago

        SSH fingerprints can also be published in DNS with the SSHFP[0] record, which, coupled with DNSSEC, and supposing you trust the DNS root and intermediate entities (whether that's IANA/ICANN, or alternatives like OpenNIC or Namecoin), allows you to check an SSH server's fingerprint without HTTPS. At some point you have to trust someone anyway.

        Or you can always get the fingerprint out of band. If it's a friend granting you SSH access to their server, or a vendor, or whatever, you can ask them to write the fingerprint on a piece of paper and give it to you, with you checking that the paper really comes from them and then checking the fingerprint against it.

        [0]: https://datatracker.ietf.org/doc/html/rfc4255
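
        For the SSHFP route, both halves are one command each (a sketch; the hostname and key path are placeholders):

          ssh-keygen -r server.example.com -f /etc/ssh/ssh_host_ed25519_key.pub   # print the SSHFP zone records
          ssh -o VerifyHostKeyDNS=yes server.example.com                          # client consults them (needs DNSSEC to mean much)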

        • perching_aix 3 hours ago

          Couldn't you just use DANE/TLSA at that point?

NoahZuniga 3 hours ago

> You can make the warning go away by paying a third-party—who then pays Google—to sign your website's SSL certificate

This is just not true!!!! CAs don't pay Google to be in its root store.

> But if someone is able to perform a man-in-the-middle attack against your website, then he can intercept the certificate verification, too

The reasoning goes that most (potential) MITM attacks are between you and your ISP. Let's Encrypt can connect to the backbone basically directly, so most MITM attacks can't touch its validation traffic. Also, starting on September 15, 2025, all domain validation requests have to be made from multiple network perspectives (Let's Encrypt has already been doing this for a while), making MITM attacks harder.

  • nicce 3 hours ago

    I don’t know whether they pay Google, but Google can dictate many things; otherwise it drops their certificates from Chrome, and this has happened.

    • NoahZuniga 3 hours ago

      Well, I do! And Google doesn't get paid!

      > otherwise they drop certificates from Chrome and this has happened.

      As far as I know, all the CAs Google dropped were dropped because the CA misbehaved and misissued certs or was obviously failing at its job. Also, all the CAs Google has removed from its root store have also been removed by Mozilla (or weren't removed because Mozilla never included them).

    • akerl_ 3 hours ago

      You're thinking of the CAB, which dictates which CAs are trusted. Google is a participant in that. The things they dictate are public and have to do with security requirements, not whether or not they pay Google money.

      • NoahZuniga 3 hours ago

        This is not true! The CAB Forum is a place where CAs and browsers agree on what the rules for CAs should be. Google, Mozilla, Microsoft and Apple each administer their own root store, which individually decides what CAs are trusted on their platform. Individual root stores decide on the rules for inclusion themselves, but these rules are essentially: follow the CAB Forum rules, plus a few extra things. Mozilla, for example, requires (besides the CAB Forum rules) that whenever a CA becomes aware of an issue, they post a bug to Bugzilla, get their shit together pretty quickly, and keep Mozilla up to date on what they're doing.

        • akerl_ 3 hours ago

          This would feel a lot more like a relevant nit to pick if there were actually meaningful differences where I might go get a TLS cert and find it's trusted in Chrome but not Firefox or vice versa.

          • NoahZuniga 3 hours ago

            Chrome vs Firefox doesn't matter that much, but more significantly, Windows trusts more CAs than Chrome and Firefox do. I'm not sure about the exact number, but it seems to be somewhat significant. You can take a look at https://www.ccadb.org/resources. I looked at it but couldn't quickly get a number, so no number in my comment.

1718627440 3 hours ago

> But if someone is able to perform a man-in-the-middle attack against your website, then he can intercept the certificate verification, too. In other words, Let's Encrypt certificates don't stop the one thing they're supposed to stop.

But the certificate is signed with Let's Encrypt's key and your own, and neither private key ever leaves its respective server.

  • voidmain 3 hours ago

    The author is claiming that a sufficiently capable attacker can MITM the ACME protocol used to automatically renew certificates (and thus get a valid certificate issued for the victim domain with the attacker's private key). This is probably true as far as it goes, but certificate transparency logs make such attacks easy to detect, and browsers will not accept certificates that are not in the logs. Web sites that do not monitor CT logs probably are vulnerable to well resourced attacks of this kind, but I don't think there is a huge plague of them, maybe because attackers with the ability to MITM DNS requests for LE don't want to burn that capability on such easily detected attacks.

    • ameliaquining 3 hours ago

      Also, if the CA runs the ACME check from five different validation servers that aren't all on the same continent, which Let's Encrypt does and all other CAs will be required to do in a couple years, then it is dramatically harder to simultaneously MITM them all. And if you really want to, you can use DNS-01 with DNSSEC, which means an attacker would have to be able to compromise DNSSEC on top of everything else.
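
      For reference, the DNS-01 proof is just a TXT record, so both it and the DNSSEC chain are observable from outside (a sketch; example.com stands in for your domain):

        dig +short TXT _acme-challenge.example.com   # where the ACME token appears during issuance
        dig +dnssec SOA example.com                  # a validating resolver sets the 'ad' flag when signatures verify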

  • ownagefool 3 hours ago

    Yeah, the argument apparently doesn't really grok how certificates are issued and why the changes exist.

    Manual long-term keys are frowned upon due to potential key leaks, such as Heartbleed, or admin misuse, such as copies of the key ending up on lots of devices back when you signed that 10-year key.

    Automated and short-lived keys are the solution to these problems, and they're pretty hard to argue against, especially as the key never leaves the server, so those security concerns fall away.

    That's not to say you can't levy valid criticism. I'm not sure if the author is entirely serious either though.

    p.s. Certbot and Cert-manager are probably fine, but they're also fairly interesting attack vectors

  • bilekas 3 hours ago

    Yeah, it reads as if the OP misunderstands the attack vectors of SSL. If there's a misconfiguration, or the server admin is not correctly authenticating the authority, then sure. But it skips over what they mean.

    Being generous, I would say they are referring to the case where the client has an invalid SSL cert approved on their local machine, in which case it's a client problem.

    Ignoring encryption altogether is a silly idea. Maybe it shouldn't be so centralised in one company, though.

  • maratc 3 hours ago

    My IT department performs a man-in-the-middle attack against all my VPN traffic, and issues on-the-fly certificates for all the sites I visit. There is zero warning on my side, and the only way I know of it is because I'm a nerd who looks into certificate chains sometimes. My other nerd coworkers are blissfully unaware.

    EDIT: I understand how it works. This wasn’t my point.

    • jval43 3 hours ago

      They need to install their root certificate into your work machine's trust store. Which they can only do because they control the machine (or VPN software), and would not be possible for a regular machine.

      • maratc 2 hours ago

        Many people are using VPNs these days. Nothing prevents vpn-du-jour.com from similarly messing with your traffic. Moreover, any software you install with privileges could also install certificates. In this sense, “a regular machine” is only the one which has no other software installed.

        The point (I think) that TFA is trying to make is that encryption isn't enough. It wouldn't be a good situation if someone looked at their house burning and said "well, at least nobody could ever read my https traffic."

        • ghusto 2 hours ago

          > Nothing prevents vpn-du-jour.com from similarly messing with your traffic

          The browser not trusting the CA that signed the certificate prevents this. As the commenter said above, they would first need to install a certificate into your list of trusted certs for this to work. Your IT department can do that because they have root on your machine, vpn-du-jour.com can not, and neither can anybody else without root.

          • maratc 2 hours ago

            It's been my belief that, when I download “VPN-du-jour Connector” from vpn-du-jour.com (the one with the green “Connect and Surf Securely” button), I need to give that installer root privileges (so it could “manage my VPN configuration.”)

            Also, I believe that when I download “Shoot Your Friends Online” and install that, it also asks for root privileges (in order to make sure that no cheating software runs on my computer that would allow me to “shoot more of my friends quicker.”)

            I also think that when I install “Freecell Advanced,” it also comes with “Freecell Advanced Updater” that needs root privileges (in order to “update Freecell Advanced.”)

            Do I understand correctly that there is nothing stopping all three of these — running with root privileges — from installing certificates?

            • 1718627440 an hour ago

              Yes, that's why having installers not provided by the OS is a bad idea.

    • akerl_ 3 hours ago

      This only works because your company's endpoints have been configured to trust the company's root CA. Which makes sense, because it's their device and their VPN.

    • AntronX 3 hours ago

      I wonder if you could roll your own VPN tunnel that connects directly to your home internet IP and passes a custom encrypted payload that your IT department cannot decode. Would they just drop the connection if they can't inspect what you are sending?

      • 1718627440 3 hours ago

        The issue is that they control the device he is using, so they could simply inspect the traffic on the device itself.

    • keepamovin 3 hours ago

      Yes, but if you use HSTS, a regular browser will flag that. Perhaps your browser is also "MITM"d via a management policy? hehe :S
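
      For example, HSTS is just a response header you can check for (a sketch; the max-age shown is typical, not required):

        curl -sI https://example.com | grep -i strict-transport-security
        # e.g.  strict-transport-security: max-age=31536000; includeSubDomains; preload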

  • Spivak 3 hours ago

    Also, MITMing a user is much easier than MITMing Let's Encrypt itself, which performs multiple checks from different locations.

NegativeK 4 hours ago

This article feels like an opinion piece with an axiom of perfection or nothing.

  • samcat116 3 hours ago

    That's most articles posted to HN.

  • Antibabelic 3 hours ago

    Perfection is a very useful guide star. Just because it may not exist doesn't mean we shouldn't hold deeply flawed projects up to it.

  • llm_nerd 4 hours ago

    Indeed, their main gripe seems to be with DV, and they seem to hold only EV certs as legitimate. They miss the entire value proposition and purpose of DV.

    MITM is a user->service concern. If someone is between a service and LE, there are much bigger problems.

    • Ajedi32 3 hours ago

      Certainly a MITM between a website and LE is less likely than a MITM between a user on a random public Wi-Fi network and the website, but I've often wondered why more attention hasn't been given to securing the domain validation process itself.

      There are a lot of random internet routers between CAs and websites which effectively have the ability to get certificates for any domain they want. It just seems like such an obvious vulnerability I'm kinda shocked it hasn't been exploited yet. Perhaps the fact that it hasn't is a sign such an attack is more difficult than my intuition suggests.

      Still, I'd be a lot more comfortable if DNSSEC or an equivalent were enforced for domain validation. Or perhaps if we just cut out the middleman and built a PKI directly into the DNS protocol, similar to how DANE or Namecoin work.

      • ameliaquining 3 hours ago

        A lot of attention has been given to securing the domain validation process. The primary defense is Multi-Perspective Issuance Corroboration, which Let's Encrypt already does and all CAs will be required to do in a couple years. The idea is that you run the check from five different servers on two different continents, so that compromising just one internet router isn't enough, you have to get one on every path, which is much harder to pull off.

        Also, Let's Encrypt validates DNSSEC for DNS-01 challenges, so you can use that if you like, although CAs in general are not required to do this, there are various reasons why a site operator might not want to, and most don't.

        There are two fundamental problems with DANE that make it unworkable, and that would presumably also apply to any similar protocol. The first is compatibility: lots of badly behaved middleboxes don't let DNSSEC queries through, so a fail-closed system that required end-user devices to do that would kick a lot of existing users off the internet (and a fail-open one would serve no security purpose). The other is game-theoretic: while the high number of CAs in root stores is in some ways a security liability, it also has the significant upside that browsers can and do evict misbehaving CAs, secure in their knowledge that those CAs' customers have other options to stay online. And since governments know that'll happen, they very rarely try to coerce CAs into misissuing certificates. By contrast, if the keepers of the DNSSEC keys decided to start abusing their power, or were coerced into doing so, there basically wouldn't be anything that anyone could do about it.

        • Ajedi32 an hour ago

          MPIC is good but not foolproof if the website itself is being MITMed. DNSSEC validation is better but not required, as you said, and even if it were, HTTP-01 would just immediately become the new weak point.

          I think you're wrong about DANE's flaws applying to "any similar protocol". The ossification problem could be solved by DNS over HTTPS cutting out the middleboxes, though I agree adoption of that will take time, much as adoption of HTTPS itself has. And the game-theory problem has been solved by CT, as you noted; you just need to subject certificates issued through the new system to the same process.

          Remember that any actor capable of seizing control of DNS can already compromise the existing PKI by fulfilling DNS-01 challenges. You're not going to solve that problem without completely replacing DNS with a self-sovereign system similar to Namecoin, and I can't imagine that happening anytime soon.

    • Joker_vD 3 hours ago

      EV certs seem to have basically the same verification policies that CAs had for ordinary certificates back in the early 2000s (i.e., really not that much at all), so I am intrigued as to what the DV has to offer except "it's basically self-signed but with extra steps and the rest of the world will trust it".

      > If someone is between a service and LE

      There is always someone there: my ISP, my government that monitors my ISP, the LE's ISP, and the US government that monitors the LE's ISP.

  • DannyBee 3 hours ago

    I mean, it's not an atypical view.

    In reality, successful society lives halfway down tons of slippery slopes at any given point in time, and engineers in particular hate this. Yet this has been true since basically forever.

    I'm sure cavemen engineers complained about how it's not secure to trust that your cave is the one with the symbol you made on the wall, etc.

  • rini17 4 hours ago

    Sure let's eat crap without complaint, nothing is perfect anyway. /s

sigmar 3 hours ago

His points aren't bad, but it seems like a great example of "perfect is the enemy of good." Let's Encrypt does an incredible amount of good by adding SSL to sites that wouldn't have had it otherwise.

  • ghusto 2 hours ago

    His points against Let's Encrypt are that:

    - It introduces an exploitable attack vector

    - He sees it as a Trojan Horse, and fears for what will happen in the future

    There are a few static sites I run where there is no exchange of information. I'm locked into ensuring certificates exist for these sites, even though there's nothing to protect (unless you count ensuring the content really comes from me as protecting something).

  • nearbuy 3 hours ago

    Except his points are mostly straight up factually wrong.

    • sigmar an hour ago

      It does kind of suck that Let's Encrypt is entirely funded by donations from corporations like Google and Facebook. If they pulled support what would happen? Would 92% of websites we visit get downgraded to http?[1]

      Also his point that it "supplants better solutions" is inarguably true. The 2010s had lots of conversations about certificate transparency and CA changes that just don't happen today because the existence of Let's Encrypt made it so easy to put a cert-signed website online.

      [1] of US firefox users: https://letsencrypt.org/stats/

AndrewStephens 3 hours ago

It is a shame that HTTPS is required for sites these days but that doesn't change the fact that it really is necessary, even for the smallest of blogs.

HTTPS does three interrelated things:

Encryption - the data cannot be read by an intermediary, which protects your readers' privacy. You don't want people to know what pages you read on BigBank.com or EmbarrassingFetish.com.

Tamper Proofing - the data cannot be changed by an intermediary, which protects your readers (and your server) from someone messing with the data, say substituting one bank account number for another when setting up a payment, etc.

Site Authentication - ensures that the browser is connected to the server it says it is, which also prevents proxying. Without this an intermediary can impersonate any site.

Before the big push for encrypting everything it was not uncommon to hear of ISPs inspecting all traffic to sell to advertisers, or even injecting ads directly into pages. HTTPS makes this much more difficult.

  • jeroenhd 3 hours ago

    HTTPS is hardly required for websites. Web applications may restrict sensitive actions to HTTPS, but websites over HTTP still work fine.

    I try to avoid them because they allow sketchy ISPs to inject ads and other weirdness into my browser, but normal browsers will still accept HTTP by default.

    If you don't want people to know you're visiting EmbarrassingFetish.com, then EmbarrassingFetish.com also needs to implement ECH (eSNI's replacement) and your browser must have it enabled; otherwise anyone on the line can still sniff out what domain you're connecting to.

    I don't think site authentication is practical, though. For some use cases it works (e.g. validating the origin before firing off a request to a U2F/FIDO2 authenticator), but for normal users, mybank.com and securemybank.com may as well be equivalent (and some shitty important services actually use fake-sounding domains like that, PayPal for instance). Unless you remember the country and state and town your bank is registered in, even EV certificates can't help you, because there can be multiple companies named Apple Inc. that all deserve a certificate for their website.

    • AndrewStephens 3 hours ago

      Hey, I only read EmbarrassingFetish.com for the recipes section (I recommend their carrot cake.) I'm not into the rest of the stuff there and you can't prove it thanks to HTTPS.

      More seriously, you are not wrong. Site Authentication is still a problem and actually the weakest part of HTTPS but it is also more of a people problem than a technical one. Nothing stops somebody from registering MyB4nk.com but at least HTTPS stops crooks spoofing MyBank.com exactly.

    • kbolino 2 hours ago

      > but websites over HTTP still work fine

      The best attack surfaces always do. If I'm a smart attacker, why would I impair your experience (at least, until I get what I want)? It's better to give you a false sense of security. There are, of course, dumber attacks that will show obvious signs. While many people do fall prey to such attacks from lapses in, or impairment to, their judgment, the smarter attacks hide themselves better.

      The classical model of web security based around "important" sites and "sensitive" actions has been insufficient for decades. It was certainly wrong by the time the first coffee shop/airport/hotel wifi was created; by the time the first colocation provider/public cloud was created; by the time every visitor/student/employee of any library/university/company was given open Internet access; etc.

  • kbolino 3 hours ago

    I think this is the classical explanation and set of examples, which only really explain why HTTPS should be used on "important" websites. But HTTPS should be used on every website and you need a different explanation/example for justifying that.

    To connect to a website on the Internet, you must traverse a series of networks that neither you nor the website control. If the traffic is not tamper-proof, no matter how "unimportant" it may seem, it presents the opportunity for manipulation. All it takes is one of the nodes in the path to be compromised.

    Scripts can be injected--even where none already exist; images can be modified--you see a harmless cat picture, the JPEG library gets a zero-day exploit; links can be added and manipulated--taking you to other, worse sites with more to gain by fooling you.

    None of this is targeted at you or the website per se. It's targeted at the network traffic. You're just the victim.

    • Avamander 3 hours ago

      > If the traffic is not tamper-proof, no matter how "unimportant" it may seem, it presents the opportunity for manipulation. All it takes is one of the nodes in the path to be compromised.

      It also ignores one really important fact: these pipes are not perfect; they do introduce errors into the stream. To ensure integrity we would still need to checksum everything, and in a way that no eager router "fixes".

      We want our bank statements to be bit-perfect, our family pictures not to be corrupted, so on and on.

      So even if someone handwaves away all the reasons why we need encryption everywhere (which is insane), we would still need something very similar to TLS and CAs being used. Previous TLS versions have even had "eNULL" ciphersuites.

      • kbolino 2 hours ago

        It would have been nice to have been able to keep eNULL around, but a) it was basically never used in practice and b) the way it worked practically guaranteed it was impossible for the average sysadmin to get right. There's never really a situation in which you might want to negotiate eNULL instead of a specific encryption algorithm. Either the site/page is encrypted or it isn't. Encryption-or-not is on a completely different axis from the type of encryption to use. And configuring older versions of SSL/TLS involved traversing a minefield of confusing, arcane, and trap-laden knobs whose documentation was written for the wrong audience.

        • Avamander 2 hours ago

          > There's never a situation in which a website might want to negotiate eNULL instead of an encrypted option.

          Precisely, without some magic handwaving there aren't any reasons.

          eNULL was/would also be kinda useful if one wanted to debug something without turning off TLS completely. But that's not worth the complexity of keeping it around.

  • ghusto 2 hours ago

    > which protects your readers' privacy. You don't want people to know what pages you read on BigBank.com or EmbarassingFetish.com

    DNS requests leak this information.

    > Tamper Proofing > Site Authentication

    There are _many_ sites where this is not important. I want HTTPS for my bank, but I couldn't care less if someone wants to spend the time and effort to intercept and change pages from a blog I read.

    • kbolino an hour ago

      > I couldn't care less if someone wants to spend the time and effort to intercept and change pages from a blog I read.

      I do not understand why so many people think having, say, zero-day exploits served to them is not a problem.

      The blog is not the target; the unsecured connection is.

      Approximately nobody is taking the time to hand craft a specific modification of some random blog. They develop and use tools that manipulate any packet streams which allow tampering, without the slightest concern for how (un-)important the source of those packets is.

1718627440 3 hours ago

> The official way to renew Let's Encrypt certificates is automatically, with a tool called certbot. It downloads a bunch of untrusted data from the web, and then feeds that data into your web server, all as root.

Why would you run certbot as root? You don't do that with any other server.

  • jval43 3 hours ago

    It used to be the case that you had to run certbot as root or it just wouldn't work. At least not officially; you could get it to work without root, but it wasn't supported.

    The official docs still recommend doing so: >Certbot is most useful when run with root privileges, because it is then able to automatically configure TLS/SSL for Apache and nginx.

    • Avamander 3 hours ago

      I think I've never run it as root since it came out, thanks to the `webroot` method, where certbot just writes the challenges to a specified path it has access to, and that's it.
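
      Something like this (a sketch; the paths are illustrative, and an unprivileged user just needs writable state directories):

        certbot certonly --webroot -w /var/www/example -d example.com \
            --config-dir ~/.config/letsencrypt --work-dir ~/.cache/letsencrypt --logs-dir ~/.cache/letsencrypt/logs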

    • 1718627440 3 hours ago

      I haven't experienced that, since I prefer acmetool.

charles_f 2 hours ago

I remember when you had to give Verisign a few hundred or a few thousand every year; some random dev would download the cert to their machine and circulate it by email at renewal time. None of the competitors were cheaper, either. Those days weren't better or more secure; less so, if anything. Ironically, the solution the author is pushing (basically self-signed certs) is much worse at preventing MITM attacks.

I somewhat agree with the precept: it's not great that the web is controlled by Google, beyond just TLS certs. Something that has changed since this was written is precisely that you now have alternatives like ZeroSSL.

Saying that Let's Encrypt doesn't bring any security is plain wrong, though. The OWASP top ten doesn't list certificate theft or chain-of-trust MITM attacks, but it does have a category for cryptographic failures. My hotel has full control of the wifi, but it hardly has an opportunity to MITM my chain of trust. Same goes for my ISP. When you have a cert corresponding to your DNS record, it at least shows that you have some control over the infra behind that record.

jjgreen 4 hours ago

Heh, that page "Verified by Let's Encrypt"

  • croes 4 hours ago

    > Update 2023-11-05 Yeah, I've got an LE cert now. And I don't want to talk about it.

    • _def 3 hours ago

      That quote is the only thing you have to read of that article besides the headline.

    • dijit 3 hours ago

      The ironic observation about the page using an LE cert is fantastic; browser mandates make the encryption discussion moot. If you don't use it, your argument literally won't load for a modern audience.

      It speaks to the problem of digital decay. We can still pull up a plain HTTP site from 1995, but a TLS site from five years ago is now often broken or flagged as "insecure" due to aggressive deprecation cycles. The internet is becoming less resilient.

      And this has real, painful operational consequences. For sysadmins, this is making iDRAC/iLO annoying again.

      (for those who don't know what iDRAC/iLO are: they're the out-of-band management controllers that let you access a server's console (KVM) even when the OS is toast. The shift from requiring crappy, insecure Java Web Start (JWS) to HTML5 was a massive win for security and usability - old school sysadmins might remember keeping some crappy insecure browser around (maybe on a bastion host) to interact with these things because they wouldn't load on modern browsers after 6mo)

      Now, the SSL/TLS push is undoing that. Since the firmware on these embedded controllers can't keep pace with Chrome's release schedule, the controllers' older, functional certificates are rejected. The practical outcome is that we are forced to maintain an old, insecure browser installation just to access critical server hardware again.

      We traded one form of operational insecurity (Java's runtime) for another (maintaining a stale browser) all because a universal security policy fails to account for specialised, slow-to-update infrastructure... I can already hear the thundering herd approaching me: "BUT YOU NEED FIRMWARE UPDATES" or "YOU NEED TO DEPRECATE YOUR FIRMWARES IF NOT SUPPORTED".. completely tone-deaf to the environments, objectives and realities where these things operate.

      • notatoad 3 hours ago

        >if you don't use it, your argument literally won't load for a modern audience

          this is just a flat-out lie. yes, modern browsers will still load websites over http. come on.

        • lanyard-textile 3 hours ago

          And your ISP will be happy to show pop-up advertisements all over your HTTP website.

          And you, the owner, will likely be blamed by the user.

        • dijit 3 hours ago

          Like all things, it's complicated.

          Direct pages will load with a "Not Secure" warning; includes on the site might not load without flipping chrome://settings/content/insecureContent.

          And of course: you won't manage to be visible to Google itself, as you'll be down-ranked for not having TLS.

          If you happen to have a .dev domain: you're on the HSTS Preload list, so your site literally won't load.

          • dragonwriter 3 hours ago

            > And of course: you won't manage to be visible to Google itself, as you'll be down-ranked for not having TLS.

            You’ll be visible to Google (otherwise there would be nothing to downrank), you will just be less visible on Google.

peacebeard 3 hours ago

SSL benefits a user entering a password on a public network.

A MITM attack against your renewal does not expose your private key. I don’t think that causes the harm the article suggests.

  • 1718627440 3 hours ago

    It does, however, allow interception of all future connections to your webserver until you recognize it and publish a revocation certificate.

    • Avamander 3 hours ago

      No. The private key does not leave the server; you can't use the certificate without it.

      • 1718627440 2 hours ago

        When you MITM a certificate request, the attacker can provide their own key.

        • Avamander 2 hours ago

          That's not what you described though, because then you wouldn't be able to revoke it once you notice it. You can't revoke a certificate without its private key. (Then it's only the CA who could if you convince them of the misissuance. Which probably means proving current access right now and asking the CA to revoke it.)

          In any case if someone can become the thing you're trying to validate, be it access to an IP address or some DNS zone, you're kinda out-of-luck anyways. Though WebPKI has CT, which will give you some insight into it, unlike everything else out there.

lanyard-textile 3 hours ago

Let’s Encrypt has always been a saving grace in my eyes: When it first entered the scene, it solved a problem we all loathed dealing with.

So I’ve always been fond of it and never really thought twice of it. While it’s rare for companies to support a shared resource together, this was a situation where it made sense.

But this is a good reminder to be wary of even the most benevolent looking tools and processes.

jesprenj 3 hours ago

I agree that HTTPS is not needed in most cases but ACME challenge to obtain a LE cert can be done securely:

* domain has DNSSEC
* domain has CAA records only allowing DNS challenge and disallowing insecure HTTP challenge

but if we rely on DNSSEC we can just use DANE/TLSA and don't need the mess of CA/PKI
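(That said, the CAA half is expressible today; RFC 8657 added a validationmethods parameter for ACME. A quick way to eyeball a domain's policy, sketched with dnspython; example.com is a placeholder:)

  import dns.resolver  # pip install dnspython

  # A record like the following (zone-file syntax) pins issuance to
  # Let's Encrypt via DNS-01 only, per RFC 8657:
  #
  #   example.com. IN CAA 0 issue "letsencrypt.org; validationmethods=dns-01"

  for rr in dns.resolver.resolve("example.com", "CAA"):
      print(rr.flags, rr.tag, rr.value)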

  • Avamander 3 hours ago

    > but if we rely on DNSSEC we can just use DANE/TLSA and don't need the mess of CA/PKI

    DNSSEC is PKI. We don't want to rely on it because it's significantly worse than WebPKI.

gmuslera 3 hours ago

https does 2 things: encrypt the communication (self-signed certificates are good enough for this), and verify that the site you are connecting to is what it seems to be, because a certification authority trusted by your browser signed the certificate that the site presents, and it should have validated somehow that the site belongs to its rightful owner.

The second part is the important one in this context, because there are ways to trick your DNS resolution or IP routing. The DNS resolution part is mitigated with DoH (which itself uses HTTPS with a certificate), but that doesn't cover everything.

It might not be so fundamental for just browsing some sites, but for the ones you send data to (not just credit card info) you may run into some risks.

  • BenjiWiebe 3 hours ago

    Self signed certificates really aren't good enough for encryption, unless you're doing TOFU before the MITM happens.

    Otherwise the evil MITM can decrypt the traffic, modify/inspect it, and re-encrypt it with their own self-signed certificate, and you're none the wiser.
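    (TOFU meaning what SSH does: remember the key on first contact and complain loudly if it later changes. A rough sketch of such a pin check; the pin file and example.com are made up, and note that routine certificate rotation will trip it too:)

      import hashlib, json, ssl
      from pathlib import Path

      PIN_FILE = Path("known_certs.json")  # hypothetical local pin store

      def fingerprint(host, port=443):
          # Fetch the leaf certificate WITHOUT validating it (TOFU, not PKI).
          pem = ssl.get_server_certificate((host, port))
          return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

      def check(host):
          pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
          fp = fingerprint(host)
          if host not in pins:
              pins[host] = fp  # first use: trust and remember
              PIN_FILE.write_text(json.dumps(pins))
              return "pinned on first use"
          return "ok" if pins[host] == fp else "changed: MITM or rotated cert"

      print(check("example.com"))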

    • gmuslera 3 hours ago

      The encryption is ok, but you are not talking with the right party.

  • 1718627440 3 hours ago

    Maybe it would have been better if we had encrypted only form data, and only signed websites instead of encrypting them, like package managers do. This would also allow caching in the network.
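    (Roughly the apt model: a detached signature over the content. A sketch with an Ed25519 key from the Python `cryptography` package; any cache could serve the bytes, only the publisher can sign them:)

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      publisher_key = Ed25519PrivateKey.generate()  # stays with the publisher
      page = b"<html>my blog post</html>"
      sig = publisher_key.sign(page)                # shipped alongside the page

      # Client side: verify against the publisher's known public key.
      try:
          publisher_key.public_key().verify(sig, page)
          print("content intact")
      except InvalidSignature:
          print("page was modified in transit")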

    • Avamander 2 hours ago

      We have that with Signed Exchanges. Rarely used in practice though.

hk1337 3 hours ago

I don't think anybody should go as all-in as the OP in the article, but he might have some good points. Why does everything have to be https? Like, if I am writing a basic blog, with no forms, no CC payments, that doesn't capture anything sensitive, why do I need an SSL certificate to appear as a valid site?

  • the_snooze 3 hours ago

    It's about integrity, not confidentiality. Without HTTPS, anyone between you and the server can mess with the content you're receiving. They could inject ads, turn the website upside-down [1], or replace downloads with malicious links.

    [1] https://pete.ex-parrot.com/upside-down-ternet.html

  • perching_aix 3 hours ago

    Because I want to read your words, not whatever some other person or machine standing between your machine and mine decides to serve.

    It's one of the key points the author takes issue with: that PKI is not MITM-resistant enough, in the ways they dream of; that they'd need to monitor the CT logs, and that that doesn't amount to much.

    • hk1337 2 hours ago

      That's a niche problem to have though. Most random bloggers are not going to be targets of MITM that rewrites their posts. If that becomes a problem, then sure, add HTTPS.

      • perching_aix 2 hours ago

        You can read about the history of the non-nicheness of this problem in other comments. It also doesn't need targeting.

  • jeroenhd 3 hours ago

    Because your website will sell me Viagra and tell me to buy cryptocurrency when I visit it on some public wifis. Your words are also obscured by ISPs telling me that my router needs replacing and that I've nearly used up my data cap.

    You don't need TLS for your blog, though. Browsers will still connect to port 80 if you don't enable HTTPS.

thenthenthen 4 hours ago

  • riffic 3 hours ago

    static hosting is the way to go in 2025 if anyone is looking to avoid sudden traffic spikes taking down a questionably configured WordPress.

    • perching_aix 3 hours ago

      If they have a funny bone available, they should host it using Cloudflare Pages, just to really dot the i and cross the t on the hilarity of their intentional-or-not hypocrisy of using TLS, and specifically Let's Encrypt as their CA.

p0w3n3d 3 hours ago

  Issued to: michael.orlitzky.com
  Issued by:
  - Common Name (CN) E7
  - Organisation (O) Let's Encrypt

By the way, what is the alternative to Let's Encrypt nowadays for a humble blog creator?

  • sigio 3 hours ago

    There is also ZeroSSL, Buypass until recently (though they stopped this month), SSL.com, Google, Actalis, and others.

DarkmSparks 4 hours ago

Tbh completely agree.

But also, there is no choice now. The best we can do is encourage people to use web browsers that let people visit http sites, and afaik those don't exist anymore.

thayne 2 hours ago

There is a lot of misinformation in here.

For one thing, the verification doesn't just make a single HTTP request; it makes several, from many different nodes. There is a risk that your hosting provider MitMs the verification, but you need some level of trust in your hosting provider anyway, and in some cases that is actually a feature, as it allows your hosting provider to manage the certificates for you.

And that is one way that traditional CAs verified domain ownership for DV certs.

Is it perfect? Absolutely not. Is it better than nothing? Absolutely.

I do wish that DANE, or something similar had caught on.

Also, if you trust Let's Encrypt to be your CA, it seems very strange to consider certificates provided by them "untrusted". Also, certbot and many of the other options don't necessarily need to be run as root. And many webservers support getting ACME certs themselves. Also, there is nothing stopping you from verifying the certs are valid before using them.

Also, with a short expiration time, automation is basically required, which means that you set up the automation, and some monitoring that the renewal happens correctly, and then just let it go. And your renewal process is continually tested. With manual renewal, you have to remember to renew it, and remember how to renew it, a long time after the last time you did it. It is much more likely that you forget.
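(For the monitoring half, even a dumb expiry probe run from cron covers a lot. A sketch; example.com and the 14-day threshold are placeholders of mine, not anything certbot ships:)

  import datetime, socket, ssl, sys

  def days_left(host, port=443):
      # Do a normal validated handshake and read the leaf cert's notAfter.
      ctx = ssl.create_default_context()
      with socket.create_connection((host, port), timeout=10) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as tls:
              not_after = tls.getpeercert()["notAfter"]
      expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(not_after))
      return (expires - datetime.datetime.utcnow()).days

  remaining = days_left("example.com")
  print(f"certificate expires in {remaining} days")
  sys.exit(0 if remaining > 14 else 1)  # non-zero exit -> page somebody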

AntronX 3 hours ago

> Update 2023-11-05 Yeah, I've got an LE cert now. And I don't want to talk about it.

Please do tell. I'm curious what forced him to join The Borg.

ozgrakkurt 3 hours ago

Super informative. It always seemed like http is fine unless you are doing something security-critical, but even blogs use https.

  • codethief 2 hours ago

    > always seemed like http is fine

    But it's really not, as countless comments here in this thread have correctly pointed out.

  • gjsman-1000 3 hours ago

    ...because most people re-use the same password across all websites, despite decades of begging them not to. In which case, do you want a blog exposing their password in plaintext?

    Nobody does, so there's very little to lose by also encrypting.

    • topaz0 3 hours ago

      why would you need a password to view a blog

      • gjsman-1000 3 hours ago

        To leave a comment with a consistent identity

      • Avamander 3 hours ago

        To log in and create a new blogpost?

jillesvangurp 3 hours ago

Messing around with certificates is indeed a bit of dreary busywork that just is the way it is because nobody seems to be around anymore to fix things properly in terms of standards.

I agree with much of this article. IMHO certificates signed by a CA via certbot are only marginally more secure than a self-signed certificate. Basically you can prove your domain is yours with a certificate ... by proving your domain is yours to Let's Encrypt via a DNS check. That sounds a bit recursive. At this point there is not a lot being checked or verified by signing authorities.

IMHO the current focus on shortening certificate validity periods just highlights how inadequate certificates are and helps exactly no one stay safe. This is 100% certified a website. With a domain. Owned by somebody random on the internet. That's all the certificate guarantees.

Any scammer learned years ago how to get certificates for their scam domains. Short of blocking those domains faster than they pop up (good luck), there's no way to derive any more meaning from those certificates than "this is a website".

It would help for there to be more authorities. And also to have longer expiry periods. I don't need this busywork in my life whether it's worrying about automating, monitoring, etc. or just about paying off some gate keeper for some meaningless check + bureaucracy. Longer expiry enables more strict/expensive checks to happen and browsers should be checking specific certificates against blacklists. Rotating certificates frequently makes both those things less practical and devalues the whole notion of a certificate. Any scammer will just use the same services to get the same kind of certificates.

Also, the reason we can't rely on the DNS yet is that there is still a lot of legacy software relying on insecure ways to talk to the DNS. And that being a bit of infrastructure that predates the whole notion of having certificates also means that the biggest risk is a lot of legacy insecure DNS infrastructure that is easy to spoof and can't be trusted. Anyone see a problem with that and with how certificates are issued? Otherwise, we could just stick our public keys in there and self-sign our own certificates. But secure DNS is a prerequisite.

drob518 3 hours ago

Site is getting crushed, so I can’t access it, but upvoted for the sheer contrarian headline.

polaris421 3 hours ago

Funny how we're all securing our websites with time bombs that need defusing every 90 days.

  • sschueller 3 hours ago

    It's going to be less very soon; they are talking about 14 days...

tasuki 3 hours ago

> So we're spending $3,600,000 every year on certificates that aren't any better than self-signed

Gimme self-signed certificates please. With the ability to verify that the certificate was signed by whoever controls the domain I'm accessing. Abolish all certificate authorities. That's all I ask.
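(What you're describing is more or less DANE: the domain owner publishes the certificate, or its hash, in DNS as a TLSA record (RFC 6698). A peek at how you'd read one, sketched with dnspython; example.com is a placeholder and most domains won't have the record:)

  import dns.resolver  # pip install dnspython

  # TLSA records live under a _port._proto prefix of the domain name.
  for rr in dns.resolver.resolve("_443._tcp.example.com", "TLSA"):
      # usage/selector/mtype say how to match; cert holds the pinned data.
      print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())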

eimrine 3 hours ago

Now I know I want my weblog to be HTTP-only; let's have the balls.

jchw 3 hours ago

ACME renewal feels less like a time bomb than traditional renewal, even though it happens more often. Showing manually-renewed certs expiring as evidence for why ACME is a bad idea is literally completely backwards!

> My medical opinion: if it hurts, maybe you should stop doing it.

Funny enough, that's the exact opposite of the common wisdom for deployment:

> If it’s painful, do it often.

The idea is that if you were to wait months between deployments and do enormous deployments, there is a very good chance that you will have problems every time. First of all, if it's infrequent, you can tolerate things like downtime windows for deployment, which are not ideal. Second of all, it batches tons of changes at once, which increases the chances you'll need to roll it all back. Thirdly, it makes it harder to even figure out what went wrong, since the problem-causing change could've gone in months ago.

By having ACME renewal happen very often, it should become apparent very quickly when it's not working, much closer to when you made the change that broke it. I believe this is an improvement, full stop. If you want it to work even better, add alerting when the certificate gets too old and monitoring/observability on the renewal process. That gives you multiple layers of assurance that you probably wanted to have anyway.

Finally, it seems like the importance of encrypting all Internet traffic is just missing from the calculus presented here; that's just silly. I'm not going to go into it. It isn't imperative that literally every website is always encrypted all of the time, but for a multitude of reasons it is ideal if 99% of them are 99% of the time. Let's Encrypt might allow for a MitM if you can pass HTTP-01 or DNS-01 momentarily, but you know what's even easier? Just being literally anywhere in the path of someone's HTTP connection and being able to perform a MitM having compromised nothing about the CA system or the website itself. Even if we allow for some sites to sit back on HTTP, it matters that 99% of the Internet is on HTTPS, because it makes MitM attacks like this highly unattractive. This is good when you're on untrusted or potentially adversarial networks... which is increasingly many of them.

The other thing missing here is just how clever the CA system has gotten. Mozilla and Google have together made this system work surprisingly well despite its flaws. The CT system makes issuing bad certificates very unattractive, as Google and Mozilla can fiercely enforce the rules, and CT makes it nearly impossible to hide when you go against them. With CT, CAA records, and other tools available, you can at least know with damn near certainty if someone did exploit the CA system or your infrastructure and pull certificates for your properties. With these improvements, relying on the CA system doesn't feel nearly as ugly.
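(Concretely: the CT logs are public, so "has anyone been issued a cert for my name?" is a question anyone can ask. A rough sketch against crt.sh's JSON endpoint, an unofficial but widely used interface; example.com is a placeholder:)

  import json, urllib.request

  url = "https://crt.sh/?q=example.com&output=json"
  with urllib.request.urlopen(url, timeout=30) as resp:
      entries = json.load(resp)

  # Every certificate the logs have seen for the name.
  for e in entries[:10]:
      print(e["not_before"], e["issuer_name"], e["common_name"])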

And also, you don't necessarily need to use LE. I think LE is the most competent of the ACME providers, but many paid services provide ACME support, and ZeroSSL provides another free ACME service.

Shorter-lived certificates also have other benefits that are not mentioned. For example, if certificates can last 5 entire years, a revocation also has to be honored for that long. This makes CRLs pretty much untenable and forces something like OCSP, which is bad for privacy. Shorter certificate lifespans were a big part of how Firefox was able to leave OCSP behind in favor of a more advanced version of the CRL scheme, a solid win for both privacy and TLS latency.

All in all, the juice is clearly worth the squeeze.

riffic 3 hours ago

yeah I'm taking operational sysadmin advice from someone whose site can't stand 35 minutes of HN traffic hugging.

  • 1718627440 3 hours ago

    Working fine here. Just a bit slow.

eduction 4 hours ago

Chesterton's Fence - why did everyone start encrypting their websites?

His critiques of why LE is flawed security wise are spot on and I suspect something like SSH keys as he suggests would be pretty much as good.

But there's a reason we're encrypting everything, and the time when we started encrypting offers a clue as to why. Mass surveillance threat actors are not going to go to the trouble and visibility of MITMing every cert connection, but they will (and in the case of NSA did) happily go to the trouble of hoovering up network traffic en masse and watching how people surf. HTTPS provides some protection there because it at least hides the paths to the specific pages you are reading as you surf online, including things like search engine query terms.

The idea that $3.6m is a lot of money to encrypt a huge chunk of web traffic, or that Google is eagerly guarding the money it makes (?) off web certs, which must be a tiny fraction of its actual income, is a clue that this is maybe not a greedy conspiracy.

  • SoftTalker 3 hours ago

    > why did everyone start encrypting their websites

    Because Google forced us to, by throwing up scary warnings if we didn't do it.

    Google doesn't care about $3.6mm. They do care about the additional control they have by this scheme.

    > [HTTPS] at least hides the paths to the specific pages you are reading as you surf online, including things like search engine query terms.

    This assumes there isn't a secret firehose feed from Google to the NSA, which I don't think is a safe assumption.

  • grepps09 3 hours ago

    Very much agree on the last point. Controlling the de facto CA for all non-corporate web sites still gives Google a lot of control over who gets to be visible on the Internet, and that's where the value in LE is. The direct income from SSL certs is completely insignificant.

noirscape 3 hours ago

In general a lot of the modern HTTPS approach "feels" broken. Looking at it purely from a usability perspective, HTTPS combines two things when it really should only be doing one of them:

* Encryption is the first thing HTTPS does, and the one I'd argue actually matters the most. It prevents your ISP or other middle parties from snooping on or modifying what packets end up being shown to the end user. This is something that fundamentally doesn't require a CA to work. A self-signed certificate is just as secure as one issued by a certificate authority on the matter of encryption; you just run an openssl command and you have a certificate, no CA needed - see the sketch after this list. (A CA could still be useful for e.g. trusting updated certificates in the same chain, but there's little reason to demand this be done through a third party from a security perspective.)

* The second one is identification. Basically, the certificate is meant to give the idea that the site you're visiting is trusted and verified to belong to somebody. This is what CAs provided... except in practice, CA identification guarantees basically don't exist anymore. Finding the entity a certificate is issued to is hard to do in modern browsers, ever since a security researcher proved that it's relatively trivial to do a name collision attack, so browser developers (aka Chrome and Mozilla) hide it behind click-through windows and don't show it anymore by default. Since browsers mandate HTTPS for as many APIs as they can get away with, everyone including garden-variety scammers just gets an HTTPS certificate, which utterly defeats the entire purpose. CAs are essentially sitting in the middle, and unless a third party suddenly demands you get an OV/EV certificate, the argument not to just use the CA that gives literally anyone who asks a certificate after the barest minimum effort to prove they own a domain is pretty questionable. Your bank might use an OV/EV certificate, but your average person seeing the bank website will not visually see any difference between that and a scam site. Both got perfectly legitimate certificates; one just got them from LetsEncrypt instead, where they had to give no details on the certificate. Only nerds look at the difference when visiting sites, and more people than nerds use banks.
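(The "just run an openssl command" point from the first bullet, done programmatically: a minimal self-signed certificate with the Python `cryptography` package. example.com, the key type, and the one-year lifetime are arbitrary choices for illustration:)

  import datetime
  from cryptography import x509
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import ec
  from cryptography.x509.oid import NameOID

  key = ec.generate_private_key(ec.SECP256R1())
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])
  now = datetime.datetime.now(datetime.timezone.utc)

  cert = (
      x509.CertificateBuilder()
      .subject_name(name)
      .issuer_name(name)  # issuer == subject: self-signed, no CA involved
      .public_key(key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(now)
      .not_valid_after(now + datetime.timedelta(days=365))
      .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
      .sign(key, hashes.SHA256())
  )

  with open("key.pem", "wb") as f:
      f.write(key.private_bytes(serialization.Encoding.PEM,
                                serialization.PrivateFormat.PKCS8,
                                serialization.NoEncryption()))
  with open("cert.pem", "wb") as f:
      f.write(cert.public_bytes(serialization.Encoding.PEM))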

Since identification is utterly dead, the entire CA structure feels like it gives little security to a modern browser as opposed to just going with a TOFU scheme like we do for SSH. Functionally, a CA run by a sysadmin has the exact same guarantee as a CA run by LetsEncrypt on the open internet for encryption purposes, except LE gets to be in browser and OS root programs. They might as well have the same security standards once you bring in CAA records.

Final note: there's something backwards about how a plain HTTP connection just gets a small label in the browser to complain about it, while an HTTPS certificate that's a single minute out of date will lead to giant full-screen red pages that you have to click through. For consistency, the HTTP page should be getting the same scare pages from an encryption perspective, but it doesn't.

moralestapia 4 hours ago

This reads like satire but ... it isn't?

  • ocdtrekkie 4 hours ago

    The entire behavior of the PKI regime seems like satire, but here we are. A massive amount of fragility introduced to the Internet to basically protect a few edge cases while not addressing any real practical attacks.

    It's been six years, this author is still right, and now the idiots at the CA/B have decided to move the bomb to a 47 day timer for the whole Internet.

    • akerl_ 3 hours ago

      MITM of unencrypted HTTP was so common that it was outright a business model for many ISPs.

      Anybody could look up a guide online on how to monitor who at their Starbucks was logging into Facebook or whatever. We were having to train a generation of humans to be afraid of public wifi.

      • 1718627440 3 hours ago

        > it was outright a business model for many ISPs.

        I'm not sure I would object to that if it were used sparingly and you could opt out.

        • akerl_ 3 hours ago

          It was not and you could not.

    • PaulHoule 3 hours ago

      You could tell many different stories for how we got from a world where people made their own web sites to one where people just post on Facebook, but the transition to https is part of that story.

    • colonial 3 hours ago

      My dude, before HTTPS, anyone could go to a Starbucks and skim every customer's Facebook session with a free Firefox extension. That's not an "edge case."

      • Avamander 3 hours ago

        I even remember running some prank app on my Android that MITM-ed everyone's connections and started slowly removing letters from websites or replaced all the images with cat pictures. It worked super well. That could've been more than 10-15 years ago.

        Things have improved significantly with HTTPS adoption.

VikRubenfeld 4 hours ago

This just can't be true. There's no way no one noticed this before. https can't be this dumb.

  • mikeocool 3 hours ago

    I think their critique is that to verify domain ownership, Let's Encrypt makes a request to your website over HTTP to check the challenge -- which is true (because if you don't yet have an SSL certificate, they can't make a request over HTTPS).

    I think they are implying that if someone can man-in-the-middle your website, then they can also man-in-the-middle this request and issue a certificate for your domain. However, the threat model of a man in the middle between a user and your web server is very different from a man in the middle between Let's Encrypt and your web server.

    Before the widespread use of HTTPS, it was trivial to connect to a coffeeshop's wifi network and sniff everyone else's traffic, and ISPs would man-in-the-middle you to inject their own ads into websites you were looking at.

    On the other hand, to man-in-the-middle Let's Encrypt -> your web server, you likely need to be a state-level actor and/or be (or have hacked) a major telecom (assuming your web server is running in a reputable data center). Folks like that can almost certainly already issue a certificate for your domain without running a man in the middle on Let's Encrypt.

  • nhumrich 3 hours ago

    It's not. Certs are designed to protect the average user from MitM, not to protect corporations from it.

    • 1718627440 3 hours ago

      They do protect corporations, because they can just have their certificate actually validated out of band.