Aurornis 11 hours ago

> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.

  • karaterobot 10 hours ago

    > In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources

    He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.

  • torginus 2 hours ago

    And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways, so they might as well.

  • low_tech_love 6 hours ago

    The pull is too strong, especially when you factor in the fact that (a) the competition is doing it and (b) the recipients of such outcomes (reports, etc) are not strict enough to care whether AI was used or not. In this situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.

  • jart 10 hours ago

    Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.

    It's always more comfortable for people to blame the thing rather than the person.

    • InitialLastName 9 hours ago

      More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.

      • jart 9 hours ago

        Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.

        • ZYbCRq22HbJ2y7 8 hours ago

          Ah yeah, fentanyl adulterators, what great benefactors of society.

          Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.

          https://en.wikipedia.org/wiki/Common_good

          • jart 7 hours ago

            No one desires adulterated fentanyl.

            • harperlee 32 minutes ago

              Fentanyl adulterators, market creators and resellers certainly do, for higher margin selling and/or increased volume.

            • ZYbCRq22HbJ2y7 7 hours ago

              No one has desire for adulteration, but they have a desire for an opiate high, and are willing to accept adulteration as a side effect.

              You can look to the prohibition period for historical analogies with alcohol, plenty of enterprising humans there.

    • PeeMcGee 6 hours ago

      I like the facebook comparison, but the difference is you don't have to use facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.

      • friendzis 3 hours ago

        If you are in the news business you basically have to.

Animats 5 hours ago

The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.

Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.

The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.

DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.

[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...

[2] https://apnews.com/article/us-intelligence-services-ai-model...

  • D_Alex 4 hours ago

    The really big problem in open source intelligence has been for some time that data to support just about anything can be found. OSINT investigations start with a premise, look for data that supports the premise and rarely look for data that contradicts it.

    Sometimes this is just sloppy methodology. Other times it is intentional.

  • B1FF_PSUVM an hour ago

    > listen to Radio Albania just in case somebody said something important

    ... or just to know what they seem to be thinking, which is also important.

jruohonen 18 hours ago

"""

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

This isn’t hypothetical. This is happening now, in real-world workflows.

"""

Amen, and OSINT is hardly unique in this respect.

And implicitly related, philosophically:

https://news.ycombinator.com/item?id=43561654

  • johnnyanmac 2 hours ago

    >This isn’t hypothetical. This is happening now, in real-world workflows.

    Yes, that's a part of why AI has its bad rep. It has uses in streamlining workflows, but people are treating it like an oracle, when it very, very, very clearly is not.

    Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.

  • cmiles74 14 hours ago

    Anyone using these tools would do well to take this article to heart.

    • mr_toad 2 hours ago

      I think there’s a lot of people who use these tools because they don’t like to read.

  • gneuron 11 hours ago

    Reads like it was written by AI.

0hijinks 9 hours ago

It sure seems like the use of GenAI in these scenarios is a detriment rather than a useful tool if, in the end, the operator must interrogate it to a fine enough level of detail that she is satisfied. In the author's Scenario 1:

> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”

> It spits out a convincing response: “Paris, near Place de la République.” ...

> But a trained eye would notice the signage is Belgian. The license plates are off.

> The architecture doesn’t match. You trusted the AI and missed the location by a country.

Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."

How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French but some words are Dutch. Okay, so it could be somewhere else in Paris. Let's look into the license plate patterns...

At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.

EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...

  • MadnessASAP 8 hours ago

    The advantage of the AI in this scenario is the starting point. You now can start cross referencing signage, language, license plates, landmarks. To verify or disprove the conclusion.

    A further extension to the AI "conversation" might be: "What other locations are similar to this?" And "Why isn't it those locations?" Which you can then cross reference again.

    Using AI as an entry point into massive datasets (like millions of photos from around the world) is actually useful. Correlation is what AI is good at, though not infallible.

    Of course false correlations exist, and correlation is not causation, but if you can narrow your search space from the entire world down to the Eiffel Tower in Paris or in Vegas, you're ahead of the game.

palmotea 14 hours ago

One way to achieve superhuman intelligence in AI is to make humans dumber.

  • ryao 9 hours ago

    This reminds me of the guy who said he wanted computers to be as reliable as TVs. Then smart TVs were made and TV quality dropped to satisfy his goal.

    • SoftTalker 9 hours ago

      The TVs prior to the 1970s/solid state era were not very reliable. They needed repair often enough that "TV repairman" was a viable occupation. I remember having to turn on the TV a half hour before my dad got home from work so it would be "warmed up" so he could watch the evening news. We're still at that stage of AI.

      • ryao 7 hours ago

        The guy started saying it in the 80s or 90s, when that issue had been fixed. He's the Minix guy, if I recall correctly.

  • xrd 9 hours ago

    If you came up with that on your own then I'm very impressed. That's very good. If you copied it, I'm still impressed and grateful you passed it on.

  • boringg 12 hours ago

    The cultural revolution approach to AI.

  • imoverclocked 14 hours ago

    That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received.

  • 6510 11 hours ago

    I thought: A group working together poorly isn't smarter than the smartest person in that group.

    But it's worse: a group working together poorly isn't smarter than the fastest participant in the group.

    • trentlott 10 hours ago

      That's a fascinatingly obvious idea and I'd like to see data that supports it. I assume there must be some.

    • jimmygrapes 11 hours ago

      anybody who's ever tried to play bar trivia with a team should recognize this

      • rightbyte 3 hours ago

        What do you mean? You can protest against bad but fast answers and check another box with the pen.

  • yieldcrv 12 hours ago

    Right, superhuman would be relative to humans

    but intelligence as a whole is based on a human ego of being intellectually superior

    • caseyy 12 hours ago

      That’s an interesting point. If we created super-intelligence but it wasn’t anthropomorphic, we might just not consider it super-intelligent as a sort of ego defence mechanism.

      Much good (and bad) sci-fi was written about this. In it, usually this leads to some massive conflict that forces humans to admit machines as equals or superiors.

      If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality.

      • yieldcrv 7 hours ago

        Some things I think about are how different the goals could be

        For example, human and biological goals are built around self-preservation and propagation. This in turn is about appropriating resources to facilitate them, and the systems for doing so become wealth accumulation. Species that don't do this don't continue existing.

        A different branch of the evolution of intelligence may take a different approach, one that allows its effects to persist anyway.

        • caseyy 3 hours ago

          This reminds me of the "universal building blocks of life" or the "standard model of biochemistry" I learned at school in the 90s. It held that all life requires water, carbon-based molecules, sunlight, and CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur).

          Since then, it's become clear that much life in the deep sea is anaerobic, doesn't use phosphorus, and may thrive without sunlight.

          Sometimes anthropocentrism blinds us. It's a phenomenon that's quite interesting.

pcj-github 10 hours ago

This resonates with me. I feel like AI is making me learn slower.

For example, I have been learning Rust for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve working competence with it, because I always seem reliant on the LLM to do the thinking. I think I will have to turn off all the AI and struggle, struggle, struggle until I don't, just like in the old days.

  • imadethis 9 hours ago

    I've found the same effect when I ask the LLM to do the thinking for me. If I say "rewrite this function to use a list comprehension", I don't retain anything. It's akin to looking at Stack Overflow and copying the first result, or going through a tutorial that tells you what to write without ever explaining it.

    The real power I've found is using it as a tutor for my specific situation. "How do list comprehensions work in Python?" "When would I use a list comprehension?" "What are the performance implications?" Being able to see the answers to these with reference to the code on my screen and in my brain is incredibly useful. It's far easier to relate to the business logic I care about than class Foo and method Bar.
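    As a generic illustration of the kind of rewrite being discussed (my example, not code from the thread), the loop and the comprehension it collapses into look like this:

```python
# A plain loop and the equivalent list comprehension: the kind of
# rewrite that teaches nothing if you only ever ask the LLM to do it.
def even_squares_loop(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:            # filter
            result.append(n * n)  # transform
    return result

def even_squares_comprehension(numbers):
    # Same filter-then-transform logic in a single expression.
    return [n * n for n in numbers if n % 2 == 0]
```

    Seeing the two side by side, with an explanation of why they are equivalent, is the tutoring exchange that actually sticks.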

    Regarding retention, LLMs still don't hold a candle to properly studying a problem with (well-written) documentation or educational materials. The responsiveness, however, makes them a close second for overall utility.

    ETA: This is regarding coding problems specifically. I've found LLMs fall apart pretty fast on other fields. I was poking at some astrophysics stuff and the answers were nonsensical from the jump.

  • jart 10 hours ago

    Try using the LLM as a learning tool, rather than asking it to do your job.

    I don't really like the way LLMs code. I like coding. So I mostly do that myself.

    However, I find it enormously useful to be able to ask an LLM questions. You know the sort of question you need to ask to build an intuition for something? Not a clear problem-and-answer question you could just Google, but the sort of thing where you'd traditionally have to go hunt down a human being and ask them? LLMs are great at that. Like if I want to ask what the point of something is, an LLM can give me a much better idea than reading its Wikipedia page.

    This sort of personalized learning experience that LLMs offer, your own private tutor (rather than some junior developer you're managing) is why all the schools that sit kids down with an LLM for two hours a day are crushing it on test scores.

    It makes sense if you think about it. LLMs are superhuman geniuses in the sense of knowing everything. So use them for their knowledge. But knowing everything is distracting for them and, for performance reasons, LLMs tend to do much less thinking than you do. So any work where effort and focus is what counts the most, you're better off doing that yourself, for now.

  • eschaton 10 hours ago

    Why are you using an LLM at all when it’ll both hamper your learning and be wrong?

    • dwaltrip 9 hours ago

      > While AI has been very helpful in lowering the bar to /begin/ learning Rust

  • neevans 5 hours ago

    Nah, you are getting it wrong. The issue here is YOU NO LONGER NEED TO LEARN RUST. That's why you are learning it slowly.

  • whatnow37373 7 hours ago

    The world will slowly, slowly converge on this, but not before many years of hyping and preaching about how this shit is the best thing since sliced bread, with it shoved into our faces all day long. In the meantime, I suggest we be mindful of our AI usage and keep our minds sharp. We might be the only ones left after a decade or two of this.

treyfitty 14 hours ago

Well, if I want to first understand the basics, such as “what do the letters OSINT mean,” I’d think the homepage (https://osintframework.com/) would tell me. But alas, it does not, and a simple chatgpt query would have told me the answer without the wasted effort.

  • OgsyedIE 14 hours ago

    Similar criticisms, that outsiders need to do their own research to acquire a foundational-level understanding before they start on the topic, can be made about other popular topics on HN that frequently use abbreviations, such as TLS, BSDs, URL and MCP, but somehow those get a pass.

    Is it unfair to make such demands for the inclusion of 101-level stuff in non-programming content, or is it unfair to give IT topics a pass? Which approach fosters a community of winners and which one does the opposite? I'm confident that you can work it out.

    • Aeolun 11 hours ago

      I think if I can expect my mom to know what it is, I shouldn’t have to define it in articles any more.

      So TLS and URL get a pass, BSD’s and MCP need to be defined at least once.

      • ChadNauseam 11 hours ago

        Your mom knows what TLS is? I'm not even sure that more than 75% of programmers do.

      • jonjojojon 11 hours ago

        Does your mom really know what TLS means? I would guess that even "tech savvy" members of the general public don't.

  • caseyy 11 hours ago

    OSINT = open source intelligence. It's the whole of the openly accessible data fragments about a person or item of interest, and the practice of using them for intelligence-gathering objectives.

    For example, suppose a person shares a photo online, and your intelligence objective is to find where they are. In that case, you might use GPS coordinates in the photo metadata or a famous landmark visible in the image to achieve your goal.

    This is just for others who are curious.
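    To make the photo example concrete: EXIF stores GPS coordinates as degree/minute/second rationals plus a hemisphere reference, and converting them to decimal degrees is simple arithmetic. A minimal sketch with illustrative (made-up) coordinates:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style DMS GPS data to signed decimal degrees.

    ref is the hemisphere reference tag: "N"/"S" for latitude,
    "E"/"W" for longitude; S and W hemispheres are negative.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Illustrative values only:
lat = dms_to_decimal(48, 51, 52.97, "N")  # ~48.8647
lon = dms_to_decimal(2, 21, 48.04, "E")   # ~2.3633
```

    Libraries such as Pillow or exifread expose these raw rationals; the conversion and the cross-checking are on you.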

  • walterbell 14 hours ago
    • lmm 11 hours ago

      > Offline version: https://www.kiwix.org

      That doesn't actually work though. Try to set it up and it just fails to download.

      • walterbell 11 hours ago

        On which platform? It's a mature project that has been working for years on desktops and phones, with content coverage that has expanded beyond wikipedia, e.g. stackoverflow archives. Downloadable from the nearest app store.

  • dullcrisp 13 hours ago

    Ironically, my local barber shop also wouldn't explain to me what OSINT stands for.

  • Daub 2 hours ago

    There is a lot to be said for the academic tradition of only using an acronym/abbreviation after you have first used the complete term.

  • hmcq6 13 hours ago

    The OSINT framework isn’t meant to be an intro to OSINT. This is like getting mad that https://planningpokeronline.com/ doesn’t explain what Kanban is.

    If anything, you've just pointed out how over-reliance on AI is weakening your ability to search for relevant information.

  • jrflowers 9 hours ago

    Volunteering “I give up if the information I want isn’t on the first page of the first website that I think of” in a thread about AI tools eroding critical thinking isn’t the indictment of the site you linked to that you think it is.

    There is a whole training section right there; it's like you just didn't feel like clicking on it.

ridgeguy 12 hours ago

I think this post isn't limited to OSINT. It's widely applicable, probably anywhere AI is being adopted as a new set of tools.

  • ttyprintk 9 hours ago

    The final essay for my OSINT cert was to pick a side: critical thinking can/cannot be taught.

sepositus 11 hours ago

> Participants weren’t lazy. They were experienced professionals. But when the tool responded quickly, confidently, and clearly they stopped doing the hard part.

This seems contradictory to me. I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature. If they didn't research the tool and its limitations, that's lazy. At some point, they stopped believing in this limitation and offloaded more of their thinking to it. Why did they stop? I can't think of a single reason other than being lazy. I don't accept the premise that it's because the tool responded quickly, confidently, and clearly. It did that the first 100 times they used it when they were probably still skeptical.

Am I missing something?

  • NegativeK 11 hours ago

    The idea that everyone is either fully lazy or not lazy at all is a bit reductionist. People change their behavior with the right (or wrong) stimulus.

    Also, I won't remotely claim that it's the case here, but external pressures regularly push people into doing the wrong thing. It doesn't mean anyone is blameless, but ignoring those pressures, or the right (or wrong) stimuli, makes it a lot harder to actually deal with situations like this.

    • sepositus 10 hours ago

      > The idea that everyone is either full lazy or not lazy is a bit reductionist.

      Fair point. My intention isn't to be absolute, though. Even in a relative sense, I can't imagine a scenario where some level of laziness didn't contribute to the problem, even in the presence of external factors.

      It seems like the author was eliminating laziness with their statement and instead putting the primary force on the LLM being "confident." This is what I'm pushing back against.

  • lambda 11 hours ago

    > I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature.

    Most people don't critically evaluate LLMs for what they are; they buy into the hype that they're a super-intelligence.

    • sepositus 11 hours ago

      Yeah, which I consider a form of intellectual laziness. Another reason to doubt that these professionals "were not being lazy."

  • ip26 8 hours ago

    It could have performed accurately in their past usage, building trust. Sometimes it will also get something right that is downright shocking, far beyond what you hoped for.

  • esafak 10 hours ago

    It's deceptively easy to trust the AI when it gives you mostly plausible answers.

BrenBarn 9 hours ago

It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI."

It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.

To me it's like someone listing off deaths from fentanyl, how it's destroyed families, ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the ways that people use and are drawn into AI usage has all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs but "distribute them for free to everyone on the internet" is not among them.

  • ketzo 9 hours ago

    It’s already becoming politicized, in the lowercase-p sense of the word. One is assumed to be either pro- or anti-AI, and so you gotta do your best to signal to the reader where you lie.

    • ZYbCRq22HbJ2y7 8 hours ago

      > so you gotta do your best to signal to the reader where you lie

      Or what?

      • brain5ide 8 hours ago

        Or the reader will put you into a category yourself and won't be willing to look at the essence of the argument.

        I'd say the better word for that is polarising rather than political, but they're synonyms these days.

  • overgard 8 hours ago

    Well I mean, nitpick, but Fentanyl is a useful medication in the right context. It's not inherently evil.

    I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. I'm deeply concerned that our technocrats are running full speed at AGI with like zero plan for what happens if it "disrupts" 50% of jobs in a shockingly short period of time, or for worse outcomes. (There's some evidence the new tariff policies were generated with LLMs, so it's probably already making policy. But it could be worse. What happens when bad actors start using these things to intentionally gaslight the population?)

    But I actually think AI (not AGI) as an assistant can be helpful.

    • Terr_ 8 hours ago

      > I think my biggest concern with AI is its biggest proponents have the least wisdom imaginable. [...] (not AGI)

      Speaking of Wisdom and a different "AGI", I think there's an old Dungeons and Dragons joke that can be reworked here:

      Intelligence is knowing that an LLM uses vector embeddings of tokens.

      Wisdom is knowing LLMs shouldn't be used for business rules.

    • brain5ide 8 hours ago

      Are we talking about structural things or about individual perspective things?

      At the individual level, AI is useful as a helper for your generative tasks. I'd argue against using it for analytic tasks, but YMMV.

      At the societal level, you as an individual cannot trust anything society has produced, because it's likely some AI-generated bullshit.

      Some time ago, if you didn't trust a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible direction, and your ability to build a conclusion has been ripped away.

      • walterbell 8 hours ago

        > build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner

        A few thousand years of pre-LLM primary sources remain available for evaluation by humans and LLMs.

        • coryrc 7 hours ago

          You and I remember pre-AI famous works. "Hey, I'm pretty sure Odysseus took a long time to get home." But if somebody goes and prints 50 different AI-generated versions of the _Odyssey_, how are future generations supposed to know which is real and which is fake?

          • walterbell 6 hours ago

            > how are future generations supposed to know which is real

            Reality/truth/history has always been an expensive pursuit in the face of evolving pollutants.

          • noosphr 7 hours ago

            This is literally how the Odyssey was passed down for the 2000 years before the printing press was invented.

            Every work had multiple versions. All versions were different. Some versions were diametrically opposed to others.

            Have a look at Bible scholarship to see just _how_ divergent texts can become by nothing more than scribe errors.

            • samtheprogram 6 hours ago

              99.9999999% sure that was their point? Why else would they bring up that particular work?

    • spooky_action 4 hours ago

      What evidence is there that tarrif policy was LLM generated?

      • calcifer 4 hours ago

        There are uninhabited islands on the list.

        • KoolKat23 3 hours ago

          Despite people's ridicule, this is normal practice; it prevents loopholes from being exploited.

          • mr_toad 2 hours ago

            It seems more likely that bad data was involved.

            There are actually export statistics (obviously errors, possibly fraud) for these islands. Someone probably stuck the numbers in a formula without digging a little deeper.

      • af78 4 hours ago

        There are people who asked several AI engines (ChatGPT, Grok, etc.) “what should the tariff policy be to bring the trade balance to zero?” (quoting from memory) and the answer was the formula used by the Trump administration. If I find the references I will post them as a follow-up.

        Russia, North Korea and handful of other countries were spared, likely because they sided with the US and Russia at the UN General Assembly on Feb 24 of this year, in voting against “Advancing a comprehensive, just and lasting peace in Ukraine.” https://digitallibrary.un.org/record/4076672

        EDIT: Found it: https://nitter.net/krishnanrohit/status/1907587352157106292

        Also discussed here: https://www.latintimes.com/trump-accused-using-chatgpt-creat...

        The theory was first floated by Destiny, a popular political commentator. He accused the administration of using ChatGPT to calculate the tariffs the U.S. is charged by other countries, "which is why the tariffs make absolutely no fucking sense."

        "They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater," Destiny, who goes by @TheOmniLiberal on X, shared in a post on Wednesday.

        > I think they asked ChatGPT to calculate the tariffs from other countries, which is why the tariffs make absolutely no fucking sense.

        > They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater. https://t.co/Rc45V7qxHl pic.twitter.com/SUu2syKbHS

        > — Destiny | Steven Bonnell II (@TheOmniLiberal) April 2, 2025

        He attached a screenshot of his exchange with the AI bot. He started by asking ChatGPT, "What would be an easy way to calculate the tariffs that should be imposed on other countries so that the US is on even-playing fields when it comes to trade deficit? Set minimum at 10%."

        "To calculate tariffs that help level the playing field in terms of trade deficits (with a minimum tariff of 10%), you can use a proportional tariff formula based on the trade deficit with each country. The idea is to impose higher tariffs on countries with which the U.S. has larger trade deficits, thus incentivizing more balanced trade," the bot responded, along with a formula to use.

        John Aravosis, an influencer with a background in law and journalism, shared a TikTok video that then outlined how each tariff was calculated; by essentially taking the U.S. trade deficit with the country divided by the total imports from that country to the U.S.

        "Guys, they're setting U.S. trade policy based on a bad ChatGPT question that got it totally wrong. That's how we're doing trade war with the world," Aravosis proclaimed before adding the stock market is "totally crashing."
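        For what it's worth, the calculation described in those posts reduces to a one-liner; a sketch with illustrative numbers, not actual trade data:

```python
def described_tariff(trade_deficit, imports, floor=0.10):
    """Tariff rate as described in the posts above:
    trade deficit divided by imports, with a 10% floor."""
    return max(floor, trade_deficit / imports)

# Illustrative: a $50B deficit on $100B of imports -> 50%
rate_large = described_tariff(50.0, 100.0)  # 0.5
# A small deficit falls back to the 10% floor
rate_small = described_tariff(2.0, 100.0)   # 0.1
```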

    • XorNot 6 hours ago

      Honestly, this post seems like misplaced wisdom to me: your concern is the development of AGI displacing jobs, and not the numerous reliability problems with the analytic use of AI tools, in particular the overestimation of LLM capabilities because they're good at writing pretty prose?

      If we were headed straight into the AGI era then hey, problem solved: intelligent general machines that can advance towards solutions in a coherent, if not human-like, fashion are one thing. But that's not what AI is today.

      AI today is enormously unreliable and very limited in a dangerous way: namely, it looks more capable than it is.

  • croes 8 hours ago

    It’s a rant against the wrong usage of a tool, not against the tool as such.

    • Turskarama 8 hours ago

      It's a tool that promotes incorrect usage, though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.

      • Terr_ 6 hours ago

        My personal pet-peeve is how a great majority of people--and too many developers--are being misled into believing a fictional character coincidentally named "Assistant" inside a story-document half-created by an LLM is the author-LLM.

        If a human generates a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the author "thirsts for the blood of the innocent."

        The same holds when the story comes from an algorithm, and it continues to hold when the story is about a character named "AI Assistant" who is "helpful".

        Getting people to fall for this illusion is great news for the companies though, because they can get investor-dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality."

      • croes 6 hours ago

        That's the real danger of AI.

        The false promises of the AI companies and the false expectations of the management and users.

        I saw this just recently on a data migration: the users asked whether they still needed to enter metadata for documents, since they could just use AI to answer queries that previously relied on that metadata.

        They trust the AI before it's even there and don't even consider a transition period in which they check whether the results are correct.

        As with security, convenience prevails.

        • blackqueeriroh 5 hours ago

          But isn’t this just par for the course with every new technological revolution?

          “It’ll change everything!” they said, as they continued to put money in their pockets as people were distracted by the shiny object.

      • xpe 4 hours ago

        > All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.

        If your LLM + pre-prompt setup sounds confident with every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude.

        Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.

        This is not dismissing the tendency for overconfidence, sycophancy, and more. I'm just sharing some mitigations.

        • Turskarama 4 minutes ago

          Here's the root of the problem though, how do you know that the AI is actually "thinking" more carefully, as opposed to just pretending to?

          The short answer is: you can know for a fact that it _isn't_ thinking more carefully, because LLMs don't actually think at all; they just parrot language. LLMs perform well when they put out what you want to hear, which is not necessarily a well-thought-out answer but rather an answer that LOOKS well thought out.

        • GeoAtreides 4 hours ago

          > Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.

          Ask on a Wednesday. During a full moon. While in a shipping container. Standing up. Keep a black box on your desk as the sacred GenAI avatar and pray to it. Ask while hopping on one leg.

    • mike_hearn 5 hours ago

      Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly, it reads like the author is trying to show off or brag by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too, and the fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously, if people come to rely more on AI for this kind of thing, he will be out of work.

      It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan too.

  • dragonwriter 8 hours ago

    Because "rant" is irrational, and the author wants to be seen as staking out a rational opposition.

    Of course, every ranter wants to be seen that way, and so a protest that something isn't a rant against X is generally a sign that it absolutely is a rant against X that the author is pre-emptively defending.

    • voxl 8 hours ago

      I've rarely read a rant that didn't consist of some good logical points

      • croes 8 hours ago

        Doesn't mean listing logical points makes it a rant

        • throwaway290 6 hours ago

          If the logical points are all against something that is debatable, then it's a rant. They can be good points, though.

          • croes 6 hours ago

            • Instead of forming hypotheses, users asked the AI for ideas.

            • Instead of validating sources, they assumed the AI had already done so.

            • Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

            These are points against certain actions with a tool, not against the tool.

            AI is for the starting point, not the final result.

            AI must never be the last step, but it often is, because people trust computers, especially when they answer in confident language.

            It's the ELIZA effect all over again.

    • YetAnotherNick 6 hours ago

      The classic hallmark of a rant is picking some study, not reading the methodology, and drawing wild conclusions from it. For example, of one study it says:

      > The study revealed a clear pattern: the more confidence users had in the AI, the less they thought critically

      And the study didn't even check that. It just plotted the correlation between how much users think they rely on AI and how much effort they think they saved. Wouldn't that be positive even if they were thinking just as critically?

      [1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

  • aprilthird2021 9 hours ago

    The other thing is that the second anyone even perceives an opinion to be "anti-AI" they bombard you with "people thought the printing press lowered intellect too!" Or radio or TV or video games, etc.

    No one ever considers that maybe they all did lower our attention spans, prevent us from learning as well as we used to, and so on, and that now we are at a point where we can't afford to keep losing intelligence and attention span.

    • mike_hearn 5 hours ago

      I think people don't consider that because the usual criticism of television and video games is that people spend too long paying attention to them.

      One of the famous Greek philosophers complained that books were hurting people's minds because they no longer memorized information, so this kind of complaint is as old as civilization itself. There is no evidence that we would already be on Mars if we had never invented books or television.

      • pasabagi 4 hours ago

        Pluto? Plotto? Platti?

        Seriously though, that's a horrible bowdlerization of the argument in the Phaedrus. It's actually very subtle and interesting, not just reactionary griping.

    • nostrebored 6 hours ago

      That’s a much harder claim to prove. The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate?

      If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?

      I suspect that as the problem spaces diverge enough you’ll have two skill sets. Who can solve n problems the fastest and who can determine which k problems require deep thought and narrow direction. Right now we have the same group of people solving both.

      • friendzis 4 hours ago

        > The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate?

        Gell-Mann Amnesia. Attention span limits the amount of information we can process, and with attention spans decreasing, increases in information flow stop having a positive effect. People simply forget what they started with, even if it contradicts previous information.

        > If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?

        You don't end up solving the problem in near constant time, you end up applying the last suggested solution. There's a difference.

  • SoftTalker 9 hours ago

    The difference is that between a considered critique and unhinged venting.

  • yapyap 5 hours ago

    It’s not a rant against fentanyl, it’s a rant against irresponsible use of fentanyl.

    Just like this is a rant against irresponsible use of AI.

    Hope this helps

    • johnisgood an hour ago

      Yes, that makes much more sense.

  • throwaway894345 8 hours ago

    TFA makes the point pretty clear IMHO: they aren’t opposed to AI, they’re opposed to over-reliance on AI.

  • EGreg 6 hours ago

    Reminds me of people who say “there is nothing wrong with capitalism but…”

    You shall not criticize the profit!

  • woah 9 hours ago

    They have to preface their articles with "This isn’t a rant against AI." because there are a lot of rants against AI out there, such as your comment.

  • johnisgood 5 hours ago

    Both substances and AI can be used responsibly. It is not the fault of substances nor AI.

    People are why we can't have anything nice. It sucks.

    I have medical reasons to take opioids, but in the eyes of people, I am a junkie. I would not be considered a junkie if I kept popping ibuprofen. It is silly. Opioids do not even make me high to begin with (it is complicated).

    • johnisgood an hour ago

      I bet the downvotes are from people who have absolutely no need to take any medications, or who have no clue what it is like to be called a junkie for the rest of your life for taking medications that were prescribed in the first place.

      Or if not, then what? Is it not true that both substances and AI can be used responsibly, and irresponsibly?

      "People are why we can't have anything nice. It sucks." is also true and applies to many things; just consider vending machines alone, or the public bags (for dog poop) and anything of the sort. We no longer have the bags, because people stole them. A great instance of "this is why we can't have nice things." Pretty sure you can think of more.

      Make the down-votes make sense, please.

      (I do not care about the down-votes per se, I care about why I am being disagreed with without any responses.)

Animats 9 hours ago

You have to use machine filtering of some kind, because there's too much information.

A director of NSA, pre 9/11, once remarked that the entire organization produced about two pieces of actionable intelligence a day, and about one item a week that reached the President. An internal study from that era began "The U.S. Government collects too much information".

But that was from the Cold War era, when the intelligence community was struggling to find out basic things such as how many tank brigades the USSR had. After 9/11, the intel community had to try to figure out what little terrorist units with tens of people were up to. That required trawling through far too much irrelevant information.

torginus 2 hours ago

Most cybersecurity is just a smoke show anyway; presentation matters more than content. AI is just as good at security theater as humans are.

ramonverse 2 hours ago

> Not because analysts are getting lazy, but because AI is making the job feel easier than it actually is.

But all the examples feel like people are being really lazy, e.g.

> Paste the image into the AI tool, read the suggested location, and move on.

> Ask Gemini, “Who runs this domain?” and accept the top-line answer.

ghssds 7 hours ago

I like how all these articles miss the elephant in the room: using a chatbot as an assistant is offering your data, thoughts, insights, and focus of interests to a corporation that's at best neutral and at worst hostile. Moreover, that corporation may also share anything with business partners, governments, and law enforcement institutions with unknown objectives.

  • vasco 7 hours ago

    > is offering your data, thoughts, insights, and focus of interests to a corporation that's at best neutral and at worst hostile

    For a second I thought you were talking about the fact we all have jobs doing exactly that!

    • walterbell 5 hours ago

      > jobs doing exactly that

      Hopefully narrowed by team, role, and task.

zora_goron 12 hours ago

I wrote about some similar observations in the clinical domain -- I call it the "human -> AI reasoning shunt" [0]. Explicitly requesting an AI tool to perform reasoning is one thing, but a concern I have is that, with the increasing prevalence of these AI tools, even tasks that theoretically are not reasoning-based (ie helping write clinical notes or answer simple questions) can surreptitiously offload some degree of reasoning away from humans by allowing these systems to determine what bits of information are important or not.

[0] https://samrawal.substack.com/p/the-human-ai-reasoning-shunt

ringeryless 4 hours ago

I question the notion that such tools are necessary or admissible in my daily life.

Mere observation of others has shown me the decadence that results from even allowing such "tools" into one's life at all.

(who or what is the tool being used?)

I have seen zero positive effects from the cynical application of such tools in any aspect of life. The narrative that we "all use them" is false.

tqi 10 hours ago

It's been less than 3 years, yet this guy is already able to confidently predict a "collapse of critical thinking." I'm sure that is the product of rational analysis and not confirmation bias...

  • ergonaught 9 hours ago

    There is published research demonstrating the effect, but sure, snark your way into your own confirmation bias. Why not?

    The idea that humans in general actually do any thinking is demonstrably false.

    • tqi 6 hours ago

      Give me a break. I read the article, and I'm not convinced it does anything to further his specific claims about the community. Frankly, the whole thing is just a rant about how things were better back in the day.

      "But the tradecraft is slipping. Analysts are skipping the hard parts. They’re trusting GenAI to do the heavy cognitive lifting, and it’s changing how we operate at a foundational level."

      Next we're going to be hearing about how participation trophies and DEI are also contributing to this imagined "problem."

  • farts_mckensy 10 hours ago

    Really more of a moral panic than a coherent analysis, but that's not unusual.

nottorp 4 hours ago

Why OSINT? That goes for any domain.

Besides, "OSINT" has been busy posting scareware for years, even before "AI".

There's so much spam that you can't figure out what the real security issues are. Every other "security article" is about "an attacker" that "could" obtain access if you were sitting at your keyboard and they were holding a gun to your head.

BariumBlue 14 hours ago

Good point in the post about confidence - most people equate confidence with accuracy - and since AIs always sound confident, they always sound correct

  • kibwen 12 hours ago

    There's a reason the "con" in "con man" stands for "confidence". Turns out the illusion of confidence is the easiest way to hack the human brain.

    • jfengel 11 hours ago

      Except that's not what "confidence man" means. It means that you gain their confidence/trust, then betray it.

      A con man often uses the illusion of confidence to gain trust, though that's not the only way. The reverse also works: gain their trust by seeming unconfident and incapable, and thus easily taken advantage of.

  • rglover 13 hours ago

    Yep. Last night I was asking ChatGPT (4o) to help me generate a simple HTML canvas that users could draw on. Multiple times, it spoke confidently of its not-even-close-to-working solution (copying the text from the chat below):

    - "Final FIXED & WORKING drawing.html" (it wasn't working at all)

    - "Full, Clean, Working Version (save as drawing.html)" (not working at all)

    - "Tested and works perfectly with: Chrome / Safari / Firefox" (not working at all)

    - "Working Drawing Canvas (Vanilla HTML/JS — Save this as index.html)" (not working at all)

    - "It Just Works™" (not working at all)

    The last one was so obnoxious I moved over to Claude (3.5 Sonnet) and it knocked it out in 3-5 prompts.

    • numpad0 4 hours ago

      IME, it's better to just delete erroneous responses and fix prompts until it works.

      They are much better at fractally subdividing and interpreting inputs, like a believer in a religion, than at deconstructing and iteratively improving things, like an engineer. It's a waste of token count trying to have such discussions with an LLM.

    • dullcrisp 13 hours ago

      To be fair, I wouldn't really expect working software if someone described it that way either.

      • rglover 12 hours ago

        Those are not my prompts. Those were the headings it put above the code it generated in its responses.

        Even if my prompt was low-quality, it doesn't matter. It's confidently stating that what it produced was both tested and working. I personally understand that's not true, but of all the safety guards they should be putting in place, not lying should be near the top of the list.

        • mattgreenrocks 12 hours ago

          Intellectual humility is just as rare with AI as it is with humans.

    • Aeolun 11 hours ago

      4o is almost laughably bad at code compared to Claude.

  • morkalork 13 hours ago

    The number of times I've caught ChatGPT passing off something borked with perfect confidence is growing, but what's truly annoying is that when you point it out, you get that ever-so-cheerful "oh I'm so sorry teehee" response. It's dumb stuff, too, like a formula it simplified based on an assumption that was never prompted.

ringeryless 4 hours ago

I.e., I have no problem being explicitly anti-AI; I think it was a bad idea to begin with, a foolish project from the get-go.

Techne is the Greek word for craft or skill.

Daub 2 hours ago

Am I the only one who had to search for what OSINT is an acronym for?

Terr_ 4 hours ago

> What Dies When Tradecraft Goes Passive?

Eventually, Brazil (1985) happens, to the detriment of Archibald [B]uttle: everyone gives unquestioning trust to a flawed system.

vincnetas 7 hours ago

Tried one exercise from the article: asking Gemini to identify the owner of a domain (my domain). Gemini was very confident and very wrong.

I bet any OSINT person would have had my name and contact in half an hour.

ingohelpinger 6 hours ago

It's true: so often ChatGPT has to apologize because it was wrong. lol

  • dambi0 5 hours ago

    Do you think humans are less likely to be wrong or just less likely to apologize when they are?

    • ingohelpinger 3 hours ago

      I think being wrong is fine, but being wrong intentionally is not very human; that comes from emotions, consciousness, pride, etc., which AI does not have as of now. And this leads me to believe it's just another religion that will be used to "make the world a better place" :D

Barrin92 12 hours ago

> “Paris, near Place de la République.” It sounds right. You move on. But a trained eye would notice the signage is Belgian. The license plates are off. The architecture doesn’t match. You trusted the AI and missed the location by a country.

I genuinely hope that if you're a professional intelligence analyst it doesn't take a trained eye to distinguish Paris from Belgium. Genuinely, every day there are articles like this: the post about college students at elite universities who can't read, tariff policy by random number generator, programmers who struggle to solve first-semester CS problems, intelligence analysts who can't do something you can do if you play GeoGuessr as a hobby. Are we just getting dumber every year? It feels like we've been falling off a cliff over the last decade or so.

Like, the entire article boils down to "verify information and use critical thinking". You'd think someone working in intelligence and law enforcement, which this author trains, would know this when they get hired?

  • dexwiz 12 hours ago

    I think most people overestimate the average person's intelligence. Complaints like this are as old as time, if you read primary sources from past generations.

petesergeant 6 hours ago

Relevant today, as I unpick some unit tests I let AI write that looked very plausible at first and second glance, but turned out to test nothing of value when properly examined.

cess11 4 hours ago

"OSINT" has had a rather quick collapse in that area for quite some time; many participants under that label are basically propaganda outlets for one state or another.

Maybe the article addresses that, I'm not permitted to read it, likely because I'm using IPv6.

Forensic Architecture is a decent counterexample, however. They've been using machine learning and computer synthesis techniques for years without dropping in quality.

roenxi 12 hours ago

This article seems a bit weird because it doesn't talk about whether the quality of the analysis went up or down afterwards.

To pick an extreme example, programmers using a strongly typed language might not bother manually checking for potential type errors in their code, leaving it to the type checker to catch them. If the type checker turns out to be buggy, their code may fail in production due to their sloppiness. However, we expect the code to eventually be free of type errors to a superhuman extent, because they are using a tool whose strengths cover their personal weaknesses.

AI isn't as provably correct as a type checker, but it's pretty good at critical thinking (superhuman compared to the average HN argument), and human analysts also routinely leave a trail of mistakes in their wake. The real question is what influence the AI has on the quality, and I don't see why the assumption is that it's negative. It might well be; but the article doesn't go into that in any depth.
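
The type-checker analogy above can be made concrete with a small sketch (the function name and the quoted error message are illustrative; it assumes a static checker such as mypy): the programmer stops hand-checking a class of error because a tool covers it.

```python
def total_cents(prices: list[int]) -> int:
    """Sum prices expressed in integer cents."""
    return sum(prices)

# A static checker such as mypy rejects the call below before it ever runs,
# with something like: List item 2 has incompatible type "str"; expected "int"
#     total_cents([100, 250, "3.50"])
# The programmer no longer checks this by hand; the tool covers the weakness,
# so the remaining question is how reliable the tool itself is.
assert total_cents([100, 250, 350]) == 700
```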

ImHereToVote 4 hours ago

The trouble with OSINT is that its practitioners often take the opinions of "good" government officials and journalists at face value.

This sort of lazy thinking doesn't miss a beat when it comes to taking the opinions of an LLM at face value.

Why not? It sounds mostly the same. The motivation to believe AI is exactly the same as the motivation to believe government officials and journalists.

FrankWilhoit 16 hours ago

A crutch is one thing. A crutch made of rotten wood is another.

  • add-sub-mul-div 15 hours ago

    Also, a crutch for doing long division is not the same as a crutch for general thinking and creativity.

    • rini17 13 hours ago

      It isn't something completely new, there are many cases of unwarranted trust in machines even before computers existed. AI just adds persuasion.

      The "Pray Mr. Babbage..." anecdote comes to mind: https://www.azquotes.com/quote/14183

  • zarmin 11 hours ago

    This comment indirectly represents my current biggest fear with respect to AI: I have encountered a disturbing lack of comprehension of figurative language. Abstractions, analogies, and figurative language are, I believe, critical tools for thinking. "Rotten wood? What are you even saying?"

    People also seem to be losing their ability to detect satire.

    I'm concerned GenAI will lower creative standards too, that people will be fine with the sound of Suno or the look of DALL-E. How then would the arts evolve?

    • danielbln 5 hours ago

      How will the arts evolve? By recombining these things, as they always have. I swear, this thread is a collection of the most curmudgeonly people. "People no longer use their memory now that all those pesky books are around."

      The kids will be alright.

voidhorse 12 hours ago

The main takeaway of this whole LLM chatbot nonsense, for me, is how gullible people are and how low the bar is.

These tools are brand new and have proven kinks (hallucinations, for example). But instead of being, rightly in my view, skeptical, the majority of people completely buy into the hype and already show full automation bias with these tools. They blindly trust the output, and merrily push forth AI-generated, incorrect garbage that they themselves have no expertise or ability to evaluate. It's like everyone is itching to buy a bridge.

In some sense, I suppose it's only natural. Much of the modern economy sustains itself on little more than hype and snake oil anyway, so I guess it's par for the course. Still, it's left me a bit incredulous, particularly when people I thought were smart and capable of being critical adopt this nonsense without batting an eye. Worse, they all hype it up even further. It makes me feel like the whole LLM business is some kind of Ponzi scheme, given how willingly users will shill for these products for nothing.

  • mattgreenrocks 11 hours ago

    For sure. I look at the shilling people do for The Next Big Thing (such as AI) and think, “if you put that much time and care into acquiring competence in something useful you wouldn’t need to be chasing internet clout along with all the other too-online people.”

  • antegamisou 10 hours ago

    > Still, it's left me a bit incredulous, particularly when people I thought were smart and capable of being critical seemingly adopt this nonsense without batting an eye.

    That's the main problem, it's becoming the standard in everything.

smashah 11 hours ago

At the end of the day it is people who are doing OSINT, and their self- or AI-confidence is a reflection of their fallibility, just as much as being manipulated by intelligence operatives in their Discord servers and peer-pressured into pushing a certain narrative. OSINT should be about uncovering objective truth in a sea of lies, in a storm of obfuscation, through a tsunami of misinformation caused by an earthquake of disinformation. Now these OSINT people must also battle the siren song of clout (and of being first).

I doubt anyone can do it perfectly every time; it requires a posthuman level of objectivity and a level of information quality that hardly ever exists.

nonrandomstring 15 hours ago

> This isn’t a rant against AI. I use it daily

It is, but with a disingenuous apologetic added.

Not wishing to pick on this particular author, or even this particular topic, but it follows a clear pattern that you can find everywhere in tech journalism:

  Some really bad thing X is happening. Everyone knows X is happening.
  There is evidence X is happening, But I am *not* arguing against X
  because that would brand me a Luddite/outsider/naysayer.... and we
  all know a LOT of money and influence (including my own salary)
  rests on nobody talking about X.
Practically every article on the negative effects of smartphones or social media printed in the past 20 years starts with the same chirpy disavowal of the author's actual message. Something like:

"Smartphones and social media are an essential part of modern life today... but"

That always sounds like those people who say "I'm not a racist, but..."

Sure, we get it, there's a lot of money and powerful people riding on "AI". Why water down your message of genuine concern?

  • rini17 13 hours ago

    There were too many cheap accusations of hypocrisy ("you say X is bad, so why do you use it yourself?"), so everyone now preempts it.

  • trinsic2 13 hours ago

    I think this is a good point, regardless of how much you have been downvoted. I hope you're not using this context to subcommunicate that the issue isn't important. If not, it might have been better to put your last line at the top.

    • nonrandomstring 4 hours ago

      The subject is deadly serious, and I only wish I could amplify it more. The abdication of reason and responsibility to machines is desperately dumb and no good will come of it.

      Maybe what I'm getting at is this poem [0] by Taylor Mali. Somehow we all lost our nerve to challenge really, really bad things, wrapping up our messages in tentative language. Sometimes that's a genuine attempt at balance, or honesty. But often these days I feel an author is trying too hard to distance themselves from ... from themselves.

      It's a silly bugbear, I know.

      [0] https://taylormali.com/poems/totally-like-whatever-you-know/

  • Aeolun 11 hours ago

    > It is, but it adds disingenuous apologetic.

    It’s not. It’s a rant against people and their laziness and gullibility.

AIorNot 14 hours ago

This is another silly rant against AI tools, one that doesn't offer useful or insightful suggestions on how to adapt, or an informed study of the areas of concern, and one that capitalizes on the natural worries we have on HN because of our generic fear of losing critical thinking when AI takes over our jobs. In general, it's rather like concerns about the web in the pre-internet age, or SEO in the digital marketing age.

OSINT only exists because of internet capabilities and Google search, i.e. someone had to learn how to use those new tools just a few years ago and apply critical thinking.

AI tools and models are rapidly evolving, with more in-depth capabilities appearing in the models. All this means the tools are hardly set in stone, and the workflows will evolve with them. It's still up to human oversight to evolve with the tools; the skill of humans overseeing AI is something that will develop too.

  • card_zero 14 hours ago

    The article is all about that oversight. It ends with a ten point checklist with items such as "Did I treat GenAI as a thought partner—not a source of truth?".

  • cmiles74 14 hours ago

    So weak! No matter how good a model gets, it will always present information with confidence, regardless of whether or not it's correct. Anyone who has spent five minutes with these tools knows this.

    • mattgreenrocks 11 hours ago

      I’ve read enough pseudo-intellectual Internet comments that I tend to subconsciously apply a slight negative bias to posts that appear to try too hard to project an air of authority via confidence. It isn’t always the best heuristic, as it leaves out the small set of competent and well-marketed people. But it certainly deflates my expectations around LLM output.

  • salgernon 13 hours ago

    OSINT (not a term I was particularly familiar with, personally) actually goes back quite a ways[1]. Software certainly makes aggregating the information easier to accumulate and finding signal in the noise, but bad security practices do far more to make that information accessible.

    [1] https://www.tandfonline.com/doi/full/10.1080/16161262.2023.2...

    • eesmith 8 hours ago

      Back in the 1990s my boss went to a conference where there was a talk on OSINT.

      She was interested in the then-new concept of "open source" so went to the talk, only to find it had nothing to do with software development.