⚙️ Family poisoned after using AI-generated mushroom hunting book

Good morning. Nvidia was sued last week by YouTuber David Milette (who recently sued OpenAI). The claim here is not for copyright infringement, but for unjust enrichment and unfair competition.

The suit comes almost immediately after a 404 Media investigation revealed the extent to which Nvidia trained its AI models on YouTube videos without permission, consent, or compensation.

Anyway, I hope that wherever you are, you’re enjoying summer while it lasts.

— Ian Krietzberg, Editor-in-Chief, The Deep View

One of the greatest promises of AI technology lies in how it relates to brain-computer interface (BCI) devices. BCI combines hardware implanted in the brain with software that uses artificial intelligence and machine learning to decode and digitally express brain signals.

One focus of BCI research is restoring speech to patients who have lost the ability to speak.

The details: Researchers implanted sensors into the brain of a 45-year-old man with amyotrophic lateral sclerosis (ALS) who had largely lost the ability to speak. The implant was able to decode his attempts to speak, and convert those brain signals into lines of text that a computer could read.

And thanks to the latest text-to-speech technology, the computer’s “voice” sounded like the patient’s voice before ALS.

  • According to UC Davis, the system achieved and maintained a 97% accuracy rate, which the university says is the highest of its kind.

  • The real breakthrough, according to the researchers, lies in accurate and reliable decoding in real time. Previous attempts in this area were much less successful.

“Not being able to communicate is so frustrating and demoralizing,” said Casey Harrell, the patient who participated in the study. “It’s like being in prison. Something like this technology will help people get back into life and society.”
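For intuition, here is a minimal, heavily simplified sketch of what such a decoding pipeline looks like conceptually. Everything in it (the 64-channel features, the phoneme labels, the nearest-centroid classifier) is an illustrative assumption, not the UC Davis team's actual method; the real system uses implanted microelectrode arrays, deep neural network decoders and a language model.

```python
# Toy sketch of a speech-neuroprosthesis pipeline (illustrative only).
# A nearest-centroid "decoder" over simulated features stands in for the
# real-time neural decoder described in the study.
import numpy as np

PHONEMES = ["HH", "EH", "L", "OW"]  # hypothetical attempted word: "hello"
rng = np.random.default_rng(0)

# Pretend each phoneme evokes a characteristic 64-channel neural pattern.
centroids = {p: rng.normal(size=64) for p in PHONEMES}

def record_window(intended_phoneme: str) -> np.ndarray:
    """Simulate one short window of neural features for an attempted phoneme."""
    return centroids[intended_phoneme] + rng.normal(scale=0.3, size=64)

def decode_window(features: np.ndarray) -> str:
    """Stand-in decoder: pick the phoneme whose pattern is closest."""
    return min(centroids, key=lambda p: np.linalg.norm(features - centroids[p]))

# Decode a stream of attempted speech, window by window, in "real time".
decoded = [decode_window(record_window(p)) for p in PHONEMES]
print("decoded phonemes:", decoded)
# A language model would clean up the phoneme stream into text, and a
# text-to-speech model trained on pre-ALS recordings of the patient's
# voice would then speak that text aloud.
```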

The hardest part about onboarding new employees is teaching them how to use the software they need.

How it works: Guidde’s genAI-powered platform lets you quickly create high-quality how-to guides for any software or process. And it requires no prior design or video editing experience.

  • With Guidde, teams can quickly and easily create personalized internal (or external) training content at scale, efficiently sharing knowledge across organizations and saving everyone involved significant time.


San Francisco City Attorney David Chiu last week announced a lawsuit against the owners of 16 of the most-visited non-consensual deepfake porn websites. The suit was filed on behalf of the people of the State of California.

The details: The lawsuit alleges that the companies violated state and federal laws prohibiting deepfake pornography, revenge pornography and child pornography. The names of the websites were not disclosed to avoid driving traffic to them.

  • “This investigation has taken us into the darkest corners of the internet, and I am absolutely appalled for the women and girls who have endured this exploitation,” Chiu said. “We need to be very clear that this is not innovation — this is sexual abuse.”

  • The widespread accessibility of genAI has enabled the harassment of women and girls – from Taylor Swift to high school girls – through the cheap and easy distribution of non-consensual deepfake pornographic images.

Chiu said the websites in question had been visited more than 200 million times in the first six months of 2024 alone; most require subscriptions for their “nudify” services, meaning they “profit from non-consensual pornographic images of children and adults.”

If you want to put AI to work in your investment strategy, consider Public.

The all-in-one investment platform lets you build a portfolio of stocks, options, bonds, cryptocurrencies and more, while applying the latest AI technology – for powerful performance analysis – to help you achieve your investment goals.

Join Public and build your portfolio with AI-driven insights and analytics.

OpenAI said Friday that it had banned a cluster of accounts linked to a covert Iranian operation that used ChatGPT to generate content about, among other things, the US presidential election.

The company said it saw no indication that the content was seen by a large audience.

The details: The ‘cluster’ of accounts banned by OpenAI was linked to Storm-2035, an Iranian network that previously operated fake news sites specifically targeting American voters.

  • The network used ChatGPT to generate content and then shared that content on social media. OpenAI said it identified “dozens of accounts” linked to the network on Twitter and one on Instagram, but added that none of the social posts appeared to have attracted much attention.

“We take seriously any attempt to leverage our services for foreign influence operations,” OpenAI said.

The context: OpenAI in May reported that it had disrupted five different covert influence operations that used ChatGPT to generate polarizing disinformation.


When we talk about AI-generated misinformation, we often talk about political misinformation: mass-produced content designed to turn people against politicians and legislative proposals, or to discourage them from voting.

But as MIT’s recently published AI Risk Repository points out, the full threat of disinformation is somewhat broader: false or misleading AI-generated content can pollute entire information ecosystems, in some cases leading to physical harm.

You probably understand where I’m going with this.

What happened: A UK-based Reddit user said last week that their “family was poisoned after using an AI-generated mushroom identification book we purchased from a major online retailer.” The user described the book as a way for “beginners to safely get started picking mushrooms.”

  • After they got sick, they examined the book more closely and found plenty of evidence that it was likely generated by AI, including obvious chatbot artifacts left in the text: “In conclusion, we can say that morels are delicious mushrooms that you can eat from August until the end of summer. Please let me know if I can help you with anything else.”

  • When searching for the author’s name, the user also discovered that the “expert” doesn’t seem to exist online. The online store in question — which the user declined to name — has taken the page offline.

“We didn’t know it was AI generated when we bought it,” the user wrote. “This wasn’t disclosed on the website!”

  • There were also similar cases, reported by 404 Media, of AI-generated mushroom guides flooding Amazon last summer. Those books were later removed, but not before highlighting this new type of downstream, AI-generated harm.

  • As the New York Mycological Society said at the time: “Amazon and other outlets are flooded with AI books on foraging and identification. Only buy books from well-known authors and foragers, it can literally mean life or death.”

The danger here is clear.

We have built trust by default in many areas; if someone has written a book, chances are they know what they are talking about. In the pre-AI world, there was little need to rigorously vet the author of a given book (at the very least, a publisher stood between an author and outright misinformation).

  • But genAI (combined with social media and self-publishing) has allowed certain parties to pose as experts to make a quick buck.

  • GenAI output is plausible enough on its face that the real risk, as that Reddit post shows, is not that people will follow hallucinated advice straight from ChatGPT, but that they will unknowingly consume dangerously polluted content further downstream.

This new, but necessary, default of distrust demands much more of society. Assumptions are no longer good enough; everyone must become a capable, diligent fact-checker.

And the reality is that most people won’t.

In some cases, people get hurt. In other cases, voters get influenced and democracy gets poisoned.

“I’ve said it before, and I’ll say it again: AI is not going to destroy us in a Terminator-style uprising,” a former librarian wrote on Twitter in response to the Reddit post. “AI is going to destroy us by polluting the information we trust with dangerous misinformation and nonsense that people will end up believing.”

Thanks for reading today’s edition of The Deep View!

Until next time.

Here’s your take on former Google CEO Eric Schmidt’s startup advice:

Nearly half of you think that his advice — essentially stealing IP to build a product and, if it’s successful, hiring lawyers to clean up the mess — won’t work in the long run. The lawsuits, you say, are coming.

About a third of you think it will continue to work. After all, it has worked so far.

  • “I am amazed that there is no moral sense in any aspect of these responses. So is stealing okay as long as you don’t get caught? Even in the jungle, among the most primitive tribes, there are norms and values.”

*Disclosure: All investments involve risk of loss, including loss of principal. Brokerage services for U.S. listed, registered securities, options, and bonds in a self-directed account are provided by Public Investing, Inc., member FINRA & SIPC. Cryptocurrency trading services are provided by Bakkt Crypto Solutions, LLC (NMLS ID 1828849), which is licensed to engage in virtual currency activities by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.

Alpha is an experiment brought to you by Public Holdings, Inc. (“Public”). Alpha is an AI research tool powered by GPT-4, a generative large language model. Alpha is experimental technology and may provide incorrect or inappropriate answers. The output of Alpha should not be construed as investment research or recommendations and should not be relied upon as the basis for an investment decision. All Alpha output is provided “as is.” Public makes no representations or warranties regarding the accuracy, completeness, quality, timeliness, or any other characteristic of such output. You use Alpha output at your own risk. You should independently evaluate and verify the accuracy of such output for your own use case.
