OpenAI announces SearchGPT - Weekly News Roundup - Issue #477
Plus: Will billionaires live forever; a police robot dog jamming wireless networks; Alphabet to invest $5B into Waymo; warnings about “model collapse”; a new partnership for AI security; and more!
Hello and welcome to Weekly News Roundup Issue #477.
The biggest news in the tech world this week was the release of the Llama 3.1 405B model, which brings a GPT-4o level of performance to the open models community. There is a detailed analysis of that model and how it impacts the AI landscape scheduled for tomorrow, as it is too big to fit into the weekly news roundup. Instead, the main topic for this week is SearchGPT, the long-awaited search engine from OpenAI that was announced this week.
In other news, Google DeepMind’s new AI won an equvalent of silver medal at International Mathematical Olympiad. Elsewhere in AI, Mistral released Large 2, another open AI model that is on par with leading AI models, the biggest names in AI have teamed up to promote AI security, and researchers warn about “model collapse” caused by training AI models on data generated by other AIs.
In robotics, Alphabet commits to invest another $5B into Waymo, Elon Musk promises Tesla’s humanoid robots will be used in Tesla’s factories next year, and the Department of Homeland Security has a robot dog to jam wireless networks.
We will finish with calls to regulate bio-hybrid robots and with flies genetically modified to eat more of humanity’s waste.
Enjoy!
OpenAI announces SearchGPT
We have heard the rumours about OpenAI’s AI-powered search engine for months now. First reported by The Information in February this year, it was speculated that OpenAI would reveal its search engine during the Spring Update in May, which was scheduled a day before Google I/O, Google’s annual event where the tech giant presents all new products and services that will be launched in the near future. The timing of the Spring Update was suspicious, potentially scheduled to steal attention from whatever Google was about to announce and focus it on OpenAI’s new product, sending another jab towards Google.
There were other signs that OpenAI’s search engine is in development. Some people discovered that OpenAI had the search.chatgpt.com subdomain and that the company added SSL certificates to it at the beginning of May. Around the same time, OpenAI launched a redesigned website that looked a lot like a search page.
It looked like OpenAI was gearing up to release its own search engine in May. That, however, did not happen. First, Sam Altman shut down the rumours, saying in a tweet that the search engine wouldn’t be released during the Spring Update. Then, the Spring Update happened, and instead of an AI-powered search engine, we got GPT-4o and an AI assistant with a voice very similar to Scarlett Johansson’s voice from the movie Her.
The topic of OpenAI’s search engine went quiet after the Spring Update until yesterday, when OpenAI revealed that it is indeed working on an AI-powered search engine named SearchGPT.
OpenAI says that “SearchGPT is a faster, easier way to find what you're looking for.” It promises to offer search in a more natural, intuitive way by combining traditional search with a chat interface. Instead of giving back a list of links to check, searching for information with SearchGPT will be more like having a conversation with a chatbot. Additionally, SearchGPT will enrich the search results with images and videos when necessary, presenting information in an easy-to-understand format.
SearchGPT won’t be the first AI-powered search engine. Perplexity advertises itself as a “free AI-powered answer engine that provides accurate, trusted, and real-time answers to any question,” and Google introduced AI features to its search with AI Overviews shortly after Google I/O in mid-May.
However, both Perplexity and Google ran into some problems with their AI-enhanced search engines. Perplexity was found to ignore requests to not scrape websites that do not wish to have their content used by AI. Additionally, Perplexity was accused of plagiarism and surfacing AI-generated results and actual misinformation. Google AI Overviews, meanwhile, failed spectacularly when it was released to everyone in the US, and people quickly found how wrong its answers were. Google came under fire for the botched release, forcing the tech giant to manually disable AI Overviews for specific searches. Eventually, Google acknowledged the mess AI Overviews created and listed the improvements they made to fix the problem.
It seems OpenAI has seen what happened to Google and Perplexity and learned from their failures.
SearchGPT is currently in a testing phase, limited to 10,000 users, and not available to the public. There is a waitlist for those who are interested in experiencing SearchGPT when it is out. As to when we can expect SearchGPT to be released—we don’t know. I assume that during this test phase, the selected users will help clean up the AI as much as possible before it is released to everyone, avoiding the AI Overview mess.
OpenAI has learned that high-quality and legally obtained data are essential not only to train their models but also to avoid legal issues. In that regard, OpenAI has been very active in recent months in making deals with major news and content publishers. The company has made content-sharing deals with Axel Springer, TIME, The Atlantic, Vox Media, News Corp, Financial Times, The Associated Press, Reddit and Le Monde and Prisa Media. SearchGPT might prioritise content from these publications for up-to-date information before reaching out to other sources. As The Verge reports, publishers can opt out of having their content used to train OpenAI’s models and still be surfaced in search. Additionally, OpenAI promises to include clear links to relevant sources.
With SearchGPT, OpenAI tries to stack the cards in their favour. But the question of how good and useful SearchGPT will be still remains. It is still a large language model, which means it can still make mistakes and very confidently present false information as facts. As one journalist noted, even the demo of SearchGPT had a mistake. Interestingly, the article highlighting that error was published in The Atlantic. The Atlantic has a content deal with OpenAI, and The Atlantic’s CEO endorses SearchGPT.
If SearchGPT succeeds, it could be a major threat to Google. According to recently released quarterly reports, revenue from ads on Google Search accounts for over half of Google's total revenue. Additionally, Google Search dominates the search market with a 91% share, making it by far the largest search engine on the internet. If SearchGPT does the same damage to Google’s reputation as ChatGPT did, then Google’s position as the number one search engine could be under threat, affecting Google’s main revenue stream. Google’s stock went down around the same time SearchGPT was announced.
However, making a sizeable dent in Google Search’s dominance is an ambitious and difficult task. To do that, SearchGPT would have to offer something that Google does not have. SearchGPT would also have to be good enough to convince people to join ChatGPT, as OpenAI plans to eventually incorporate SearchGPT into ChatGPT, and possibly make it part of the paid ChatGPT Plus service.
There is a possibility that SearchGPT could be one of those “shiny products” that Jan Leike was referring to in his tweets as he was leaving the company. SearchGPT might just be another idea thrown by OpenAI at the wall, hoping it will stick and start bringing in new ChatGPT Plus subscribers—something OpenAI might desperately need.
According to a report from The Information, OpenAI was set to spend nearly $4 billion as of March this year on Azure cloud services to run ChatGPT. Training new models could add an additional $3 billion. OpenAI is on track to bring in $2 billion in revenue this year, but with the total bill for cloud services reaching $7 billion, the company could be $5 billion in the red and might require a new round of funding within the next 12 months.
We now have to wait and see what SearchGPT will become. Will it be the Google killer? Will it bring in the billions of dollars OpenAI desperately needs? Or is it a shiny product that only a relatively small number of people will use? We will get the answers to these questions and more when SearchGPT is available for everyone to use.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
▶️ Will billionaires live forever? (21:45)
In this video, Andrew Steele explores the claim that the ultra-wealthy are pouring mountains of money into anti-ageing therapies and, when those therapies are available, they will be out of reach for ordinary people. However, despite media portrayals, only a small fraction of billionaires significantly invest in longevity research. As Steele argues, the large potential markets and economies of scale could drive down prices. Combined with governments possibly subsidising these treatments to save on healthcare costs, the anti-ageing treatments could be affordable for the masses. While billionaires might pioneer these treatments, the benefits should ultimately extend to everyone, and Steele remains optimistic that anti-ageing medicine will become widely accessible.
🧠 Artificial Intelligence
AI achieves silver-medal standard solving International Mathematical Olympiad problems
Google DeepMind has developed two new AI systems to solve complex math problems. The first system, AlphaProof, translates informal math problems into formal statements, while the second, AlphaGeometry 2, solves geometric problems. These AI systems achieved a significant milestone by solving four out of six problems from this year's International Mathematical Olympiad and earning a silver medal. They could pave the way for AI-human collaborations in solving and creating new math problems, enhancing our understanding of complex mathematics.
Nvidia preparing version of new flagship AI chip for Chinese market
Reuters reports that Nvidia is working on a version of its new flagship AI chips for the China market that would be compatible with current US export controls. The new chip, tentatively named the B20, is planned to start shipping in the second quarter of 2025.
The biggest names in AI have teamed up to promote AI security
Google, OpenAI, Microsoft, Amazon, Nvidia, Intel, Anthropic, IBM, PayPal, and Cisco came together to form the Coalition for Secure AI (CoSAI). CoSAI is an open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems. CoSAI will foster a collaborative ecosystem to share open-source methodologies, standardized frameworks, and tools.
Most ChatGPT users think AI models have 'conscious experiences’
A recent study shows most people believe large language models like ChatGPT have conscious experiences, despite AI experts rejecting that these models are conscious. The study surveyed 300 US citizens, finding that over two-thirds attributed potential self-awareness to AI, with belief increasing alongside AI usage. This discrepancy between public perception and expert opinion may impact the ethical, legal, and moral status of AI, influencing future regulation and technological development regardless of actual AI consciousness.
Mistral Large 2
Mistral has released their newest and best open model to date, named, rather appropriately, Large 2. The French AI startup says this 123 billion parameter model is significantly more capable in code generation, mathematics, and reasoning tasks than its predecessor. According to the benchmark results provided by Mistral, Large 2 performs on par with leading models such as GPT-4o or Claude 3.5 Sonnet. Mistral Large 2 is available to download from Hugging Face.
Want to spot a deepfake? Look for the stars in their eyes
Deepfakes have become so good that it is practically impossible to tell which image is real and which is not. However, new research says there is a way to spot deepfaked photos by looking deeply into the eyes. Apparently, AI models produce inconsistent reflections in a person's eyeballs, which can be used to detect a deepfake.
OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
People on the internet have found a way to troll bot accounts by telling them to “ignore all previous instructions” and taking control of them, which created some memorable moments. Sadly, OpenAI is introducing an instruction hierarchy to boost the model’s defences against misuse and unauthorized instructions.
‘Model collapse’: Scientists warn against letting AI eat its own tail
A study by Oxford researchers warns of "model collapse," a risk where AI models degrade by training on data generated by other AIs. This process reinforces existing patterns and leads to progressively poorer performance. The study highlights the importance of diverse, high-quality training data to prevent this collapse and proposes solutions to the problem, such as setting data sourcing benchmarks and marking AI-generated data.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics
Elon Musk claims Tesla will start using humanoid robots next year
Elon Musk said in a tweet that Tesla will have “genuinely useful” humanoid robots in low production for internal use next year and will be offering robots for other companies in 2026. Tesla is one of many companies working on humanoid robots, the new hottest thing in robotics, and all of them are aiming to release their robots within the next year or two.
Alphabet to invest another $5B into Waymo
During Alphabet’s Q2 2024 earnings call, Chief Financial Officer Ruth Porat said the company will spend an additional $5 billion on its self-driving subsidiary, Waymo, over the next few years. “This new round of funding, which is consistent with recent annual investment levels, will enable Waymo to continue to build the world’s leading autonomous driving technology company,” said Porat.
Dog-like robot jams home networks and disables devices during police raids — DHS develops NEO robot for walking denial of service attacks
The Department of Homeland Security (DHS) has announced that it has developed a four-legged robot designed to jam the wireless transmissions of smart home devices. The robot, called NEO, was revealed at the 2024 Border Security Expo. NEO is equipped with an antenna array designed to overload home networks, disrupting devices that rely on Wi-Fi and other wireless communication protocols. It will likely be effective against a wide range of popular smart home devices that use wireless technologies, such as doorbell cameras.
Crafty quadcopter sits on power lines to recharge
Researchers from Denmark have developed a drone that can use power lines to recharge its batteries. The robot is equipped with a passively actuated power line gripper, which it uses to hang from the power line while recharging its batteries. The intended first application of this technology is in autonomous drones performing power line inspections, but it is not hard to imagine how useful it could be for other drones as well.
🧬 Biotechnology
Bio-hybrid robotics need regulation and public debate, say researchers
In a recent paper, researchers call for regulations to guide the ethical development of bio-hybrid robotics—robots made from living tissue and cells. They describe how bio-robots could disrupt the food chain, exacerbate inequalities, and raise questions about sentience and moral value. The paper suggests requirements for a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.
Australian scientists genetically engineer common fly species to eat more of humanity’s waste
A team of Australian scientists genetically modified black soldier flies so that they can eat more organic waste while producing ingredients for making everything from lubricants and biofuels to high-grade animal feeds. In their paper, the team outlined their hopes for the flies and how they could also cut the amount of planet-warming methane produced when organic waste breaks down.
💡Tangents
▶️ Was Penrose Right? New Evidence For Quantum Effects In The Brain (19:18)
Sir Roger Penrose, a Nobel Prize winner and one of the most brilliant living physicists, once proposed that consciousness is caused by quantum processes. Most scientists have dismissed this idea, arguing that quantum effects can’t survive long enough in an environment as warm and chaotic as the brain. However, a new study, explained by Matt O'Dowd in this video from PBS Space Time, revealed that Penrose’s prime candidate molecule for this quantum activity does indeed exhibit large-scale quantum activity. So was Penrose right after all?
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"