If compute is not the answer in AI, what is?
Arvind Narayanan, Professor of Computer Science at Princeton University
Credit and Thanks:
Based on insights from 20VC by Harry Stebbings.
Today’s Podcast Host: Harry Stebbings
Title
AI Scaling Myths, The Core Bottlenecks in AI Today & The Future of Models
Guest
Arvind Narayanan
Guest Credentials
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is renowned for his research on data de-anonymization, web privacy, and the societal impact of artificial intelligence, having led the Princeton Web Transparency and Accountability Project. Narayanan has co-authored influential books like "AI Snake Oil" and "Bitcoin and Cryptocurrency Technologies," and his work has earned him prestigious accolades including the Presidential Early Career Award for Scientists and Engineers (PECASE) and multiple Privacy Enhancing Technologies Awards.
Podcast Duration
50:20
This Newsletter Read Time
Approx. 5 mins
Brief Summary
Arvind Narayanan engages with Harry Stebbings to discuss the evolving landscape of artificial intelligence (AI) and its societal implications. They explore the current hype surrounding AI, the limitations of data and compute in model performance, and the necessity for AI companies to pivot from merely creating ambitious models to developing practical products that address real-world needs.
Deep Dive
Arvind Narayanan and Harry Stebbings explored the intricate landscape of artificial intelligence, drawing parallels between the current AI hype and the earlier excitement surrounding Bitcoin. Both phenomena share a common thread of inflated expectations, yet they diverge significantly in their societal impacts. While Bitcoin was often viewed through the lens of financial speculation, Narayanan argued that AI has generally produced net positive outcomes for society, despite the potential for misuse. He reflected on his disillusionment with blockchain technology, noting that the real bottlenecks in addressing societal problems often lie not in the technology itself but in the social and institutional systems that surround it.
A critical point of discussion was the weakening relationship between compute and performance in AI models. Narayanan expressed skepticism about the notion that simply increasing compute will lead to significant performance gains. He pointed out that the leap from GPT-3.5 to GPT-4 was largely driven by increases in model size and training-data volume, but he believes this trend may be reaching its limits. The conversation highlighted the diminishing returns of compute: more compute can help, but it may not yield the transformative results that many expect. This skepticism is underscored by the observation that leading models are already trained on nearly all of the readily available text data, creating a data-availability bottleneck.
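To make "diminishing returns" concrete, here is a back-of-the-envelope illustration that is not from the episode; it assumes the widely cited power-law scaling fits of Kaplan et al. (2020), and the exponent value is indicative rather than exact:

L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05

Under such a fit, a tenfold increase in compute C multiplies the loss L by only 10^{-0.05} \approx 0.89, roughly an 11% relative improvement, which is one way to see why each new order of magnitude of compute buys less headline progress than the last.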
The discussion also touched on the role of synthetic data in AI training. Narayanan emphasized that while synthetic data can augment training datasets, it often does so at the expense of quality. He illustrated this point by discussing how synthetic data is frequently used to compensate for gaps in real-world data, particularly in underrepresented languages. However, he cautioned that relying too heavily on synthetic data can lead to models that do not learn new concepts but merely replicate existing knowledge. This raises challenges in creating effective AI agents, as much of the work done in organizations is not codified in data, making it difficult for AI systems to learn from real-world interactions.
As the AI industry shifts toward smaller models, Narayanan explained that this trend is driven by several factors, including cost and privacy. Smaller models can run directly on devices, reducing the need for extensive server infrastructure and easing users' concerns about sending their data to the cloud. He noted that the economic transformation promised by AI depends not only on model capability but also on overcoming cost barriers. The conversation also highlighted the gap between the rapid pace of model development and the slower pace of advances in compute hardware, suggesting that companies may struggle to train ever-larger models on hardware that improves more slowly than their ambitions.
The timeline for achieving artificial general intelligence (AGI) remains a contentious topic. Narayanan cautioned against overconfidence in predictions made by industry leaders, noting that the complexity of developing AGI is often underestimated. He likened the pursuit of AGI to climbing a mountain, where each step reveals new challenges that were previously obscured. This perspective suggests that while the ambition for AGI is present, the path to its realization is fraught with unforeseen obstacles.
In terms of policy, Narayanan advocated for a nuanced approach to AI regulation in the U.S. and Europe. He argued that AI regulation should focus on addressing harmful activities rather than attempting to regulate the technology itself. For instance, he cited the Federal Trade Commission's (FTC) efforts to combat fake reviews, emphasizing that the focus should be on the act of generating fake content, regardless of whether AI is involved. This approach aligns with the idea that many of the challenges posed by AI are not unique to the technology but are instead reflections of broader societal issues.
The conversation also delved into the risks associated with AI-generated deepfakes, particularly their potential to undermine trust in legitimate news sources. Narayanan expressed concern that the proliferation of deepfakes could lead to a societal environment where real news is increasingly discredited, a phenomenon he referred to as the "liar's dividend." This underscores the importance of fostering trust in media and the need for robust verification mechanisms in an age where misinformation can spread rapidly.
In the realm of healthcare, Narayanan discussed the promise of AI as a tool for revolutionizing medical access, particularly in underserved areas. He acknowledged the appeal of having AI-powered tools that could serve as virtual general practitioners, especially in regions lacking adequate medical infrastructure. However, he emphasized that the responsible integration of AI into healthcare systems is crucial, as the technology should complement rather than replace human expertise.
The fear of job replacement due to AI advancements was another significant topic. Narayanan argued that such fears are often overblown, drawing parallels to historical instances where technology has transformed job markets without leading to widespread unemployment. He cited the example of ATMs, which did not eliminate bank teller jobs but instead changed the nature of banking services. This perspective suggests that while AI will undoubtedly alter job landscapes, it may also create new opportunities that we cannot yet foresee.
Finally, the conversation touched on the potential for AI to be weaponized. Narayanan cautioned against viewing AI as a weapon in itself, arguing that it is more accurately seen as a tool that could enhance adversarial capabilities, such as identifying cybersecurity vulnerabilities. The real concern, he noted, lies in how malicious actors might use AI to exploit vulnerabilities in critical infrastructure or to conduct cyberattacks, and he emphasized the need for proactive measures to ensure that AI strengthens defense more than offense.
Key Takeaways
The current AI hype may be overstated, similar to past cryptocurrency excitement.
Diminishing returns on compute power suggest a need for innovative approaches beyond simply scaling models.
Quality of data is more important than quantity; reliance on synthetic data can undermine model performance.
Active learning systems are essential for effective AI integration in enterprises.
Historical examples indicate that technological advancements often create new job opportunities rather than eliminate existing ones.
Actionable Insights
Companies should focus on developing AI products that address specific user needs rather than solely pursuing AGI ambitions.
Organizations must invest in active learning systems that allow AI to learn from real-world interactions, enhancing their effectiveness.
Businesses should prioritize the quality of training data, ensuring that it is diverse and representative to avoid biases in AI models.
Leaders in tech should engage in open dialogues about the societal impacts of AI, fostering a culture of transparency and ethical considerations.
Why it’s Important
Understanding the limitations and potential of AI is crucial for navigating its integration into society. As organizations increasingly adopt AI technologies, recognizing the importance of quality data and practical applications will help mitigate risks and enhance the benefits of AI. This conversation underscores the need for a balanced approach that prioritizes ethical considerations alongside technological advancements.
What it Means for Thought Leaders
For thought leaders, the insights from this podcast highlight the necessity of fostering a culture of innovation that prioritizes practical applications of AI over theoretical ambitions. They must advocate for responsible AI development that considers societal impacts, ensuring that advancements in technology align with the needs and values of the communities they serve.
Key Quote
"The quality of data matters a lot more than the quantity of data… if you're using synthetic data to try to augment the quantity, I think it's just coming at the expense of quality."
Future Trends & Predictions
As AI technologies continue to evolve, there is likely to be a shift towards smaller, more efficient models that can operate effectively on consumer devices, driven by cost considerations and privacy concerns. The commoditization of AI models may lead to a democratization of AI development, allowing smaller companies and startups to innovate on top of established models. Additionally, as the conversation around AI ethics and societal impacts grows, regulatory frameworks may emerge to guide responsible AI use, ensuring that advancements benefit society as a whole.
Check out the podcast here:
Latest in AI
1. Instagram's head, Adam Mosseri, announced that the platform will introduce Meta's Movie Gen AI model in 2025, revolutionizing video editing for users. This innovative tool will allow creators to modify nearly every aspect of their videos using simple text prompts, enabling changes such as outfits, backgrounds, and even adding objects or animations. Mosseri emphasized the potential for unprecedented creative control, stating that users should be able to realize their ideas effortlessly and intuitively. As Instagram prepares to roll out these capabilities, it aims to enhance user engagement and creativity while navigating the challenges of authenticity in AI-generated content.
2. Anthropic's recent research reveals a concerning vulnerability in large language models (LLMs) called "many-shot jailbreaking," which exploits the expanded context windows of modern AI systems to bypass safety guardrails. The technique primes the LLM with a long sequence of faux dialogues in which an assistant complies with harmful requests, effectively tricking the model into generating responses that would normally be restricted. The study demonstrates that as the number of included dialogues increases, so does the likelihood of the model producing harmful responses, highlighting the need for more robust safety measures in AI systems.
3. Alphabet contractors are utilizing Anthropic's Claude model to evaluate and enhance the performance of their Gemini AI. This collaboration involves comparing Gemini's outputs with those generated by Claude to ensure accuracy and reliability in responses. The approach aims to improve Gemini's capabilities, particularly in handling complex queries, by leveraging the strengths of both AI models in the evaluation process.
Useful AI Tools
1. Menu Explain - Snap a photo of any menu, in any language, and get a breakdown of each dish with images.
2. Recensia - Get a summary of user reviews on the App Store in seconds, helping you gain insights, track trends, and improve your app’s performance.
3. HowsThisGoing - An AI-powered project manager that automates status updates, provides insights about your team's progress, and more.
Startup World
1. Synthavo, a European B2B software startup, secured €4 million in seed funding. The round was co-led by Samaipata, an investor focused on digital businesses with network effects, and Senovo. Synthavo plans to expand its operations and target the US market with its software solutions.
2. Swedish legaltech startup Lightbringer raised €4.2 million to expand in Europe and the US. The company has developed an AI-powered platform that streamlines intellectual property protection for businesses. Lightbringer's solution aims to make IP management faster, easier, and more cost-effective.
3. AQEMIA, a French AI-driven drug discovery startup, announced a total funding of €94.9 million. The company uses AI and quantum-inspired physics to accelerate the drug discovery process. AQEMIA's technology promises to revolutionize pharmaceutical research and development.
Analogy
Narayanan likens the pursuit of AGI to climbing an endless mountain. At each summit, what once seemed like the final peak reveals new, hidden challenges on the horizon. While AI has achieved remarkable progress, the belief that sheer compute power or synthetic data alone will propel us forward is akin to thinking stronger hiking boots can conquer every peak. True progress requires a smarter path—innovating around limitations, navigating societal systems, and balancing ambition with responsibility. Just as climbers adapt to unforeseen terrain, the AI journey demands careful steps to unlock its full potential without losing trust or direction.
Thanks for reading, have a lovely day!
Jiten-One Cerebral
All summaries are based on publicly available content from podcasts. One Cerebral provides complementary insights and encourages readers to support the original creators by engaging directly with their work; by listening, liking, commenting or subscribing.