
Developer Week 2025: Building a Long-Term Relationship with AI

At DeveloperWeek 2025, AI took center stage—highlighting trust issues, security gaps, and transparency challenges as companies push for more reliable AI.

Mar 14, 2025 · 9 min read


DeveloperWeek 2025 was an AI-lover’s dream, a cornucopia of AI-driven innovation spanning small startups to established industry leaders. There were dancing robot dogs, a plethora of Cybertrucks, and a robot so intent on shaking human hands that it alarmed a few passersby as it scuttled speedily toward them. Attendees were privy to demos of voice cloning, roundtables with experts from a diverse range of fields, an enthusiastic DJ, and a small inflatable slide that lay in bouncy disuse. Any spies would be alarmed at the number of “agents” being openly discussed around them.

There is no denying that AI is the future. Yet like any nascent relationship, there are looming hurdles that must be navigated on the road to a happily-ever-after.

Trust Issues

While many envision the ultimate AI solution as an autonomous agent capable of reasoning and handling complex tasks independently, today’s reality spans a wide range of autonomy. Most of the AI tools displayed at DeveloperWeek still require significant human input, with only a few pushing toward greater independence.

After attending several popular speaker sessions, it became clear that a central challenge in achieving true autonomy is trust: how can AI-based products be designed to deliver reliable, accurate outputs? Many of today’s successful AI tools—such as GitHub Copilot and Cursor.ai—are B2B products used by trained developers, inherently relying on human oversight for verification. Rules and procedures for proper usage can be created and followed systematically, reducing the risk of hallucinated code or other errors impacting production environments.


B2C products, on the other hand, have fewer means of imposing usage rules on customers. AI tools may work as expected with the prompts used during testing, but customers can inadvertently produce an effectively infinite variety of prompts. Attempts to work around problems like hallucination and training-data bias may look like guardrails on the surface, but they are hardly guaranteed to hold.

Generative AI always has an answer: this is wildly appealing in some use cases and terrifyingly misleading in others. The wild west of unreliable AI products, however, will not last forever; the technology is improving at exponential rates, and people are learning how to interact with AI in more effective ways. In one of the most well-attended sessions, “Give Your LLM a Left Brain,” Neo4j’s Stephen Chin highlighted GraphRAG as a powerful way to ground an LLM’s creativity in actual data, helping models generate more logical and relevant outputs. Limiting a model’s expertise and function is also a simple way to enable more comprehensive output testing and promote greater transparency about its limitations. The work to refine and balance these capabilities—much like connecting the left and right hemispheres of the brain—is already in motion.
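For readers curious what that grounding looks like in practice, below is a minimal, hypothetical sketch of the GraphRAG pattern: pull structured facts from a knowledge graph and hand them to the model alongside the question, so the output is anchored in real data. The graph schema, the Cypher query, and the call_llm() stand-in are assumptions made for illustration; only the Neo4j Python driver usage follows the actual library.

```python
# Minimal GraphRAG-style sketch (hypothetical schema and helper names).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def call_llm(prompt: str) -> str:
    # Stand-in for whichever LLM client you actually use.
    raise NotImplementedError

def fetch_facts(customer_id: str) -> list[str]:
    """Pull structured facts about a customer from the graph (hypothetical schema)."""
    query = (
        "MATCH (c:Customer {id: $cid})-[:OWNS]->(p:Product) "
        "RETURN p.name AS product, p.renewal_date AS renewal"
    )
    with driver.session() as session:
        return [f"{r['product']} renews on {r['renewal']}"
                for r in session.run(query, cid=customer_id)]

def answer(question: str, customer_id: str) -> str:
    facts = fetch_facts(customer_id)
    prompt = (
        "Answer using ONLY the facts below. If they are insufficient, say so.\n"
        "Facts:\n- " + "\n- ".join(facts) +
        f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key design choice is that the model is never asked to recall facts from its training data; the graph supplies them, and the prompt explicitly tells the model to refuse rather than invent.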

All Talk?

Augmenting AI prompts with real data to enhance results–going from RAGs to riches, if you will–is not a new concept, and neither were many of the ideas bubbling around the booths. Although the technical revolution is still young, AI is no longer shiny and novel to the wider world; we’ve been hearing about it since 2022. And while all sorts of flashy demos have been presented to show off the glittering promise of the AI-led future, where is that future now?

A slide from Auth0’s keynote, delivered by Shiv Ramji, was particularly apt for describing the lifetime of an AI product. Getting a product ready to demo is not the same as shipping it; many AI products are in the liminal stretch between demo and shipping, as illustrated below.

Slide from Auth0’s keynote

In short, this means that the real explosion in AI technology is still to come, with increasing numbers of ready-to-use, publicly available tools released regularly. As more companies push beyond Gartner’s "trough of disillusionment", we’re witnessing a shift from skepticism to real, tangible progress, signaling that AI’s potential is beginning to be realized in ways that once seemed far off.

Source: Gartner

Lack of Transparency

Part of AI’s sparkling allure is its ability to abstract away the nuisances of code and technicalities under the familiar veil of natural language. It’s this very abstraction, however, that turns AI into a black box that makes modification, testing, and usage more complicated. There is a growing gap between those who create AI models—the engineers and researchers developing the foundational architectures—and those who design AI-powered products—the product managers, UX designers, and business strategists shaping how AI is integrated into real-world applications. In one session, the presenter asked the packed audience if any of them had ever built an AI agent themselves, seeing as agents were the subject of the 25-minute session. Not a single hand was raised.

This audience may have been particularly bashful, but it was a clear indicator of the opaqueness that still surrounds the AI space. Everyone wants AI, but how many truly understand it—from the theory behind it to its actual development and proper usage? As no-code solutions become more common, this gap in practical knowledge will inevitably widen. The accessibility those solutions offer is another broadly beneficial aspect of the natural-language abstraction, but it also leaves room for oversights in development and usage.

Another consequence of the growing opaqueness of AI products is a simultaneous demand for increased visibility. Many companies at the event showcased products aimed at improving clarity through visualization, while others acknowledged that this need for visibility was a pressing concern. In some cases, companies leveraged the black-box nature of AI to their advantage, using its opacity to conceal underlying deficiencies, particularly when it came to product security.

Insecurities

Security is often a shocking afterthought in the development of AI products, despite the industry sharing many of the same concerns. Companies working with RAG-based solutions highlighted Fine-Grained Authorization (FGA) and Role-Based Access Control (RBAC) as key focuses, while others using AI agents to call APIs pointed to permissions and scope as their most pressing security concerns. To address these challenges, companies took several different approaches.

1. Implementing FGA or RBAC Internally

This approach, which was the most commonly mentioned in ad-hoc interviews during the event, involves companies handling security frameworks themselves. While this can be an effective way to ensure that sensitive data is protected, it raises an important question: why continue reinventing the wheel? Much like AI tools often build on the same foundational models, it seems inefficient for every company to address the same security challenge individually when much of the industry will ultimately tackle it in similar ways. Standardized solutions could save time and resources and enhance overall security across the board.
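To make the trade-off concrete, here is a rough, entirely hypothetical sketch of what “rolling your own” RBAC looks like inside a RAG pipeline: documents carry the roles allowed to read them, and retrieval results are filtered against the requesting user’s roles before anything reaches the model. Real FGA systems are considerably more granular than this.

```python
# Hypothetical home-grown RBAC filter for a RAG pipeline.
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_roles: set[str] = field(default_factory=set)

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)

def authorized_context(user: User, retrieved: list[Document]) -> list[str]:
    """Keep only documents the user's roles permit; everything else never reaches the LLM."""
    return [doc.text for doc in retrieved if user.roles & doc.allowed_roles]

# Example: an engineer should not see payroll data pulled in by semantic search.
docs = [
    Document("Q3 payroll summary ...", allowed_roles={"finance"}),
    Document("API gateway runbook ...", allowed_roles={"engineering", "finance"}),
]
alice = User("alice", roles={"engineering"})
print(authorized_context(alice, docs))  # -> ['API gateway runbook ...']
```

Every team that writes this kind of filter ends up encoding the same idea; a shared, standardized layer would do it once and do it well.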

2. Offloading Security to Customers

Some companies initially reduce their security overhead by offloading the responsibility to their customers. This might seem like an attractive strategy early in product development, as it enables faster go-to-market timelines. However, this approach is unlikely to scale in the long term. As AI products mature, customers expect products to include robust, built-in security features, not to carry the burden themselves.

3. Implementing Guardrails to Limit AI Functionality

Another approach companies are taking involves creating restrictions on what AI can and cannot do. Some limit AI to read-only access or severely restrict its interaction with APIs. One company took the latter approach even further, disabling the AI’s ability to make API calls altogether. Instead, the AI would leave placeholders where API calls should be made, requiring a human to fill in the gaps. This “human-in-the-loop” method reflects a cautious approach to security, recognizing that while AI is powerful, there are still areas where human oversight is indispensable. It also underscores the ongoing debate around AI autonomy—how much trust can we place in an AI to perform tasks traditionally handled by humans? (Also, should AI developers be counted as legitimate attendees at developer conferences?)
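The placeholder pattern is straightforward to picture in code. The sketch below uses entirely hypothetical names and does not reflect any particular vendor’s implementation: the agent proposes an API call as structured data, and a human approves or rejects it before anything is executed.

```python
# Hypothetical human-in-the-loop guardrail: the agent never calls the API itself.
from dataclasses import dataclass

@dataclass
class PendingCall:
    endpoint: str
    method: str
    payload: dict
    reason: str
    approved: bool = False

def agent_plan_refund(order_id: str, amount: float) -> PendingCall:
    """The agent emits a placeholder describing the intended call instead of making it."""
    return PendingCall(
        endpoint=f"/orders/{order_id}/refund",
        method="POST",
        payload={"amount": amount},
        reason="Customer reported a duplicate charge.",
    )

def human_review(call: PendingCall) -> PendingCall:
    print(f"{call.method} {call.endpoint} {call.payload}\nReason: {call.reason}")
    call.approved = input("Approve this call? [y/N] ").strip().lower() == "y"
    return call

pending = agent_plan_refund("A1234", 19.99)
reviewed = human_review(pending)
if reviewed.approved:
    pass  # only here would the real API call be made, e.g. via an HTTP client
```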

4. Proceeding Without Considering Security

Finally, the most concerning approach—still alarmingly common—entails moving forward without proper security measures in place. As AI continues to be integrated into production environments, neglecting security becomes an even greater risk, particularly with the rise of malicious actors leveraging AI capabilities. Security is most effective when it’s built-in from the start, not added as an afterthought or band-aid to patch vulnerabilities later on.

An Exciting Future

So, AI has some issues, and so do we. The challenges we face today—trust issues, security concerns, and the need for transparency, along with others—are simply part of the process of building a long-term, reliable partnership with this incipient technology. DeveloperWeek 2025 highlighted the significant progress made in the span of just a few years but also underscored the importance of addressing these challenges as we move forward. As we refine our approach to AI development, it will be crucial to balance innovation with responsibility.

Security, in particular, remains one of the most pressing concerns. As AI systems become more capable and deeply integrated into business operations, ensuring secure authentication and access control will be critical to preventing vulnerabilities. Products like Auth0 are already tackling this challenge, offering insights into securing Generative AI applications with stronger identity and authorization frameworks. AI security is most effective when built in from the start, enabling developers and businesses alike to balance AI innovation with robust safeguards.

Explore how you can integrate robust AI security strategies with Auth0: Auth for GenAI.

The future of AI holds tremendous promise, and with continued collaboration and thoughtful problem-solving, the potential for positive, transformative impact is within reach.
