A Glimpse into the Future of the Tech Industry


Agentic AI has become the holy grail of the tech industry, largely outside of the view of the general public. OpenAI defines this idea as “AI systems that can pursue complex goals with limited direct supervision.” Essentially, we’re talking about artificial agents that can act on their own toward achieving goals.

Put simply, it’s the ideal AI personal assistant that can keep track of all your daily tasks for you, plan around changes in your calendar, and understand abstract requests like “prepare my meal plan and order groceries for the next month.” 

However, as appealing as it may sound, agentic AI raises a number of practical, ethical, and even moral questions. Let’s explore these artificial agents that are increasingly taking over the internet and our lives.

Agentic AI Explained

Agentic AI has a goal-driven and proactive nature. In short, it aims to automate a huge part of knowledge work in just a few clicks. Jeremy Nixon described its practical difference from traditional AI as a ‘chaining’ capability – the machine learning model takes a whole sequence of actions in response to a single request.

For example, when you ask an AI agent to create a website for you, it needs to immediately generate a series of small goals and begin executing them:

  • Come up with a structure of the website and its various screens.
  • Write headlines and body content for whatever the website does.
  • Generate the HTML code and the backend in a chosen programming language.
  • Design the visuals, and fill the page with graphics, photos, etc.
  • Test the website on different devices and make sure it’s bug-free.

For an ideal agentic AI, all these actions should be performed in one request. Of course, there’s a lot of complexity involved – when you want to design a website, you probably expect back-and-forth, confirmation of visuals, copy, and so on. And that’s the sort of thing you can already do with the many generative AIs out there – Google’s Gemini, OpenAI’s ChatGPT, Anthropic’s Claude, etc.
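
To make the ‘chaining’ idea concrete, here is a minimal plan-and-execute loop in Python. It is only a sketch: the call_llm helper is an assumed stand-in for whichever model API you use, not a real library function.

```python
# Minimal, hypothetical plan-and-execute loop for an agentic AI.
# call_llm is a placeholder for a real language-model call, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion request to your model provider."""
    raise NotImplementedError("connect this to your model provider")

def plan(goal: str) -> list[str]:
    """Ask the model to break one abstract goal into small, ordered tasks."""
    response = call_llm(
        "Break the goal below into a short list of concrete tasks, one per line.\n"
        f"Goal: {goal}"
    )
    return [line.strip() for line in response.splitlines() if line.strip()]

def execute(task: str) -> str:
    """Ask the model (or a tool) to carry out a single task and report back."""
    return call_llm(f"Complete this task and describe the result: {task}")

def run_agent(goal: str) -> list[str]:
    """The 'chaining' behaviour: one request fans out into many executed steps."""
    return [execute(task) for task in plan(goal)]

# Example: run_agent("Create a small marketing website for a bakery")
```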

So, we’re simply discussing more advanced versions of generative AIs that already exist. An agentic AI would be just like another colleague you communicate with in Slack or Teams – it specializes in something and can go and accomplish complex tasks based on abstract instructions, then report back to you about the results.

Breaking Down Agentic AI

There are a few key differences between agentic AI and traditional systems we’re used to seeing in the tech industry. They are:

  • Task Chaining. It’s capable of taking complex abstract instructions, breaking them down into individual tasks, and then executing them.
  • Advanced Communication. It can process language, confirm expectations, discuss tasks, and have a degree of reasoning in decision-making.
  • Adaptability. Older AIs relied on a series of predefined tasks. Agentic AI, on the other hand, can change its behavior based on the situation.

Agentic AIs are based on large language models and access to massive amounts of data that help them understand connections and differences between concepts or even real-world objects. Most remarkably, these systems can extrapolate new information based on existing knowledge.
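
The adaptability point is worth illustrating. Below is a rough sketch, again assuming a placeholder call_llm helper rather than any specific vendor API: where a predefined pipeline would simply stop, the agent asks for a revised plan whenever a step fails.

```python
# A hypothetical sketch of adaptability: the agent re-plans around failures
# instead of following a fixed script. call_llm is a placeholder helper.

def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model call."""
    raise NotImplementedError("connect this to your model provider")

def execute_step(step: str) -> tuple[bool, str]:
    """Try one step; report success plus a short description of what happened."""
    try:
        return True, call_llm(f"Carry out this step and summarise the result: {step}")
    except Exception as error:  # tool failure, timeout, bad input, etc.
        return False, str(error)

def run_adaptively(goal: str, max_replans: int = 3) -> bool:
    steps = call_llm(f"List the steps needed to achieve: {goal}").splitlines()
    for _ in range(max_replans):
        failure = None
        for step in steps:
            ok, detail = execute_step(step)
            if not ok:
                failure = (step, detail)
                break
        if failure is None:
            return True  # every step succeeded
        # A predefined pipeline would stop here; an agent re-plans instead.
        steps = call_llm(
            f"Step '{failure[0]}' failed because: {failure[1]}. "
            f"Suggest an alternative plan for the goal: {goal}"
        ).splitlines()
    return False
```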

Risks of Agentic AI

This level of autonomy has enormous benefits for businesses and consumers; however, it also comes with unique challenges that need to be addressed:

Bias

As exciting as it may be, agentic AI is trained on data from various internet sources. As the flood of hilariously bad answers from Google’s recent integration of AI into its search engine shows, these systems are still mostly trained on the open internet – and the internet includes Reddit, X, and The Onion. There’s a strong need to curate the resources AI interacts with and learns from.

Hallucination

Generative AI is prone to making things up, and agentic AI is based on generative AI capabilities. This means that it, too, will be susceptible to hallucinations and odd behaviors. It’ll likely make up answers, fill gaps with randomly generated nonsense, or even learn to ‘lie’ about having done something when unable to properly interpret instructions.  

Obscurity

The nature of complex AI models makes their decision-making hard to understand. They make observations and inferences based on millions of parameters that simply can’t be tracked to their source. Average people struggle to understand agentic AI’s ‘mind,’ so it’s hard to trust it with complex tasks. 

These issues are foundational to any generative AI, whether it deals with language, images, music, or all of the above. However, the industry is swiftly coming up with ways of addressing these challenges:

Learning to Leverage Agentic AI

The role of guiding AI agents and curating their access to data, tasks, and objectives ultimately falls on humans. We’ve already come up with a few best practices in the field:

Transparency

Agentic AI has the strong advantage of communication, so we have to leverage it accordingly – ask for detailed explanations for why specific actions were taken and teach the model to perform better. Future AI systems will likely focus even more on clear explanations and descriptions of various steps involved in the process.
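
One practical way to do this, sketched below with an assumed call_llm placeholder and an illustrative JSON format, is to ask the model for a short rationale alongside every action and keep both in an audit trail a human can review.

```python
# A small sketch of the transparency practice: every action is paired with a
# model-written rationale and appended to an audit trail for human review.
# call_llm and the JSON schema are assumptions for illustration only.

import json

audit_log: list[dict] = []

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real language-model call")

def act_with_rationale(task: str) -> str:
    raw = call_llm(
        "Return JSON with keys 'rationale' and 'result' for this task: " + task
    )
    record = json.loads(raw)
    audit_log.append({"task": task, **record})
    return record["result"]
```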

Oversight

Ultimately, humans need to curate the access of agentic AI to tasks and provide instructions. This doesn’t just mean engineers who create AI and work close to the metal, but even everyday users – it falls on us to take over whenever AI agents get stuck and see them through a responsible learning process.
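
A minimal sketch of what that hand-off can look like in code: the agent’s proposed actions pass through a simple risk check, and anything risky waits for a person. The keyword list and helper names are illustrative assumptions, not an established workflow.

```python
# A minimal human-in-the-loop gate: actions the agent proposes are checked
# against a simple risk heuristic, and risky ones wait for a person.
# The keywords and print statements are illustrative placeholders.

RISKY_KEYWORDS = ("delete", "purchase", "send email", "deploy")

def needs_approval(action: str) -> bool:
    return any(word in action.lower() for word in RISKY_KEYWORDS)

def supervised_execute(action: str) -> None:
    if needs_approval(action):
        answer = input(f"Agent wants to: {action!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Skipped by the human reviewer.")
            return
    print(f"Executing: {action}")  # stand-in for the real tool call

# supervised_execute("send email to the whole client list")
```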

Control

We can hardwire parameters and boundaries into how agentic AI ‘thinks’ to help guide its activities. In other words, if the agent is intended for a specific task, such as working as a designer or an assistant, we set explicit boundaries that make it better at that task by keeping it from venturing outside the expected role. This helps both quality control and communication.
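
In code, such boundaries can be as simple as an allowlist of tools per role, as in the hypothetical sketch below – the role and tool names are made up for illustration.

```python
# A sketch of hardwired boundaries: each agent role maps to an explicit
# allowlist of tools, and anything outside that list is refused outright.
# Role names and tool names are illustrative, not from a real system.

ALLOWED_TOOLS = {
    "designer_agent": {"generate_layout", "pick_palette", "export_assets"},
    "assistant_agent": {"read_calendar", "draft_email", "set_reminder"},
}

def invoke(role: str, tool: str, **kwargs) -> None:
    if tool not in ALLOWED_TOOLS.get(role, set()):
        raise PermissionError(f"{role} is not allowed to call {tool}")
    print(f"{role} -> {tool}({kwargs})")  # stand-in for the real dispatch

# invoke("designer_agent", "draft_email")  # would raise PermissionError
```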

Data

We can improve the quality of the data our artificial agents take in as part of their foundation models. This means properly annotated text or computer vision datasets created by domain experts through providers such as Keymakr. We want our medical AI curated by medical experts, our agricultural data curated by agronomists, and so on. This sounds like common sense; however, there’s currently no real industry standard, as the field is still evolving.
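
What ‘expert-curated’ can mean in practice is sketched below – a hypothetical record schema where each sample carries its annotation plus review status, so unreviewed samples can be filtered out before training. The field names are assumptions for illustration.

```python
# A rough sketch of expert-curated training data: each record carries its
# annotation plus who reviewed it, so unreviewed samples can be filtered out.
# The schema and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AnnotatedSample:
    text: str
    label: str
    annotator: str
    reviewed_by_domain_expert: bool

def training_ready(samples: list[AnnotatedSample]) -> list[AnnotatedSample]:
    """Keep only the samples that passed expert review."""
    return [s for s in samples if s.reviewed_by_domain_expert]

data = [
    AnnotatedSample("Patient reports chest pain", "cardiology", "ann_01", True),
    AnnotatedSample("Leaf shows rust spots", "crop_disease", "ann_02", False),
]
print(len(training_ready(data)))  # -> 1
```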

Overall, as autonomous as agentic AI can be, it still requires a high degree of human involvement to get things done in the real world.

The Internet of Machines

No doubt you’ve seen the endless stream of news about the infestation of bots on X and other social media platforms. While troubling and often annoying, we are witnessing early examples of agentic AI operating on, so far, rather simple instructions – try and sell something, build a following, or just spam specific topics in a group.

It’s not a flattering portrayal; however, it does offer a glimpse into a potential future where artificial agents carve out their own parts of the internet and operate there on behalf of the humans giving them tasks. Doing our shopping, setting appointments, sending e-mails to agentic AIs employed by other people or companies – it’s already happening, but it will no doubt feel like a sudden change when it grows in scale.

The Industry-Shaping Potential of Agentic AI

To unlock agentic AI’s full potential, we must design it to share our values and goals. Connecting AI with what humans care about is critical. It could lead to a future where individuals and businesses can take advantage of dozens of ‘independent contractors’ working on our behalf.

Getting there means exploring new ways of thinking. Here are some effects it may have:

  • Autonomous Coding. Code that writes its own code has been a pipe dream for ages, but agentic AI makes it possible. We can even create agents that supervise code and agents that plan architecture for code that needs to be written.
  • Autonomous HR. Artificial agents can assist in numerous tasks from managing hiring and interviews to payroll, updating employee databases, and so on. 
  • Autonomous Operations. From sales and marketing to business analytics, agentic AI has the ability to take over entire processes and effectively unify reports from multiple verticals.
  • Autonomous Maintenance. Deploying artificial agents to proactively monitor sites and order maintenance activities can be a game-changer in construction, maritime, and many other industries.

In short, training artificial agents for their industry may be one of the largest value-adds startups can pursue in the near future.

Alignment Research

The quest for Artificial General Intelligence (AGI) aims to create AI as smart as humans. Even with the leaps in AI we’ve seen in recent years, true AGI needs big steps in both neuroscience and computer science. Collaborative and interdisciplinary research is required to ensure that agentic AI doesn’t cross moral boundaries – and this is doubly true for a possible AGI future.

This approach involves bringing many experts together. It spans AI, ethics, philosophy, economics, governance, and social sciences. Through their teamwork, we can create AI that respects human values and our well-being.

Alignment research is especially important because of…

Artificial Agents in the Physical World

Generative AI is already joining us in the real, physical world. The reveal of GPT-4 with vision (GPT-4V) and advances in Google’s Vision AI are just two examples of this trend.

Combining Agentic AI and Computer Vision

AI Agents can learn to ‘see’ and understand the real world with the help of advanced computer vision algorithms. Moreover, advanced generative AI models already provide them with communication skills and the ability to independently perform tasks. So, it’s not a big leap to imagine robots and autonomous devices capable of interacting with us in physical space.
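
A hypothetical sketch of that coupling: detections from a vision model become plain-language context the planning side can reason over. Both detect_objects and call_llm are assumed placeholders here rather than real library calls.

```python
# Coupling computer vision with an agent: detections become plain-language
# scene context that the planner turns into physical actions.
# detect_objects and call_llm are placeholders, not real library functions.

def detect_objects(image_path: str) -> list[dict]:
    """Stand-in for a real detector returning labels and bounding boxes."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    raise NotImplementedError

def plan_from_scene(image_path: str, goal: str) -> str:
    detections = detect_objects(image_path)
    scene = ", ".join(f"{d['label']} at {d['box']}" for d in detections)
    return call_llm(
        f"The camera sees: {scene}. Given the goal '{goal}', "
        "list the physical actions to take next."
    )
```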

However, going beyond the internet is quite a difficult task.

The Struggle with Real-World Data

Sophisticated computer vision systems involve some of the most time-consuming and complex work in the industry. Teaching machines to understand 3D objects takes enormous amounts of data. Even the best systems in autonomous vehicles, surveillance, and other industries today are a far cry from things we as humans find simple and intuitive.

Access to accurate data is going to be crucial to the development of agentic AI beyond the narrow confines of online tasks. This involves hundreds of people working behind the scenes to create accurate datasets for models to process. While tools like Keylabs aim to streamline this process, we are still far from automatically annotating large amounts of training data at the speed that true autonomous AI would need.

So, we’re left with data collection, creation, and annotation – turning our collective human intelligence into knowledge for AI to absorb. 

However, agentic AI comes with some caveats:

  • It can infer a lot more from a lot less data thanks to its autonomy. This means we have to focus on small but incredibly accurate datasets that can help it get started on tasks.
  • It’s capable of communication and can inform us about gaps in data that stop it from distinguishing objects or carrying out tasks.
  • It’s able to set priorities and adapt, so purpose-built systems are less likely to shut down entirely due to a few missing values.
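
To make the second caveat above concrete, here is a tiny hypothetical check in which the agent inspects its training labels and reports the categories it cannot yet distinguish, rather than failing silently. The threshold and labels are illustrative assumptions.

```python
# An agent-side data-gap report: flag classes with too few labelled examples
# so humans know what to annotate next. Numbers and labels are illustrative.

from collections import Counter

def report_label_gaps(labels: list[str], minimum: int = 50) -> list[str]:
    """Return the classes the agent has too few examples of to rely on."""
    counts = Counter(labels)
    return [label for label, n in counts.items() if n < minimum]

# report_label_gaps(["forklift"] * 3 + ["pallet"] * 200)  ->  ["forklift"]
```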

All this puts an even larger emphasis on human supervision and makes it the job of entrepreneurs to create domain-specific AI and curate its access to training datasets. There’s a demand for further improving the human-in-the-loop approach for annotating data – especially once our agentic AI becomes sophisticated enough to create its own training models.

It’s Closer Than You Probably Think

We tend to imagine that the future will be much like our present, which creates a massive blind spot. Changes in the market, politics, economy, and, of course, tech then seem to hit all at once as millions of humans adapt to new realities. 

ChatGPT’s growth to 1 million users over the course of just five days in 2022 is one such shift. Hundreds of millions of people already use AI chatbots – and these chatbots are increasingly becoming autonomous agents.

Agentic AI stands out because it can emulate thought, adapt, and even improve on its own. This gives it a power unlike anything we’ve created before in history. We have to start looking into the near future and get ahead of the trend. Soon enough, most of us will find ourselves living in the world of artificial agents populating the internet and, eventually, our physical space.

