AI Agents: The future of AI?

AI Series: Mid-Series Special 3

print("AI Agents: The future of AI?")

Note: AI agents here mean AI models (mostly LLMs) using an agentic workflow, not just traditional AI assistants.

A couple of weeks back I listened to Professor Andrew Ng. If you don't know Ng, he is like the Lil Wayne of AI 😁, a pacesetter. Ng gave a lecture about agentic workflows and how AI agents are going to revolutionize AI model results.

Prior to that, Devin AI had launched as probably the first platform offering an agentic workflow for software development. The big boys, Google and Microsoft, have also been experimenting with agentic workflows for AI models for some time now.

With all this happening, I thought: could this be the future of AI? Let's find out together 👽.

A textbook definition of an AI agent is this: An AI agent is a computer program that can act autonomously in an environment to achieve specific goals. It can perceive the environment, reason about it, and take action to achieve its goals.

Now, in the context of this chapter, AI agents are simply AI assistants that solve problems using an iterative approach. This means the AI agent can solve a problem by going over it repeatedly until it finds the solution.

Consider the image below:

This shows how a traditional LLM (Large Language Model) works compared to an agentic workflow.

In the first, non-agentic instance, also called zero-shot, the LLM takes a prompt (instruction) and produces a result quickly, in a single pass. A good example of this is ChatGPT: you can give it a prompt to write an essay, as in the illustration above, and it gives you a written essay as requested.

In an agentic workflow, by contrast, the LLM takes multiple prompts and spends longer on the task, working iteratively, one instruction at a time. As we can see in the illustration above, you can give the model multiple prompts: write an essay, do research, review the draft, and so on. So in an agentic workflow the model can research and revise its own work iteratively. That is powerful!

Let's put that in context. Using an agentic workflow, a model can be given a problem and a target output; the model solves the problem and cross-checks whether it's right. If not, it does research, learns, and revises its solution, and it repeats this until it gets the target result 🥶.
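To make that loop concrete, here's a minimal sketch in Python. Note that llm and check are hypothetical placeholders, not a real API: plug in any chat-completion call and any evaluator (unit tests, a grader, even another model) and the generate-check-revise cycle stays the same.

def llm(prompt: str) -> str:
    """Stand-in for a call to an LLM chat-completion API."""
    raise NotImplementedError("plug in your model here")

def check(solution: str) -> tuple[bool, str]:
    """Stand-in for an evaluator: returns (passed, feedback)."""
    raise NotImplementedError("plug in your checker here")

def solve(problem: str, max_iters: int = 5) -> str:
    # Generate a first draft, then iterate: check, revise, repeat.
    draft = llm(f"Solve this problem:\n{problem}")
    for _ in range(max_iters):
        passed, feedback = check(draft)
        if passed:
            return draft  # target result reached, stop iterating
        # Feed the feedback back in so the model can revise its draft.
        draft = llm(
            f"Your previous attempt:\n{draft}\n"
            f"It failed with this feedback:\n{feedback}\n"
            "Research the issue and revise your solution."
        )
    return draft  # best effort after max_iters rounds

Zero-shot, by comparison, is just a single llm(prompt) call: no checking, no revision.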

Consider the chart below:

The chart above shows the accuracy of the regular zero-shot LLMs GPT-3.5 and its higher version GPT-4, compared to models using an agentic workflow. We can see that GPT-3.5 with an agentic workflow, for example with Intervenor (which conducts an interactive code-repair process), gives over 75% accuracy, compared to regular GPT-3.5's 48%, and even outperforms GPT-4's 67%.

AI that can research and teach itself new things: does that sound familiar? Yes, it sounds like AGI (Artificial General Intelligence) 👽.

That's massive, right? So what bad side could this have?

Ethics and Alignment

Recently, Ilya Sutskever and Jan Leike, two prominent researchers on OpenAI's safety and alignment team, resigned, citing concerns over OpenAI's commitment to safety.

Researchers at Google DeepMind have also raised the need for alignment of AI technologies. So what does that mean?

The DeepMind researchers propose an updated, four-way concept of alignment for AI agents that considers the:

  1. AI assistant itself

  2. User

  3. Developer

  4. Society

An AI assistant is misaligned when it disproportionately favors one of these participants over another.

For example, an AI could be misaligned if it pursues its own goals at the expense of the user or society, or if it's designed to disproportionately benefit the company that makes it.

AI Agent Platforms

There are many companies building AI with agentic workflows right now; the image below shows some of them and their industries.

Eliza Effect

Let me end this piece with a little story. In 1966, Joseph Weizenbaum, an MIT professor, created a chatbot called Eliza, widely considered the first chatbot. Joseph got people to chat with Eliza... you know, by text, and most of them thought they were chatting with a real person. Eliza asked users open-ended questions modeled on a therapist's prompts, and they began avidly sharing their feelings.
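Under the hood, Eliza had no understanding at all; it matched keyword patterns and reflected the user's own words back as a question. Here's a toy reconstruction of that trick in Python (the rules are illustrative, not Weizenbaum's original script):

import re

# Match a keyword pattern, then reflect the user's words back
# as an open-ended, therapist-style question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Swap first and second person so the reflection reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def eliza(utterance: str) -> str:
    for pattern, response in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return response.format(reflect(match.group(1)))
    return "Please, go on."  # default open-ended prompt

print(eliza("I feel anxious about my job"))
# -> Why do you feel anxious about your job?

A handful of rules like these was enough to make people feel heard.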

The image below shows what Eliza looked like in 1966.

What do you think about the responses? I think they're pretty impressive 🥶.

Now, Joseph and his family had fled Germany because... you know, Hitler 😬. He spent the rest of his life sounding alarms over the dangers of AI in the hands of powerful companies and governments.

The chatbot Eliza gave rise to what is now known as the Eliza Effect: our tendency to read human-like understanding into a computer program's responses. The effect plugs into potent myths and images that we've built around AI itself, from its name to its representations in culture to the grand promises businesses are making for it.

Some researchers still believe the Eliza Effect affects our judgment of AI today. "Artificial intelligence" sounds so much grander than "large language model" or "machine-learning-trained algorithm." The label deliberately emphasizes open-ended possibilities and downplays limits. It maintains the mystery and magic.

This story shows that even if AI isn't as good as it seems, we still need to keep it in check with alignment.

Materials

  1. You can watch Andrew Ng's lecture here.

  2. Check out Devin AI here.

  3. Read about the Google DeepMind researchers' report here.

The end

How was this chapter? What do you think of AI agents and how powerful they can be? How about the ethical problems they pose?

This was a good one 🤗

We'll continue with our next chapter next week, see ya! 👽

⬅️ Previous Chapter

Next Chapter ➡️
