The AIhub coffee corner captures the musings of AI experts over a short conversation. This month we tackle the topic of agentic AI. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), and Michael Littman (Brown University).
Sabine Hauert: Today’s topic is agentic AI. What is it? Why is it taking off? Sanmay, perhaps you could kick off with what you observed at AAMAS [the Autonomous Agents and Multiagent Systems conference]?
Sanmay Das: It was very interesting because clearly there’s suddenly been an enormous interest in what an agent is and in the development of agentic AI. People in the AAMAS community have been thinking about what an agent is for at least three decades. Well, longer actually, but the community itself dates back about three decades in the form of these conferences. One of the very interesting questions was about why everybody is rediscovering the wheel and rewriting these papers about what it means to be an agent, and how we should think about these agents. The way in which AI has progressed, in the sense that large language models (LLMs) are now the dominant paradigm, is quite different from the way in which people have thought about agents in the AAMAS community. Obviously, there’s been a lot of machine learning and reinforcement learning work, but there’s this historical tradition of thinking about reasoning and logic where you can actually have explicit world models. Even when you’re doing game theory, or MDPs, or their variants, you have an explicit world model that allows you to specify the notion of how to encode agency. Whereas I think that’s part of the disconnect now – everything is a little bit black-boxy and statistical. How do you then think about what it means to be an agent? I think in terms of the underlying notion of what it means to be an agent, there’s a lot that can be learnt from what’s been done in the agents community and in philosophy.
I also think that there are some interesting ties to thinking about emergent behaviors and multi-agent simulation. But it’s a little bit of a Wild West out there, and there are all of these papers saying we need to first define what an agent is, which is definitely rediscovering the wheel. So, at AAMAS, there was a lot of discussion of stuff like that, but also questions about what this means in this particular era, because now we suddenly have these really powerful creatures that I think nobody in the AAMAS community saw coming. Fundamentally, we need to adapt what we’ve been doing in the community to take into account that these are different from how we thought intelligent agents would emerge into this more general space where they can play. We need to figure out how we adapt the kinds of things that we’ve learned about negotiation, agent interaction, and agent intention to this world. Rada Mihalcea gave a really interesting keynote talk thinking about the natural language processing (NLP) side of things and the questions there.
Sabine: Do you feel like it was a new community joining the AAMAS community, or the AAMAS community that was changing?
Sanmay: Well, there were people who were coming to AAMAS and seeing that the community has been working on this for a long time. So learning something from that was definitely the vibe that I got. But my guess is, if you go to ICML or NeurIPS, that’s very much not the vibe.
Sarit Kraus: I think they’re wasting their time. I mean, forget the “what’s an agent?” question – there have been many works from the agents community over many years about coordination, collaboration, etc. I heard about one recent paper where they reinvented Contract Nets. Contract Nets were introduced in 1980, and now there’s a paper about it. OK, it’s LLMs that are transferring tasks to one another and signing contracts, but if they just read the old papers, it would save their time and then they could move on to more interesting research questions. Currently, they say with LLM agents that you need to divide the task into sub-agents. My PhD was about building a Diplomacy player, and in my design of the player there were agents that each played a different part of a Diplomacy game – one was a strategic agent, one was a Foreign Minister, etc. And now they’re talking about it again.
Michael Littman: I totally agree with Sanmay and Sarit. The way I think about it is this: this notion of “let’s build agents now that we have LLMs” to me feels a little bit like we have a new programming language like Rust++, or whatever, and we can use it to write programs that we were struggling with before. It’s true that new programming languages can make some things easier, which is great, and LLMs give us a new, powerful way to create AI systems, and that’s also great. But it’s not clear that they solve the challenges that the agents community has been grappling with for so long. So, here’s a concrete example from an article that I read yesterday. Claudius is a version of Claude, and it was agentified to run a small online store. They gave it the ability to talk with people, post Slack messages, order products, set prices on things, and people were actually doing economic exchanges with the system. At the end of the day, it was terrible. Somebody talked it into buying tungsten cubes and selling them in the store. It was just nonsense. The Anthropic people viewed the experiment as a win. They said “oh yeah, there were definitely problems, but they’re totally fixable”. And the fixes, to me, seemed like all they’d have to do is solve the problems that the agents community has been trying to solve for the last couple of decades. That’s all, and then we’ve got it perfect. And it’s not clear to me at all that just making LLMs generically better, or smarter, or better reasoners suddenly makes all these kinds of agents questions trivial, because I don’t think they are. I think they’re hard for a reason, and I think you have to grapple with the hard questions to actually solve these problems. But it’s true that LLMs give us a new ability to create a system that can have a conversation. But then the system’s decision-making is just really, really bad. And so I thought that was super interesting. But we agents researchers still have jobs – that’s the good news from all this.
Sabine: My bread and butter is to design agents, in our case robots, that work together to arrive at desired emergent properties and collective behaviors. From this swarm perspective, I feel that over the past 20 years we have learned a lot of the mechanisms by which you reach consensus, the mechanisms by which you automatically design agent behaviours using machine learning to enable groups to achieve a desired collective task. We know how to make agent behaviours understandable – all that good stuff you want in an engineered system. But up until now, we’ve been profoundly lacking the individual agents’ ability to interact with the world in a way that gives you richness. So in my mind, there’s a really nice interface where the agents are more capable, so they can now do these local interactions that make them useful. But we have this whole overarching way to systematically engineer collectives, and I think that could make the best of both worlds. I don’t know at what point that interface happens. I guess it comes partly from each community going a little bit towards the other side. So from the swarm side, we’re trying vision language models (VLMs), we’re trying to have our robots use LLMs to understand their local world, to communicate with humans and with each other, and to get a collective awareness, at a very local level, of what’s happening. And then we use our swarm paradigms to engineer what they do as a collective, using our past research expertise. I imagine those who are just coming into this discipline have to start from the LLMs and go up. I think it’s part of the process.
Tom Dietterich: I think a lot of it just doesn’t have anything to do with agents at all; you’re writing computer programs. People found that if you try to use a single LLM to do the whole thing, the context gets all messed up and the LLM starts having trouble interpreting it. In fact, these LLMs have a relatively small short-term memory that they can effectively use before they start getting interference among the different things in the buffer. So the engineers break the system into multiple LLM calls and chain them together, and it’s not an agent, it’s just a computer program. I don’t know how many of you have seen this system called DSPy (written by Omar Khattab)? It takes an explicitly software-engineering perspective on things. Basically, you write a type signature for each LLM module that says “here’s what it’s going to take as input, here’s what it’s going to produce as output”, you build your system, and then DSPy automatically tunes all the prompts as a kind of compiler pass to get the system to do the right thing. I want to question whether building systems with LLMs as a software engineering exercise will branch off from the building of multi-agent systems. Because pretty much all of the “agentic systems” are not agents in the sense that we would use the term. They don’t have autonomy any more than a regular computer program does.
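[For readers unfamiliar with DSPy, a minimal sketch of the typed-signature style Tom describes might look like this. The task, field names, and model string are illustrative, and the exact API varies across DSPy versions:]

```python
import dspy

# One typed LLM module in the style Tom describes: the signature declares
# the inputs and outputs, and the framework handles the prompting.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # hypothetical model choice

class Summarize(dspy.Signature):
    """Condense a support ticket into a one-sentence issue summary."""
    ticket: str = dspy.InputField(desc="raw customer ticket text")
    summary: str = dspy.OutputField(desc="one-sentence summary")

summarize = dspy.Predict(Summarize)  # build the module from its signature
result = summarize(ticket="The app crashes whenever I upload a PNG.")
print(result.summary)
```

[An optimizer such as `dspy.BootstrapFewShot` can then tune the prompts behind each module against a small training set – the “compiler pass” Tom refers to.]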
Sabine: I wonder about the anthropomorphization of this, because now that you have different agents, they’re all doing a task or a job, and suddenly you get articles talking about how to replace a whole team with a set of agents. So we’re not replacing individual jobs, we’re now replacing teams, and I wonder if this terminology also doesn’t help.
Sanmay: To be clear, this idea has existed at least since the early 90s, when there were these “softbots” that were basically running Unix commands and figuring out what to do themselves. It’s really no different. What people mean when they’re talking about agents is giving a piece of code the opportunity to run its own stuff, and to be able to do that in service of some kind of a goal.
I think about this in terms of economic agents, because that’s what I grew up (AKA, did my PhD) thinking about. And, do I want an agent? I could think about writing an agent that manages my (non-existent) stock portfolio. If I had enough money to have a stock portfolio, I would think about writing an agent that manages that portfolio, and that’s a reasonable notion of having autonomy, right? It has some goal, which I set, and then it goes about making decisions. If you think about the sensor-actuator framework, its actuator is that it can make trades, and it can take money from my bank account in order to do so. So I think that there’s something in getting back to the basic question of “how does this agent act on the world?” and then what are the percepts that it’s receiving?
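[As a rough illustration of that sensor-actuator framing, a toy version of such a portfolio agent might look like the following sketch. The helper functions, the 5%-dip rule, and the numbers are all hypothetical:]

```python
# Toy rendering of the sense-decide-act loop Sanmay describes: the owner
# sets the goal, percepts are market prices, and the actuator places
# trades with the owner's money.
def portfolio_agent(get_prices, place_trade, cash):
    last_prices = {}
    while True:
        prices = get_prices()                  # percept: observe the market
        for ticker, price in prices.items():
            prev = last_prices.get(ticker)
            if prev and price < 0.95 * prev and cash >= price:
                place_trade(ticker, "buy", 1)  # actuator: act on the world
                cash -= price                  # spending the owner's money
        last_prices = prices
```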
I completely agree with what you were saying earlier about this question of whether the LLMs enable interactions to happen in different ways. If you look at pre-LLM days, with these agents that were doing pricing, there’s this hilarious story of how some old biology textbook ended up costing $17 million on Amazon, because there were these two bots doing the pricing of these books at two different used book stores. One of them was a slightly higher-rated store than the other, so it would take whatever price the lower-rated store had and push it up by 10%. Then the lower-rated store was an undercutter, and it would take the current highest price and go to 99% of that price. But this just led to a spiral where suddenly that book cost $17 million. This is exactly the kind of thing that’s going to happen in this world. But the thing that I’m actually somewhat worried about, and anthropomorphising, is how these agents are going to decide on their goals. There’s an opportunity for really bad errors to come out of programming that wouldn’t be as bad in a more constrained situation.
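[The arithmetic of that spiral is easy to reproduce: each full round multiplies the price by 1.10 × 0.99 = 1.089, so it grows exponentially. A toy simulation, with a made-up starting price, shows how quickly that compounds:]

```python
# Toy reconstruction of the repricing loop in the anecdote. The 10% markup
# and 99% undercut come from the story; the starting price is made up.
price = 35.0   # hypothetical starting price of the used textbook
rounds = 0
while price < 17_000_000:
    price *= 1.10   # higher-rated store: competitor's price plus 10%
    price *= 0.99   # lower-rated store: undercut to 99% of that
    rounds += 1
print(f"${price:,.2f} after {rounds} repricing rounds")  # roughly 154 rounds
```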
Tom: In the reinforcement learning literature, of course, there’s all this discussion about reward hacking and so on, but now we imagine two agents interacting with each other and effectively hacking each other’s rewards, so the whole dynamics blows up – people are just not prepared.
Sabine: On the breakdown of the problem that Tom mentioned, I think there’s perhaps a real benefit to having these agents that are narrower and that, as a result, are perhaps more verifiable at the individual level. They maybe have clearer goals, and they might be more green because we might be able to constrain what area they operate in. And then in the robotics world, we’ve been looking at collaborative awareness, where narrow, task-specific agents are aware of other agents and together have some awareness of what they’re meant to be doing overall. And it’s quite anti-AGI in the sense that you have lots of narrow agents again. So part of me is wondering: are we going back to heterogeneous, task-specific agents, and the AGI is collective, perhaps? And so this new wave, maybe it’s anti-AGI – that would be interesting!
Tom: Well, it’s almost the only way we can hope to prove the correctness of the system – to have each part narrow enough that we can actually reason about it. That’s an interesting paradox that I found missing from Stuart Russell’s “What if we succeed?” chapter in his book: if we succeed in building a broad-spectrum agent, how are we going to test it?
It does seem like it would be great to have some people from the agents community speak at the machine learning conferences and try to do some diplomatic outreach. Or maybe run some workshops at those conferences.
Sarit: I was always interested in human-agent interaction, and given that LLMs have solved the language challenge for me, I’m very excited. But the other problem that has been mentioned is still here – you need to integrate strategies and decision-making. So my model is that you have LLM agents which have tools that are all kinds of algorithms that we developed and implemented, and there need to be many of them. But the fact that somebody solved our natural language interaction – I think this is really, really great, and good for the agents community as well as for the computer science community in general.
Sabine: And good for the humans. It’s a good point – the humans are agents as well in these systems.
AIhub
is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.