Intelligent agents can address both our personal and professional needs. We have become so accustomed to using personal agents for daily matters that day-to-day living seems inconceivable without them.
For example, Google Search is a personal agent trained to comb through terabytes of information for an answer to your query. It saves you a remarkable amount of time by sorting through links and ranking them according to their relevance to the query.
Another example is Siri, which might tell us last night's soccer scores, crawl the web for pictures of hipster jeans, remind us about today's meetings or even make a call on our behalf.
The defining pattern of the personal agent is reactive: it initiates a task in response to a human request, which matches our current expectations of what a personal agent should be.
However, the same expectations may not apply to professional agents. Why? Once we step beyond the personal realm into a professional environment, the complexity of the interactions grows. Instead of a single human-to-agent relationship (the personal sphere), there is a network of entangled human-to-human and human-to-agent interactions.
As a result, building a professional agent flexible and intelligent enough to operate seamlessly and independently once deployed into that network is a challenging design and technological puzzle.
When it comes to professional agents, x.ai is a prime example of artificial intelligence at the cutting edge. The premise is simple yet extremely effective. We've all felt a sense of defeat and helplessness while trying to schedule a time to meet a friend, potential client or investor.
The process of scheduling is often messy, chaotic and stressful, mainly because we're buried under a mass of choices every day. We often find ourselves too indecisive to commit to a time.
In such delicate, often stressful day-to-day situations, having an intelligent agent at our side to organize our calendar feels like a godsend.
The other brilliant aspect of x.ai is the environment in which the solution operates. An x.ai agent, in my case Amy, is simply CC'd into an email conversation. Because Amy already knows my calendar and preferences, she can jump in and manage the conversation from that point onwards, working with my prospect or lead to settle on a date that suits us both.
For Amy to manage this task successfully, she first needs to fully understand my preferences as her client: whether I have any standing preferences, and whether I'm applying any unique constraints to this particular meeting.
Second, she must understand the guest's preferences and any particular wishes she should take into consideration.
Third, she removes me from the conversation and follows up with the lead to understand their intent and preferences, carrying the conversation towards a conclusion.
Fourth, she assembles an invite based on all the information gathered in the previous steps.
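The four steps above can be sketched roughly as follows. To be clear, every function and field name here is an illustrative assumption of mine, not x.ai's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MeetingRequest:
    """State the agent accumulates while negotiating one meeting."""
    host_constraints: dict = field(default_factory=dict)
    guest_constraints: dict = field(default_factory=dict)
    agreed_slot: Optional[str] = None

def gather_host_constraints(request, standing_prefs, meeting_note):
    # Step 1: combine the host's standing preferences with any
    # one-off constraints attached to this particular meeting.
    request.host_constraints = {**standing_prefs, **meeting_note}

def gather_guest_constraints(request, guest_reply):
    # Step 2: record the guest's wishes parsed from their reply.
    request.guest_constraints = guest_reply

def negotiate(request, proposals):
    # Step 3: with the host out of the loop, iterate over proposed
    # slots until one satisfies both sides' constraints.
    for slot in proposals:
        if (slot in request.host_constraints.get("free", [])
                and slot in request.guest_constraints.get("free", [])):
            request.agreed_slot = slot
            return slot
    return None

def assemble_invite(request):
    # Step 4: build the calendar invite from everything gathered.
    if request.agreed_slot is None:
        return None
    return {"when": request.agreed_slot, "attendees": ["host", "guest"]}
```

For instance, if the host is free Wednesday morning and Thursday afternoon but the guest only Thursday afternoon, `negotiate` settles on the Thursday slot and `assemble_invite` wraps it into an invite.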
For Amy to perform these tasks, she needs to understand the conversation and the intent beneath it using Natural Language Processing (NLP). Then, having understood it, she must respond with inputs that add value and carry the conversation towards a conclusion using Natural Language Generation (NLG).
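As a toy illustration of that NLP-to-NLG loop, here is a keyword-based sketch. A real scheduling agent would use trained language models rather than regexes, and none of these names come from x.ai:

```python
import re

def parse_intent(message):
    """NLP step: map a free-text reply to an intent plus any slots."""
    text = message.lower()
    day = re.search(r"\b(monday|tuesday|wednesday|thursday|friday)\b", text)
    if any(w in text for w in ("yes", "works", "sounds good")):
        return {"intent": "accept", "day": day.group(1) if day else None}
    if any(w in text for w in ("can't", "cannot", "no ")):
        return {"intent": "decline", "day": day.group(1) if day else None}
    return {"intent": "unclear", "day": None}

def generate_reply(parsed):
    """NLG step: turn the structured intent back into conversational text."""
    if parsed["intent"] == "accept" and parsed["day"]:
        return f"Great, I'll send an invite for {parsed['day'].title()}."
    if parsed["intent"] == "decline":
        return "No problem. Would another day this week work?"
    return "Just to confirm, which day suits you best?"
```

Crude rules like these are precisely what break down on ambiguous human replies, as the next paragraphs show.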
Sounds straightforward? Often it isn't. Human communication follows a zigzag, chaotic path, and there are layers behind a person's intent that an artificial agent may not be able to decode. A reply like "Yes, sure, Wednesday might work. Can I get back to you by Tuesday afternoon?" looks like a riddle the creators of The Da Vinci Code might entertain themselves with in their spare time.
No intelligent agent will cover every case, so it is crucial to design fallbacks the system can rely on. Humans do this naturally: saying "I don't know" signals that we are unsure about the context of the conversation or the question at hand. Apple's Siri is a vivid example of an agent lacking an elegant fallback plan, defaulting to a web search for something other than what I asked for.
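A minimal sketch of that fallback idea, assuming a hypothetical confidence-scored intent classifier (the classifier below is a stand-in, not any real model): when confidence drops below a threshold, the agent stops guessing and asks for clarification instead of doing something generic and unhelpful.

```python
CONFIDENCE_THRESHOLD = 0.75  # below this, the agent admits uncertainty

def classify(message):
    """Stand-in intent classifier returning (intent, confidence)."""
    if "wednesday" in message.lower():
        return ("propose_time", 0.9)
    return ("unknown", 0.3)

def respond(message):
    intent, confidence = classify(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # Graceful fallback: the agent's equivalent of "I don't know",
        # rather than a generic web search or a wrong guess.
        return "I'm not sure I follow. Could you rephrase that?"
    return f"Handling intent: {intent}"
```

The design choice worth noting is that the fallback is a first-class branch of the system, not an afterthought: every response path passes through the confidence gate.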
It's too early to speculate about how far intelligent agents will disrupt our workflows, but we're already seeing the first shoots break through the soil. One thing is certain: the advent of artificial intelligence promises a landscape in which the definition of work is constantly revisited. What I'm sure of right now is that the dynamic between human and artificial intelligence will keep evolving.