The process begins with scaffolding the autonomous agents using Autogen, a tool that simplifies the creation and orchestration of these digital personas. We can install the autogen PyPI package using pip:
pip install pyautogen
Format the output (optional) - This ensures word wrap for readability depending on your IDE, for example when using Google Colab to run your notebook for this exercise.
from IPython.display import HTML, display

def set_css():
    display(HTML('''
    <style>
        pre {
            white-space: pre-wrap;
        }
    </style>
    '''))

get_ipython().events.register('pre_run_cell', set_css)
Now we go ahead and get the environment set up by importing the packages and configuring Autogen, including our LLM (Large Language Model) and API keys. You can use other local LLMs via services that are backwards compatible with the OpenAI REST service; LocalAI is one such service that can act as a gateway to your locally running open-source LLMs (a sketch of such a configuration follows the setup code below).
I have tested this on both GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4-turbo-preview) from OpenAI. You can expect deeper responses from GPT-4, but at the cost of longer query times.
import json
import os
import autogen
from autogen import GroupChat, Agent
from typing import Optional

# Setup LLM model and API keys
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'gpt-3.5-turbo',
        'api_key': '<<Put your OpenAI key here>>',
    }
])
# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": {
            "gpt-3.5-turbo"
        }
    }
)
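If you wanted to point the same setup at a locally hosted model instead of OpenAI, a minimal sketch could look like the following. The endpoint URL and model name are assumptions for illustration only; use whatever your LocalAI (or other OpenAI-compatible) gateway actually exposes.

# Hypothetical sketch: routing the same setup to a local OpenAI-compatible gateway.
# The endpoint URL and model name below are illustrative assumptions, not fixed values.
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'mistral-7b-instruct',          # whichever model your gateway serves
        'api_key': 'not-needed-for-local',       # many local gateways ignore the key
        'base_url': 'http://localhost:8080/v1',  # OpenAI-compatible endpoint exposed by LocalAI
    }
])
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")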
We then need to configure our LLM instance, which we will tie to each of the agents. This allows us, if required, to generate unique LLM configurations per agent, i.e. if we wanted to use different models for different agents (a hedged sketch of a per-agent configuration follows the code below).
# Define the LLM configuration settings
llm_config = {
    # Seed for consistent output, used for testing. Remove in production.
    # "seed": 42,
    # Setting cache_seed = None ensures caching is disabled
    "cache_seed": None,
    "temperature": 0.5,
    "config_list": config_list,
}
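As a quick illustrative sketch (not part of the original walkthrough), you could build a second configuration for any agent you want running on a different model, assuming a matching entry has been added to your OAI_CONFIG_LIST:

# Hypothetical per-agent configuration: only works if a gpt-4-turbo-preview entry
# exists in OAI_CONFIG_LIST above; the model name here is an assumption.
gpt4_config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": {"gpt-4-turbo-preview"}},
)
llm_config_gpt4 = {
    "cache_seed": None,
    "temperature": 0.5,
    "config_list": gpt4_config_list,
}
# An individual agent could then be constructed with llm_config=llm_config_gpt4.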
Defining our researcher - This is the persona that will facilitate the session in this simulated user research scenario. The system prompt used for that persona includes a few key things:
- Goal: Your role is to ask questions about products and gather insights from individual customers like Emily.
- Grounding the simulation: Before you start the task, break down the list of panelists and the order you want them to speak; avoid the panelists speaking with each other and creating confirmation bias.
- Ending the simulation: Once the conversation has ended and the research is completed, please end your message with `TERMINATE` to finish the research session. This is generated from the generate_notice function, which is used to align system prompts for the various agents. You will also notice the researcher agent has is_termination_msg set to honor the termination.
We also add the llm_config, which ties this back to the language model configuration with the model version, keys and hyper-parameters to use. We will use the same config with all our agents.
# Avoid agents thanking each other and ending up in a loop
# Helper function for the system prompts
def generate_notice(role="researcher"):
    # Base notice for everyone, add your own extra prompts here
    base_notice = (
        '\n\n'
    )

    # Notice for non-personas (manager or researcher)
    non_persona_notice = (
        'Do not show appreciation in your responses, say only what is necessary. '
        'if "Thank you" or "You are welcome" are said in the conversation, then say TERMINATE '
        'to indicate the conversation is finished and this is your last message.'
    )

    # Custom notice for personas
    persona_notice = (
        ' Act as {role} when responding to queries, providing feedback, asked for your personal opinion '
        'or participating in discussions.'
    )

    # Check if the role is "manager" or "researcher"
    if role.lower() in ["manager", "researcher"]:
        # Return the full termination notice for non-personas
        return base_notice + non_persona_notice
    else:
        # Return the modified notice for personas
        return base_notice + persona_notice.format(role=role)
# Researcher agent definition
name = "Researcher"
researcher = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Researcher. You are a top product researcher with a PhD in behavioural psychology and have worked in the research and insights industry for the last 20 years with top creative, media and business consultancies. Your role is to ask questions about products and gather insights from individual customers like Emily. Frame questions to uncover customer preferences, challenges, and feedback. Before you start the task, break down the list of panelists and the order you want them to speak, avoid the panelists speaking with each other and creating confirmation bias. If the session is terminating at the end, please provide a summary of the outcomes of the research study in clear concise notes, not at the beginning.""" + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)
Define our individuals to put into the research. Borrowing from the earlier process, we can use the personas already generated. I have manually adjusted the prompts for this article to remove references to the major supermarket brand that was used for this simulation.
I have also included an "Act as Emily when responding to queries, providing feedback, or participating in discussions." style prompt at the end of each system prompt to ensure the synthetic personas stay on task; this is generated by the generate_notice function.
# Emily - Customer Persona
name = "Emily"
emily = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Emily. You are a 35-year-old elementary school teacher living in Sydney, Australia. You are married with two kids aged 8 and 5, and you have an annual income of AUD 75,000. You are introverted, high in conscientiousness, low in neuroticism, and enjoy routine. When shopping at the supermarket, you prefer organic and locally sourced produce. You value convenience and use an online shopping platform. Due to your limited time from work and family commitments, you seek quick and nutritious meal planning solutions. Your goals are to buy high-quality produce within your budget and to find new recipe inspiration. You are a frequent shopper and use loyalty programs. Your preferred methods of communication are email and mobile app notifications. You have been shopping at a supermarket for over 10 years but also price-compare with others.""" + generate_notice(name),
)

# John - Customer Persona
identify="John"
john = autogen.AssistantAgent(
identify=identify,
llm_config=llm_config,
system_message="""John. You're a 28-year-old software program developer primarily based in Sydney, Australia. You might be single and have an annual revenue of AUD 100,000. You are extroverted, tech-savvy, and have a excessive degree of openness. When procuring on the grocery store, you primarily purchase snacks and ready-made meals, and you utilize the cell app for fast pickups. Your foremost objectives are fast and handy procuring experiences. You often store on the grocery store and aren't a part of any loyalty program. You additionally store at Aldi for reductions. Your most well-liked methodology of communication is in-app notifications.""" + generate_notice(identify),
)
# Sarah - Customer Persona
name = "Sarah"
sarah = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Sarah. You are a 45-year-old freelance journalist living in Sydney, Australia. You are divorced with no children and earn AUD 60,000 per year. You are introverted, high in neuroticism, and very health-conscious. When shopping at the supermarket, you look for organic produce, non-GMO, and gluten-free items. You have a limited budget and specific dietary restrictions. You are a frequent shopper and use loyalty programs. Your preferred method of communication is email newsletters. You exclusively shop for groceries.""" + generate_notice(name),
)
# Tim - Customer Persona
name = "Tim"
tim = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Tim. You are a 62-year-old retired police officer living in Sydney, Australia. You are married and a grandparent of three. Your annual income comes from a pension and is AUD 40,000. You are highly conscientious, low in openness, and prefer routine. You buy staples like bread, milk, and canned goods in bulk. Due to mobility issues, you need assistance with heavy items. You are a frequent shopper and are part of the senior citizen discount program. Your preferred method of communication is direct mail flyers. You have been shopping here for over 20 years.""" + generate_notice(name),
)
# Lisa - Customer Persona
name = "Lisa"
lisa = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Lisa. You are a 21-year-old university student living in Sydney, Australia. You are single and work part-time, earning AUD 20,000 per year. You are highly extroverted, low in conscientiousness, and value social interactions. You shop here for popular brands, snacks, and alcoholic beverages, mostly for social events. You have a limited budget and are always looking for sales and discounts. You are not a frequent shopper but are interested in joining a loyalty program. Your preferred methods of communication are social media and SMS. You shop wherever there are sales or promotions.""" + generate_notice(name),
)
Define the simulated environment and the rules for who can speak. We are allowing all the agents we have defined to sit within the same simulated environment (group chat). We could create more complex scenarios that control how and when the next speakers are selected, so here we define a simple speaker-selection function tied to the group chat: it makes the researcher the lead and ensures we go around the room to ask everyone a couple of times for their thoughts.
# def custom_speaker_selection(last_speaker, group_chat):
#     """
#     Custom function to select which agent speaks next in the group chat.
#     """
#     # List of agents excluding the last speaker
#     next_candidates = [agent for agent in group_chat.agents if agent.name != last_speaker.name]
#
#     # Select the next agent based on your custom logic
#     # For simplicity, we are just rotating through the candidates here
#     next_speaker = next_candidates[0] if next_candidates else None
#     return next_speaker
def custom_speaker_selection(last_speaker: Optional[Agent], group_chat: GroupChat) -> Optional[Agent]:
    """
    Custom function to ensure the Researcher interacts with each participant 2-3 times.
    Alternates between the Researcher and participants, tracking interactions.
    """
    # Define participants and initialize or update their interaction counters
    if not hasattr(group_chat, 'interaction_counters'):
        group_chat.interaction_counters = {agent.name: 0 for agent in group_chat.agents if agent.name != "Researcher"}

    # Define a maximum number of interactions per participant
    max_interactions = 6

    # If the last speaker was the Researcher, find the next participant who has spoken the least
    if last_speaker and last_speaker.name == "Researcher":
        next_participant = min(group_chat.interaction_counters, key=group_chat.interaction_counters.get)
        if group_chat.interaction_counters[next_participant] < max_interactions:
            group_chat.interaction_counters[next_participant] += 1
            return next((agent for agent in group_chat.agents if agent.name == next_participant), None)
        else:
            return None  # End the conversation if all participants have reached the maximum interactions
    else:
        # If the last speaker was a participant, return the Researcher for the next turn
        return next((agent for agent in group_chat.agents if agent.name == "Researcher"), None)
# Adding the Researcher and Customer Persona agents to the group chat
groupchat = autogen.GroupChat(
    agents=[researcher, emily, john, sarah, tim, lisa],
    speaker_selection_method=custom_speaker_selection,
    messages=[],
    max_round=30
)
Define the manager to pass instructions into and manage our simulation. When we start things off, we will speak only to the manager, who in turn speaks to the researcher and panelists. This uses something called GroupChatManager in Autogen.
# Initialise the manager
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
    system_message="You are a research manager agent that can manage a group chat of multiple agents made up of a researcher agent and many people making up a panel. You will limit the discussion between the panelists and help the researcher in asking the questions. Please ask the researcher first on how they want to conduct the panel." + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)
We set up the human interaction, allowing us to pass instructions to the various agents we have started. We give it the initial prompt and we can kick things off.
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE"
)
# start the research simulation by giving instruction to the manager
# manager <-> researcher <-> panelists
user_proxy.initiate_chat(
    manager,
    message="""
Gather customer insights on a supermarket's grocery delivery services. Identify pain points, preferences, and suggestions for improvement from different customer personas. Could you all please give your own personal opinions before sharing more with the group and discussing. As a researcher, your job is to ensure that you gather unbiased information from the participants and provide a summary of the outcomes of this study back to the supermarket brand.
""",
)
Once we run the above, the output is available live within your Python environment; you will see the messages being passed around between the various agents.
Now that our simulated research study has concluded, we would like to get some more actionable insights. We can create a summary agent to assist us with this task and also use it in a Q&A scenario. Just be careful here: very large transcripts would need a language model that supports a larger input (context window); a rough length check is sketched after the next snippet.
We need to capture all the conversations from our earlier simulated panel discussion to use as the user prompt (input) to our summary agent.
# Get response from the groupchat for the user prompt
messages = [msg["content"] for msg in groupchat.messages]
user_prompt = "Here is the transcript of the study ```{customer_insights}```".format(customer_insights="\n>>>\n".join(messages))
Let's craft the system prompt (instructions) for our summary agent. This agent will focus on creating a tailored report card from the earlier transcripts and giving us clear suggestions and actions.
# Generate system prompt for the summary agent
summary_prompt = """
You are an expert researcher in behaviour science and are tasked with summarising a research panel. Please provide a structured summary of the key findings, including pain points, preferences, and suggestions for improvement.
This should be in the format based on the following format:

```
Research Study: <<Title>>

Subjects:
<<Overview of the subjects and number, any other key information>>

Summary:
<<Summary of the study, include detailed analysis as an expert>>

Pain Points:
- <<List of Pain Points - Be as clear and prescriptive as required. I expect a detailed response that can be used by the brand directly to make changes. Give a short paragraph per pain point.>>

Suggestions/Actions:
- <<List of Actions - Be as clear and prescriptive as required. I expect a detailed response that can be used by the brand directly to make changes. Give a short paragraph per recommendation.>>
```
"""
Define the summary agent and its environment. Let's create a mini environment for the summary agent to run in. It will need its own proxy (environment) and the initiate command, which will pull the transcripts (user_prompt) in as the input.
summary_agent = autogen.AssistantAgent(
    name="SummaryAgent",
    llm_config=llm_config,
    system_message=summary_prompt + generate_notice(),
)

summary_proxy = autogen.UserProxyAgent(
    name="summary_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE"
)

summary_proxy.initiate_chat(
    summary_agent,
    message=user_prompt,
)
This gives us an output in the form of a report card in Markdown, along with the ability to ask further questions in a Q&A-style chat-bot on top of the findings.
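A minimal sketch of such a follow-up question, assuming summary_proxy and summary_agent are still in scope (the question text here is just an illustration), could look like:

# Hypothetical follow-up on top of the findings; send() continues the existing
# conversation with the summary agent rather than starting a new one.
summary_proxy.send(
    "Which pain point came up most often, and which persona raised it?",
    summary_agent,
    request_reply=True,
)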