Mirror of https://github.com/Significant-Gravitas/Auto-GPT.git (synced 2025-01-08 11:57:32 +08:00)

Merge branch 'master' into master

This commit is contained in:
commit 2165371d8e
33 .github/PULL_REQUEST_TEMPLATE.md (vendored)
@@ -1,18 +1,33 @@

### Background

<!-- Provide a brief overview of why this change is being made. Include any relevant context, prior discussions, or links to relevant issues. -->

<!-- 📢 Announcement

We've recently noticed an increase in pull requests focusing on combining multiple changes. While the intentions behind these PRs are appreciated, it's essential to maintain a clean and manageable git history. To ensure the quality of our repository, we kindly ask you to adhere to the following guidelines when submitting PRs:

Focus on a single, specific change.

Do not include any unrelated or "extra" modifications.

Provide clear documentation and explanations of the changes made.

Ensure diffs are limited to the intended lines — no applying preferred formatting styles or line endings (unless that's what the PR is about).

For guidance on committing only the specific lines you have changed, refer to this helpful video: https://youtu.be/8-hSNHHbiZg

By following these guidelines, your PRs are more likely to be merged quickly after testing, as long as they align with the project's overall direction. -->

### Background

<!-- Provide a concise overview of the rationale behind this change. Include relevant context, prior discussions, or links to related issues. Ensure that the change aligns with the project's overall direction. -->

### Changes

<!-- Describe the specific, focused change made in this pull request. Detail the modifications clearly and avoid any unrelated or "extra" changes. -->

<!-- Describe the changes made in this pull request. Be specific and detailed. -->

### Documentation

<!-- Explain how your changes are documented, such as in-code comments or external documentation. Ensure that the documentation is clear, concise, and easy to understand. -->

### Test Plan

<!-- Describe how you tested this functionality. Include steps to reproduce, relevant test cases, and any other pertinent information. -->

<!-- Explain how you tested this functionality. Include the steps to reproduce and any relevant test cases. -->

### PR Quality Checklist

- [ ] My pull request is atomic and focuses on a single change.
- [ ] I have thoroughly tested my changes with multiple different prompts.
- [ ] I have considered potential risks and mitigations for my changes.
- [ ] I have documented my changes clearly and comprehensively.
- [ ] I have not snuck in any "extra" small tweaks <!-- Submit these as separate Pull Requests, they are the easiest to merge! -->

### Change Safety

<!-- If you haven't added tests, please explain why. If you have, check the appropriate box. If you've ensured your PR is atomic and well-documented, check the corresponding boxes. -->

- [ ] I have added tests to cover my changes
- [ ] I have considered potential risks and mitigations for my changes

<!-- If you haven't added tests, please explain why. If you have, check the appropriate box. -->

<!-- By submitting this, I agree that my pull request should be closed if I do not fill this out or follow the guidelines. -->
4 .gitignore (vendored)
@@ -7,6 +7,8 @@ package-lock.json
auto_gpt_workspace/*
*.mpeg
.env
venv/*
outputs/*
ai_settings.yaml
.vscode
.vscode
auto-gpt.json
Dockerfile

@@ -2,6 +2,7 @@ FROM python:3.11

WORKDIR /app
COPY scripts/ /app
COPY requirements.txt /app

RUN pip install -r requirements.txt
15 README.md
@@ -81,7 +81,7 @@ git clone https://github.com/Torantulino/Auto-GPT.git

2. Navigate to the project directory:
*(Type this into your CMD window; you're aiming to navigate the CMD window to the repository you just downloaded)*

```
$ cd 'Auto-GPT'
cd 'Auto-GPT'
```

3. Install the required dependencies:

@@ -92,7 +92,7 @@ pip install -r requirements.txt

4. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVEN_LABS_API_KEY` as well.
- Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- Obtain your ElevenLabs API key from: https://beta.elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and provide the `OPENAI_API_BASE`, `OPENAI_API_VERSION` and `OPENAI_DEPLOYMENT_ID` values as explained here: https://pypi.org/project/openai/ in the `Microsoft Azure Endpoints` section. An illustrative `.env` sketch follows this list.
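Note: for illustration only, a hypothetical `.env` fragment for an Azure deployment might look like the following. Every value here is a placeholder, not something taken from this repository; only the variable names come from the bullet above.

```
# Sketch only: all values below are placeholders.
USE_AZURE=True
OPENAI_API_BASE=https://your-resource-name.openai.azure.com
OPENAI_API_VERSION=2023-03-15-preview
OPENAI_DEPLOYMENT_ID=your-gpt-deployment-name
```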
## 🔧 Usage

@@ -114,7 +114,7 @@ python scripts/main.py --speak

## 🔍 Google API Keys Configuration

This section is optional, use the official google api if you are having issues with error 429 when running google search.
This section is optional, use the official google api if you are having issues with error 429 when running a google search.
To use the `google_official_search` command, you need to set up your Google API keys in your environment variables.

1. Go to the [Google Cloud Console](https://console.cloud.google.com/).

@@ -185,7 +185,12 @@ are loaded for the agent at any given time.

3. Find your API key and region under the default project in the left sidebar.

### Setting up environment variables
For Windows Users:

Simply set them in the `.env` file.

Alternatively, you can set them from the command line (advanced):

For Windows Users:
```
setx PINECONE_API_KEY "YOUR_PINECONE_API_KEY"
export PINECONE_ENV="Your pinecone region" # something like: us-east4-gcp

@@ -198,7 +203,6 @@ export PINECONE_ENV="Your pinecone region" # something like: us-east4-gcp

```

Or you can set them in the `.env` file.

## View Memory Usage

@@ -222,6 +226,7 @@ If you don't have access to the GPT4 api, this mode will allow you to use Auto-G

```
python scripts/main.py --gpt3only
```
It is recommended to use a virtual machine for tasks that require high security measures to prevent any potential harm to the main computer's system and data.

## 🖼 Image Generation
By default, Auto-GPT uses DALL-E for image generation. To use Stable Diffusion, a [HuggingFace API Token](https://huggingface.co/settings/tokens) is required.
scripts/agent_manager.py

@@ -7,6 +7,7 @@ agents = {}  # key, (task, full_message_history, model)

# TODO: Centralise use of create_chat_completion() to globally enforce token limit

def create_agent(task, prompt, model):
    """Create a new agent and return its key"""
    global next_key
    global agents

@@ -32,6 +33,7 @@ def create_agent(task, prompt, model):

def message_agent(key, message):
    """Send a message to an agent and return its response"""
    global agents

    task, messages, model = agents[int(key)]

@@ -52,6 +54,7 @@ def message_agent(key, message):

def list_agents():
    """Return a list of all agents"""
    global agents

    # Return a list of agent keys and their tasks

@@ -59,6 +62,7 @@ def list_agents():

def delete_agent(key):
    """Delete an agent and return True if successful, False otherwise"""
    global agents

    try:
scripts/ai_config.py

@@ -3,7 +3,9 @@ import data
import os

class AIConfig:
    """Class to store the AI's name, role, and goals."""

    def __init__(self, ai_name="", ai_role="", ai_goals=[]):
        """Initialize the AIConfig class"""
        self.ai_name = ai_name
        self.ai_role = ai_role
        self.ai_goals = ai_goals

@@ -13,7 +15,7 @@ class AIConfig:

    @classmethod
    def load(cls, config_file=SAVE_FILE):
        # Load variables from yaml file if it exists
        """Load variables from yaml file if it exists, otherwise use defaults."""
        try:
            with open(config_file) as file:
                config_params = yaml.load(file, Loader=yaml.FullLoader)

@@ -27,11 +29,14 @@ class AIConfig:
        return cls(ai_name, ai_role, ai_goals)

    def save(self, config_file=SAVE_FILE):
        """Save variables to yaml file."""
        config = {"ai_name": self.ai_name, "ai_role": self.ai_role, "ai_goals": self.ai_goals}
        with open(config_file, "w") as file:
            yaml.dump(config, file)

    def construct_full_prompt(self):
        """Construct the full prompt for the AI to use."""
        prompt_start = """Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications."""

        # Construct full prompt
scripts/ai_functions.py

@@ -6,8 +6,8 @@ from json_parser import fix_and_parse_json
cfg = Config()

# Evaluating code

def evaluate_code(code: str) -> List[str]:
    """Evaluates the given code and returns a list of suggestions for improvements."""
    function_string = "def analyze_code(code: str) -> List[str]:"
    args = [code]
    description_string = """Analyzes the given code and returns a list of suggestions for improvements."""

@@ -18,8 +18,8 @@ def evaluate_code(code: str) -> List[str]:

# Improving code

def improve_code(suggestions: List[str], code: str) -> str:
    """Improves the provided code based on the suggestions provided, making no other changes."""
    function_string = (
        "def generate_improved_code(suggestions: List[str], code: str) -> str:"
    )

@@ -31,9 +31,8 @@ def improve_code(suggestions: List[str], code: str) -> str:

# Writing tests

def write_tests(code: str, focus: List[str]) -> str:
    """Generates test cases for the existing code, focusing on specific areas if required."""
    function_string = (
        "def create_test_cases(code: str, focus: Optional[str] = None) -> str:"
    )
scripts/browse.py

@@ -6,7 +6,15 @@ from llm_utils import create_chat_completion
cfg = Config()

def scrape_text(url):
    response = requests.get(url, headers=cfg.user_agent_header)
    """Scrape text from a webpage"""
    # Most basic check if the URL is valid:
    if not url.startswith('http'):
        return "Error: Invalid URL"

    try:
        response = requests.get(url, headers=cfg.user_agent_header)
    except requests.exceptions.RequestException as e:
        return "Error: " + str(e)

    # Check if the response contains an HTTP error
    if response.status_code >= 400:

@@ -26,6 +34,7 @@ def scrape_text(url):

def extract_hyperlinks(soup):
    """Extract hyperlinks from a BeautifulSoup object"""
    hyperlinks = []
    for link in soup.find_all('a', href=True):
        hyperlinks.append((link.text, link['href']))

@@ -33,6 +42,7 @@ def extract_hyperlinks(soup):

def format_hyperlinks(hyperlinks):
    """Format hyperlinks into a list of strings"""
    formatted_links = []
    for link_text, link_url in hyperlinks:
        formatted_links.append(f"{link_text} ({link_url})")

@@ -40,6 +50,7 @@ def format_hyperlinks(hyperlinks):

def scrape_links(url):
    """Scrape links from a webpage"""
    response = requests.get(url, headers=cfg.user_agent_header)

    # Check if the response contains an HTTP error

@@ -57,6 +68,7 @@ def scrape_links(url):

def split_text(text, max_length=8192):
    """Split text into chunks of a maximum length"""
    paragraphs = text.split("\n")
    current_length = 0
    current_chunk = []

@@ -75,12 +87,14 @@ def split_text(text, max_length=8192):

def create_message(chunk, question):
    """Create a message for the user to summarize a chunk of text"""
    return {
        "role": "user",
        "content": f"\"\"\"{chunk}\"\"\" Using the above text, please answer the following question: \"{question}\" -- if the question cannot be answered using the text, please summarize the text."
    }

def summarize_text(text, question):
    """Summarize text using the LLM model"""
    if not text:
        return "Error: No text to summarize"
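Note: the `split_text` hunk above is cut off by the diff. For orientation, a greedy paragraph-chunking routine of the kind its opening lines suggest could look like this sketch (a reconstruction under that assumption, not the repository's exact code):

```
def split_text_sketch(text, max_length=8192):
    """Greedily pack paragraphs into chunks no longer than max_length."""
    paragraphs = text.split("\n")
    current_chunk = []
    current_length = 0
    for paragraph in paragraphs:
        if current_length + len(paragraph) + 1 <= max_length:
            current_chunk.append(paragraph)
            current_length += len(paragraph) + 1  # +1 for the newline
        else:
            if current_chunk:
                yield "\n".join(current_chunk)
            current_chunk = [paragraph]
            current_length = len(paragraph) + 1
    if current_chunk:
        yield "\n".join(current_chunk)
```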
scripts/call_ai_function.py

@@ -1,11 +1,14 @@
from config import Config

cfg = Config()

from llm_utils import create_chat_completion

# This is a magic function that can do anything with no-code. See
# https://github.com/Torantulino/AI-Functions for more info.
def call_ai_function(function, args, description, model=cfg.smart_llm_model):
def call_ai_function(function, args, description, model=None):
    """Call an AI function"""
    if model is None:
        model = cfg.smart_llm_model
    # For each arg, if any are None, convert to "None":
    args = [str(arg) if arg is not None else "None" for arg in args]
    # parse args to comma separated string
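Note: the signature change above, from `model=cfg.smart_llm_model` to `model=None` resolved inside the body, applies a standard Python idiom: default values are evaluated once at function definition, so binding a default to config state freezes whatever the config held at import time. A minimal self-contained illustration (hypothetical names, not from this repository):

```
class Cfg:
    model = "fast"

cfg = Cfg()

def frozen(model=cfg.model):  # default captured once, at definition time
    return model

def live(model=None):  # default resolved on every call
    return cfg.model if model is None else model

cfg.model = "smart"
print(frozen())  # "fast"  (still the definition-time value)
print(live())    # "smart" (follows the current config)
```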
scripts/chat.py

@@ -3,11 +3,9 @@ import openai
from dotenv import load_dotenv
from config import Config
import token_counter

cfg = Config()

from llm_utils import create_chat_completion

cfg = Config()

def create_chat_message(role, content):
    """

@@ -48,6 +46,7 @@ def chat_with_ai(
        permanent_memory,
        token_limit,
        debug=False):
    """Interact with the OpenAI API, sending the prompt, user input, message history, and permanent memory."""
    while True:
        try:
            """

@@ -66,7 +65,7 @@
        model = cfg.fast_llm_model  # TODO: Change model from hardcode to argument
        # Reserve 1000 tokens for the response
        if debug:
            print(f"Token limit: {token_limit}")
        print(f"Token limit: {token_limit}")
        send_token_limit = token_limit - 1000

        relevant_memory = permanent_memory.get_relevant(str(full_message_history[-5:]), 10)
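Note: the last hunk shows the token budget. `send_token_limit = token_limit - 1000` holds back 1000 tokens for the model's reply and spends the rest on prompt, memory, and history. A worked sketch of that arithmetic, with illustrative numbers only:

```
token_limit = 4000            # e.g. a 4k-context model
reserved_for_response = 1000  # held back for the reply
send_token_limit = token_limit - reserved_for_response
assert send_token_limit == 3000
# Prompt + relevant memory + as much recent history as fits must stay
# within send_token_limit; anything older is left out.
```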
scripts/commands.py

@@ -25,6 +25,7 @@ def is_valid_int(value):
    return False

def get_command(response):
    """Parse the response and return the command name and arguments"""
    try:
        response_json = fix_and_parse_json(response)

@@ -53,6 +54,7 @@ def get_command(response):

def execute_command(command_name, arguments):
    """Execute the command and return the result"""
    memory = get_memory(cfg)

    try:

@@ -106,6 +108,8 @@ def execute_command(command_name, arguments):
        return execute_python_file(arguments["file"])
    elif command_name == "generate_image":
        return generate_image(arguments["prompt"])
    elif command_name == "do_nothing":
        return "No action performed."
    elif command_name == "task_complete":
        shutdown()
    else:

@@ -116,11 +120,13 @@ def execute_command(command_name, arguments):

def get_datetime():
    """Return the current date and time"""
    return "Current date and time: " + \
        datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def google_search(query, num_results=8):
    """Return the results of a google search"""
    search_results = []
    for j in ddg(query, max_results=num_results):
        search_results.append(j)

@@ -128,6 +134,7 @@ def google_search(query, num_results=8):
    return json.dumps(search_results, ensure_ascii=False, indent=4)

def google_official_search(query, num_results=8):
    """Return the results of a google search using the official Google API"""
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError
    import json

@@ -163,6 +170,7 @@ def google_official_search(query, num_results=8):
    return search_results_links

def browse_website(url, question):
    """Browse a website and return the summary and links"""
    summary = get_text_summary(url, question)
    links = get_hyperlinks(url)

@@ -176,23 +184,27 @@ def browse_website(url, question):

def get_text_summary(url, question):
    """Return a summary of the text at the given url"""
    text = browse.scrape_text(url)
    summary = browse.summarize_text(text, question)
    return """ "Result" : """ + summary

def get_hyperlinks(url):
    """Return the hyperlinks found on a webpage"""
    link_list = browse.scrape_links(url)
    return link_list

def commit_memory(string):
    """Commit a string to memory"""
    _text = f"""Committing memory with string "{string}" """
    mem.permanent_memory.append(string)
    return _text

def delete_memory(key):
    """Delete a memory with a given key"""
    if key >= 0 and key < len(mem.permanent_memory):
        _text = "Deleting memory with key " + str(key)
        del mem.permanent_memory[key]

@@ -204,6 +216,7 @@ def delete_memory(key):

def overwrite_memory(key, string):
    """Overwrite a memory with a given key and string"""
    # Check if the key is a valid integer
    if is_valid_int(key):
        key_int = int(key)

@@ -230,11 +243,13 @@ def overwrite_memory(key, string):

def shutdown():
    """Shut down the program"""
    print("Shutting down...")
    quit()

def start_agent(name, task, prompt, model=cfg.fast_llm_model):
    """Start an agent with a given name, task, and prompt"""
    global cfg

    # Remove underscores from name

@@ -258,6 +273,7 @@ def start_agent(name, task, prompt, model=cfg.fast_llm_model):

def message_agent(key, message):
    """Message an agent with a given key and message"""
    global cfg

    # Check if the key is a valid integer

@@ -276,11 +292,13 @@ def message_agent(key, message):

def list_agents():
    """List all agents"""
    return agents.list_agents()

def delete_agent(key):
    """Delete an agent with a given key"""
    result = agents.delete_agent(key)
    if not result:
        return f"Agent {key} does not exist."
    return f"Agent {key} deleted."
    return f"Agent {key} deleted."
scripts/config.py

@@ -14,6 +14,7 @@ class Singleton(abc.ABCMeta, type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        """Call method for the singleton metaclass."""
        if cls not in cls._instances:
            cls._instances[cls] = super(
                Singleton, cls).__call__(

@@ -31,6 +32,7 @@ class Config(metaclass=Singleton):
    """

    def __init__(self):
        """Initialize the Config class"""
        self.debug = False
        self.continuous_mode = False
        self.speak_mode = False

@@ -77,40 +79,53 @@ class Config(metaclass=Singleton):
        openai.api_key = self.openai_api_key

    def set_continuous_mode(self, value: bool):
        """Set the continuous mode value."""
        self.continuous_mode = value

    def set_speak_mode(self, value: bool):
        """Set the speak mode value."""
        self.speak_mode = value

    def set_fast_llm_model(self, value: str):
        """Set the fast LLM model value."""
        self.fast_llm_model = value

    def set_smart_llm_model(self, value: str):
        """Set the smart LLM model value."""
        self.smart_llm_model = value

    def set_fast_token_limit(self, value: int):
        """Set the fast token limit value."""
        self.fast_token_limit = value

    def set_smart_token_limit(self, value: int):
        """Set the smart token limit value."""
        self.smart_token_limit = value

    def set_openai_api_key(self, value: str):
        """Set the OpenAI API key value."""
        self.openai_api_key = value

    def set_elevenlabs_api_key(self, value: str):
        """Set the ElevenLabs API key value."""
        self.elevenlabs_api_key = value

    def set_google_api_key(self, value: str):
        """Set the Google API key value."""
        self.google_api_key = value

    def set_custom_search_engine_id(self, value: str):
        """Set the custom search engine id value."""
        self.custom_search_engine_id = value

    def set_pinecone_api_key(self, value: str):
        """Set the Pinecone API key value."""
        self.pinecone_api_key = value

    def set_pinecone_region(self, value: str):
        """Set the Pinecone region value."""
        self.pinecone_region = value

    def set_debug_mode(self, value: bool):
        """Set the debug mode value."""
        self.debug = value
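Note: the Singleton metaclass above makes every `Config()` call return the same instance, which is why each module can do its own `cfg = Config()` and still share one set of settings. A small standalone sketch of the mechanism (illustration, not the repository's exact class):

```
class SingletonMeta(type):
    """Metaclass that caches the first instance of each class."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Settings(metaclass=SingletonMeta):
    def __init__(self):
        self.debug = False

a = Settings()
b = Settings()
a.debug = True
assert a is b and b.debug  # one shared instance across the program
```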
scripts/data.py

@@ -2,6 +2,7 @@ import os
from pathlib import Path

def load_prompt():
    """Load the prompt from data/prompt.txt"""
    try:
        # get directory of this file:
        file_dir = Path(__file__).parent
scripts/data/prompt.txt

@@ -24,6 +24,7 @@ COMMANDS:
18. Execute Python File: "execute_python_file", args: "file": "<file>"
19. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"
20. Generate Image: "generate_image", args: "prompt": "<prompt>"
21. Do Nothing: "do_nothing", args: ""

RESOURCES:
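Note: the agent invokes one of these commands by replying with JSON in the schema defined in scripts/json_parser.py (which, per the hunk later in this diff, also carries "thoughts" fields such as "criticism" and "speak"). A hypothetical reply using the newly added `do_nothing` command might contain:

```
{
    "command": {
        "name": "do_nothing",
        "args": {}
    }
}
```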
scripts/execute_code.py

@@ -3,6 +3,7 @@ import os

def execute_python_file(file):
    """Execute a Python file in a Docker container and return the output"""
    workspace_folder = "auto_gpt_workspace"

    print(f"Executing file '{file}' in workspace '{workspace_folder}'")
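Note: only the opening of `execute_python_file` survives the hunk. For orientation, running a workspace script inside a container with the docker Python SDK generally looks like the following sketch; the image tag and mount layout here are assumptions, not this file's actual values:

```
import os
import docker  # pip install docker

def run_in_container_sketch(file, workspace_folder="auto_gpt_workspace"):
    """Sketch: run a Python file in an isolated container and capture output."""
    client = docker.from_env()
    output = client.containers.run(
        "python:3.10",  # assumed image tag
        f"python {file}",
        volumes={os.path.abspath(workspace_folder): {
            "bind": "/workspace", "mode": "ro"}},
        working_dir="/workspace",
        stderr=True,
        stdout=True,
        remove=True,
    )
    return output.decode("utf-8")
```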
scripts/file_operations.py

@@ -4,11 +4,13 @@ import os.path
# Set a dedicated folder for file I/O
working_directory = "auto_gpt_workspace"

# Create the directory if it doesn't exist
if not os.path.exists(working_directory):
    os.makedirs(working_directory)

def safe_join(base, *paths):
    """Join one or more path components intelligently."""
    new_path = os.path.join(base, *paths)
    norm_new_path = os.path.normpath(new_path)

@@ -19,6 +21,7 @@ def safe_join(base, *paths):

def read_file(filename):
    """Read a file and return the contents"""
    try:
        filepath = safe_join(working_directory, filename)
        with open(filepath, "r") as f:

@@ -29,6 +32,7 @@ def read_file(filename):

def write_to_file(filename, text):
    """Write text to a file"""
    try:
        filepath = safe_join(working_directory, filename)
        directory = os.path.dirname(filepath)

@@ -42,6 +46,7 @@ def write_to_file(filename, text):

def append_to_file(filename, text):
    """Append text to a file"""
    try:
        filepath = safe_join(working_directory, filename)
        with open(filepath, "a") as f:

@@ -52,6 +57,7 @@ def append_to_file(filename, text):

def delete_file(filename):
    """Delete a file"""
    try:
        filepath = safe_join(working_directory, filename)
        os.remove(filepath)
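Note: `safe_join` is the sandboxing primitive here; every file operation resolves its path through it so the agent cannot escape the workspace with `..` components. The check cut off by the hunk is, in essence, a prefix test after normalization. A minimal sketch of the idea (a reconstruction, not the exact code):

```
import os.path

def safe_join_sketch(base, *paths):
    """Join path components, refusing any result outside of base."""
    new_path = os.path.join(base, *paths)
    norm_new_path = os.path.normpath(new_path)
    # After normalization, a path that escaped base no longer starts
    # with the base directory prefix.
    if os.path.commonprefix([base, norm_new_path]) != base:
        raise ValueError("Attempted to access outside of working directory.")
    return norm_new_path

safe_join_sketch("auto_gpt_workspace", "notes.txt")          # ok
# safe_join_sketch("auto_gpt_workspace", "../secrets.txt")   # raises ValueError
```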
scripts/json_parser.py

@@ -1,11 +1,13 @@
import json
from typing import Any, Dict, Union
from call_ai_function import call_ai_function
from config import Config
from json_utils import correct_json

cfg = Config()

def fix_and_parse_json(json_str: str, try_to_fix_with_gpt: bool = True):
    json_schema = """
    {
JSON_SCHEMA = """
{
    "command": {
        "name": "command name",
        "args": {

@@ -20,44 +22,70 @@ def fix_and_parse_json(json_str: str, try_to_fix_with_gpt: bool = True):
    "criticism": "constructive self-criticism",
    "speak": "thoughts summary to say to user"
    }
}
"""
}
"""

def fix_and_parse_json(
    json_str: str,
    try_to_fix_with_gpt: bool = True
) -> Union[str, Dict[Any, Any]]:
    """Fix and parse JSON string"""
    try:
        json_str = json_str.replace('\t', '')
        return json.loads(json_str)
    except Exception as e:
        # Let's do something manually - sometimes GPT responds with something BEFORE the braces:
        # "I'm sorry, I don't understand. Please try again."{"text": "I'm sorry, I don't understand. Please try again.", "confidence": 0.0}
        # So let's try to find the first brace and then parse the rest of the string
    except json.JSONDecodeError as _:  # noqa: F841
        json_str = correct_json(json_str)
        try:
            brace_index = json_str.index("{")
            json_str = json_str[brace_index:]
            last_brace_index = json_str.rindex("}")
            json_str = json_str[:last_brace_index+1]
            return json.loads(json_str)
        except Exception as e:
            if try_to_fix_with_gpt:
                print(f"Warning: Failed to parse AI output, attempting to fix.\n If you see this warning frequently, it's likely that your prompt is confusing the AI. Try changing it up slightly.")
            return json.loads(json_str)
        except json.JSONDecodeError as _:  # noqa: F841
            pass
    # Let's do something manually:
    # sometimes GPT responds with something BEFORE the braces:
    # "I'm sorry, I don't understand. Please try again."
    # {"text": "I'm sorry, I don't understand. Please try again.",
    #  "confidence": 0.0}
    # So let's try to find the first brace and then parse the rest
    # of the string
    try:
        brace_index = json_str.index("{")
        json_str = json_str[brace_index:]
        last_brace_index = json_str.rindex("}")
        json_str = json_str[:last_brace_index+1]
        return json.loads(json_str)
    except json.JSONDecodeError as e:  # noqa: F841
        if try_to_fix_with_gpt:
            print("Warning: Failed to parse AI output, attempting to fix."
                  "\n If you see this warning frequently, it's likely that"
                  " your prompt is confusing the AI. Try changing it up"
                  " slightly.")
            # Now try to fix this up using the ai_functions
            ai_fixed_json = fix_json(json_str, json_schema, cfg.debug)
            ai_fixed_json = fix_json(json_str, JSON_SCHEMA, cfg.debug)
            if ai_fixed_json != "failed":
                return json.loads(ai_fixed_json)
                return json.loads(ai_fixed_json)
            else:
                print(f"Failed to fix ai output, telling the AI.")  # This allows the AI to react to the error message, which usually results in it correcting its ways.
                return json_str
            else:
                # This allows the AI to react to the error message,
                # which usually results in it correcting its ways.
                print("Failed to fix ai output, telling the AI.")
                return json_str
        else:
            raise e

def fix_json(json_str: str, schema: str, debug=False) -> str:
    """Fix the given JSON string to make it parseable and fully compliant with the provided schema."""
    # Try to fix the JSON using gpt:
    function_string = "def fix_json(json_str: str, schema: str = None) -> str:"
    args = [f"'''{json_str}'''", f"'''{schema}'''"]
    description_string = """Fixes the provided JSON string to make it parseable and fully compliant with the provided schema.\n If an object or field specified in the schema isn't contained within the correct JSON, it is omitted.\n This function is brilliant at guessing when the format is incorrect."""
    description_string = "Fixes the provided JSON string to make it parseable"\
        " and fully compliant with the provided schema.\n If an object or"\
        " field specified in the schema isn't contained within the correct"\
        " JSON, it is omitted.\n This function is brilliant at guessing"\
        " when the format is incorrect."

    # If it doesn't already start with a "`", add one:
    if not json_str.startswith("`"):
        json_str = "```json\n" + json_str + "\n```"
        json_str = "```json\n" + json_str + "\n```"
    result_string = call_ai_function(
        function_string, args, description_string, model=cfg.fast_llm_model
    )

@@ -67,12 +95,13 @@ def fix_json(json_str: str, schema: str, debug=False) -> str:
    print("-----------")
    print(f"Fixed JSON: {result_string}")
    print("----------- END OF FIX ATTEMPT ----------------")

    try:
        json.loads(result_string)  # just check the validity
        json.loads(result_string)  # just check the validity
        return result_string
    except:
    except:  # noqa: E722
        # Get the call stack:
        # import traceback
        # call_stack = traceback.format_exc()
        # print(f"Failed to fix JSON: '{json_str}' "+call_stack)
        return "failed"
        return "failed"
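Note: the brace-trimming fallback is easiest to see on a concrete input; a hypothetical example (illustrative string only):

```
import json

raw = 'Here is the JSON: {"command": {"name": "do_nothing", "args": {}}} Hope that helps!'
start = raw.index("{")     # drop any chatter before the first brace
end = raw.rindex("}") + 1  # and anything after the last one
print(json.loads(raw[start:end]))
# {'command': {'name': 'do_nothing', 'args': {}}}
```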
127 scripts/json_utils.py (new file)
@@ -0,0 +1,127 @@
import re
import json
from config import Config

cfg = Config()

def extract_char_position(error_message: str) -> int:
    """Extract the character position from the JSONDecodeError message.

    Args:
        error_message (str): The error message from the JSONDecodeError
        exception.

    Returns:
        int: The character position.
    """
    import re

    char_pattern = re.compile(r'\(char (\d+)\)')
    if match := char_pattern.search(error_message):
        return int(match[1])
    else:
        raise ValueError("Character position not found in the error message.")

def add_quotes_to_property_names(json_string: str) -> str:
    """
    Add quotes to property names in a JSON string.

    Args:
        json_string (str): The JSON string.

    Returns:
        str: The JSON string with quotes added to property names.
    """

    def replace_func(match):
        return f'"{match.group(1)}":'

    property_name_pattern = re.compile(r'(\w+):')
    corrected_json_string = property_name_pattern.sub(
        replace_func,
        json_string)

    try:
        json.loads(corrected_json_string)
        return corrected_json_string
    except json.JSONDecodeError as e:
        raise e

def balance_braces(json_string: str) -> str:
    """
    Balance the braces in a JSON string.

    Args:
        json_string (str): The JSON string.

    Returns:
        str: The JSON string with braces balanced.
    """

    open_braces_count = json_string.count('{')
    close_braces_count = json_string.count('}')

    while open_braces_count > close_braces_count:
        json_string += '}'
        close_braces_count += 1

    while close_braces_count > open_braces_count:
        json_string = json_string.rstrip('}')
        close_braces_count -= 1

    try:
        json.loads(json_string)
        return json_string
    except json.JSONDecodeError as e:
        raise e

def fix_invalid_escape(json_str: str, error_message: str) -> str:
    while error_message.startswith('Invalid \\escape'):
        bad_escape_location = extract_char_position(error_message)
        json_str = json_str[:bad_escape_location] + \
            json_str[bad_escape_location + 1:]
        try:
            json.loads(json_str)
            return json_str
        except json.JSONDecodeError as e:
            if cfg.debug:
                print('json loads error - fix invalid escape', e)
            error_message = str(e)
    return json_str

def correct_json(json_str: str) -> str:
    """
    Correct common JSON errors.

    Args:
        json_str (str): The JSON string.
    """

    try:
        if cfg.debug:
            print("json", json_str)
        json.loads(json_str)
        return json_str
    except json.JSONDecodeError as e:
        if cfg.debug:
            print('json loads error', e)
        error_message = str(e)
        if error_message.startswith('Invalid \\escape'):
            json_str = fix_invalid_escape(json_str, error_message)
        if error_message.startswith('Expecting property name enclosed in double quotes'):
            json_str = add_quotes_to_property_names(json_str)
            try:
                json.loads(json_str)
                return json_str
            except json.JSONDecodeError as e:
                if cfg.debug:
                    print('json loads error - add quotes', e)
                error_message = str(e)
        if balanced_str := balance_braces(json_str):
            return balanced_str
    return json_str
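Note: to make the repair pipeline concrete, a hypothetical input that is one closing brace short exercises `balance_braces` via `correct_json`:

```
broken = '{"command": {"name": "do_nothing", "args": {}}'  # one `}` short
fixed = correct_json(broken)
print(fixed)  # '{"command": {"name": "do_nothing", "args": {}}}'
# An unquoted key such as {command: ...} would instead be routed through
# add_quotes_to_property_names by the error-message checks above.
```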
scripts/llm_utils.py

@@ -6,6 +6,7 @@ openai.api_key = cfg.openai_api_key

# Overly simple abstraction until we create something better
def create_chat_completion(messages, model=None, temperature=None, max_tokens=None) -> str:
    """Create a chat completion using the OpenAI API"""
    if cfg.use_azure:
        response = openai.ChatCompletion.create(
            deployment_id=cfg.openai_deployment_id,
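Note: the hunk cuts off mid-call. For orientation, the non-Azure branch of such a wrapper, using the openai 0.x SDK this codebase targets, plausibly continues along these lines; treat it as a sketch, not the file's actual tail:

```
import openai

def create_chat_completion_sketch(messages, model="gpt-3.5-turbo",
                                  temperature=None, max_tokens=None) -> str:
    """Sketch of the non-Azure branch (openai 0.x SDK); assumed, not verbatim."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    # The assistant's reply is the first choice's message content.
    return response.choices[0].message["content"]
```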
scripts/main.py

@@ -17,6 +17,16 @@ import traceback
import yaml
import argparse

def check_openai_api_key():
    """Check if the OpenAI API key is set in config.py or as an environment variable."""
    if not cfg.openai_api_key:
        print(
            Fore.RED +
            "Please set your OpenAI API key in config.py or as an environment variable."
        )
        print("You can get your key from https://beta.openai.com/account/api-keys")
        exit(1)

def print_to_console(
    title,

@@ -25,6 +35,7 @@ def print_to_console(
    speak_text=False,
    min_typing_speed=0.05,
    max_typing_speed=0.01):
    """Prints text to the console with a typing effect"""
    global cfg
    if speak_text and cfg.speak_mode:
        speak.say_text(f"{title}. {content}")

@@ -46,6 +57,7 @@ def print_to_console(

def print_assistant_thoughts(assistant_reply):
    """Prints the assistant's thoughts to the console"""
    global ai_name
    global cfg
    try:

@@ -105,7 +117,7 @@ def print_assistant_thoughts(assistant_reply):

def load_variables(config_file="config.yaml"):
    # Load variables from yaml file if it exists
    """Load variables from yaml file if it exists, otherwise prompt the user for input"""
    try:
        with open(config_file) as file:
            config = yaml.load(file, Loader=yaml.FullLoader)

@@ -159,6 +171,7 @@ def load_variables(config_file="config.yaml"):

def construct_prompt():
    """Construct the prompt for the AI to respond to"""
    config = AIConfig.load()
    if config.ai_name:
        print_to_console(

@@ -187,6 +200,7 @@ Continue (y/n): """)

def prompt_user():
    """Prompt the user for input"""
    ai_name = ""
    # Construct the prompt
    print_to_console(

@@ -239,6 +253,7 @@ def prompt_user():
    return config

def parse_arguments():
    """Parses the arguments passed to the script"""
    global cfg
    cfg.set_continuous_mode(False)
    cfg.set_speak_mode(False)

@@ -272,7 +287,7 @@ def parse_arguments():

# TODO: fill in llm values here
check_openai_api_key()
cfg = Config()
parse_arguments()
ai_name = ""
scripts/speak.py

@@ -15,6 +15,7 @@ tts_headers = {
}

def eleven_labs_speech(text, voice_index=0):
    """Speak text using elevenlabs.io's API"""
    tts_url = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}".format(
        voice_id=voices[voice_index])
    formatted_message = {"text": text}
scripts/spinner.py

@@ -5,7 +5,9 @@ import time

class Spinner:
    """A simple spinner class"""

    def __init__(self, message="Loading...", delay=0.1):
        """Initialize the spinner class"""
        self.spinner = itertools.cycle(['-', '/', '|', '\\'])
        self.delay = delay
        self.message = message

@@ -13,6 +15,7 @@ class Spinner:
        self.spinner_thread = None

    def spin(self):
        """Spin the spinner"""
        while self.running:
            sys.stdout.write(next(self.spinner) + " " + self.message + "\r")
            sys.stdout.flush()

@@ -20,11 +23,13 @@ class Spinner:
        sys.stdout.write('\b' * (len(self.message) + 2))

    def __enter__(self):
        """Start the spinner"""
        self.running = True
        self.spinner_thread = threading.Thread(target=self.spin)
        self.spinner_thread.start()

    def __exit__(self, exc_type, exc_value, exc_traceback):
        """Stop the spinner"""
        self.running = False
        self.spinner_thread.join()
        sys.stdout.write('\r' + ' ' * (len(self.message) + 2) + '\r')
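Note: because `__enter__` and `__exit__` are defined, the spinner is used as a context manager. A typical call site (illustrative; `do_slow_work` is a hypothetical long-running call):

```
with Spinner("Thinking... "):
    result = do_slow_work()
# Leaving the block stops the thread and clears the spinner line.
```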
99 tests/test_browse_scrape_text.py (new file)
@@ -0,0 +1,99 @@

# Generated by CodiumAI

import requests
import pytest

from scripts.browse import scrape_text

"""
Code Analysis

Objective:
The objective of the "scrape_text" function is to scrape the text content from a given URL and return it as a string, after removing any unwanted HTML tags and scripts.

Inputs:
- url: a string representing the URL of the webpage to be scraped.

Flow:
1. Send a GET request to the given URL using the requests library and the user agent header from the config file.
2. Check if the response contains an HTTP error. If it does, return an error message.
3. Use BeautifulSoup to parse the HTML content of the response and extract all script and style tags.
4. Get the text content of the remaining HTML using the get_text() method of BeautifulSoup.
5. Split the text into lines and then into chunks, removing any extra whitespace.
6. Join the chunks into a single string with newline characters between them.
7. Return the cleaned text.

Outputs:
- A string representing the cleaned text content of the webpage.

Additional aspects:
- The function uses the requests library and BeautifulSoup to handle the HTTP request and HTML parsing, respectively.
- The function removes script and style tags from the HTML to avoid including unwanted content in the text output.
- The function uses a generator expression to split the text into lines and chunks, which can improve performance for large amounts of text.
"""

class TestScrapeText:

    # Tests that scrape_text() returns the expected text when given a valid URL.
    def test_scrape_text_with_valid_url(self, mocker):
        # Mock the requests.get() method to return a response with expected text
        expected_text = "This is some sample text"
        mock_response = mocker.Mock()
        mock_response.status_code = 200
        mock_response.text = f"<html><body><div><p style='color: blue;'>{expected_text}</p></div></body></html>"
        mocker.patch("requests.get", return_value=mock_response)

        # Call the function with a valid URL and assert that it returns the expected text
        url = "http://www.example.com"
        assert scrape_text(url) == expected_text

    # Tests that the function returns an error message when an invalid or unreachable url is provided.
    def test_invalid_url(self, mocker):
        # Mock the requests.get() method to raise an exception
        mocker.patch("requests.get", side_effect=requests.exceptions.RequestException)

        # Call the function with an invalid URL and assert that it returns an error message
        url = "http://www.invalidurl.com"
        error_message = scrape_text(url)
        assert "Error:" in error_message

    # Tests that the function returns an empty string when the html page contains no text to be scraped.
    def test_no_text(self, mocker):
        # Mock the requests.get() method to return a response with no text
        mock_response = mocker.Mock()
        mock_response.status_code = 200
        mock_response.text = "<html><body></body></html>"
        mocker.patch("requests.get", return_value=mock_response)

        # Call the function with a valid URL and assert that it returns an empty string
        url = "http://www.example.com"
        assert scrape_text(url) == ""

    # Tests that the function returns an error message when the response status code is an http error (>=400).
    def test_http_error(self, mocker):
        # Mock the requests.get() method to return a response with a 404 status code
        mocker.patch('requests.get', return_value=mocker.Mock(status_code=404))

        # Call the function with a URL
        result = scrape_text("https://www.example.com")

        # Check that the function returns an error message
        assert result == "Error: HTTP 404 error"

    # Tests that scrape_text() properly handles HTML tags.
    def test_scrape_text_with_html_tags(self, mocker):
        # Create a mock response object with HTML containing tags
        html = "<html><body><p>This is <b>bold</b> text.</p></body></html>"
        mock_response = mocker.Mock()
        mock_response.status_code = 200
        mock_response.text = html
        mocker.patch("requests.get", return_value=mock_response)

        # Call the function with a URL
        result = scrape_text("https://www.example.com")

        # Check that the function properly handles HTML tags
        assert result == "This is bold text."