Use modern material theme for docs (#5035)

* Use modern material theme for docs

* Update mkdocs.yml

Added search plugin

Co-authored-by: James Collins <collijk@uw.edu>

* Updating mkdocs material theme config per recommendations to enable all markdown options

* Updated highlight extension settings and code blocks throughout the docs to align with mkdocs-material recommendations.

codehilite is deprecated in favor of the highlight extension:
https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#highlight
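
In mkdocs.yml terms, the migration amounts to swapping the deprecated `codehilite` entry for its pymdown-extensions successors. A minimal before/after sketch, matching the full mkdocs.yml diff further down:

```yaml
markdown_extensions:
  # before (deprecated):
  # codehilite:
  # after, per the mkdocs-material recommendation:
  - pymdownx.highlight
  - pymdownx.inlinehilite
  - pymdownx.superfences
```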

---------

Co-authored-by: lc0rp <2609411+lc0rp@users.noreply.github.com>
Co-authored-by: James Collins <collijk@uw.edu>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Luke, 2023-08-01 13:17:33 -04:00, committed by GitHub
commit 7cd407b7b4, parent fc6255296a
16 changed files with 291 additions and 143 deletions


@ -0,0 +1,16 @@
window.MathJax = {
  tex: {
    inlineMath: [["\\(", "\\)"]],
    displayMath: [["\\[", "\\]"]],
    processEscapes: true,
    processEnvironments: true
  },
  options: {
    ignoreHtmlClass: ".*|",
    processHtmlClass: "arithmatex"
  }
};

document$.subscribe(() => {
  MathJax.typesetPromise()
})
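
This new script (wired in as `_javascript/mathjax.js`) re-typesets math whenever mkdocs-material swaps in new page content via its `document$` observable. It only takes effect together with the `arithmatex` markdown extension and the MathJax assets, which this commit configures in mkdocs.yml roughly as follows (shown in full in the mkdocs.yml diff below):

```yaml
markdown_extensions:
  - pymdownx.arithmatex:
      generic: true
extra_javascript:
  - _javascript/mathjax.js
  - https://polyfill.io/v3/polyfill.min.js?features=es6
  - https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
```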


@ -0,0 +1,6 @@
document$.subscribe(function () {
  var tables = document.querySelectorAll("article table:not([class])")
  tables.forEach(function (table) {
    new Tablesort(table)
  })
})
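
Likewise, this `_javascript/tablesort.js` hook attaches Tablesort to every unclassed article table on each page load. Its counterpart wiring in mkdocs.yml is, in essence:

```yaml
markdown_extensions:
  - tables
extra_javascript:
  - https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
  - _javascript/tablesort.js
```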


@ -3,9 +3,10 @@
**Status**: Current level to beat: level 3
**Command to try**:
```
```shell
pytest -s tests/challenges/memory/test_memory_challenge_b.py --level=3
``
```
## Description


@ -3,9 +3,10 @@
**Status**: Current level to beat: level 1
**Command to try**:
```
```shell
pytest -s tests/challenges/memory/test_memory_challenge_c.py --level=2
``
```
## Description
@ -20,13 +21,23 @@ The challenge involves a series of text files containing instructions and silly
- `instructions_1.txt`
The current phrase is "The purple elephant danced on a rainbow while eating a taco.".\n[NOISE intended to confuse the agent]
Read the file instructions_2.txt using the read_file command.
The current phrase is
```
The purple elephant danced on a rainbow while eating a taco.\n[NOISE intended to confuse the agent]
```
Read the file `instructions_2.txt` using the read_file command.
- `instructions_2.txt`
The current phrase is "The sneaky toaster stole my socks and ran away to Hawaii.".\n[NOISE intended to confuse the agent]
The current phrase is
```
The sneaky toaster stole my socks and ran away to Hawaii.\n[NOISE intended to confuse the agent]
```
Read the file instructions_3.txt using the read_file command.
@ -34,7 +45,12 @@ Read the file instructions_3.txt using the read_file command.
- `instructions_n.txt`
The current phrase is "My pet rock sings better than Beyoncé on Tuesdays."
The current phrase is
```
My pet rock sings better than Beyoncé on Tuesdays.
```
Write all the phrases into the file output.txt. The file has not been created yet. After that, use the task_complete command.


@ -1,11 +1,12 @@
# Memory Challenge C
# Memory Challenge D
**Status**: Current level to beat: level 1
**Command to try**:
```
```shell
pytest -s tests/challenges/memory/test_memory_challenge_d.py --level=1
``
```
## Description
@ -30,13 +31,16 @@ The test runs for levels up to the maximum level that the AI has successfully be
- `instructions_1.txt`
"Sally has a marble (marble A) and she puts it in her basket (basket S), then leaves the room. Anne moves marble A from Sally's basket (basket S) to her own basket (basket A).",
```
Sally has a marble (marble A) and she puts it in her basket (basket S), then leaves the room. Anne moves marble A from Sally's basket (basket S) to her own basket (basket A).
```
- `instructions_2.txt`
"Sally gives a new marble (marble B) to Bob who is outside with her. Bob goes into the room and places marble B into Anne's basket (basket A). Anne tells Bob to tell Sally that he lost the marble b. Bob leaves the room and speaks to Sally about the marble B. Meanwhile, after Bob left the room, Anne moves marble A into the green box, but tells Charlie to tell Sally that marble A is under the sofa. Charlie leaves the room and speak to Sally about the marble A as instructed by Anne.",
```
Sally gives a new marble (marble B) to Bob who is outside with her. Bob goes into the room and places marble B into Anne's basket (basket A). Anne tells Bob to tell Sally that he lost the marble b. Bob leaves the room and speaks to Sally about the marble B. Meanwhile, after Bob left the room, Anne moves marble A into the green box, but tells Charlie to tell Sally that marble A is under the sofa. Charlie leaves the room and speak to Sally about the marble A as instructed by Anne.
```
...and so on.
@ -44,6 +48,7 @@ The test runs for levels up to the maximum level that the AI has successfully be
The expected beliefs of each character are given in a list:
```json
expected_beliefs = {
    1: {
        'Sally': {
@ -68,7 +73,7 @@ expected_beliefs = {
            'A': 'sofa', # Because Anne told him to tell Sally so
        }
    },...
```
## Objective


@ -7,7 +7,8 @@
## DALL-e
In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`):
``` ini
```ini
# IMAGE_PROVIDER=dalle # this is the default
```
@ -23,7 +24,8 @@ To use text-to-image models from Hugging Face, you need a Hugging Face API token
Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
Once you have an API token, uncomment and adjust these variables in your `.env`:
``` ini
```ini
IMAGE_PROVIDER=huggingface
HUGGINGFACE_API_TOKEN=your-huggingface-api-token
```
@ -39,7 +41,8 @@ Further optional configuration:
## Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
``` ini
```ini
IMAGE_PROVIDER=sdwebui
```
@ -54,6 +57,7 @@ Further optional configuration:
| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!* |
## Selenium
``` shell
```shell
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
```


@ -51,17 +51,19 @@ Links to memory backends
1. Launch Redis container
:::shell
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```shell
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```
3. Set the following settings in `.env`
:::ini
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=<PASSWORD>
```shell
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=<PASSWORD>
```
Replace `<PASSWORD>` with your password, omitting the angled brackets (<>).
Optional configuration:
@ -157,7 +159,7 @@ To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip insta
Install the Weaviate client before usage.
``` shell
```shell
$ pip install weaviate-client
```
@ -165,7 +167,7 @@ $ pip install weaviate-client
In your `.env` file set the following:
``` ini
```ini
MEMORY_BACKEND=weaviate
WEAVIATE_HOST="127.0.0.1" # the IP or domain of the running Weaviate instance
WEAVIATE_PORT="8080"
@ -195,7 +197,7 @@ View memory usage by using the `--debug` flag :)
Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.
``` shell
```shell
$ python data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]


@ -2,7 +2,7 @@
Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
``` shell
```shell
python -m autogpt --speak
```

BIN  docs/imgs/Auto_GPT_Logo.png (new binary file, 26 KiB)


@ -36,40 +36,43 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
1. Make sure you have Docker installed, see [requirements](#requirements)
2. Create a project directory for Auto-GPT
:::shell
mkdir Auto-GPT
cd Auto-GPT
```shell
mkdir Auto-GPT
cd Auto-GPT
```
3. In the project directory, create a file called `docker-compose.yml` with the following contents:
:::yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    env_file:
      - .env
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
      - ./data:/app/data
      ## allow auto-gpt to write logs to disk
      - ./logs:/app/logs
      ## uncomment following lines if you want to make use of these files
      ## you must have them existing in the same folder as this docker-compose.yml
      #- type: bind
      #  source: ./azure.yaml
      #  target: /app/azure.yaml
      #- type: bind
      #  source: ./ai_settings.yaml
      #  target: /app/ai_settings.yaml
```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    env_file:
      - .env
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
      - ./data:/app/data
      ## allow auto-gpt to write logs to disk
      - ./logs:/app/logs
      ## uncomment following lines if you want to make use of these files
      ## you must have them existing in the same folder as this docker-compose.yml
      #- type: bind
      #  source: ./azure.yaml
      #  target: /app/azure.yaml
      #- type: bind
      #  source: ./ai_settings.yaml
      #  target: /app/ai_settings.yaml
```
4. Create the necessary [configuration](#configuration) files. If needed, you can find
templates in the [repository].
5. Pull the latest image from [Docker Hub]
:::shell
docker pull significantgravitas/auto-gpt
```shell
docker pull significantgravitas/auto-gpt
```
6. Continue to [Run with Docker](#run-with-docker)
@ -92,14 +95,15 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
1. Clone the repository
:::shell
git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
```shell
git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
```
2. Navigate to the directory where you downloaded the repository
:::shell
cd Auto-GPT
```shell
cd Auto-GPT
```
### Set up without Git/Docker
@ -139,12 +143,13 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
Example:
:::yaml
# Please specify all of these values as double-quoted strings
# Replace string in angled brackets (<>) to your own deployment Name
azure_model_map:
    fast_llm_deployment_id: "<auto-gpt-deployment>"
    ...
```yaml
# Please specify all of these values as double-quoted strings
# Replace the string in angled brackets (<>) with your own deployment name
azure_model_map:
    fast_llm_deployment_id: "<auto-gpt-deployment>"
    ...
```
Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
If you're on Windows you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
@ -164,7 +169,9 @@ Easiest is to use `docker compose`.
Important: Docker Compose version 1.29.0 or later is required to use version 3.9 of the Compose file format.
You can check the version of Docker Compose installed on your system by running the following command:
docker compose version
```shell
docker compose version
```
This will display the version of Docker Compose that is currently installed on your system.
@ -174,13 +181,15 @@ Once you have a recent version of Docker Compose, run the commands below in your
1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove dependencies like Python libs/frameworks)
:::shell
docker compose build auto-gpt
```shell
docker compose build auto-gpt
```
2. Run Auto-GPT
:::shell
docker compose run --rm auto-gpt
```shell
docker compose run --rm auto-gpt
```
By default, this will also start and attach a Redis memory backend. If you do not
want this, comment or remove the `depends: - redis` and `redis:` sections from
@ -189,12 +198,14 @@ Once you have a recent version of Docker Compose, run the commands below in your
For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup).
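
The compose sections mentioned above look roughly like the sketch below. This is a hedged illustration: the full docker-compose.yml with the Redis service lives in the repository, not in this diff, so the exact layout is assumed (the docs' `depends:` shorthand corresponds to Compose's `depends_on:` key):

```yaml
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    depends_on:
      - redis  # comment out or remove to detach the Redis backend
  redis:       # remove or comment out this service as well
    image: redis/redis-stack-server:latest
```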
You can pass extra arguments, e.g. running with `--gpt3only` and `--continuous`:
``` shell
```shell
docker compose run --rm auto-gpt --gpt3only --continuous
```
If you dare, you can also build and run it with "vanilla" docker commands:
``` shell
```shell
docker build -t auto-gpt .
docker run -it --env-file=.env -v $PWD:/app auto-gpt
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
@ -218,7 +229,7 @@ docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuou
Create a virtual environment to run in.
``` shell
```shell
python -m venv venvAutoGPT
source venvAutoGPT/bin/activate
pip3 install --upgrade pip
@ -232,13 +243,15 @@ packages and launch Auto-GPT.
- On Linux/MacOS:
:::shell
./run.sh
```shell
./run.sh
```
- On Windows:
:::shell
.\run.bat
```shell
.\run.bat
```
If this gives errors, make sure you have a compatible Python version installed. See also
the [requirements](./installation.md#requirements).


@ -8,7 +8,7 @@ Activity, Error, and Debug logs are located in `./logs`
To print out debug logs:
``` shell
```shell
./run.sh --debug # on Linux / macOS
.\run.bat --debug # on Windows


@ -2,12 +2,13 @@
To run all tests, use the following command:
``` shell
```shell
pytest
```
If `pytest` is not found:
``` shell
```shell
python -m pytest
```
@ -15,18 +16,21 @@ python -m pytest
- To run without integration tests:
:::shell
pytest --without-integration
```shell
pytest --without-integration
```
- To run without *slow* integration tests:
:::shell
pytest --without-slow-integration
```shell
pytest --without-slow-integration
```
- To run tests and see coverage:
:::shell
pytest --cov=autogpt --without-integration --without-slow-integration
```shell
pytest --cov=autogpt --without-integration --without-slow-integration
```
## Running the linter
@ -36,11 +40,12 @@ See the [flake8 rules](https://www.flake8rules.com/) for more information.
To run the linter:
``` shell
```shell
flake8 .
```
Or:
``` shell
```shell
python -m flake8 .
```


@ -3,7 +3,7 @@
## Command Line Arguments
Running with `--help` lists all the possible command line arguments you can pass:
``` shell
```shell
./run.sh --help # on Linux / macOS
.\run.bat --help # on Windows
@ -13,9 +13,10 @@ Running with `--help` lists all the possible command line arguments you can pass
For use with Docker, replace the script in the examples with
`docker compose run --rm auto-gpt`:
:::shell
docker compose run --rm auto-gpt --help
docker compose run --rm auto-gpt --ai-settings <filename>
```shell
docker compose run --rm auto-gpt --help
docker compose run --rm auto-gpt --ai-settings <filename>
```
!!! note
Replace anything in angled brackets (<>) with a value you want to specify
@ -23,18 +24,22 @@ Running with `--help` lists all the possible command line arguments you can pass
Here are some common arguments you can use when running Auto-GPT:
* Run Auto-GPT with a different AI Settings file
``` shell
./run.sh --ai-settings <filename>
```
```shell
./run.sh --ai-settings <filename>
```
* Run Auto-GPT with a different Prompt Settings file
``` shell
./run.sh --prompt-settings <filename>
```
```shell
./run.sh --prompt-settings <filename>
```
* Specify a memory backend
:::shell
./run.sh --use-memory <memory-backend>
```shell
./run.sh --use-memory <memory-backend>
```
!!! note
There are shorthands for some of these flags, for example `-m` for `--use-memory`.
@ -44,7 +49,7 @@ Here are some common arguments you can use when running Auto-GPT:
Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
``` shell
```shell
./run.sh --speak
```
@ -55,9 +60,10 @@ Continuous mode is NOT recommended.
It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize.
Use at your own risk.
``` shell
```shell
./run.sh --continuous
```
To exit the program, press ++ctrl+c++
### ♻️ Self-Feedback Mode ⚠️
@ -68,7 +74,7 @@ Running Self-Feedback will **INCREASE** token use and thus cost more. This featu
If you don't have access to GPT-4, this mode allows you to use Auto-GPT!
``` shell
```shell
./run.sh --gpt3only
```
@ -79,7 +85,7 @@ You can achieve the same by setting `SMART_LLM` in `.env` to `gpt-3.5-turbo`.
If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4.
This may give your bot increased intelligence.
``` shell
```shell
./run.sh --gpt4only
```
@ -97,7 +103,7 @@ Activity, Error, and Debug logs are located in `./logs`
To print out debug logs:
``` shell
```shell
./run.sh --debug # on Linux / macOS
.\run.bat --debug # on Windows


@ -7,39 +7,110 @@ nav:
- Usage: usage.md
- Plugins: plugins.md
- Configuration:
    - Options: configuration/options.md
    - Search: configuration/search.md
    - Memory: configuration/memory.md
    - Voice: configuration/voice.md
    - Image Generation: configuration/imagegen.md
- Help us improve Auto-GPT:
    - Share your debug logs with us: share-your-logs.md
    - Contribution guide: contributing.md
    - Running tests: testing.md
    - Code of Conduct: code-of-conduct.md
- Challenges:
    - Introduction: challenges/introduction.md
    - List of Challenges:
        - Memory:
            - Introduction: challenges/memory/introduction.md
            - Memory Challenge A: challenges/memory/challenge_a.md
            - Memory Challenge B: challenges/memory/challenge_b.md
            - Memory Challenge C: challenges/memory/challenge_c.md
            - Memory Challenge D: challenges/memory/challenge_d.md
        - Information retrieval:
            - Introduction: challenges/information_retrieval/introduction.md
            - Information Retrieval Challenge A: challenges/information_retrieval/challenge_a.md
            - Information Retrieval Challenge B: challenges/information_retrieval/challenge_b.md
    - Submit a Challenge: challenges/submit.md
    - Beat a Challenge: challenges/beat.md
- License: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/LICENSE
theme: readthedocs
theme:
  name: material
  icon:
    logo: material/book-open-variant
  favicon: imgs/Auto_GPT_Logo.png
  features:
    - navigation.sections
    - toc.follow
    - navigation.top
    - content.code.copy
  palette:
    # Palette toggle for light mode
    - media: "(prefers-color-scheme: light)"
      scheme: default
      toggle:
        icon: material/weather-night
        name: Switch to dark mode
    # Palette toggle for dark mode
    - media: "(prefers-color-scheme: dark)"
      scheme: slate
      toggle:
        icon: material/weather-sunny
        name: Switch to light mode

markdown_extensions:
  admonition:
  codehilite:
  pymdownx.keys:
  # Python Markdown
  - abbr
  - admonition
  - attr_list
  - def_list
  - footnotes
  - md_in_html
  - toc:
      permalink: true
  - tables

  # Python Markdown Extensions
  - pymdownx.arithmatex:
      generic: true
  - pymdownx.betterem:
      smart_enable: all
  - pymdownx.critic
  - pymdownx.caret
  - pymdownx.details
  - pymdownx.emoji:
      emoji_index: !!python/name:materialx.emoji.twemoji
      emoji_generator: !!python/name:materialx.emoji.to_svg
  - pymdownx.highlight
  - pymdownx.inlinehilite
  - pymdownx.keys
  - pymdownx.mark
  - pymdownx.smartsymbols
  - pymdownx.snippets:
      auto_append:
        - includes/abbreviations.md
  - pymdownx.superfences:
      custom_fences:
        - name: mermaid
          class: mermaid
          format: !!python/name:pymdownx.superfences.fence_code_format
  - pymdownx.tabbed:
      alternate_style: true
  - pymdownx.tasklist:
      custom_checkbox: true
  - pymdownx.tilde

plugins:
  - table-reader
  - search

extra_javascript:
  - https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
  - _javascript/tablesort.js
  - _javascript/mathjax.js
  - https://polyfill.io/v3/polyfill.min.js?features=es6
  - https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js


@ -48,6 +48,8 @@ isort
gitpython==3.1.31
auto-gpt-plugin-template @ git+https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template@0.1.0
mkdocs
mkdocs-material
mkdocs-table-reader-plugin
pymdown-extensions
mypy
types-Markdown


@ -3,6 +3,7 @@ Test cases for the config class, which handles the configuration settings
for the AI and ensures it behaves as a singleton.
"""
import os
from typing import Any
from unittest import mock
from unittest.mock import patch
@ -13,7 +14,7 @@ from autogpt.config import Config, ConfigBuilder
from autogpt.workspace.workspace import Workspace
def test_initial_values(config: Config):
def test_initial_values(config: Config) -> None:
    """
    Test if the initial values of the config class attributes are set correctly.
    """
@ -24,7 +25,7 @@ def test_initial_values(config: Config):
    assert config.smart_llm == "gpt-4-0314"
def test_set_continuous_mode(config: Config):
def test_set_continuous_mode(config: Config) -> None:
    """
    Test if the set_continuous_mode() method updates the continuous_mode attribute.
    """
@ -38,7 +39,7 @@ def test_set_continuous_mode(config: Config):
    config.continuous_mode = continuous_mode
def test_set_speak_mode(config: Config):
def test_set_speak_mode(config: Config) -> None:
    """
    Test if the set_speak_mode() method updates the speak_mode attribute.
    """
@ -52,7 +53,7 @@ def test_set_speak_mode(config: Config):
    config.speak_mode = speak_mode
def test_set_fast_llm(config: Config):
def test_set_fast_llm(config: Config) -> None:
    """
    Test if the set_fast_llm() method updates the fast_llm attribute.
    """
@ -66,7 +67,7 @@ def test_set_fast_llm(config: Config):
    config.fast_llm = fast_llm
def test_set_smart_llm(config: Config):
def test_set_smart_llm(config: Config) -> None:
    """
    Test if the set_smart_llm() method updates the smart_llm attribute.
    """
@ -80,7 +81,7 @@ def test_set_smart_llm(config: Config):
    config.smart_llm = smart_llm
def test_set_debug_mode(config: Config):
def test_set_debug_mode(config: Config) -> None:
    """
    Test if the set_debug_mode() method updates the debug_mode attribute.
    """
@ -95,7 +96,7 @@ def test_set_debug_mode(config: Config):
@patch("openai.Model.list")
def test_smart_and_fast_llms_set_to_gpt4(mock_list_models, config: Config):
def test_smart_and_fast_llms_set_to_gpt4(mock_list_models: Any, config: Config) -> None:
    """
    Test if models update to gpt-3.5-turbo if both are set to gpt-4.
    """
@ -132,7 +133,7 @@ def test_smart_and_fast_llms_set_to_gpt4(mock_list_models, config: Config):
    config.smart_llm = smart_llm
def test_missing_azure_config(workspace: Workspace):
def test_missing_azure_config(workspace: Workspace) -> None:
    config_file = workspace.get_path("azure_config.yaml")
    with pytest.raises(FileNotFoundError):
        ConfigBuilder.load_azure_config(str(config_file))