# 🌎 Overview
AutoGPT Store Version 2 expands on the Pre-Store by enhancing agent
discovery, providing richer content presentation, and introducing new
user engagement features. The focus is on creating a visually appealing
and interactive marketplace that allows users to explore and evaluate
agents through images, videos, and detailed descriptions.
### Vision
To create a visually compelling and interactive open-source marketplace
for autonomous AI agents, where users can easily discover, evaluate, and
interact with agents through media-rich listings, ratings, and version
history.
### Objectives
📊 Incorporate visuals (icons, images, videos) into agent listings.
⭐ Introduce a rating system and agent run count.
🔄 Provide version history and update logs from creators.
🔍 Improve user experience with advanced search and filtering features.
---------
Co-authored-by: Bently <tomnoon9@gmail.com>
Co-authored-by: Aarushi <aarushik93@gmail.com>
- Resolves #8948
### Changes 🏗️
- Parallelize frontend test job into a per-browser matrix
- Speed up "Free Disk Space" step by disabling removal of large system
packages
- Resolves #8884
We need to prevent breaking updates to the dependency version requirements of `autogpt_libs`.
`autogpt_libs/pyproject.toml` and `backend/poetry.lock` are only loosely coupled, so an extra check is needed to ensure they stay in sync.
For now I'm also reverting the breaking update of #8787; otherwise the newly added CI check would immediately fail.
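Below is a minimal sketch of the kind of sync check this adds. It is illustrative only (the paths and comparison logic are assumptions; the real CI step may simply rely on Poetry's own lock checking):

```python
# Illustrative sketch of an autogpt_libs <-> backend/poetry.lock sync check.
# Paths and comparison logic are assumptions, not the actual CI script.
import sys
import tomllib
from pathlib import Path

LIBS_PYPROJECT = Path("autogpt_platform/autogpt_libs/pyproject.toml")
BACKEND_LOCK = Path("autogpt_platform/backend/poetry.lock")


def normalize(name: str) -> str:
    return name.lower().replace("_", "-")


def main() -> int:
    declared = tomllib.loads(LIBS_PYPROJECT.read_text())["tool"]["poetry"]["dependencies"]
    locked = {
        normalize(pkg["name"]): pkg["version"]
        for pkg in tomllib.loads(BACKEND_LOCK.read_text())["package"]
    }
    missing = []
    for name, constraint in declared.items():
        if name == "python":
            continue
        key = normalize(name)
        if key not in locked:
            missing.append(f"{name} ({constraint}) is not in backend/poetry.lock")
        else:
            # Print the declared constraint next to the locked version for comparison.
            print(f"{name}: declared {constraint}, locked at {locked[key]}")
    for line in missing:
        print("ERROR:", line, file=sys.stderr)
    return 1 if missing else 0


if __name__ == "__main__":
    sys.exit(main())
```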
### Changes
- ci(backend): Add `poetry.lock` check
- Revert "build(deps): bump pydantic from 2.9.2 to 2.10.2 in
/autogpt_platform/autogpt_libs in the production-dependencies group
across 1 directory (#8787)"
- Resolves #8859
- Follow-up to #8751
### Changes
- Add `autogpt_libs` to the backend CI path filter
- Add `ruff format` step for `autogpt_libs` to `linter.py` and
`pre-commit` config
- Run `poetry run format` with the new setup
Dependabot's commit messages are bulky and don't use our commit message scopes. The format isn't fully customizable, but this change partially fixes that.
### Changes 🏗️
- Fix dependabot commit message scopes
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* ci(frontend,backend,classic): update branch from develop to dev
* ci(frontend, infra): enable ci on other tools
* Update classic-autogpt-docker-ci.yml
* fix: don't error if the folder exists
* fix: drop bad test
* Revert "fix: drop bad test"
This reverts commit c478d3cf4c.
* fix: turn off the correct test 👀
* fix: remove more
* Discard changes to .github/workflows/classic-autogpt-ci.yml
* Update classic-autogpt-docker-ci.yml
* Update classic-autogpt-docker-release.yml
* Update classic-autogpts-ci.yml
* Discard changes to .github/workflows/classic-forge-ci.yml
* Discard changes to .github/workflows/classic-autogpts-ci.yml
* Discard changes to .github/workflows/classic-python-checks.yml
* Discard changes to .github/workflows/repo-pr-label.yml
* Discard changes to .github/workflows/platform-backend-ci.yml
* Update classic-benchmark-ci.yml
* Update classic-frontend-ci.yml
* ci with workload identity
* temp update
* update name
* wip
* update auth step
* update provider name
* remove audience
* temp set to false
* update registry naming
* update context
* update login
* revert temp updates
* add prod iam and pool
* add release deploy with approval
* use gha default approval behaviour
* add back in release trigger
* add new line
* add prod migrations
* prod migrations without check
* ci: create dependabot
* ci: target the dev branch for dependabot
* ci: group prs
* ci: group updates
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* Create repo-pr-enforce-base-branch.yml
* fix quotes
* test
* fix github token
* fix trigger and CLI config
* change back trigger because otherwise I can't test it
* fix the fix
* fix repo selection
* fix perms?
* fix quotes and newlines escaping in message
* Update repo-pr-enforce-base-branch.yml
* grrr escape sequences in bash
* test
* clean up
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* fix(market): agent pagination and search errors
* fix(frontend): search was not paginated
* fix: linting
* feat(market): linting ci
* fix(ci): branch limit name
- ci(frontend): Ensure CI fails if `yarn.lock` is inconsistent with `package.json`
- dx(frontend): Add Prettier check to `lint` script in `package.json`
- dx(frontend): Add `packageManager` to `package.json` for Corepack support
- build(frontend): Use `yarn` consistently in the Dockerfile
## Config
- For Supabase, the back end needs `SUPABASE_URL`, `SUPABASE_SERVICE_ROLE_KEY`, and `SUPABASE_JWT_SECRET`
- For the GitHub integration to work, the back end needs `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
- For integration OAuth flows to work in local development, the back end needs `FRONTEND_BASE_URL` to generate login URLs with correct redirect URLs
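For reference, a minimal sketch of how these variables might be read as typed settings. This is purely illustrative (the class and field names are assumptions, not the backend's actual config module), assuming `pydantic-settings` is available:

```python
# Hypothetical settings sketch -- not the backend's actual config module.
from pydantic_settings import BaseSettings


class IntegrationSettings(BaseSettings):
    # Supabase access for the back end
    supabase_url: str = ""
    supabase_service_role_key: str = ""
    supabase_jwt_secret: str = ""

    # GitHub OAuth app used by the GitHub integration
    github_client_id: str = ""
    github_client_secret: str = ""

    # Used to build OAuth login/redirect URLs in local development
    frontend_base_url: str = "http://localhost:3000"


# Field names map to env vars case-insensitively: SUPABASE_URL, GITHUB_CLIENT_ID, etc.
settings = IntegrationSettings()
```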
## REST API
- Tweak output of OAuth `/login` endpoint: add `state_token` separately in response
- Add `POST /integrations/{provider}/credentials` (for API keys)
- Add `DELETE /integrations/{provider}/credentials/{cred_id}`
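A rough sketch of the two new credentials endpoints, to show the shape of the API. The request/response models, status codes, and handler bodies are illustrative only, not the actual router code:

```python
# Illustrative sketch of the credentials routes -- not the actual backend router.
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/integrations")


class CredentialsMetaResponse(BaseModel):
    id: str
    type: str  # "api_key" | "oauth2"
    title: str | None = None


class APIKeyPayload(BaseModel):
    api_key: str
    title: str | None = None


@router.post("/{provider}/credentials", status_code=201)
async def create_api_key_credentials(provider: str, payload: APIKeyPayload) -> CredentialsMetaResponse:
    # Persistence is omitted in this sketch; a real handler would save the key
    # in the credentials store and return the stored entry's metadata.
    return CredentialsMetaResponse(id="new-credential-id", type="api_key", title=payload.title)


@router.delete("/{provider}/credentials/{cred_id}", status_code=204)
async def delete_credentials(provider: str, cred_id: str) -> None:
    # A real handler would remove the stored credentials (and revoke tokens where applicable).
    ...
```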
## Back end
- Add Supabase support to `AppService`
- Add `FRONTEND_BASE_URL` config option, mainly for local development use
### `autogpt_libs.supabase_integration_credentials_store`
- Add `CredentialsType` alias
- Add `.bearer()` helper methods to `APIKeyCredentials` and `OAuth2Credentials`
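The `.bearer()` helpers presumably just format an `Authorization` header value; a sketch of the idea (field names are assumptions):

```python
# Sketch of the .bearer() helpers -- field names are assumptions, not the real models.
from pydantic import BaseModel, SecretStr


class APIKeyCredentials(BaseModel):
    api_key: SecretStr

    def bearer(self) -> str:
        return f"Bearer {self.api_key.get_secret_value()}"


class OAuth2Credentials(BaseModel):
    access_token: SecretStr

    def bearer(self) -> str:
        return f"Bearer {self.access_token.get_secret_value()}"
```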
### Blocks
- Add `CredentialsField(..) -> CredentialsMetaInput`
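A hedged sketch of what `CredentialsField` amounts to: a field factory whose JSON schema tells the front end which provider and credential types a block accepts, while the node itself only stores credentials *metadata*. The parameter and class names below are assumptions, not the exact signature in the repo:

```python
# Conceptual sketch of CredentialsField -- the real signature may differ.
from pydantic import BaseModel, Field


class CredentialsMetaInput(BaseModel):
    """What a node stores: a reference to saved credentials, never the secret itself."""
    id: str
    provider: str
    type: str  # "api_key" | "oauth2"
    title: str | None = None


def CredentialsField(provider: str, supported_types: list[str], **kwargs) -> CredentialsMetaInput:
    # Returns a pydantic field whose JSON schema carries the provider and the
    # supported credential types, so the front end can render a credentials picker.
    return Field(
        json_schema_extra={
            "credentials_provider": provider,
            "credentials_types": supported_types,
        },
        **kwargs,
    )


class GithubIssueBlockInput(BaseModel):  # hypothetical block input schema
    repo: str
    credentials: CredentialsMetaInput = CredentialsField(
        provider="github", supported_types=["api_key", "oauth2"]
    )
```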
## Front end
### UI components
- `CredentialsInput` for use on `CustomNode`: allows user to add/select credentials for a service.
- `APIKeyCredentialsModal`: a dialog for creating API keys
- `OAuth2FlowWaitingModal`: a dialog to indicate that the application is waiting for the user to log in to the 3rd party service in the provided pop-up window
- `NodeCredentialsInput`: wrapper for `CredentialsInput` with the "usual" interface of node input components
- New icons: `IconKey`, `IconKeyPlus`, `IconUser`, `IconUserPlus`
### Data model
- `CredentialsProvider`: introduces the app-level `CredentialsProvidersContext`, which acts as an application-wide store and cache for credentials metadata.
- `useCredentials` for use on `CustomNode`: uses `CredentialsProvidersContext` and provides node-specific credential data and provider-specific data/functions
- `/auth/integrations/oauth_callback` route to close the loop to the `CredentialsInput` after a user completes sign-in to the external service
- Add `BlockIOCredentialsSubSchema`
### API client
- Add `isAuthenticated` method
- Add methods for integration OAuth flow: `oAuthLogin`, `oAuthCallback`
- Add CRD methods for credentials: `createAPIKeyCredentials`, `listCredentials`, `getCredentials`, `deleteCredentials`
- Add mirrored types `CredentialsMetaResponse`, `CredentialsMetaInput`, `OAuth2Credentials`, `APIKeyCredentials`
- Add GitHub blocks + "DEVELOPER_TOOLS" category
- Add `**kwargs` to the `Block.run(..)` signature so additional keyword arguments (such as credentials) can be passed to blocks
- Add support for loading blocks from nested modules (e.g. `blocks/github/issues.py`)
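The nested-module support boils down to walking the blocks package recursively and importing everything found, so `Block` subclasses defined in sub-packages get registered. A generic sketch of that idea (not the repo's actual loader):

```python
# Generic sketch of recursive block-module loading -- not the actual loader code.
import importlib
import pkgutil
from types import ModuleType


def load_all_block_modules(package: ModuleType) -> list[ModuleType]:
    """Import every module under `package`, including nested ones like blocks/github/issues.py,
    so any Block subclasses they define are imported and can be registered."""
    modules = []
    for module_info in pkgutil.walk_packages(package.__path__, prefix=package.__name__ + "."):
        modules.append(importlib.import_module(module_info.name))
    return modules
```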
#### Executor
- Add strict support for `credentials` fields on blocks
- Fetch credentials for graph execution and pass them down through to the node execution
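In simplified form, the executor resolves the referenced credentials before running a node and hands the full credentials object to the block through the new `**kwargs`. The names below are illustrative, not the actual executor code:

```python
# Simplified sketch of the credentials pass-down -- not the actual executor code.
def execute_node(block, input_data: dict, credentials_store):
    kwargs = {}
    creds_meta = input_data.get("credentials")  # CredentialsMetaInput, if the block declares one
    if creds_meta is not None:
        # Resolve the referenced credentials up front and pass the real credentials
        # object down to the block, instead of just the metadata reference.
        kwargs["credentials"] = credentials_store.get_creds_by_id(
            creds_meta["id"], creds_meta["provider"]
        )
    # Block.run(..) now accepts **kwargs, so blocks without a credentials field
    # are unaffected by the extra argument.
    return block.run(input_data, **kwargs)
```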
Restructure the repo to make the difference between classic AutoGPT and the AutoGPT Platform clear:
* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
* Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
* `rnd/autogpt_builder` -> `autogpt_platform/frontend`
* `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly
* update pr template wording
* add what and how
* Update .github/PULL_REQUEST_TEMPLATE.md
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
- feat(builder): Add "Stop Run" buttons to monitor and builder
- Implement additional state management in `useAgentGraph` hook
- Add "stop" request mechanism
- Implement execution status tracking using WebSockets
- Add `isSaving`, `isRunning`, `isStopping` outputs
- Add `requestStopRun` method
- Rename `requestSaveRun` to `requestSaveAndRun` for clarity
- Add needed functionality for the above to `AutoGPTServerAPI` client
- Add `stopGraphExecution` method
- Add support for multiple handlers per WebSocket method
- Fix parsing of timestamps in `execution_event` WebSocket messages
- Add `IconSquare` from Lucide to `@/components/ui/icons`
- feat(server): Add `POST /graphs/{graph_id}/executions/{graph_exec_id}/stop` route
- Add `stop_graph_run` method to `AgentServer`
- feat(server): Add `cancel_execution` method to `ExecutionManager`
- Replace the node executor's `ProcessPoolExecutor` with `multiprocessing.Pool`, which has a `terminate()` method (see the sketch after this list)
- Remove now unnecessary `Executor.wait_future(..)` method
- Add `get_graph_execution(..)` in `.data.execution`
- fix(server): Reduce number of node executors to 5 per graph executor
This is necessary because `multiprocessing.Pool` spawns its workers at initialization, rather than on demand like `ProcessPoolExecutor` does
- dx(server): Improve debug logging in `ExecutionManager`
- ci(server): Add debug logging mode to CI Pytest step
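To illustrate the pool swap with a self-contained toy (this is not the actual `ExecutionManager` code): `multiprocessing.Pool` exposes `terminate()`, which is what makes cancelling in-flight node work possible, at the cost of spawning its fixed set of workers up front:

```python
# Toy illustration of why multiprocessing.Pool enables cancellation --
# not the actual ExecutionManager implementation.
import multiprocessing
import time


def run_node(node_id: int) -> int:
    time.sleep(5)  # stand-in for a long-running block execution
    return node_id


if __name__ == "__main__":
    # Unlike ProcessPoolExecutor, Pool spawns all 5 workers immediately...
    pool = multiprocessing.Pool(processes=5)
    results = [pool.apply_async(run_node, (i,)) for i in range(10)]

    time.sleep(1)
    # ...but it exposes terminate(), which stops the workers without waiting for
    # queued or in-flight tasks -- the basis for cancel_execution / "Stop Run".
    pool.terminate()
    pool.join()
    print("graph execution cancelled")
```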
### Other improvements
Server:
- Improve output type of `ExecutionManager.add_execution(..)`
- Renamed a few things in `.server.rest_api` for consistency
Front end:
- Improved typing in `AutoGPTServerAPI` client
* move migrations, update networking and dockerignore
* update docs
* remove sqlite from ci
* remove schema linting checks
* fix formatting
* remove schema linting
* add test script
* formatting and linting
* stop pg not down
* separate test db
* diff port
* remove duplicate