mirror of
https://github.com/anasty17/mirror-leech-telegram-bot.git
synced 2025-01-08 12:07:33 +08:00
Stabilising
- [NEW] Ability to seed a specific torrent using qBittorrent
- [yt-dlp] Fix file name too long
- [yt-dlp] Get playlist size faster
- [yt-dlp] Fix audio playlist size
- [yt-dlp] Fix subtitles not uploaded to Drive
- [status] Improve status refresh
- [google] Fix file name too long
- [google] Retry on IncompleteRead error
- [leech] Fix splitting for videos that contain chapters
- [gdtot] REMOVED
- Other minor fixes

Signed-off-by: anasty17 <e.anastayyar@gmail.com>
This commit is contained in:
parent 8e61b8f115
commit 8f480838fe

README.md: 188 lines changed
@@ -3,34 +3,35 @@ This is a Telegram Bot written in Python for mirroring files on the Internet to

# Features:

## By [Anas](https://github.com/anasty17)
- qBittorrent
- Select files from Torrent before downloading using qbittorrent
- Leech (splitting, thumbnail for each user, setting as document or as media for each user)
- Stop duplicates for all tasks except yt-dlp tasks
- Zip/Unzip G-Drive links
- Counting files/folders from Google Drive link
- View Link button, extra button to open file index link in browser instead of direct download
- Status Pages for unlimited tasks
- Clone status
- Search in multiple Drive folder/TeamDrive
- Recursive Search (only with `root` or TeamDrive ID, folder ids will be listed with non-recursive method)
- Multi-TD list by token.pickle if exists
- Extract rar, zip and 7z splits with or without password
- Zip file/folder with or without password
- Use Token.pickle if file not found with Service Account for all Gdrive functions
- Random Service Account at startup
- Mirror/Leech/Watch/Clone/Count/Del by reply
- YT-DLP quality buttons
- qBittorrent.
- Select files from Torrent before downloading using qbittorrent.
- Leech (splitting, thumbnail for each user, setting as document or as media for each user).
- Stop duplicates for all tasks except yt-dlp tasks.
- Zip/Unzip G-Drive links.
- Counting files/folders from Google Drive link.
- View Link button, extra button to open file index link in browser instead of direct download.
- Status Pages for unlimited tasks.
- Clone status.
- Search in multiple Drive folder/TeamDrive.
- Recursive Search (only with `root` or TeamDrive ID, folder ids will be listed with non-recursive method).
- Multi-TD list by token.pickle if exists.
- Extract rar, zip and 7z splits with or without password.
- Zip file/folder with or without password.
- Use Token.pickle if file not found with Service Account for all Gdrive functions.
- Random Service Account at startup.
- Mirror/Leech/Watch/Clone/Count/Del by reply.
- YT-DLP quality buttons.
- Search on torrents with Torrent Search API or with variable plugins using qBittorrent search engine
- Docker image support for linux `amd64, arm64, arm/v7, arm/v6, s390x, arm64/v8` (**Note**: Use `anasty17/mltb:arm64` for `arm64/v8` or oracle)
- Update bot at startup and with restart command using `UPSTREAM_REPO`
- Qbittorrent seed until reaching specific ratio or time
- Rss feed and filter. Based on this repository [rss-chan](https://github.com/hyPnOtICDo0g/rss-chan)
- Save leech settings including thumbnails in database
- Mirror/Leech/Clone multi links/files with one command
- Extensions Filter for the files to be uploaded/cloned
- Docker image support for linux `amd64, arm64, arm/v7, arm/v6, arm64/v8` (**Note**: Use `anasty17/mltb:arm64` for `arm64/v8` or oracle).
- Update bot at startup and with restart command using `UPSTREAM_REPO`.
- Qbittorrent seed until reaching specific ratio or time.
- Rss feed and filter. Based on this repository [rss-chan](https://github.com/hyPnOtICDo0g/rss-chan).
- Save leech settings including thumbnails in database.
- Mirror/Leech/Clone multi links/files with one command.
- Extensions Filter for the files to be uploaded/cloned.
- Incomplete task notifier to get incomplete task messages after restart, works with database.
- Many bugs have been fixed
- Almost all repository functions have been improved.
- Many bugs have been fixed.

## From Other Repositories
- Mirror direct download links, Torrent, and Telegram files to Google Drive
@@ -58,10 +59,6 @@ This is a Telegram Bot written in Python for mirroring files on the Internet to

## Prerequisites

- Tutorial Video from A to Z:
- Thanks to [Wiszky](https://github.com/vishnoe115)
<p><a href="https://www.youtube.com/watch?v=gFQWJ4ftt48"> <img src="https://img.shields.io/badge/See%20Video-black?style=for-the-badge&logo=YouTube" width="160"/></a></p>

### 1. Installing requirements

- Clone this repo:
@@ -72,11 +69,8 @@ git clone https://github.com/anasty17/mirror-leech-telegram-bot mirrorbot/ && cd
```
sudo apt install python3 python3-pip
```
Install Docker by following the [official Docker docs](https://docs.docker.com/engine/install/debian/) or by commands below.
```
sudo apt install snapd
sudo snap install docker
```
Install Docker by following the [official Docker docs](https://docs.docker.com/engine/install/debian/)

- For Arch and its derivatives:
```
sudo pacman -S docker python
```
@@ -101,81 +95,79 @@ Fill up rest of the fields. Meaning of each field is discussed below:

**1. Required Fields**

- `BOT_TOKEN`: The Telegram Bot Token that you got from [@BotFather](https://t.me/BotFather)
- `GDRIVE_FOLDER_ID`: This is the Folder/TeamDrive ID of the Google Drive Folder to which you want to upload all the mirrors.
- `BOT_TOKEN`: The Telegram Bot Token that you got from [@BotFather](https://t.me/BotFather). `Str`
- `GDRIVE_FOLDER_ID`: This is the Folder/TeamDrive ID of the Google Drive Folder or `root` to which you want to upload all the mirrors. `Str`
- `OWNER_ID`: The Telegram User ID (not username) of the Owner of the bot. `Int`
- `DOWNLOAD_DIR`: The path to the local folder where the downloads should be downloaded to.
- `DOWNLOAD_DIR`: The path to the local folder where the downloads should be downloaded to. `Str`
- `DOWNLOAD_STATUS_UPDATE_INTERVAL`: Time in seconds after which the progress/status message will be updated. Recommended `10` seconds at least. `Int`
- `AUTO_DELETE_MESSAGE_DURATION`: Interval of time (in seconds), after which the bot deletes its message and command message which is expected to be viewed instantly. **NOTE**: Set to `-1` to disable auto message deletion. `Int`
- `IS_TEAM_DRIVE`: Set `True` if uploading to TeamDrive. Default is `False`. `Bool`
- `TELEGRAM_API`: This is to authenticate your Telegram account for downloading Telegram files. You can get this from https://my.telegram.org. `Int`
- `TELEGRAM_HASH`: This is to authenticate your Telegram account for downloading Telegram files. You can get this from https://my.telegram.org.
- `TELEGRAM_HASH`: This is to authenticate your Telegram account for downloading Telegram files. You can get this from https://my.telegram.org. `Str`
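The required fields live in **config.env** as `KEY = "value"` lines, and the `Str`/`Int`/`Bool` tags above describe how each value is ultimately interpreted. A minimal sketch of that idea (this is illustrative parsing, not the bot's actual config loader; the sample values are made up):

```python
def load_config_env(text):
    """Parse simple KEY = "value" lines from a config.env-style file."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

sample = '''
BOT_TOKEN = "123456:ABC-token"
OWNER_ID = "111111111"
DOWNLOAD_DIR = "/usr/src/app/downloads/"
IS_TEAM_DRIVE = "False"
'''
cfg = load_config_env(sample)
owner_id = int(cfg["OWNER_ID"])                  # OWNER_ID is tagged `Int`
is_td = cfg["IS_TEAM_DRIVE"].lower() == "true"   # `Bool` fields arrive as strings
```

Everything in the env file is a string until coerced, which is why the `Int`/`Bool` tags matter.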
**2. Optional Fields**
- `DATABASE_URL`: Your SQL Database URL. Follow this [Generate Database](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#generate-database) to generate database. Data will be saved in Database: auth and sudo users, leech settings including thumbnails for each user, rss data and incomplete tasks. **NOTE**: If deploying on heroku and using heroku postgresql delete this variable from **config.env** file. **DATABASE_URL** will be grabbed from heroku variables.
- `AUTHORIZED_CHATS`: Fill user_id and chat_id of groups/users you want to authorize. Separate them by space.
- `SUDO_USERS`: Fill user_id of users whom you want to give sudo permission. Separate them by space.
- `IS_TEAM_DRIVE`: Set `True` if uploading to TeamDrive. Default is `False`. `Bool`
- `DATABASE_URL`: Your SQL Database URL. Follow this [Generate Database](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#generate-database) to generate database. Data will be saved in Database: auth and sudo users, leech settings including thumbnails for each user, rss data and incomplete tasks. **NOTE**: If deploying on heroku and using heroku postgresql delete this variable from **config.env** file. **DATABASE_URL** will be grabbed from heroku variables. `Str`
- `AUTHORIZED_CHATS`: Fill user_id and chat_id of groups/users you want to authorize. Separate them by space. `Str`
- `SUDO_USERS`: Fill user_id of users whom you want to give sudo permission. Separate them by space. `Str`
- `IGNORE_PENDING_REQUESTS`: Ignore pending requests after restart. Default is `False`. `Bool`
- `USE_SERVICE_ACCOUNTS`: Whether to use Service Accounts or not. For this to work see [Using Service Accounts](https://github.com/anasty17/mirror-leech-telegram-bot#generate-service-accounts-what-is-service-account) section below. Default is `False`. `Bool`
- `INDEX_URL`: Refer to https://gitlab.com/ParveenBhadooOfficial/Google-Drive-Index.
- `STATUS_LIMIT`: Limit the no. of tasks shown in status message with buttons. **NOTE**: Recommended limit is `4` tasks.
- `INDEX_URL`: Refer to https://gitlab.com/ParveenBhadooOfficial/Google-Drive-Index. `Str`
- `STATUS_LIMIT`: Limit the no. of tasks shown in status message with buttons. **NOTE**: Recommended limit is `4` tasks. `Str`
- `STOP_DUPLICATE`: Bot will check file in Drive, if it is present in Drive, downloading or cloning will be stopped. (**NOTE**: File will be checked using filename not file hash, so this feature is not perfect yet). Default is `False`. `Bool`
- `CMD_INDEX`: Commands index number. This number will be added at the end of all commands.
- `UPTOBOX_TOKEN`: Uptobox token to mirror uptobox links. Get it from [Uptobox Premium Account](https://uptobox.com/my_account).
- `TORRENT_TIMEOUT`: Timeout of dead torrents downloading with qBittorrent and Aria2c in seconds.
- `EXTENSION_FILTER`: File extensions that won't upload/clone. Separate them by space.
- `CMD_INDEX`: Commands index number. This number will be added at the end of all commands. `Str`
- `TORRENT_TIMEOUT`: Timeout of dead torrents downloading with qBittorrent and Aria2c in seconds. `Str`
- `EXTENSION_FILTER`: File extensions that won't upload/clone. Separate them by space. `Str`
- `INCOMPLETE_TASK_NOTIFIER`: Get incomplete task messages after restart. Require database and (supergroup or channel). Default is `False`. `Bool`
- `UPTOBOX_TOKEN`: Uptobox token to mirror uptobox links. Get it from [Uptobox Premium Account](https://uptobox.com/my_account).
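Several of the fields above (`AUTHORIZED_CHATS`, `SUDO_USERS`, `EXTENSION_FILTER`) are space-separated strings. A sketch of how such values can be split and applied (illustrative helpers, not the bot's exact code; the ids and extensions are made up):

```python
def parse_ids(value):
    """Split a space-separated id string into a set of ints."""
    return {int(x) for x in value.split()}

def is_filtered(filename, extension_filter):
    """True if filename ends with one of the space-separated extensions."""
    exts = tuple(e.lower() for e in extension_filter.split())
    return filename.lower().endswith(exts)

sudo_users = parse_ids("123456 789012")          # SUDO_USERS-style value
blocked = is_filtered("movie.sample.EXE", "exe zip")  # EXTENSION_FILTER-style value
```

Parsing into a set makes the later "is this user authorized" check an O(1) membership test.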
### Update
- `UPSTREAM_REPO`: Your github repository link, if your repo is private add `https://username:{githubtoken}@github.com/{username}/{reponame}` format. Get token from [Github settings](https://github.com/settings/tokens). So you can update your bot from filled repository on each restart. **NOTE**: Any change in docker or requirements you need to deploy/build again with updated repo to take effect. DON'T delete .gitignore file. For more information read [THIS](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#upstream-repo-recommended).
- `UPSTREAM_BRANCH`: Upstream branch for update. Default is `master`.
- `UPSTREAM_REPO`: Your github repository link, if your repo is private add `https://username:{githubtoken}@github.com/{username}/{reponame}` format. Get token from [Github settings](https://github.com/settings/tokens). So you can update your bot from filled repository on each restart. `Str`.
- **NOTE**: Any change in docker or requirements you need to deploy/build again with updated repo to take effect. DON'T delete .gitignore file. For more information read [THIS](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#upstream-repo-recommended).
- `UPSTREAM_BRANCH`: Upstream branch for update. Default is `master`. `Str`
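The private-repo format for `UPSTREAM_REPO` embeds a GitHub token in the URL. A sketch of building that string from its parts (the username, token, and repo name here are placeholders):

```python
def private_repo_url(username, token, reponame):
    # Format documented above: https://username:{githubtoken}@github.com/{username}/{reponame}
    return f"https://{username}:{token}@github.com/{username}/{reponame}"

url = private_repo_url("alice", "ghp_exampletoken", "mirror-leech-telegram-bot")
```

Anyone who can read your config can read the embedded token, so treat the whole URL as a secret.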
### Leech
- `TG_SPLIT_SIZE`: Size of split in bytes. Default is `2GB`.
- `TG_SPLIT_SIZE`: Size of split in bytes. Default is `2GB`. `Str`
- `AS_DOCUMENT`: Default type of Telegram file upload. Default is `False`, meaning as media. `Bool`
- `EQUAL_SPLITS`: Split files larger than **TG_SPLIT_SIZE** into equal parts size (Not working with zip cmd). Default is `False`. `Bool`
- `CUSTOM_FILENAME`: Add custom word to leeched file name.
- `CUSTOM_FILENAME`: Add custom word to leeched file name. `Str`
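`EQUAL_SPLITS` changes the arithmetic: instead of fixed `TG_SPLIT_SIZE` chunks plus a small remainder, the file is cut into near-equal parts. A sketch of that calculation, mirroring the ceil-based logic from `split_file()` in the diff further below (the function wrapper and sample size are mine):

```python
from math import ceil

TG_SPLIT_SIZE = 2097152000  # ~2 GB default, in bytes

def equal_split_size(size, limit=TG_SPLIT_SIZE):
    """Number of parts and per-part size for an equal split."""
    parts = ceil(size / limit)
    split_size = ceil(size / parts) + 1000  # small margin, as in split_file()
    return parts, split_size

parts, split_size = equal_split_size(5 * 1024**3)  # a 5 GiB file
```

For a 5 GiB file this yields 3 parts of roughly 1.67 GB each rather than two 2 GB parts and a ~1 GB tail.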
### qBittorrent
- `BASE_URL_OF_BOT`: Valid BASE URL where the bot is deployed to use qbittorrent web selection. Format of URL should be `http://myip`, where `myip` is the IP/Domain(public) of your bot, or if you have chosen a port other than `80` write it in this format: `http://myip:port` (`http` and not `https`). This Var is optional on VPS and required for Heroku, especially to avoid app sleeping/idling. For Heroku fill `https://yourappname.herokuapp.com`. Still got idling? You can use http://cron-job.org to ping your Heroku app.
- `SERVER_PORT`: Only For VPS even if `IS_VPS` is `False`, which is the **BASE_URL_OF_BOT** Port.
- `BASE_URL_OF_BOT`: Valid BASE URL where the bot is deployed to use qbittorrent web selection. Format of URL should be `http://myip`, where `myip` is the IP/Domain(public) of your bot, or if you have chosen a port other than `80` write it in this format: `http://myip:port` (`http` and not `https`). This Var is optional on VPS and required for Heroku, especially to avoid app sleeping/idling. For Heroku fill `https://yourappname.herokuapp.com`. Still got idling? You can use http://cron-job.org to ping your Heroku app. `Str`
- `SERVER_PORT`: Only For VPS, which is the **BASE_URL_OF_BOT** Port. `Str`
- `WEB_PINCODE`: If empty or `False`, no pincode is required during qbit web selection. `Bool`
- `QB_SEED`: QB torrent will be seeded after and while uploading until reaching specific ratio or time, edit `MaxRatio` or `GlobalMaxSeedingMinutes` or both from qbittorrent.conf (`-1` means no limit, but you can cancel manually by gid). **NOTE**: 1. Don't change `MaxRatioAction`, 2. Only works with `/qbmirror` and `/qbzipmirror`. Default is `False`. `Bool`
- `QB_SEED`: QB torrent will be seeded after and while uploading until reaching specific ratio or time, edit `MaxRatio` or `GlobalMaxSeedingMinutes` or both from qbittorrent.conf (`-1` means no limit, but you can cancel manually by gid). **NOTE**: 1. Don't change `MaxRatioAction`, 2. Only works with `/qbmirror` and `/qbzipmirror`. Also you can use this feature for a specific torrent while using the bot and leave this variable empty. Default is `False`. `Bool`
- **Qbittorrent NOTE**: If you're facing a RAM-exceeded issue then set a limit for `MaxConnecs`, decrease `AsyncIOThreadsCount` in the qbittorrent config and limit `DiskWriteCacheSize` to `32`.
### RSS
- `RSS_DELAY`: Time in seconds for rss refresh interval. Recommended `900` seconds at least. Default is `900` seconds.
- `RSS_COMMAND`: Choose command for the desired action.
- `RSS_CHAT_ID`: Chat ID where rss links will be sent. If using a channel then add the channel id.
- `USER_SESSION_STRING`: To send rss links from your telegram account, instead of adding the bot to a channel then linking the channel to a group to get rss links, since the bot will not read commands from itself or another bot. To generate the session string use this command `python3 generate_string_session.py` after mounting the repo folder.
- `RSS_DELAY`: Time in seconds for rss refresh interval. Recommended `900` seconds at least. Default is `900` seconds. `Str`
- `RSS_COMMAND`: Choose command for the desired action. `Str`
- `RSS_CHAT_ID`: Chat ID where rss links will be sent. If using a channel then add the channel id. `Str`
- `USER_SESSION_STRING`: To send rss links from your telegram account, instead of adding the bot to a channel then linking the channel to a group to get rss links, since the bot will not read commands from itself or another bot. To generate the session string use this command `python3 generate_string_session.py` after mounting the repo folder. `Str`
- **RSS NOTE**: `DATABASE_URL` and `RSS_CHAT_ID` are required, otherwise all rss commands will not work. You must use the bot in a group. You can add the bot to a channel and link this channel to a group so messages sent by the bot to the channel will be forwarded to the group without using `USER_STRING_SESSION`.
### Private Files
- `ACCOUNTS_ZIP_URL`: Only if you want to load your Service Account externally from an Index Link or by any direct download link, NOT a webpage link. Archive the accounts folder to a ZIP file. Fill this with the direct download link of the zip file. If the index needs authentication, add direct download as shown below:
- `ACCOUNTS_ZIP_URL`: Only if you want to load your Service Account externally from an Index Link or by any direct download link, NOT a webpage link. Archive the accounts folder to a ZIP file. Fill this with the direct download link of the zip file. `Str`. If the index needs authentication, add direct download as shown below:
- `https://username:password@example.workers.dev/...`
- `TOKEN_PICKLE_URL`: Only if you want to load your **token.pickle** externally from an Index Link. Fill this with the direct link of that file.
- `MULTI_SEARCH_URL`: Check `drive_folder` setup [here](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#multi-search-ids). Write the **drive_folder** file [here](https://gist.github.com/). Open the raw file of that gist; its URL will be your required variable. Should be in this form after removing the commit id: https://gist.githubusercontent.com/username/gist-id/raw/drive_folder
- `YT_COOKIES_URL`: Youtube authentication cookies. Check setup [Here](https://github.com/ytdl-org/youtube-dl#how-do-i-pass-cookies-to-youtube-dl). Use the gist raw link and remove the commit id from the link, so you can edit it from gists only.
- `NETRC_URL`: To create a .netrc file containing authentication for aria2c and yt-dlp. Use the gist raw link and remove the commit id from the link, so you can edit it from gists only. **NOTE**: After editing .netrc you need to restart the docker, or if deployed on heroku restart the dyno, in case your edits relate to aria2c authentication.
- `TOKEN_PICKLE_URL`: Only if you want to load your **token.pickle** externally from an Index Link. Fill this with the direct link of that file. `Str`
- `MULTI_SEARCH_URL`: Check `drive_folder` setup [here](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#multi-search-ids). Write the **drive_folder** file [here](https://gist.github.com/). Open the raw file of that gist; its URL will be your required variable. Should be in this form after removing the commit id: https://gist.githubusercontent.com/username/gist-id/raw/drive_folder. `Str`
- `YT_COOKIES_URL`: Youtube authentication cookies. Check setup [Here](https://github.com/ytdl-org/youtube-dl#how-do-i-pass-cookies-to-youtube-dl). Use the gist raw link and remove the commit id from the link, so you can edit it from gists only. `Str`
- `NETRC_URL`: To create a .netrc file containing authentication for aria2c and yt-dlp. Use the gist raw link and remove the commit id from the link, so you can edit it from gists only. **NOTE**: After editing .netrc you need to restart the docker, or if deployed on heroku restart the dyno, in case your edits relate to aria2c authentication. `Str`
- **NOTE**: All the above url variables are used in case you want to edit them easily in future without deploying again, or if you want to deploy from a public fork. If deploying using cli or a private fork you can leave these variables empty and add token.pickle, accounts folder, drive_folder, .netrc and cookies.txt directly to root, but you can't update them without a rebuild; OR simply leave all the above variables empty and use a private UPSTREAM_REPO.
### MEGA
- `MEGA_API_KEY`: Mega.nz API key to mirror mega.nz links. Get it from [Mega SDK Page](https://mega.nz/sdk)
- `MEGA_EMAIL_ID`: E-Mail ID used to sign up on mega.nz for using premium account.
- `MEGA_PASSWORD`: Password for mega.nz account.

### GDTOT
- `CRYPT`: Cookie for gdtot google drive link generator. Follow these [steps](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#gdtot-cookies).
- `MEGA_API_KEY`: Mega.nz API key to mirror mega.nz links. Get it from [Mega SDK Page](https://mega.nz/sdk). `Str`
- `MEGA_EMAIL_ID`: E-Mail ID used to sign up on mega.nz for using premium account. `Str`
- `MEGA_PASSWORD`: Password for mega.nz account. `Str`
### Buttons
- `VIEW_LINK`: View Link button to open file Index Link in browser instead of direct download link. You can figure out if it's compatible with your Index code or not: open any video from your Index and check if its URL ends with `?a=view`; if yes, set it to `True`. Compatible with [BhadooIndex](https://gitlab.com/ParveenBhadooOfficial/Google-Drive-Index) Code. Default is `False`. `Bool`
### Torrent Search
- `SEARCH_API_LINK`: Search api app link. Get your api from deploying this [repository](https://github.com/Ryuk-me/Torrent-Api-py).
- `SEARCH_API_LINK`: Search api app link. Get your api from deploying this [repository](https://github.com/Ryuk-me/Torrent-Api-py). `Str`
- Supported Sites:
>1337x, Piratebay, Nyaasi, Torlock, Torrent Galaxy, Zooqle, Kickass, Bitsearch, MagnetDL, Libgen, YTS, Limetorrent, TorrentFunk, Glodls, TorrentProject and YourBittorrent
- `SEARCH_LIMIT`: Search limit for search api, limit for each site and not overall result limit. Default is zero (Default api limit for each site).
- `SEARCH_PLUGINS`: List of qBittorrent search plugins (github raw links). I have added some plugins, you can remove/add plugins as you want. Main Source: [qBittorrent Search Plugins (Official/Unofficial)](https://github.com/qbittorrent/search-plugins/wiki/Unofficial-search-plugins).
- `SEARCH_LIMIT`: Search limit for search api, limit for each site and not overall result limit. Default is zero (Default api limit for each site). `Str`
- `SEARCH_PLUGINS`: List of qBittorrent search plugins (github raw links). I have added some plugins, you can remove/add plugins as you want. Main Source: [qBittorrent Search Plugins (Official/Unofficial)](https://github.com/qbittorrent/search-plugins/wiki/Unofficial-search-plugins). `Str`

------
@@ -216,26 +208,16 @@ sudo docker image prune -a
```
4. Check the number of processing units of your machine with the `nproc` cmd and multiply it by 4, then edit `AsyncIOThreadsCount` in qBittorrent.conf.
5. Use `anasty17/mltb:arm64` for oracle or arm64/v8.
- Tutorial Video for Deploying on Oracle VPS:
- Thanks to [Wiszky](https://github.com/vishnoe115)
- No need to use sudo su, you can also use sudo before each cmd!
<p><a href="https://youtu.be/IzUG7U7v4U4?t=968"> <img src="https://img.shields.io/badge/See%20Video-black?style=for-the-badge&logo=YouTube" width="160"/></a></p>
6. You can add the `CONFIG_FILE_URL` variable using docker and docker-compose, google it.

------
### Deploying on VPS Using Docker

- Start Docker daemon (skip if already running), if installed by snap then use 2nd command:
- Start Docker daemon (SKIP if already running):
```
sudo dockerd
```
```
sudo snap start docker
```
- **Note**: If not started or not starting, run the command below then try to start.
```
sudo apt install docker.io
```
- Build Docker image:
```
sudo docker build . -t mirror-bot
@@ -447,8 +429,8 @@ python3 add_to_team_drive.py -d SharedTeamDriveSrcID

## Multi Search IDs
To use a list from multiple TDs/folders, run driveid.py in your terminal and follow it. It will generate a **drive_folder** file, or you can simply create the `drive_folder` file in the working directory and fill it in the format below:
```
MyTdName folderID/tdID IndexLink(if available)
MyTdName2 folderID/tdID IndexLink(if available)
DriveName folderID/tdID or `root` IndexLink(if available)
DriveName folderID/tdID or `root` IndexLink(if available)
```
-----
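Each `drive_folder` line is a name, an ID (or `root`), and an optional index link, separated by spaces. A hypothetical parser for that format (driveid.py generates the file; this parsing, and the sample IDs, are illustrative):

```python
def parse_drive_folder(text):
    """Parse drive_folder lines: NAME ID [INDEX_LINK]."""
    drives = []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        name, drive_id = fields[0], fields[1]
        index = fields[2] if len(fields) > 2 else None  # index link is optional
        drives.append((name, drive_id, index))
    return drives

entries = parse_drive_folder(
    "MyTd 0A1B2C3D4E5F https://index.example.workers.dev/0:\nBackup root"
)
```

Because fields are space-separated, drive names themselves cannot contain spaces in this format.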
@@ -476,33 +458,3 @@ machine example.workers.dev password index_password

Where host is the name of the extractor (eg. instagram, Twitch). Multiple accounts of different hosts can be added, each separated by a new line.

-----

## Gdtot Cookies
To Clone or Leech a gdtot link follow these steps:
1. Login/Register to [gdtot](https://new.gdtot.top).
2. Copy this script and paste it in the browser address bar.
- **Note**: After pasting it, check at the beginning of the script in the browser address bar whether `javascript:` exists or not; if not, write it as shown below.
```javascript
javascript:(function () {
    const input = document.createElement('input');
    COOKIE = JSON.parse(JSON.stringify({cookie : document.cookie}));
    input.value = COOKIE['cookie'].split('crypt=')[1];
    document.body.appendChild(input);
    input.focus();
    input.select();
    var result = document.execCommand('copy');
    document.body.removeChild(input);
    if(result)
        alert('Crypt copied to clipboard');
    else
        prompt('Failed to copy Crypt. Manually copy below Crypt\n\n', input.value);
})();
```
- After pressing enter your browser will show an alert.
3. Now you'll have the Crypt value in your clipboard
```
NGxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxWdSVT0%3D
```
4. Paste this value for **CRYPT** in the config.env file.

-----
@@ -81,6 +81,7 @@ def restart(update, context):
    restart_message = sendMessage("Restarting...", context.bot, update.message)
    if Interval:
        Interval[0].cancel()
        Interval.clear()
    clean_all()
    srun(["pkill", "-f", "gunicorn|aria2c|qbittorrent-nox"])
    srun(["python3", "update.py"])
@@ -110,7 +111,7 @@ help_string_telegraph = f'''<br>
<br><br>
<b>/{BotCommands.UnzipMirrorCommand}</b> [download_url][magnet_link]: Start mirroring and upload the file/folder extracted from any archive extension
<br><br>
<b>/{BotCommands.QbMirrorCommand}</b> [magnet_link][torrent_file][torrent_file_url]: Start Mirroring using qBittorrent, Use <b>/{BotCommands.QbMirrorCommand} s</b> to select files before downloading
<b>/{BotCommands.QbMirrorCommand}</b> [magnet_link][torrent_file][torrent_file_url]: Start Mirroring using qBittorrent, Use <b>/{BotCommands.QbMirrorCommand} s</b> to select files before downloading and use <b>/{BotCommands.QbMirrorCommand} d</b> to seed specific torrent
<br><br>
<b>/{BotCommands.QbZipMirrorCommand}</b> [magnet_link][torrent_file][torrent_file_url]: Start mirroring using qBittorrent and upload the file/folder compressed with zip extension
<br><br>
@@ -8,8 +8,8 @@ from requests import head as rhead
from urllib.request import urlopen
from telegram import InlineKeyboardMarkup

from bot.helper.telegram_helper.bot_commands import BotCommands
from bot import download_dict, download_dict_lock, STATUS_LIMIT, botStartTime, DOWNLOAD_DIR
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.button_build import ButtonMaker

MAGNET_REGEX = r"magnet:\?xt=urn:btih:[a-zA-Z0-9]*"
@@ -234,10 +234,6 @@ def is_url(url: str):
def is_gdrive_link(url: str):
    return "drive.google.com" in url

def is_gdtot_link(url: str):
    url = re_match(r'https?://.+\.gdtot\.\S+', url)
    return bool(url)

def is_mega_link(url: str):
    return "mega.nz" in url or "mega.co.nz" in url
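The link-type helpers above are plain regex/substring checks, so their behaviour is easy to reproduce standalone (`MAGNET_REGEX` and the helper bodies are copied from the hunk; the sample links and the `is_magnet` wrapper name are mine):

```python
from re import match as re_match

MAGNET_REGEX = r"magnet:\?xt=urn:btih:[a-zA-Z0-9]*"

def is_gdrive_link(url: str):
    return "drive.google.com" in url

def is_mega_link(url: str):
    return "mega.nz" in url or "mega.co.nz" in url

def is_magnet(url: str):
    # re_match anchors at the start of the string, so only true magnet URIs pass
    return bool(re_match(MAGNET_REGEX, url))

gd = is_gdrive_link("https://drive.google.com/open?id=abc123")
magnet = is_magnet("magnet:?xt=urn:btih:C12FE1C06BBA254A9DC9F519B335AA7C1367A88A")
```

The substring checks are deliberately loose; they classify the link so the right download helper is dispatched.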
@@ -117,20 +117,20 @@ def split_file(path, size, file_, dirpath, split_size, start_time=0, i=1, inLoop
        split_size = ceil(size/parts) + 1000
    if file_.upper().endswith(VIDEO_SUFFIXES):
        base_name, extension = ospath.splitext(file_)
        split_size = split_size - 3000000
        split_size = split_size - 5000000
        while i <= parts:
            parted_name = "{}.part{}{}".format(str(base_name), str(i).zfill(3), str(extension))
            out_path = ospath.join(dirpath, parted_name)
            srun(["ffmpeg", "-hide_banner", "-loglevel", "error", "-ss", str(start_time),
                  "-i", path, "-fs", str(split_size), "-map", "0", "-c", "copy", out_path])
                  "-i", path, "-fs", str(split_size), "-map", "0", "-map_chapters", "-1", "-c", "copy", out_path])
            out_size = get_path_size(out_path)
            if out_size > 2097152000:
                dif = out_size - 2097152000
                split_size = split_size - dif + 3000000
                split_size = split_size - dif + 5000000
                osremove(out_path)
                return split_file(path, size, file_, dirpath, split_size, start_time, i, True)
            lpd = get_media_info(out_path)[0]
            if lpd <= 4 or out_size < 1000000:
            if lpd <= 4:
                osremove(out_path)
                break
            start_time += lpd - 3
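When an ffmpeg `-fs` output still overshoots Telegram's 2097152000-byte cap, the hunk above shrinks the split size and retries recursively. The adjustment arithmetic in isolation (constants taken from the hunk; the function wrapper, its name, and the sample sizes are mine):

```python
TG_MAX_BYTES = 2097152000  # Telegram per-file cap used in split_file

def adjusted_split_size(out_size, split_size, margin=5000000):
    """Next split size to retry with when a produced part is too big.

    Mirrors the hunk: dif = out_size - cap; split_size = split_size - dif + margin,
    where this commit raises the video margin from 3000000 to 5000000 bytes.
    """
    dif = out_size - TG_MAX_BYTES  # how far the part overshot the cap
    return split_size - dif + margin

new_size = adjusted_split_size(out_size=2105000000, split_size=2090000000)
```

Subtracting the overshoot brings the next attempt under the cap, while the margin keeps it from undershooting by much; `-fs` only bounds the output size approximately, hence the retry loop.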
@@ -18,9 +18,8 @@ from cfscrape import create_scraper
from bs4 import BeautifulSoup
from base64 import standard_b64encode

from bot import LOGGER, UPTOBOX_TOKEN, CRYPT
from bot import LOGGER, UPTOBOX_TOKEN
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.ext_utils.bot_utils import is_gdtot_link
from bot.helper.ext_utils.exceptions import DirectDownloadLinkException

fmed_list = ['fembed.net', 'fembed.com', 'femax20.com', 'fcdn.stream', 'feurl.com', 'layarkacaxxi.icu',
@@ -67,8 +66,6 @@ def direct_link_generator(link: str):
        return solidfiles(link)
    elif 'krakenfiles.com' in link:
        return krakenfiles(link)
    elif is_gdtot_link(link):
        return gdtot(link)
    elif any(x in link for x in fmed_list):
        return fembed(link)
    elif any(x in link for x in ['sbembed.com', 'watchsb.com', 'streamsb.net', 'sbplay.org']):
@@ -249,8 +246,8 @@ def racaty(url: str) -> str:
    soup = BeautifulSoup(r.text, "lxml")
    op = soup.find("input", {"name": "op"})["value"]
    ids = soup.find("input", {"name": "id"})["value"]
    rpost = scraper.post(url, data = {"op": op, "id": ids})
    rsoup = BeautifulSoup(rpost.text, "lxml")
    rapost = scraper.post(url, data = {"op": op, "id": ids})
    rsoup = BeautifulSoup(rapost.text, "lxml")
    dl_url = rsoup.find("a", {"id": "uniqueExpirylink"})["href"].replace(" ", "%20")
    return dl_url
@@ -362,23 +359,3 @@ def krakenfiles(page_link: str) -> str:
    else:
        raise DirectDownloadLinkException(
            f"Failed to acquire download URL from kraken for : {page_link}")

def gdtot(url: str) -> str:
    """ Gdtot google drive link generator
    By https://github.com/xcscxr """

    if CRYPT is None:
        raise DirectDownloadLinkException("ERROR: CRYPT cookie not provided")

    match = re_findall(r'https?://(.+)\.gdtot\.(.+)\/\S+\/\S+', url)[0]

    with rsession() as client:
        client.cookies.update({'crypt': CRYPT})
        client.get(url)
        res = client.get(f"https://{match[0]}.gdtot.{match[1]}/dld?id={url.split('/')[-1]}")
    matches = re_findall('gd=(.*?)&', res.text)
    try:
        decoded_id = b64decode(str(matches[0])).decode('utf-8')
    except:
        raise DirectDownloadLinkException("ERROR: Try in your browser, mostly file not found or user limit exceeded!")
    return f'https://drive.google.com/open?id={decoded_id}'
@@ -8,7 +8,7 @@ from bot.helper.telegram_helper.message_utils import sendMessage, sendStatusMess
 from bot.helper.ext_utils.fs_utils import get_base_name


-def add_gd_download(link, listener, is_gdtot):
+def add_gd_download(link, listener):
     res, size, name, files = GoogleDriveHelper().helper(link)
     if res != "":
         return sendMessage(res, listener.bot, listener.message)
@@ -35,5 +35,3 @@ def add_gd_download(link, listener, is_gdtot):
     listener.onDownloadStart()
     sendStatusMessage(listener.message, listener.bot)
     drive.download(link)
-    if is_gdtot:
-        drive.deletefile(link)
@@ -7,7 +7,7 @@ from re import search as re_search
 from telegram import InlineKeyboardMarkup
 from telegram.ext import CallbackQueryHandler

-from bot import download_dict, download_dict_lock, BASE_URL, dispatcher, get_client, STOP_DUPLICATE, WEB_PINCODE, QB_SEED, TORRENT_TIMEOUT, LOGGER
+from bot import download_dict, download_dict_lock, BASE_URL, dispatcher, get_client, STOP_DUPLICATE, WEB_PINCODE, TORRENT_TIMEOUT, LOGGER
 from bot.helper.mirror_utils.status_utils.qbit_download_status import QbDownloadStatus
 from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
 from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, deleteMessage, sendStatusMessage, update_all_messages
@@ -20,16 +20,16 @@ class QbDownloader:
     POLLING_INTERVAL = 3

     def __init__(self, listener):
         self.__listener = listener
         self.__path = ''
         self.__name = ''
-        self.select = False
-        self.is_seeding = False
-        self.client = None
-        self.ext_hash = ''
-        self.__periodic = None
+        self.__stalled_time = time()
+        self.__uploaded = False
+        self.is_seeding = False
+        self.__dupChecked = False
+        self.__rechecked = False
@@ -160,14 +160,14 @@ class QbDownloader:
             elif (tor_info.state.lower().endswith("up") or tor_info.state == "uploading") and \
                     not self.__uploaded and len(listdir(self.__path)) != 0:
                 self.__uploaded = True
-                if not QB_SEED:
+                if not self.__listener.seed:
                     self.client.torrents_pause(torrent_hashes=self.ext_hash)
                 if self.select:
                     clean_unwanted(self.__path)
                 self.__listener.onDownloadComplete()
-                if QB_SEED and not self.__listener.isLeech and not self.__listener.extract:
+                if self.__listener.seed and not self.__listener.isLeech and not self.__listener.extract:
                     with download_dict_lock:
-                        if self.__listener.uid not in list(download_dict.keys()):
+                        if self.__listener.uid not in download_dict:
                             self.client.torrents_delete(torrent_hashes=self.ext_hash, delete_files=True)
                             self.client.auth_log_out()
                             self.__periodic.cancel()
@@ -180,7 +180,7 @@ class QbDownloader:
                 self.client.torrents_delete(torrent_hashes=self.ext_hash, delete_files=True)
                 self.client.auth_log_out()
                 self.__periodic.cancel()
-            elif tor_info.state == 'pausedUP' and QB_SEED:
+            elif tor_info.state == 'pausedUP' and self.__listener.seed:
                 self.__listener.onUploadError(f"Seeding stopped with Ratio: {round(tor_info.ratio, 3)} and Time: {get_readable_time(tor_info.seeding_time)}")
                 self.client.torrents_delete(torrent_hashes=self.ext_hash, delete_files=True)
                 self.client.auth_log_out()
@@ -19,13 +19,15 @@ class MyLogger:

     def debug(self, msg):
         # Hack to fix changing extension
-        match = re_search(r'.Merger..Merging formats into..(.*?).$', msg)  # To mkv
-        if not match and not self.obj.is_playlist:
-            match = re_search(r'.ExtractAudio..Destination..(.*?)$', msg)  # To mp3
-        if match and not self.obj.is_playlist:
-            newname = match.group(1)
-            newname = newname.split("/")[-1]
-            self.obj.name = newname
+        if not self.obj.is_playlist:
+            match = re_search(r'.Merger..Merging formats into..(.*?).$', msg)  # To mkv
+            if not match:
+                match = re_search(r'.ExtractAudio..Destination..(.*?)$', msg)  # To mp3
+            if match:
+                LOGGER.info(msg)
+                newname = match.group(1)
+                newname = newname.rsplit("/", 1)[-1]
+                self.obj.name = newname

     @staticmethod
     def warning(msg):
@@ -55,9 +57,12 @@ class YoutubeDLHelper:
         self.opts = {'progress_hooks': [self.__onDownloadProgress],
                     'logger': MyLogger(self),
                     'usenetrc': True,
                     'embedsubtitles': True,
                     'prefer_ffmpeg': True,
-                    'cookiefile': 'cookies.txt'}
+                    'cookiefile': 'cookies.txt',
+                    'allow_multiple_video_streams': True,
+                    'allow_multiple_audio_streams': True,
+                    'trim_file_name': 200,
+                    'extract_flat': True}

     @property
     def download_speed(self):
@@ -122,19 +127,18 @@ class YoutubeDLHelper:
                 return self.__onDownloadError(str(e))
             if 'entries' in result:
                 for v in result['entries']:
-                    try:
-                        self.size += v['filesize_approx']
-                    except:
-                        pass
-                self.is_playlist = True
+                    if 'filesize_approx' in v:
+                        self.size += v['filesize_approx']
+                    elif 'filesize' in v:
+                        self.size += v['filesize']
                 if name == "":
-                    self.name = str(realName).split(f" [{result['id'].replace('*', '_')}]")[0]
+                    self.name = realName.split(f" [{result['id'].replace('*', '_')}]")[0]
                 else:
                     self.name = name
             else:
                 ext = realName.split('.')[-1]
                 if name == "":
-                    newname = str(realName).split(f" [{result['id'].replace('*', '_')}]")
+                    newname = realName.split(f" [{result['id'].replace('*', '_')}]")
                     if len(newname) > 1:
                         self.name = newname[0] + '.' + ext
                     else:
@@ -176,7 +180,12 @@ class YoutubeDLHelper:
         if self.__is_cancelled:
             return
         if not self.is_playlist:
-            self.opts['outtmpl'] = f"{path}/{self.name}"
+            if args is None:
+                self.opts['outtmpl'] = f"{path}/{self.name}"
+            else:
+                folder_name = self.name.rsplit('.', 1)[0]
+                self.opts['outtmpl'] = f"{path}/{folder_name}/{self.name}"
+                self.name = folder_name
         else:
             self.opts['outtmpl'] = f"{path}/{self.name}/%(title)s.%(ext)s"
         self.__download(link)
@@ -10,7 +10,7 @@ from urllib.parse import parse_qs, urlparse
 from random import randrange
 from google.oauth2 import service_account
 from googleapiclient.discovery import build
-from googleapiclient.errors import HttpError
+from googleapiclient.errors import HttpError, Error as GCError
 from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload
 from telegram import InlineKeyboardMarkup
 from tenacity import retry, wait_exponential, stop_after_attempt, retry_if_exception_type, RetryError
@@ -143,7 +143,7 @@ class GoogleDriveHelper:
         self.__service = self.__authorize()

     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=retry_if_exception_type(HttpError))
+           retry=retry_if_exception_type(GCError))
     def __set_permission(self, drive_id):
         permissions = {
             'role': 'reader',
@@ -155,7 +155,7 @@ class GoogleDriveHelper:
                                                  body=permissions).execute()

     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=(retry_if_exception_type(HttpError) | retry_if_exception_type(IOError)))
+           retry=(retry_if_exception_type(GCError) | retry_if_exception_type(IOError)))
     def __upload_file(self, file_path, file_name, mime_type, parent_id):
         # File body description
         file_metadata = {
@@ -250,8 +250,6 @@ class GoogleDriveHelper:
             if isinstance(err, RetryError):
                 LOGGER.info(f"Total Attempts: {err.last_attempt.attempt_number}")
                 err = err.last_attempt.exception()
-            exception_name = err.__class__.__name__
-            LOGGER.error(f"{err}. Exception Name: {exception_name}")
             self.__listener.onUploadError(str(err))
             self.is_errored = True
         finally:
@@ -267,7 +265,7 @@ class GoogleDriveHelper:
             self.__listener.onUploadComplete(link, size, self.__total_files, self.__total_folders, mime_type, self.name)

     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=retry_if_exception_type(HttpError))
+           retry=retry_if_exception_type(GCError))
     def __copyFile(self, file_id, dest_id):
         body = {
             'parents': [dest_id]
@@ -300,13 +298,13 @@ class GoogleDriveHelper:


     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=retry_if_exception_type(HttpError))
+           retry=retry_if_exception_type(GCError))
     def __getFileMetadata(self, file_id):
         return self.__service.files().get(supportsAllDrives=True, fileId=file_id,
                                           fields="name,id,mimeType,size").execute()

     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=retry_if_exception_type(HttpError))
+           retry=retry_if_exception_type(GCError))
     def __getFilesByFolderId(self, folder_id):
         page_token = None
         files = []
@@ -381,8 +379,6 @@ class GoogleDriveHelper:
                 LOGGER.info(f"Total Attempts: {err.last_attempt.attempt_number}")
                 err = err.last_attempt.exception()
             err = str(err).replace('>', '').replace('<', '')
-            exception_name = err.__class__.__name__
-            LOGGER.error(f"{err}. Exception Name: {exception_name}")
             if "User rate limit exceeded" in str(err):
                 msg = "User rate limit exceeded."
             elif "File not found" in str(err):
@@ -415,7 +411,7 @@ class GoogleDriveHelper:
                 break

     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=retry_if_exception_type(HttpError))
+           retry=retry_if_exception_type(GCError))
     def __create_directory(self, directory_name, parent_id):
         file_metadata = {
             "name": directory_name,
@@ -705,8 +701,6 @@ class GoogleDriveHelper:
             LOGGER.info(f"Total Attempts: {err.last_attempt.attempt_number}")
             err = err.last_attempt.exception()
             err = str(err).replace('>', '').replace('<', '')
-            exception_name = err.__class__.__name__
-            LOGGER.error(f"{err}. Exception Name: {exception_name}")
             if "File not found" in str(err):
                 token_service = self.__alt_authorize()
                 if token_service is not None:
@@ -763,8 +757,6 @@ class GoogleDriveHelper:
             LOGGER.info(f"Total Attempts: {err.last_attempt.attempt_number}")
             err = err.last_attempt.exception()
             err = str(err).replace('>', '').replace('<', '')
-            exception_name = err.__class__.__name__
-            LOGGER.error(f"{err}. Exception Name: {exception_name}")
             if "File not found" in str(err):
                 token_service = self.__alt_authorize()
                 if token_service is not None:
@@ -793,8 +785,6 @@ class GoogleDriveHelper:
             LOGGER.info(f"Total Attempts: {err.last_attempt.attempt_number}")
             err = err.last_attempt.exception()
             err = str(err).replace('>', '').replace('<', '')
-            exception_name = err.__class__.__name__
-            LOGGER.error(f"{err}. Exception Name: {exception_name}")
             if "downloadQuotaExceeded" in str(err):
                 err = "Download Quota Exceeded."
             elif "File not found" in str(err):
@@ -837,10 +827,15 @@ class GoogleDriveHelper:
                 break

     @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3),
-           retry=(retry_if_exception_type(HttpError) | retry_if_exception_type(IOError)))
+           retry=(retry_if_exception_type(GCError) | retry_if_exception_type(IOError)))
     def __download_file(self, file_id, path, filename, mime_type):
         request = self.__service.files().get_media(fileId=file_id)
         filename = filename.replace('/', '')
+        if len(filename.encode()) > 255:
+            ext = ospath.splitext(filename)[1]
+            filename = filename[:245] + ext
+            if self.name.endswith(ext):
+                self.name = filename
         fh = FileIO('{}{}'.format(path, filename), 'wb')
         downloader = MediaIoBaseDownload(fh, request, chunksize=50 * 1024 * 1024)
         done = False
@@ -14,7 +14,7 @@ getLogger("pyrogram").setLevel(WARNING)

 VIDEO_SUFFIXES = ("MKV", "MP4", "MOV", "WMV", "3GP", "MPG", "WEBM", "AVI", "FLV", "M4V", "GIF")
 AUDIO_SUFFIXES = ("MP3", "M4A", "M4B", "FLAC", "WAV", "AIF", "OGG", "AAC", "DTS", "MID", "AMR", "MKA")
-IMAGE_SUFFIXES = ("JPG", "JPX", "PNG", "WEBP", "CR2", "TIF", "BMP", "JXR", "PSD", "ICO", "HEIC", "JPEG")
+IMAGE_SUFFIXES = ("JPG", "JPX", "PNG", "CR2", "TIF", "BMP", "JXR", "PSD", "ICO", "HEIC", "JPEG")


 class TgUploader:
@@ -29,7 +29,6 @@ class TgUploader:
         self.__is_cancelled = False
         self.__as_doc = AS_DOCUMENT
         self.__thumb = f"Thumbnails/{listener.message.from_user.id}.jpg"
-        self.__sent_msg = None
         self.__msgs_dict = {}
         self.__corrupted = 0
         self.__resource_lock = RLock()
@@ -1,4 +1,4 @@
-from time import sleep
+from time import sleep, time
 from telegram import InlineKeyboardMarkup
 from telegram.message import Message
 from telegram.error import RetryAfter
@@ -47,7 +47,7 @@ def editMessage(text: str, message: Message, reply_markup=None):
         return editMessage(text, message, reply_markup)
     except Exception as e:
         LOGGER.error(str(e))
-        return
+        return str(e)

 def sendRss(text: str, bot):
     if rss_session is None:
@@ -97,39 +97,43 @@ def auto_delete_message(bot, cmd_message: Message, bot_message: Message):

 def delete_all_messages():
     with status_reply_dict_lock:
-        for message in list(status_reply_dict.values()):
+        for data in list(status_reply_dict.values()):
             try:
-                deleteMessage(bot, message)
-                del status_reply_dict[message.chat.id]
+                deleteMessage(bot, data[0])
+                del status_reply_dict[data[0].chat.id]
             except Exception as e:
                 LOGGER.error(str(e))

-def update_all_messages():
+def update_all_messages(force=False):
+    with status_reply_dict_lock:
+        if not force and (not status_reply_dict or not Interval or time() - list(status_reply_dict.values())[0][1] < 2):
+            return
     msg, buttons = get_readable_message()
     with status_reply_dict_lock:
-        for chat_id in list(status_reply_dict.keys()):
-            if status_reply_dict[chat_id] and msg != status_reply_dict[chat_id].text:
+        for chat_id in status_reply_dict:
+            if status_reply_dict[chat_id] and msg != status_reply_dict[chat_id][0].text:
                 if buttons == "":
-                    editMessage(msg, status_reply_dict[chat_id])
+                    rmsg = editMessage(msg, status_reply_dict[chat_id][0])
                 else:
-                    editMessage(msg, status_reply_dict[chat_id], buttons)
-                status_reply_dict[chat_id].text = msg
+                    rmsg = editMessage(msg, status_reply_dict[chat_id][0], buttons)
+                if rmsg == "Message to edit not found":
+                    del status_reply_dict[chat_id]
+                    return
+                status_reply_dict[chat_id][0].text = msg
+                status_reply_dict[chat_id][1] = time()

 def sendStatusMessage(msg, bot):
-    if len(Interval) == 0:
-        Interval.append(setInterval(DOWNLOAD_STATUS_UPDATE_INTERVAL, update_all_messages))
     progress, buttons = get_readable_message()
     with status_reply_dict_lock:
-        if msg.chat.id in list(status_reply_dict.keys()):
-            try:
-                message = status_reply_dict[msg.chat.id]
-                deleteMessage(bot, message)
-                del status_reply_dict[msg.chat.id]
-            except Exception as e:
-                LOGGER.error(str(e))
-                del status_reply_dict[msg.chat.id]
+        if msg.chat.id in status_reply_dict:
+            message = status_reply_dict[msg.chat.id][0]
+            deleteMessage(bot, message)
+            del status_reply_dict[msg.chat.id]
         if buttons == "":
             message = sendMessage(progress, bot, msg)
         else:
             message = sendMarkup(progress, bot, msg, buttons)
-        status_reply_dict[msg.chat.id] = message
+        status_reply_dict[msg.chat.id] = [message, time()]
+        if not Interval:
+            Interval.append(setInterval(DOWNLOAD_STATUS_UPDATE_INTERVAL, update_all_messages))
@@ -2,7 +2,7 @@ from telegram import InlineKeyboardMarkup
 from telegram.ext import CommandHandler, CallbackQueryHandler
 from time import sleep

-from bot import download_dict, dispatcher, download_dict_lock, QB_SEED, SUDO_USERS, OWNER_ID
+from bot import download_dict, dispatcher, download_dict_lock, SUDO_USERS, OWNER_ID
 from bot.helper.telegram_helper.bot_commands import BotCommands
 from bot.helper.telegram_helper.filters import CustomFilters
 from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup
@@ -59,8 +59,7 @@ def cancell_all_buttons(update, context):
     buttons = button_build.ButtonMaker()
     buttons.sbutton("Downloading", "canall down")
     buttons.sbutton("Uploading", "canall up")
-    if QB_SEED:
-        buttons.sbutton("Seeding", "canall seed")
+    buttons.sbutton("Seeding", "canall seed")
     buttons.sbutton("Cloning", "canall clone")
     buttons.sbutton("All", "canall all")
     button = InlineKeyboardMarkup(buttons.build_menu(2))
@@ -10,18 +10,16 @@ from bot.helper.telegram_helper.filters import CustomFilters
 from bot.helper.telegram_helper.bot_commands import BotCommands
 from bot.helper.mirror_utils.status_utils.clone_status import CloneStatus
 from bot import dispatcher, LOGGER, STOP_DUPLICATE, download_dict, download_dict_lock, Interval
-from bot.helper.ext_utils.bot_utils import is_gdrive_link, is_gdtot_link, new_thread
-from bot.helper.mirror_utils.download_utils.direct_link_generator import gdtot
-from bot.helper.ext_utils.exceptions import DirectDownloadLinkException
+from bot.helper.ext_utils.bot_utils import is_gdrive_link, new_thread


 def _clone(message, bot, multi=0):
-    args = message.text.split(maxsplit=1)
+    args = message.text.split()
     reply_to = message.reply_to_message
     link = ''
     if len(args) > 1:
         link = args[1].strip()
-        if link.isdigit():
+        if link.strip().isdigit():
             multi = int(link)
             link = ''
     elif message.from_user.username:
@@ -30,20 +28,11 @@ def _clone(message, bot, multi=0):
         tag = message.from_user.mention_html(message.from_user.first_name)
     if reply_to:
         if len(link) == 0:
-            link = reply_to.text.strip()
+            link = reply_to.text.split(maxsplit=1)[0].strip()
         if reply_to.from_user.username:
             tag = f"@{reply_to.from_user.username}"
         else:
             tag = reply_to.from_user.mention_html(reply_to.from_user.first_name)
-    is_gdtot = is_gdtot_link(link)
-    if is_gdtot:
-        try:
-            msg = sendMessage(f"Processing: <code>{link}</code>", bot, message)
-            link = gdtot(link)
-            deleteMessage(bot, msg)
-        except DirectDownloadLinkException as e:
-            deleteMessage(bot, msg)
-            return sendMessage(str(e), bot, message)
     if is_gdrive_link(link):
         gd = GoogleDriveHelper()
         res, size, name, files = gd.helper(link)
@@ -93,8 +82,6 @@ def _clone(message, bot, multi=0):
         else:
             sendMarkup(result + cc, bot, message, button)
         LOGGER.info(f'Cloning Done: {name}')
-        if is_gdtot:
-            gd.deletefile(link)
     else:
         sendMessage('Send Gdrive or gdtot link along with command or by replying to the link by command', bot, message)
@@ -19,7 +19,7 @@ def countNode(update, context):
         tag = update.message.from_user.mention_html(update.message.from_user.first_name)
     if reply_to:
         if len(link) == 0:
-            link = reply_to.text.strip()
+            link = reply_to.text.split(maxsplit=1)[0].strip()
         if reply_to.from_user.username:
             tag = f"@{reply_to.from_user.username}"
         else:
@@ -14,7 +14,7 @@ def deletefile(update, context):
     if len(context.args) == 1:
         link = context.args[0]
     elif reply_to:
-        link = reply_to.text
+        link = reply_to.text.split(maxsplit=1)[0].strip()
     else:
         link = ''
     if is_gdrive_link(link):
@@ -13,7 +13,7 @@ from telegram import InlineKeyboardMarkup

 from bot import Interval, INDEX_URL, VIEW_LINK, aria2, QB_SEED, dispatcher, DOWNLOAD_DIR, \
                 download_dict, download_dict_lock, TG_SPLIT_SIZE, LOGGER, DB_URI, INCOMPLETE_TASK_NOTIFIER
-from bot.helper.ext_utils.bot_utils import is_url, is_magnet, is_gdtot_link, is_mega_link, is_gdrive_link, get_content_type
+from bot.helper.ext_utils.bot_utils import is_url, is_magnet, is_mega_link, is_gdrive_link, get_content_type
 from bot.helper.ext_utils.fs_utils import get_base_name, get_path_size, split_file, clean_download
 from bot.helper.ext_utils.exceptions import DirectDownloadLinkException, NotSupportedExtractionArchive
 from bot.helper.mirror_utils.download_utils.aria2_download import add_aria2c_download
@@ -37,7 +37,7 @@ from bot.helper.ext_utils.db_handler import DbManger


 class MirrorListener:
-    def __init__(self, bot, message, isZip=False, extract=False, isQbit=False, isLeech=False, pswd=None, tag=None):
+    def __init__(self, bot, message, isZip=False, extract=False, isQbit=False, isLeech=False, pswd=None, tag=None, seed=False):
         self.bot = bot
         self.message = message
         self.uid = self.message.message_id
@@ -47,15 +47,16 @@ class MirrorListener:
         self.isLeech = isLeech
         self.pswd = pswd
         self.tag = tag
+        self.seed = any([seed, QB_SEED])
         self.isPrivate = self.message.chat.type in ['private', 'group']

     def clean(self):
         try:
-            aria2.purge()
             Interval[0].cancel()
-            del Interval[0]
+            Interval.clear()
+            aria2.purge()
             delete_all_messages()
-        except IndexError:
+        except:
             pass

     def onDownloadStart(self):
@@ -91,7 +92,7 @@ class MirrorListener:
                 LOGGER.info('File to archive not found!')
                 self.onUploadError('Internal error occurred!!')
                 return
-            if not self.isQbit or not QB_SEED or self.isLeech:
+            if not self.isQbit or not self.seed or self.isLeech:
                 try:
                     rmtree(m_path)
                 except:
@@ -232,7 +233,7 @@ class MirrorListener:
                         share_urls = f'{INDEX_URL}/{url_path}?a=view'
                         buttons.buildbutton("🌐 View Link", share_urls)
             sendMarkup(msg, self.bot, self.message, InlineKeyboardMarkup(buttons.build_menu(2)))
-            if self.isQbit and QB_SEED and not self.extract:
+            if self.isQbit and self.seed and not self.extract:
                 if self.isZip:
                     try:
                         osremove(f'{DOWNLOAD_DIR}{self.uid}/{name}')
@@ -269,26 +270,30 @@ class MirrorListener:
         if not self.isPrivate and INCOMPLETE_TASK_NOTIFIER and DB_URI is not None:
             DbManger().rm_complete_task(self.message.link)

-def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=False, pswd=None, multi=0):
+def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=False, pswd=None, multi=0, qbsd=False):
     mesg = message.text.split('\n')
-    message_args = mesg[0].split(maxsplit=1)
     name_args = mesg[0].split('|', maxsplit=1)
-    qbitsel = False
-    is_gdtot = False
+    qbsel = False
+    index = 1

-    if len(message_args) > 1:
-        link = message_args[1].strip()
-        if link.startswith("s ") or link == "s":
-            qbitsel = True
-            message_args = mesg[0].split(maxsplit=2)
-            if len(message_args) > 2:
-                link = message_args[2].strip()
-            else:
-                link = ''
-        elif link.isdigit():
-            multi = int(link)
-            link = ''
-        if link.startswith(("|", "pswd:")):
-            link = ''
-    else:
-        link = ''
+    args = mesg[0].split(maxsplit=3)
+    if "s" in [x.strip() for x in args]:
+        qbsel = True
+        index += 1
+    if "d" in [x.strip() for x in args]:
+        qbsd = True
+        index += 1
+    message_args = mesg[0].split(maxsplit=index)
+    if len(message_args) > index:
+        link = message_args[index].strip()
+        if link.isdigit():
+            multi = int(link)
+            link = ''
+        elif link.startswith(("|", "pswd:")):
+            link = ''
+    else:
+        link = ''
@@ -329,9 +334,9 @@ def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=Fals

     if not is_url(link) and not is_magnet(link) or len(link) == 0:
         if file is None:
-            reply_text = reply_to.text
+            reply_text = reply_to.text.split(maxsplit=1)[0].strip()
             if is_url(reply_text) or is_magnet(reply_text):
-                link = reply_text.strip()
+                link = reply_text
         elif file.mime_type != "application/x-bittorrent" and not isQbit:
             listener = MirrorListener(bot, message, isZip, extract, isQbit, isLeech, pswd, tag)
             Thread(target=TelegramDownloadHelper(listener).add_download, args=(message, f'{DOWNLOAD_DIR}{listener.uid}/', name)).start()
@@ -354,8 +359,8 @@ def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=Fals
         help_msg += "\n<code>/command</code> |newname pswd: xx [zip/unzip]"
         help_msg += "\n\n<b>Direct link authorization:</b>"
         help_msg += "\n<code>/command</code> {link} |newname pswd: xx\nusername\npassword"
-        help_msg += "\n\n<b>Qbittorrent selection:</b>"
-        help_msg += "\n<code>/qbcommand</code> <b>s</b> {link} or by replying to {file/link}"
+        help_msg += "\n\n<b>Qbittorrent selection and seed:</b>"
+        help_msg += "\n<code>/qbcommand</code> <b>s</b>(for selection) <b>d</b>(for seeding) {link} or by replying to {file/link}"
         help_msg += "\n\n<b>Multi links only by replying to first link or file:</b>"
         help_msg += "\n<code>/command</code> 10(number of links/files)"
         return sendMessage(help_msg, bot, message)
@@ -367,7 +372,6 @@ def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=Fals
         content_type = get_content_type(link)
         if content_type is None or re_match(r'text/html|text/plain', content_type):
             try:
-                is_gdtot = is_gdtot_link(link)
                 link = direct_link_generator(link)
                 LOGGER.info(f"Generated link: {link}")
             except DirectDownloadLinkException as e:
@@ -401,7 +405,7 @@ def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=Fals
                 return sendMessage(msg, bot, message)


-    listener = MirrorListener(bot, message, isZip, extract, isQbit, isLeech, pswd, tag)
+    listener = MirrorListener(bot, message, isZip, extract, isQbit, isLeech, pswd, tag, qbsd)

     if is_gdrive_link(link):
         if not isZip and not extract and not isLeech:
@@ -410,11 +414,11 @@ def _mirror(bot, message, isZip=False, extract=False, isQbit=False, isLeech=Fals
             gmsg += f"Use /{BotCommands.UnzipMirrorCommand} to extracts Google Drive archive file"
             sendMessage(gmsg, bot, message)
         else:
-            Thread(target=add_gd_download, args=(link, listener, is_gdtot)).start()
+            Thread(target=add_gd_download, args=(link, listener)).start()
     elif is_mega_link(link):
         Thread(target=add_mega_download, args=(link, f'{DOWNLOAD_DIR}{listener.uid}/', listener)).start()
-    elif isQbit and (is_magnet(link) or ospath.exists(link)):
-        Thread(target=QbDownloader(listener).add_qb_torrent, args=(link, f'{DOWNLOAD_DIR}{listener.uid}', qbitsel)).start()
+    elif isQbit:
+        Thread(target=QbDownloader(listener).add_qb_torrent, args=(link, f'{DOWNLOAD_DIR}{listener.uid}', qbsel)).start()
     else:
         if len(mesg) > 1:
             try:
@ -3,40 +3,53 @@ from time import time
|
||||
from threading import Thread
|
||||
from telegram.ext import CommandHandler, CallbackQueryHandler
|
||||
|
||||
from bot import dispatcher, status_reply_dict, status_reply_dict_lock, download_dict, download_dict_lock, botStartTime, DOWNLOAD_DIR
|
||||
from bot import dispatcher, status_reply_dict, status_reply_dict_lock, download_dict, download_dict_lock, botStartTime, DOWNLOAD_DIR, Interval, DOWNLOAD_STATUS_UPDATE_INTERVAL
|
||||
from bot.helper.telegram_helper.message_utils import sendMessage, deleteMessage, auto_delete_message, sendStatusMessage, update_all_messages
|
||||
from bot.helper.ext_utils.bot_utils import get_readable_file_size, get_readable_time, turn
|
||||
from bot.helper.ext_utils.bot_utils import get_readable_file_size, get_readable_time, turn, setInterval
|
||||
from bot.helper.telegram_helper.filters import CustomFilters
|
||||
from bot.helper.telegram_helper.bot_commands import BotCommands
|
||||
|
||||
|
||||
def mirror_status(update, context):
|
||||
with download_dict_lock:
|
||||
if len(download_dict) == 0:
|
||||
currentTime = get_readable_time(time() - botStartTime)
|
||||
free = get_readable_file_size(disk_usage(DOWNLOAD_DIR).free)
|
||||
message = 'No Active Downloads !\n___________________________'
|
||||
message += f"\n<b>CPU:</b> {cpu_percent()}% | <b>FREE:</b> {free}" \
|
||||
f"\n<b>RAM:</b> {virtual_memory().percent}% | <b>UPTIME:</b> {currentTime}"
|
||||
reply_message = sendMessage(message, context.bot, update.message)
|
||||
Thread(target=auto_delete_message, args=(context.bot, update.message, reply_message)).start()
|
||||
return
|
||||
index = update.effective_chat.id
|
||||
with status_reply_dict_lock:
|
||||
if index in status_reply_dict.keys():
|
||||
deleteMessage(context.bot, status_reply_dict[index])
|
||||
del status_reply_dict[index]
|
||||
sendStatusMessage(update.message, context.bot)
|
||||
deleteMessage(context.bot, update.message)
|
||||
count = len(download_dict)
|
||||
if count == 0:
|
||||
currentTime = get_readable_time(time() - botStartTime)
|
||||
free = get_readable_file_size(disk_usage(DOWNLOAD_DIR).free)
|
||||
message = 'No Active Downloads !\n___________________________'
|
||||
message += f"\n<b>CPU:</b> {cpu_percent()}% | <b>FREE:</b> {free}" \
|
||||
f"\n<b>RAM:</b> {virtual_memory().percent}% | <b>UPTIME:</b> {currentTime}"
|
||||
reply_message = sendMessage(message, context.bot, update.message)
|
||||
Thread(target=auto_delete_message, args=(context.bot, update.message, reply_message)).start()
|
||||
else:
|
||||
index = update.effective_chat.id
|
||||
with status_reply_dict_lock:
|
||||
if index in status_reply_dict:
|
||||
deleteMessage(context.bot, status_reply_dict[index][0])
|
||||
del status_reply_dict[index]
|
||||
try:
|
||||
if Interval:
|
||||
Interval[0].cancel()
|
||||
Interval.clear()
|
||||
except:
|
||||
pass
|
||||
finally:
|
||||
Interval.append(setInterval(DOWNLOAD_STATUS_UPDATE_INTERVAL, update_all_messages))
|
||||
sendStatusMessage(update.message, context.bot)
|
||||
deleteMessage(context.bot, update.message)
|
||||
|
||||
 def status_pages(update, context):
     query = update.callback_query
+    with status_reply_dict_lock:
+        if not status_reply_dict or not Interval or time() - list(status_reply_dict.values())[0][1] < 2:
+            query.answer(text="Wait One More Second!", show_alert=True)
+            return
     data = query.data
     data = data.split()
     query.answer()
     done = turn(data)
     if done:
-        update_all_messages()
+        update_all_messages(True)
     else:
         query.message.delete()

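The `Interval` handling in this commit cancels any running status updater before scheduling a fresh one. `setInterval` itself is not part of this diff; a minimal sketch of the helper it assumes (a repeating background timer with a `cancel()` method) could look like this:

```python
from threading import Event, Thread
from time import sleep  # used only for demonstrating the timer


class setInterval:
    # Call `action` every `interval` seconds until cancel() is invoked.
    def __init__(self, interval, action):
        self.interval = interval
        self.action = action
        self.stopEvent = Event()
        Thread(target=self.__set_interval, daemon=True).start()

    def __set_interval(self):
        # Event.wait returns False on timeout and True once set,
        # so the loop exits as soon as cancel() fires the event.
        while not self.stopEvent.wait(self.interval):
            self.action()

    def cancel(self):
        self.stopEvent.set()
```

With this shape, `Interval[0].cancel()` stops the old timer and `Interval.append(setInterval(DOWNLOAD_STATUS_UPDATE_INTERVAL, update_all_messages))` starts a new one, which is exactly the pattern the `try`/`finally` block above relies on.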
@@ -107,7 +107,7 @@ def rss_sub(update, context):
     msg = f"Use this format to add feed url:\n/{BotCommands.RssSubCommand} Title https://www.rss-url.com"
     msg += " f: 1080 or 720 or 144p|mkv or mp4|hevc (optional)\n\nThis filter will parse links that it's titles"
     msg += " contains `(1080 or 720 or 144p) and (mkv or mp4) and hevc` words. You can add whatever you want.\n\n"
-    msg += "Another example: f: 1080 or 720p|.web. or .webrip.|hvec or x264 .. This will parse titles that contains"
+    msg += "Another example: f: 1080 or 720p|.web. or .webrip.|hvec or x264. This will parse titles that contains"
     msg += " ( 1080 or 720p) and (.web. or .webrip.) and (hvec or x264). I have added space before and after 1080"
     msg += " to avoid wrong matching. If this `10805695` number in title it will match 1080 if added 1080 without"
     msg += " spaces after it."

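The filter format documented in that help text (groups separated by `|`, alternatives separated by ` or `, matched as an AND of ORs) can be illustrated with a short sketch. `parse_filter` and `title_matches` are hypothetical names for illustration, not the bot's actual implementation:

```python
def parse_filter(arg):
    # "1080 or 720p|.web. or .webrip.|hvec or x264"
    # -> [['1080', '720p'], ['.web.', '.webrip.'], ['hvec', 'x264']]
    # Alternatives are NOT stripped, so a deliberate " 1080 " with
    # surrounding spaces keeps them (the help text uses that to avoid
    # matching numbers like 10805695).
    return [group.split(' or ') for group in arg.split('|')]


def title_matches(title, groups):
    # Every group must match at least one of its alternatives (AND of ORs).
    return all(any(word in title for word in group) for group in groups)
```

Matching here is a plain case-sensitive substring test; a title passes only when each `|`-separated group contributes at least one hit.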
@@ -206,7 +206,7 @@ def rss_monitor(context):
                 break
             except IndexError:
                 LOGGER.warning(f"Reached Max index no. {feed_count} for this feed: {name}. \
-                        Maybe you need to add less RSS_DELAY to not miss some torrents")
+                        Maybe you need to use less RSS_DELAY to not miss some torrents")
                 break
         parse = True
         for list in data[3]:

@@ -23,10 +23,10 @@ def _watch(bot, message, isZip=False, isLeech=False, multi=0):
     link = mssg.split()
     if len(link) > 1:
         link = link[1].strip()
-        if link.isdigit():
+        if link.strip().isdigit():
             multi = int(link)
             link = ''
-        elif link.startswith(("|", "pswd:", "args:")):
+        elif link.strip().startswith(("|", "pswd:", "args:")):
             link = ''
     else:
         link = ''

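The argument handling changed above distinguishes a repeat count, an options-only argument, and an actual link. A self-contained sketch of that logic (`parse_watch_args` is a hypothetical helper mirroring the diff, not a function from the bot):

```python
def parse_watch_args(mssg):
    # "/watch 5"       -> (5, '')    second token is a repeat count
    # "/watch URL"     -> (0, 'URL') second token is the link
    # "/watch |format" -> (0, '')    second token is only options
    multi, link = 0, ''
    parts = mssg.split()
    if len(parts) > 1:
        link = parts[1].strip()
        if link.strip().isdigit():
            multi = int(link)
            link = ''
        elif link.strip().startswith(("|", "pswd:", "args:")):
            link = ''
    return multi, link
```

The extra `.strip()` added by the commit guards against stray whitespace around the token before the `isdigit()`/`startswith()` checks run.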
@@ -64,7 +64,7 @@ def _watch(bot, message, isZip=False, isLeech=False, multi=0):
     reply_to = message.reply_to_message
     if reply_to is not None:
         if len(link) == 0:
-            link = reply_to.text.strip()
+            link = reply_to.text.split(maxsplit=1)[0].strip()
         if reply_to.from_user.username:
             tag = f"@{reply_to.from_user.username}"
         else:

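Replacing `reply_to.text.strip()` with `reply_to.text.split(maxsplit=1)[0].strip()` means only the first whitespace-separated token of a replied-to message is treated as the link. A quick illustration with a hypothetical replied-to message:

```python
# Hypothetical replied-to message: a link followed by a comment.
text = "https://example.com/file.torrent please mirror this"

# Old behaviour: the whole message, comment included, became the link.
old_link = text.strip()

# New behaviour: only the first token is kept as the link.
new_link = text.split(maxsplit=1)[0].strip()
```

So trailing text in the reply no longer corrupts the download link.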
@@ -7,10 +7,10 @@ OWNER_ID =
 DOWNLOAD_DIR = "/usr/src/app/downloads"
 DOWNLOAD_STATUS_UPDATE_INTERVAL = 10
 AUTO_DELETE_MESSAGE_DURATION = 20
-IS_TEAM_DRIVE = ""
 TELEGRAM_API =
 TELEGRAM_HASH = ""
 # OPTIONAL CONFIG
+IS_TEAM_DRIVE = ""
 DATABASE_URL = ""
 AUTHORIZED_CHATS = ""
 SUDO_USERS = ""