Signed-off-by: anasty17 <e.anastayyar@gmail.com>
anasty17 committed 2021-11-23 03:35:16 +02:00
commit 3ecb731ad1
85 changed files with 10082 additions and 0 deletions

.github/workflows/deploy.yml (vendored, new file, 20 lines)

@@ -0,0 +1,20 @@
name: Manually Deploy to Heroku
on: workflow_dispatch
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: ${{secrets.HEROKU_APP_NAME}}
          heroku_email: ${{secrets.HEROKU_EMAIL}}
          usedocker: true
          docker_heroku_process_type: web
          stack: "container"
          region: "us"
        env:
          HD_CONFIG_FILE_URL: ${{secrets.CONFIG_FILE_URL}}

.gitignore (vendored, new file, 13 lines)

@@ -0,0 +1,13 @@
config.env
*auth_token.txt
*.pyc
data*
.vscode
.idea
*.json
*.pickle
authorized_chats.txt
sudo_users.txt
accounts/*
Thumbnails/*
drive_folder

.netrc (new file, 1 line)

@@ -0,0 +1 @@

Dockerfile (new file, 12 lines)

@@ -0,0 +1,12 @@
FROM anasty17/mltb:latest
# FROM anasty17/mltb-oracle:latest
WORKDIR /usr/src/app
RUN chmod 777 /usr/src/app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
CMD ["bash", "start.sh"]

LICENSE (new file, 674 lines)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

400
README.md Normal file
View File

@ -0,0 +1,400 @@
This is a Telegram Bot written in Python for mirroring files on the Internet to your Google Drive or Telegram. Based on [python-aria-mirror-bot](https://github.com/lzzy12/python-aria-mirror-bot)
# Features:
## By [Anas](https://github.com/anasty17)
- qBittorrent
- Select files from Torrent before downloading using qbittorrent
- Leech (splitting, thumbnail for each user, setting as document or as media for each user)
- Size limiting for Torrent/Direct, Zip/Unzip, Mega and Clone
- Stop duplicates for all tasks except youtube-dl tasks
- Zip/Unzip G-Drive links
- Counting files/folders from Google Drive link
- View Link button, an extra button to open the file index link in the browser instead of the direct download link
- Status Pages for unlimited tasks
- Clone status
- Search in multiple Drive folder/TeamDrive
- Recursive Search (only with `root` or TeamDrive ID, folder ids will be listed with non-recursive method)
- Multi-Search using token.pickle if it exists
- Extract rar, zip and 7z splits with or without password
- Zip file/folder with or without password
- Fall back to token.pickle if a file is not found using Service Accounts, for all Gdrive functions
- Random Service Account at startup
- Mirror/Leech/Watch/Clone/Count/Del by reply
- YT-DLP quality buttons
- Search for torrents with Torrent Search API
- Docker image support for `linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6` (**Note**: Use `anasty17/mltb-oracle:latest` for Oracle or if you face problems with the arm64 docker run)
- Update bot at startup or with restart command
- Many bugs have been fixed
## From Other Repositories
- Mirror direct download links, Torrent, and Telegram files to Google Drive
- Mirror Mega.nz links to Google Drive (with a non-premium Mega account, downloads are limited to 5GB per 6 hours)
- Copy files from someone's Drive to your Drive (Using Autorclone)
- Download/Upload progress, Speeds and ETAs
- Mirror all Youtube-dl supported links
- Docker support
- Uploading to Team Drive
- Index Link support
- Service Account support
- Delete files from Drive
- Shortener support
- Speedtest
- Multiple Trackers support
- Shell and Executor
- Sudo with or without Database
- Custom Filename* (Only for direct links, Telegram files and Youtube-dl. Not for Mega links, Gdrive links or Torrents)
- Extract or Compress password protected files.
- Extract these filetypes and upload to Google Drive
> ZIP, RAR, TAR, 7z, ISO, WIM, CAB, GZIP, BZIP2, APM, ARJ, CHM, CPIO, CramFS, DEB, DMG, FAT, HFS, LZH, LZMA, LZMA2, MBR, MSI, MSLZ, NSIS, NTFS, RPM, SquashFS, UDF, VHD, XAR, Z, tar.xz
- Direct links Supported:
>letsupload.io, hxfile.co, anonfiles.com, bayfiles.com, antfiles, fembed.com, fembed.net, femax20.com, layarkacaxxi.icu, fcdn.stream, sbplay.org, naniplay.com, naniplay.nanime.in, naniplay.nanime.biz, sbembed.com, streamtape.com, streamsb.net, feurl.com, pixeldrain.com, racaty.net, 1fichier.com, 1drv.ms (Only works for file not folder or business account), uptobox.com (Uptobox account must be premium), solidfiles.com
# How to deploy?
## Deploying on Heroku
- Deploying on Heroku with Github Workflow. **Note**: Use heroku branch to avoid suspension.
<p><a href="https://telegra.ph/Heroku-Deployment-10-04"> <img src="https://img.shields.io/badge/Deploy%20Guide-blueviolet?style=for-the-badge&logo=heroku" width="170"/></a></p>
- Deploying on Heroku with helper script and Goorm IDE (works on VPS too)
<p><a href="https://telegra.ph/Deploying-your-own-Mirrorbot-10-19"> <img src="https://img.shields.io/badge/Deploy%20Guide-grey?style=for-the-badge&logo=telegraph" width="170"/></a></p>
- Deploying on Heroku with heroku-cli and Goorm IDE
<p><a href="https://telegra.ph/How-to-Deploy-a-Mirror-Bot-to-Heroku-with-CLI-05-06"> <img src="https://img.shields.io/badge/Deploy%20Guide-grey?style=for-the-badge&logo=telegraph" width="170"/></a></p>
## Deploying on VPS
### 1) Installing requirements
- Clone this repo:
```
git clone https://github.com/anasty17/mirror-leech-telegram-bot mirrorbot/ && cd mirrorbot
```
- Install requirements
For Debian-based distros
```
sudo apt install python3
```
Install Docker by following the [official Docker docs](https://docs.docker.com/engine/install/debian/)
OR
```
sudo apt install snapd
sudo snap install docker
```
- For Arch and its derivatives:
```
sudo pacman -S docker python
```
- Install dependencies for running setup scripts:
```
pip3 install -r requirements-cli.txt
```
------
### Generate Database (optional)
<details>
<summary><b>Click Here For More Details</b></summary>
**1. Using ElephantSQL**
- Go to https://elephantsql.com and create account (skip this if you already have **ElephantSQL** account)
- Hit `Create New Instance`
- Follow the further instructions on the screen
- Hit `Select Region`
- Hit `Review`
- Hit `Create instance`
- Select your database name
- Copy your database URL and fill it in `DATABASE_URL` in config
**2. Using Heroku PostgreSQL**
<p><a href="https://dev.to/prisma/how-to-setup-a-free-postgresql-database-on-heroku-1dc1"> <img src="https://img.shields.io/badge/See%20Dev.to-black?style=for-the-badge&logo=dev.to" width="160"/></a></p>
</details>
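Whichever provider you use, the value you end up with is a standard PostgreSQL connection URI. A hypothetical example (all credentials below are placeholders):

```
DATABASE_URL = "postgres://username:password@hostname:5432/dbname"
```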
------
### 2) Setting up config file
```
cp config_sample.env config.env
```
- Remove the first line saying:
```
_____REMOVE_THIS_LINE_____=True
```
Fill up the rest of the fields. The meaning of each field is discussed below:
**1. Required Fields**
<details>
<summary><b>Click Here For More Details</b></summary>
- `BOT_TOKEN`: The Telegram Bot Token that you got from [@BotFather](https://t.me/BotFather)
- `TELEGRAM_API`: This is to authenticate your Telegram account for downloading Telegram files. You can get this from https://my.telegram.org. **NOTE**: DO NOT put this in quotes.
- `TELEGRAM_HASH`: This is to authenticate your Telegram account for downloading Telegram files. You can get this from https://my.telegram.org
- `OWNER_ID`: The Telegram User ID (not username) of the Owner of the bot
- `GDRIVE_FOLDER_ID`: This is the folder ID of the Google Drive Folder to which you want to upload all the mirrors.
- `DOWNLOAD_DIR`: The path to the local folder where the downloads should be downloaded to
- `DOWNLOAD_STATUS_UPDATE_INTERVAL`: A short interval of time in seconds after which the Mirror progress/status message is updated. (I recommend keeping it at `7` seconds at least)
- `AUTO_DELETE_MESSAGE_DURATION`: Interval of time (in seconds) after which the bot deletes its message (and the command message) which is expected to be viewed instantly. (**NOTE**: Set to `-1` to never automatically delete messages)
- `BASE_URL_OF_BOT`: (Required for Heroku to avoid sleeping/idling) Valid base URL of the app where the bot is deployed. The format should be `http://myip` (where `myip` is the IP/Domain of your bot), or `http://myip:port` if you have chosen a port other than `80`; for Heroku, fill `https://yourappname.herokuapp.com` (**NOTE**: Don't add a slash at the end). Still getting idling? You can use http://cron-job.org to ping your Heroku app.
</details>
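Put together, a minimal config.env covering just the required fields might look like this (all values below are placeholders; substitute your own, and note that `TELEGRAM_API` must stay unquoted):

```
BOT_TOKEN = "123456:replace-with-botfather-token"
TELEGRAM_API = 1234567
TELEGRAM_HASH = "replace-with-telegram-hash"
OWNER_ID = 6915401739
GDRIVE_FOLDER_ID = "replace-with-gdrive-folder-id"
DOWNLOAD_DIR = "/usr/src/app/downloads/"
DOWNLOAD_STATUS_UPDATE_INTERVAL = "7"
AUTO_DELETE_MESSAGE_DURATION = "20"
BASE_URL_OF_BOT = "https://yourappname.herokuapp.com"
```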
**2. Optional Fields**
<details>
<summary><b>Click Here For More Details</b></summary>
- `ACCOUNTS_ZIP_URL`: Only if you want to load your Service Account externally from an Index Link. Archive the accounts folder to a zip file. Fill this with the direct link of that file.
- `TOKEN_PICKLE_URL`: Only if you want to load your **token.pickle** externally from an Index Link. Fill this with the direct link of that file.
- `MULTI_SEARCH_URL`: Check `drive_folder` setup [here](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#multi-search-ids). Upload the **drive_folder** file [here](https://gist.github.com/). Open the raw file of that gist; its URL will be your required variable.
- `DATABASE_URL`: Your Database URL. See [Generate Database](https://github.com/anasty17/mirror-leech-telegram-bot/tree/master#generate-database) to generate database (**NOTE**: If you use database you can save your Sudo ID permanently using `/addsudo` command).
- `AUTHORIZED_CHATS`: Fill user_id and chat_id (not username) of groups/users you want to authorize. Separate them with space, Examples: `-0123456789 -1122334455 6915401739`.
- `SUDO_USERS`: Fill user_id (not username) of users whom you want to give sudo permission. Separate them with space, Examples: `0123456789 1122334455 6915401739` (**NOTE**: If you want to save Sudo ID permanently without database, you must fill your Sudo Id here).
- `IS_TEAM_DRIVE`: Set to `True` if `GDRIVE_FOLDER_ID` is from a Team Drive else `False` or Leave it empty. `Bool`
- `USE_SERVICE_ACCOUNTS`: (Leave empty if unsure) Whether to use Service Accounts or not. For this to work see [Using Service Accounts](https://github.com/anasty17/mirror-leech-telegram-bot#generate-service-accounts-what-is-service-account) section below.
- `INDEX_URL`: Refer to https://gitlab.com/ParveenBhadooOfficial/Google-Drive-Index The URL should not have any trailing '/'
- `MEGA_API_KEY`: Mega.nz API key to mirror mega.nz links. Get it from [Mega SDK Page](https://mega.nz/sdk)
- `MEGA_EMAIL_ID`: Your E-Mail ID used to sign up on mega.nz, for using a premium account (leave empty otherwise)
- `MEGA_PASSWORD`: Your Password for your mega.nz account
- `BLOCK_MEGA_FOLDER`: If you want to remove mega.nz folder support, set it to `True`. `Bool`
- `BLOCK_MEGA_LINKS`: If you want to remove mega.nz mirror support, set it to `True`. `Bool`
- `STOP_DUPLICATE`: (Leave empty if unsure) If this field is set to `True`, the bot will check whether the file is already present in Drive; if it is, the download or clone will be stopped. (**NOTE**: The file is checked by filename, not file hash, so this feature is not perfect yet). `Bool`
- `CLONE_LIMIT`: To limit the size of Google Drive folder/file which you can clone. Don't add unit, the default unit is `GB`.
- `MEGA_LIMIT`: To limit the size of Mega download. Don't add unit, the default unit is `GB`.
- `TORRENT_DIRECT_LIMIT`: To limit the Torrent/Direct mirror size. Don't add unit, the default unit is `GB`.
- `ZIP_UNZIP_LIMIT`: To limit the size of zip mirror or unzip mirror tasks. Don't add a unit; the default unit is `GB`.
- `VIEW_LINK`: View Link button to open the file Index Link in the browser instead of the direct download link. To figure out whether it's compatible with your Index code, open any video from your Index and check if its URL ends with `?a=view`; if yes, set it to `True` (compatible with https://gitlab.com/ParveenBhadooOfficial/Google-Drive-Index code). `Bool`
- `UPTOBOX_TOKEN`: Uptobox token to mirror uptobox links. Get it from [Uptobox Premium Account](https://uptobox.com/my_account).
- `IGNORE_PENDING_REQUESTS`: If you want the bot to ignore pending requests after it restarts, set this to `True`. `Bool`
- `STATUS_LIMIT`: Limit the no. of tasks shown in status message with button. (**NOTE**: Recommended limit is `4` tasks).
- `IS_VPS`: (Only for VPS) Don't set this to `True` even if you are using VPS, unless facing error with web server. `Bool`
- `SERVER_PORT`: (Only for VPS, even if `IS_VPS` is `False`) The Base URL port.
- `TG_SPLIT_SIZE`: Size of split in bytes, leave it empty for max size `2GB`.
- `AS_DOCUMENT`: Default Telegram file type upload. Empty or `False` means as media. `Bool`
- `EQUAL_SPLITS`: Split files larger than **TG_SPLIT_SIZE** into equal parts size (Not working with zip cmd). `Bool`
- `CUSTOM_FILENAME`: Add custom word to leeched file name.
- `UPSTREAM_REPO`: Your GitHub repository link. If your repo is private, use the format `https://{githubtoken}@github.com/{username}/{reponame}`. Get the token from [Github settings](https://github.com/settings/tokens). (**NOTE**: For any change in docker or requirements to take effect, you need to deploy again with the updated repo)
- `SHORTENER_API`: Fill your Shortener API key.
- `SHORTENER`: Shortener URL.
Supported URL Shorteners:
>exe.io, gplinks.in, shrinkme.io, urlshortx.com, shortzon.com, bit.ly, shorte.st, linkvertise.com, ouo.io
- `SEARCH_API_LINK`: Search API app link. Get your API by deploying this [repository](https://github.com/Ryuk-me/Torrents-Api). **Note**: Don't add a slash at the end
Supported Sites:
>rarbg, 1337x, yts, etzv, tgx, torlock, piratebay, nyaasi, ettv
### Add more buttons (Optional Field)
Three buttons are already added: Drive Link, Index Link, and View Link. You can add extra buttons; if you don't know what the entries below are, simply leave them empty.
- `BUTTON_FOUR_NAME`:
- `BUTTON_FOUR_URL`:
- `BUTTON_FIVE_NAME`:
- `BUTTON_FIVE_URL`:
- `BUTTON_SIX_NAME`:
- `BUTTON_SIX_URL`:
</details>
------
### 3) Getting Google OAuth API credential file and token.pickle
- Visit the [Google Cloud Console](https://console.developers.google.com/apis/credentials)
- Go to the OAuth Consent tab, fill it, and save.
- Go to the Credentials tab and click Create Credentials -> OAuth Client ID
- Choose Desktop and Create.
- Use the download button to download your credentials.
- Move that file to the root of mirrorbot, and rename it to **credentials.json**
- Visit [Google API page](https://console.developers.google.com/apis/library)
- Search for Drive and enable it if it is disabled
- Finally, run the script to generate **token.pickle** file for Google Drive:
```
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
python3 generate_drive_token.py
```
------
### 4) Final steps for deploying on VPS
**IMPORTANT NOTE**: You must set `SERVER_PORT` variable to `80` or any other port you want to use.
- Start Docker daemon (skip if already running):
```
sudo dockerd
```
**Note**: If Docker is not installed or fails to start, run the command below, then try starting it again.
```
sudo apt install docker.io
```
- Build Docker image:
```
sudo docker build . -t mirror-bot
```
- Run the image:
```
sudo docker run -p 80:80 mirror-bot
```
#### OR
#### Using Docker-compose, you can edit and build your image in seconds:
**NOTE**: If you want to use port other than 80, change it in [docker-compose.yml](https://github.com/anasty17/mirror-leech-telegram-bot/blob/master/docker-compose.yml)
```
sudo apt install docker-compose
```
- Build and run Docker image:
```
sudo docker-compose up
```
- After editing files with nano for example (nano start.sh):
```
sudo docker-compose build
sudo docker-compose up
```
OR
```
sudo docker-compose up --build
```
- To stop the running container:
If using docker-compose:
```
sudo docker-compose stop
```
**Note**: To start docker-compose again, run `sudo docker-compose start`. Otherwise, find the container id:
```
sudo docker ps
```
then stop it by container id:
```
sudo docker stop id
```
- To clear the container (this will not affect the image):
```
sudo docker container prune
```
- To delete the image:
```
sudo docker image prune -a
```
- Tutorial video from Tortoolkit repo
<p><a href="https://youtu.be/c8_TU1sPK08"> <img src="https://img.shields.io/badge/See%20Video-black?style=for-the-badge&logo=YouTube" width="160"/></a></p>
------
# Extras
## Bot commands to be set in [@BotFather](https://t.me/BotFather)
```
mirror - Start mirroring
zipmirror - Start mirroring and upload as .zip
unzipmirror - Extract files
qbmirror - Start mirroring using qBittorrent
qbzipmirror - Start mirroring and upload as .zip using qb
qbunzipmirror - Extract files using qBittorrent
leech - Leech Torrent/Direct link
zipleech - Leech Torrent/Direct link and upload as .zip
unzipleech - Leech Torrent/Direct link and extract
qbleech - Leech Torrent/Magnet using qBittorrent
qbzipleech - Leech Torrent/Magnet and upload as .zip using qb
qbunzipleech - Leech Torrent and extract using qb
clone - Copy file/folder to Drive
count - Count file/folder of Drive
watch - Mirror Youtube-dl supported link
zipwatch - Mirror Youtube playlist link and upload as .zip
leechwatch - Leech through Youtube-dl supported link
leechzipwatch - Leech Youtube playlist link and upload as .zip
leechset - Leech settings
setthumb - Set Thumbnail
status - Get Mirror Status message
list - [query] Search files in Drive
search - [site] [query] Search for torrents with API
cancel - Cancel a task
cancelall - Cancel all tasks
del - [drive_url] Delete file from Drive
log - Get the Bot Log [owner/sudo only]
shell - Run commands in Shell [owner only]
restart - Restart the Bot [owner/sudo only]
stats - Bot Usage Stats
ping - Ping the Bot
help - All cmds with description
```
------
## Using Service Accounts for uploading to avoid user rate limit
>For Service Account to work, you must set `USE_SERVICE_ACCOUNTS` = "True" in config file or environment variables.
>**NOTE**: Using Service Accounts is only recommended while uploading to a Team Drive.
### Generate Service Accounts. [What is Service Account?](https://cloud.google.com/iam/docs/service-accounts)
Let us create only the Service Accounts that we need.
**Warning**: Abuse of this feature is not the aim of this project and we do **NOT** recommend creating a lot of projects; just one project and 100 SAs allow you plenty of use. It's also possible that overabuse might get your projects banned by Google.
>**NOTE**: If you have created SAs in the past with this script, you can also just re-download the keys by running:
python3 gen_sa_accounts.py --download-keys project_id
>**NOTE:** 1 Service Account can upload/copy around 750 GB a day, 1 project can make 100 Service Accounts so you can upload 75 TB a day or clone 2 TB from each file creator (uploader email).
>**NOTE:** Add the Service Accounts to the Team Drive or the Google Group; no need to add them to both.
#### 1) Create Service Accounts to Current Project (Recommended Method)
- List your project ids
```
python3 gen_sa_accounts.py --list-projects
```
- Enable services automatically by this command
```
python3 gen_sa_accounts.py --enable-services $PROJECTID
```
- Create Service Accounts in the current project
```
python3 gen_sa_accounts.py --create-sas $PROJECTID
```
- Download the Service Accounts into the accounts folder
```
python3 gen_sa_accounts.py --download-keys $PROJECTID
```
#### 2) Another Quick Method
```
python3 gen_sa_accounts.py --quick-setup 1 --new-only
```
A folder named accounts will be created which will contain keys for the Service Accounts.
### a) Add Service Accounts to Google Group
- Enter the accounts folder
```
cd accounts
```
- Grab the emails from all accounts into an emails.txt file, which will be created in the accounts folder
```
grep -oPh '"client_email": "\K[^"]+' *.json > emails.txt
```
- Leave the accounts folder
```
cd -
```
Then add the emails from emails.txt to a Google Group; after that, add this Google Group to your Shared Drive and promote it to manager.
### b) Add Service Accounts to the Team Drive
- Run:
```
python3 add_to_team_drive.py -d SharedTeamDriveSrcID
```
------
## Multi Search IDs
To search across multiple TDs/folders, run driveid.py in your terminal and follow the prompts. It will generate a **drive_folder** file, or you can simply create the `drive_folder` file in the working directory yourself and fill it using the format below:
```
MyTdName folderID/tdID IndexLink(if available)
MyTdName2 folderID/tdID IndexLink(if available)
```
---
## Yt-dlp and Index Authentication Using .netrc File
For using your premium accounts with Youtube-dl, or for protected Index Links, edit the .netrc file according to the following format:
```
machine host login username password my_youtube_password
```
**Note**: For `youtube` authentication, use a cookies.txt file.
For an Index Link protected by only a password (no username), even HTTP auth will not work, so this is the solution:
```
machine example.workers.dev password index_password
```
Where `host` is the name of the extractor (e.g. Youtube, Twitch). Multiple accounts for different hosts can be added, each separated by a new line.
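For example, a single .netrc can hold entries for several hosts at once (hypothetical extractor name and credentials):

```
machine vimeo login myusername password my_vimeo_password
machine example.workers.dev password index_password
```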

77
add_to_team_drive.py Normal file
View File

@ -0,0 +1,77 @@
from __future__ import print_function
from google.oauth2.service_account import Credentials
import googleapiclient.discovery, json, progress.bar, glob, sys, argparse, time
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
import os, pickle
stt = time.time()
parse = argparse.ArgumentParser(
description='A tool to add service accounts to a shared drive from a folder containing credential files.')
parse.add_argument('--path', '-p', default='accounts',
help='Specify an alternative path to the service accounts folder.')
parse.add_argument('--credentials', '-c', default='./credentials.json',
help='Specify the relative path for the credentials file.')
parse.add_argument('--yes', '-y', default=False, action='store_true', help='Skips the sanity prompt.')
parsereq = parse.add_argument_group('required arguments')
parsereq.add_argument('--drive-id', '-d', help='The ID of the Shared Drive.', required=True)
args = parse.parse_args()
acc_dir = args.path
did = args.drive_id
credentials = glob.glob(args.credentials)
try:
    open(credentials[0], 'r').close()
    print('>> Found credentials.')
except IndexError:
    print('>> No credentials found.')
    sys.exit(1)
if not args.yes:
# input('Make sure the following client id is added to the shared drive as Manager:\n' + json.loads((open(
# credentials[0],'r').read()))['installed']['client_id'])
input('>> Make sure the **Google account** that has generated credentials.json\n is added into your Team Drive '
'(shared drive) as Manager\n>> (Press any key to continue)')
creds = None
if os.path.exists('token_sa.pickle'):
with open('token_sa.pickle', 'rb') as token:
creds = pickle.load(token)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(credentials[0], scopes=[
'https://www.googleapis.com/auth/admin.directory.group',
'https://www.googleapis.com/auth/admin.directory.group.member'
])
# creds = flow.run_local_server(port=0)
creds = flow.run_console()
# Save the credentials for the next run
with open('token_sa.pickle', 'wb') as token:
pickle.dump(creds, token)
drive = googleapiclient.discovery.build("drive", "v3", credentials=creds)
batch = drive.new_batch_http_request()
aa = glob.glob('%s/*.json' % acc_dir)
pbar = progress.bar.Bar("Readying accounts", max=len(aa))
for i in aa:
ce = json.loads(open(i, 'r').read())['client_email']
batch.add(drive.permissions().create(fileId=did, supportsAllDrives=True, body={
"role": "organizer",
"type": "user",
"emailAddress": ce
}))
pbar.next()
pbar.finish()
print('Adding...')
batch.execute()
print('Complete.')
hours, rem = divmod((time.time() - stt), 3600)
minutes, sec = divmod(rem, 60)
print("Elapsed Time:\n{:0>2}:{:0>2}:{:05.2f}".format(int(hours), int(minutes), sec))

15
alive.py Normal file
View File

@ -0,0 +1,15 @@
import time
import requests
import os
BASE_URL = os.environ.get('BASE_URL_OF_BOT', None)
# treat an unset or empty BASE_URL_OF_BOT as "not configured"
if not BASE_URL:
    BASE_URL = None
PORT = os.environ.get('PORT', None)
if PORT is not None and BASE_URL is not None:
    while True:
        time.sleep(600)
        try:
            # ping the bot's web server; ignore transient network errors
            requests.get(BASE_URL)
        except requests.exceptions.RequestException:
            pass

1
aria.bat Normal file
View File

@ -0,0 +1 @@
aria2c --enable-rpc --rpc-listen-all=false --rpc-listen-port 6800 --max-connection-per-server=10 --rpc-max-request-size=1024M --seed-time=0.01 --min-split-size=10M --follow-torrent=mem --split=10 --daemon=true --allow-overwrite=true

10
aria.sh Executable file
View File

@ -0,0 +1,10 @@
tracker_list=$(curl -Ns https://raw.githubusercontent.com/XIU2/TrackersListCollection/master/all.txt https://ngosang.github.io/trackerslist/trackers_all_http.txt https://newtrackon.com/api/all https://raw.githubusercontent.com/hezhijie0327/Trackerslist/main/trackerslist_tracker.txt https://raw.githubusercontent.com/hezhijie0327/Trackerslist/main/trackerslist_exclude.txt | awk '$0' | tr '\n\n' ',')
aria2c --enable-rpc=true --check-certificate=false --daemon=true \
--max-connection-per-server=10 --rpc-max-request-size=1024M --bt-max-peers=0 \
--bt-stop-timeout=0 --min-split-size=10M --split=10 --allow-overwrite=true \
--max-overall-download-limit=0 --bt-tracker="[$tracker_list]" --disk-cache=32M \
--max-overall-upload-limit=1K --max-concurrent-downloads=15 --continue=true \
--peer-id-prefix=-qB4380- --user-agent=qBittorrent/4.3.8 --peer-agent=qBittorrent/4.3.8 \
--bt-enable-lpd=true --seed-time=0 --max-file-not-found=0 --max-tries=20 \
--auto-file-renaming=true --reuse-uri=true --http-accept-gzip=true \
--content-disposition-default-utf8=true --netrc-path=/usr/src/app/.netrc

427
bot/__init__.py Normal file
View File

@ -0,0 +1,427 @@
import logging
import os
import threading
import time
import subprocess
import requests
import socket
import faulthandler
import aria2p
import psycopg2
import qbittorrentapi as qba
import telegram.ext as tg
from pyrogram import Client
from psycopg2 import Error
from dotenv import load_dotenv
faulthandler.enable()
socket.setdefaulttimeout(600)
botStartTime = time.time()
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[logging.FileHandler('log.txt'), logging.StreamHandler()],
level=logging.INFO)
LOGGER = logging.getLogger(__name__)
load_dotenv('config.env', override=True)
SERVER_PORT = os.environ.get('SERVER_PORT', None)
try:
if len(SERVER_PORT) == 0:
raise TypeError
except TypeError:
SERVER_PORT = 80
PORT = os.environ.get('PORT', SERVER_PORT)
web = subprocess.Popen([f"gunicorn wserver:start_server --bind 0.0.0.0:{PORT} --worker-class aiohttp.GunicornWebWorker"], shell=True)
alive = subprocess.Popen(["python3", "alive.py"])
nox = subprocess.Popen(["qbittorrent-nox", "--profile=."])
subprocess.run(["chmod", "+x", "aria.sh"])
subprocess.run(["chmod", "600", ".netrc"])
subprocess.run(["./aria.sh"], shell=True)
time.sleep(0.5)
Interval = []
DRIVES_NAMES = []
DRIVES_IDS = []
INDEX_URLS = []
def getConfig(name: str):
return os.environ[name]
def mktable():
try:
conn = psycopg2.connect(DB_URI)
cur = conn.cursor()
sql = "CREATE TABLE users (uid bigint, sudo boolean DEFAULT FALSE);"
cur.execute(sql)
conn.commit()
logging.info("Table Created!")
except Error as e:
logging.error(e)
exit(1)
try:
if bool(getConfig('_____REMOVE_THIS_LINE_____')):
logging.error('The README.md file is there to be read! Exiting now!')
exit()
except KeyError:
pass
aria2 = aria2p.API(
aria2p.Client(
host="http://localhost",
port=6800,
secret="",
)
)
def get_client() -> qba.TorrentsAPIMixIn:
qb_client = qba.Client(host="localhost", port=8090)
return qb_client
"""
trackers = subprocess.check_output(["curl -Ns https://raw.githubusercontent.com/XIU2/TrackersListCollection/master/all.txt https://ngosang.github.io/trackerslist/trackers_all_http.txt https://newtrackon.com/api/all | awk '$0'"], shell=True).decode('utf-8')
trackerslist = set(trackers.split("\n"))
trackerslist.remove("")
trackerslist = "\n\n".join(trackerslist)
get_client().application.set_preferences({"add_trackers":f"{trackerslist}"})
"""
DOWNLOAD_DIR = None
BOT_TOKEN = None
download_dict_lock = threading.Lock()
status_reply_dict_lock = threading.Lock()
# Key: update.effective_chat.id
# Value: telegram.Message
status_reply_dict = {}
# Key: update.message.message_id
# Value: An object of Status
download_dict = {}
# Stores list of users and chats the bot is authorized to use in
AUTHORIZED_CHATS = set()
SUDO_USERS = set()
AS_DOC_USERS = set()
AS_MEDIA_USERS = set()
if os.path.exists('authorized_chats.txt'):
with open('authorized_chats.txt', 'r+') as f:
lines = f.readlines()
for line in lines:
AUTHORIZED_CHATS.add(int(line.split()[0]))
if os.path.exists('sudo_users.txt'):
with open('sudo_users.txt', 'r+') as f:
lines = f.readlines()
for line in lines:
SUDO_USERS.add(int(line.split()[0]))
try:
achats = getConfig('AUTHORIZED_CHATS')
achats = achats.split(" ")
for chats in achats:
AUTHORIZED_CHATS.add(int(chats))
except:
pass
try:
schats = getConfig('SUDO_USERS')
schats = schats.split(" ")
for chats in schats:
SUDO_USERS.add(int(chats))
except:
pass
try:
BOT_TOKEN = getConfig('BOT_TOKEN')
parent_id = getConfig('GDRIVE_FOLDER_ID')
DOWNLOAD_DIR = getConfig('DOWNLOAD_DIR')
if not DOWNLOAD_DIR.endswith("/"):
DOWNLOAD_DIR = DOWNLOAD_DIR + '/'
DOWNLOAD_STATUS_UPDATE_INTERVAL = int(getConfig('DOWNLOAD_STATUS_UPDATE_INTERVAL'))
OWNER_ID = int(getConfig('OWNER_ID'))
AUTO_DELETE_MESSAGE_DURATION = int(getConfig('AUTO_DELETE_MESSAGE_DURATION'))
TELEGRAM_API = getConfig('TELEGRAM_API')
TELEGRAM_HASH = getConfig('TELEGRAM_HASH')
except KeyError as e:
LOGGER.error("One or more env variables missing! Exiting now")
exit(1)
try:
DB_URI = getConfig('DATABASE_URL')
if len(DB_URI) == 0:
raise KeyError
except KeyError:
DB_URI = None
if DB_URI is not None:
    try:
        conn = psycopg2.connect(DB_URI)
        cur = conn.cursor()
        sql = "SELECT * from users;"
        cur.execute(sql)
        rows = cur.fetchall()  # returns a list of (uid, sudo) tuples
        for row in rows:
            AUTHORIZED_CHATS.add(row[0])
            if row[1]:
                SUDO_USERS.add(row[0])
    except Error as e:
        if 'relation "users" does not exist' in str(e):
            mktable()
        else:
            LOGGER.error(e)
            exit(1)
    finally:
        # close only if connect() succeeded; otherwise cur/conn were never bound
        if 'cur' in locals():
            cur.close()
            conn.close()
LOGGER.info("Generating USER_SESSION_STRING")
app = Client('pyrogram', api_id=int(TELEGRAM_API), api_hash=TELEGRAM_HASH, bot_token=BOT_TOKEN, workers=343)
try:
TG_SPLIT_SIZE = getConfig('TG_SPLIT_SIZE')
if len(TG_SPLIT_SIZE) == 0 or int(TG_SPLIT_SIZE) > 2097151000:
raise KeyError
else:
TG_SPLIT_SIZE = int(TG_SPLIT_SIZE)
except KeyError:
TG_SPLIT_SIZE = 2097151000
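The same try / raise-KeyError-on-empty / except shape recurs for most settings below. A generic helper could express it once; `get_config_value` and the dict source are illustrative stand-ins for `getConfig` against the environment:

```python
# Hedged sketch of the repeated config pattern: missing or empty values
# fall back to a default, optionally casting the raw string.
def get_config_value(source: dict, key: str, default=None, cast=str):
    try:
        raw = source[key]
        if len(raw) == 0:
            raise KeyError
        return cast(raw)
    except (KeyError, ValueError):
        return default
```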
try:
STATUS_LIMIT = getConfig('STATUS_LIMIT')
if len(STATUS_LIMIT) == 0:
raise KeyError
else:
STATUS_LIMIT = int(STATUS_LIMIT)
except KeyError:
STATUS_LIMIT = None
try:
MEGA_API_KEY = getConfig('MEGA_API_KEY')
if len(MEGA_API_KEY) == 0:
raise KeyError
except KeyError:
logging.warning('MEGA API KEY not provided!')
MEGA_API_KEY = None
try:
MEGA_EMAIL_ID = getConfig('MEGA_EMAIL_ID')
MEGA_PASSWORD = getConfig('MEGA_PASSWORD')
if len(MEGA_EMAIL_ID) == 0 or len(MEGA_PASSWORD) == 0:
raise KeyError
except KeyError:
logging.warning('MEGA Credentials not provided!')
MEGA_EMAIL_ID = None
MEGA_PASSWORD = None
try:
UPTOBOX_TOKEN = getConfig('UPTOBOX_TOKEN')
if len(UPTOBOX_TOKEN) == 0:
raise KeyError
except KeyError:
logging.warning('UPTOBOX_TOKEN not provided!')
UPTOBOX_TOKEN = None
try:
INDEX_URL = getConfig('INDEX_URL')
if len(INDEX_URL) == 0:
raise KeyError
else:
INDEX_URLS.append(INDEX_URL)
except KeyError:
INDEX_URL = None
INDEX_URLS.append(None)
try:
SEARCH_API_LINK = getConfig('SEARCH_API_LINK')
if len(SEARCH_API_LINK) == 0:
raise KeyError
except KeyError:
SEARCH_API_LINK = None
try:
TORRENT_DIRECT_LIMIT = getConfig('TORRENT_DIRECT_LIMIT')
if len(TORRENT_DIRECT_LIMIT) == 0:
raise KeyError
else:
TORRENT_DIRECT_LIMIT = float(TORRENT_DIRECT_LIMIT)
except KeyError:
TORRENT_DIRECT_LIMIT = None
try:
CLONE_LIMIT = getConfig('CLONE_LIMIT')
if len(CLONE_LIMIT) == 0:
raise KeyError
else:
CLONE_LIMIT = float(CLONE_LIMIT)
except KeyError:
CLONE_LIMIT = None
try:
MEGA_LIMIT = getConfig('MEGA_LIMIT')
if len(MEGA_LIMIT) == 0:
raise KeyError
else:
MEGA_LIMIT = float(MEGA_LIMIT)
except KeyError:
MEGA_LIMIT = None
try:
ZIP_UNZIP_LIMIT = getConfig('ZIP_UNZIP_LIMIT')
if len(ZIP_UNZIP_LIMIT) == 0:
raise KeyError
else:
ZIP_UNZIP_LIMIT = float(ZIP_UNZIP_LIMIT)
except KeyError:
ZIP_UNZIP_LIMIT = None
try:
BUTTON_FOUR_NAME = getConfig('BUTTON_FOUR_NAME')
BUTTON_FOUR_URL = getConfig('BUTTON_FOUR_URL')
if len(BUTTON_FOUR_NAME) == 0 or len(BUTTON_FOUR_URL) == 0:
raise KeyError
except KeyError:
BUTTON_FOUR_NAME = None
BUTTON_FOUR_URL = None
try:
BUTTON_FIVE_NAME = getConfig('BUTTON_FIVE_NAME')
BUTTON_FIVE_URL = getConfig('BUTTON_FIVE_URL')
if len(BUTTON_FIVE_NAME) == 0 or len(BUTTON_FIVE_URL) == 0:
raise KeyError
except KeyError:
BUTTON_FIVE_NAME = None
BUTTON_FIVE_URL = None
try:
BUTTON_SIX_NAME = getConfig('BUTTON_SIX_NAME')
BUTTON_SIX_URL = getConfig('BUTTON_SIX_URL')
if len(BUTTON_SIX_NAME) == 0 or len(BUTTON_SIX_URL) == 0:
raise KeyError
except KeyError:
BUTTON_SIX_NAME = None
BUTTON_SIX_URL = None
try:
STOP_DUPLICATE = getConfig('STOP_DUPLICATE')
STOP_DUPLICATE = STOP_DUPLICATE.lower() == 'true'
except KeyError:
STOP_DUPLICATE = False
try:
VIEW_LINK = getConfig('VIEW_LINK')
VIEW_LINK = VIEW_LINK.lower() == 'true'
except KeyError:
VIEW_LINK = False
try:
IS_TEAM_DRIVE = getConfig('IS_TEAM_DRIVE')
IS_TEAM_DRIVE = IS_TEAM_DRIVE.lower() == 'true'
except KeyError:
IS_TEAM_DRIVE = False
try:
USE_SERVICE_ACCOUNTS = getConfig('USE_SERVICE_ACCOUNTS')
USE_SERVICE_ACCOUNTS = USE_SERVICE_ACCOUNTS.lower() == 'true'
except KeyError:
USE_SERVICE_ACCOUNTS = False
try:
BLOCK_MEGA_FOLDER = getConfig('BLOCK_MEGA_FOLDER')
BLOCK_MEGA_FOLDER = BLOCK_MEGA_FOLDER.lower() == 'true'
except KeyError:
BLOCK_MEGA_FOLDER = False
try:
BLOCK_MEGA_LINKS = getConfig('BLOCK_MEGA_LINKS')
BLOCK_MEGA_LINKS = BLOCK_MEGA_LINKS.lower() == 'true'
except KeyError:
BLOCK_MEGA_LINKS = False
try:
SHORTENER = getConfig('SHORTENER')
SHORTENER_API = getConfig('SHORTENER_API')
if len(SHORTENER) == 0 or len(SHORTENER_API) == 0:
raise KeyError
except KeyError:
SHORTENER = None
SHORTENER_API = None
try:
IGNORE_PENDING_REQUESTS = getConfig("IGNORE_PENDING_REQUESTS")
IGNORE_PENDING_REQUESTS = IGNORE_PENDING_REQUESTS.lower() == 'true'
except KeyError:
IGNORE_PENDING_REQUESTS = False
try:
BASE_URL = getConfig('BASE_URL_OF_BOT')
if len(BASE_URL) == 0:
raise KeyError
except KeyError:
logging.warning('BASE_URL_OF_BOT not provided!')
BASE_URL = None
try:
IS_VPS = getConfig('IS_VPS')
IS_VPS = IS_VPS.lower() == 'true'
except KeyError:
IS_VPS = False
try:
AS_DOCUMENT = getConfig('AS_DOCUMENT')
AS_DOCUMENT = AS_DOCUMENT.lower() == 'true'
except KeyError:
AS_DOCUMENT = False
try:
EQUAL_SPLITS = getConfig('EQUAL_SPLITS')
EQUAL_SPLITS = EQUAL_SPLITS.lower() == 'true'
except KeyError:
EQUAL_SPLITS = False
try:
CUSTOM_FILENAME = getConfig('CUSTOM_FILENAME')
if len(CUSTOM_FILENAME) == 0:
raise KeyError
except KeyError:
CUSTOM_FILENAME = None
try:
    TOKEN_PICKLE_URL = getConfig('TOKEN_PICKLE_URL')
    if len(TOKEN_PICKLE_URL) == 0:
        raise KeyError
    else:
        res = requests.get(TOKEN_PICKLE_URL)
        if res.status_code == 200:
            # the with-block closes the file; no explicit close() needed
            with open('token.pickle', 'wb') as f:
                f.write(res.content)
        else:
            logging.error(f"Failed to download token.pickle {res.status_code}")
            raise KeyError
except KeyError:
    pass
try:
    ACCOUNTS_ZIP_URL = getConfig('ACCOUNTS_ZIP_URL')
    if len(ACCOUNTS_ZIP_URL) == 0:
        raise KeyError
    else:
        res = requests.get(ACCOUNTS_ZIP_URL)
        if res.status_code == 200:
            with open('accounts.zip', 'wb') as f:
                f.write(res.content)
        else:
            logging.error(f"Failed to download accounts.zip {res.status_code}")
            raise KeyError
    subprocess.run(["unzip", "-q", "-o", "accounts.zip"])
    os.remove("accounts.zip")
except KeyError:
    pass
try:
    MULTI_SEARCH_URL = getConfig('MULTI_SEARCH_URL')
    if len(MULTI_SEARCH_URL) == 0:
        raise KeyError
    else:
        res = requests.get(MULTI_SEARCH_URL)
        if res.status_code == 200:
            with open('drive_folder', 'wb') as f:
                f.write(res.content)
        else:
            logging.error(f"Failed to download drive_folder {res.status_code}")
            raise KeyError
except KeyError:
    pass
DRIVES_NAMES.append("Main")
DRIVES_IDS.append(parent_id)
if os.path.exists('drive_folder'):
    with open('drive_folder', 'r+') as f:
        lines = f.readlines()
        for line in lines:
            try:
                temp = line.strip().split()
                DRIVES_IDS.append(temp[1])
                DRIVES_NAMES.append(temp[0].replace("_", " "))
            except:
                pass
            try:
                INDEX_URLS.append(temp[2])
            except IndexError:
                INDEX_URLS.append(None)
updater = tg.Updater(token=BOT_TOKEN, request_kwargs={'read_timeout': 30, 'connect_timeout': 15})
bot = updater.bot
dispatcher = updater.dispatcher

266
bot/__main__.py Normal file
View File

@ -0,0 +1,266 @@
import shutil, psutil
import signal
import os
import asyncio
import time
import subprocess
from pyrogram import idle
from sys import executable
from telegram import ParseMode, InlineKeyboardMarkup
from telegram.ext import CommandHandler
from wserver import start_server_async
from bot import bot, app, dispatcher, updater, botStartTime, IGNORE_PENDING_REQUESTS, IS_VPS, PORT, alive, web, nox, OWNER_ID, AUTHORIZED_CHATS, LOGGER
from bot.helper.ext_utils import fs_utils
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, editMessage, sendLogFile
from .helper.ext_utils.telegraph_helper import telegraph
from .helper.ext_utils.bot_utils import get_readable_file_size, get_readable_time
from .helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper import button_build
from .modules import authorize, list, cancel_mirror, mirror_status, mirror, clone, watch, shell, eval, delete, speedtest, count, leech_settings, search
def stats(update, context):
currentTime = get_readable_time(time.time() - botStartTime)
total, used, free = shutil.disk_usage('.')
total = get_readable_file_size(total)
used = get_readable_file_size(used)
free = get_readable_file_size(free)
sent = get_readable_file_size(psutil.net_io_counters().bytes_sent)
recv = get_readable_file_size(psutil.net_io_counters().bytes_recv)
cpuUsage = psutil.cpu_percent(interval=0.5)
disk = psutil.disk_usage('/').percent
p_core = psutil.cpu_count(logical=False)
t_core = psutil.cpu_count(logical=True)
swap = psutil.swap_memory()
swap_p = swap.percent
swap_t = get_readable_file_size(swap.total)
swap_u = get_readable_file_size(swap.used)
memory = psutil.virtual_memory()
mem_p = memory.percent
mem_t = get_readable_file_size(memory.total)
mem_a = get_readable_file_size(memory.available)
mem_u = get_readable_file_size(memory.used)
stats = f'<b>Bot Uptime:</b> {currentTime}\n\n'\
f'<b>Total Disk Space:</b> {total}\n'\
f'<b>Used:</b> {used} | <b>Free:</b> {free}\n\n'\
f'<b>Upload:</b> {sent}\n'\
f'<b>Download:</b> {recv}\n\n'\
f'<b>CPU:</b> {cpuUsage}%\n'\
f'<b>RAM:</b> {mem_p}%\n'\
f'<b>DISK:</b> {disk}%\n\n'\
f'<b>Physical Cores:</b> {p_core}\n'\
f'<b>Total Cores:</b> {t_core}\n\n'\
f'<b>SWAP:</b> {swap_t} | <b>Used:</b> {swap_p}%\n'\
f'<b>Memory Total:</b> {mem_t}\n'\
f'<b>Memory Free:</b> {mem_a}\n'\
f'<b>Memory Used:</b> {mem_u}\n'
sendMessage(stats, context.bot, update)
def start(update, context):
buttons = button_build.ButtonMaker()
buttons.buildbutton("Repo", "https://www.github.com/anasty17/mirror-leech-telegram-bot")
buttons.buildbutton("Group", "https://t.me/mirrorLeechGroup")
reply_markup = InlineKeyboardMarkup(buttons.build_menu(2))
if CustomFilters.authorized_user(update) or CustomFilters.authorized_chat(update):
start_string = f'''
This bot can mirror all your links to Google Drive!
Type /{BotCommands.HelpCommand} to get a list of available commands
'''
sendMarkup(start_string, context.bot, update, reply_markup)
else:
sendMarkup('Not an authorized user', context.bot, update, reply_markup)
def restart(update, context):
restart_message = sendMessage("Restarting...", context.bot, update)
fs_utils.clean_all()
alive.kill()
process = psutil.Process(web.pid)
for proc in process.children(recursive=True):
proc.kill()
process.kill()
nox.kill()
subprocess.run(["python3", "update.py"])
# Save restart message object in order to reply to it after restarting
with open(".restartmsg", "w") as f:
f.truncate(0)
f.write(f"{restart_message.chat.id}\n{restart_message.message_id}\n")
os.execl(executable, executable, "-m", "bot")
def ping(update, context):
start_time = int(round(time.time() * 1000))
reply = sendMessage("Starting Ping", context.bot, update)
end_time = int(round(time.time() * 1000))
editMessage(f'{end_time - start_time} ms', reply)
def log(update, context):
sendLogFile(context.bot, update)
help_string_telegraph = f'''<br>
<b>/{BotCommands.HelpCommand}</b>: To get this message
<br><br>
<b>/{BotCommands.MirrorCommand}</b> [download_url][magnet_link]: Start mirroring the link to Google Drive.
<br><br>
<b>/{BotCommands.ZipMirrorCommand}</b> [download_url][magnet_link]: Start mirroring and upload the archived (.zip) version of the download
<br><br>
<b>/{BotCommands.UnzipMirrorCommand}</b> [download_url][magnet_link]: Start mirroring and if downloaded file is any archive, extracts it to Google Drive
<br><br>
<b>/{BotCommands.QbMirrorCommand}</b> [magnet_link]: Start Mirroring using qBittorrent, Use <b>/{BotCommands.QbMirrorCommand} s</b> to select files before downloading
<br><br>
<b>/{BotCommands.QbZipMirrorCommand}</b> [magnet_link]: Start mirroring using qBittorrent and upload the archived (.zip) version of the download
<br><br>
<b>/{BotCommands.QbUnzipMirrorCommand}</b> [magnet_link]: Start mirroring using qBittorrent and if downloaded file is any archive, extracts it to Google Drive
<br><br>
<b>/{BotCommands.LeechCommand}</b> [download_url][magnet_link]: Start leeching to Telegram, Use <b>/{BotCommands.LeechCommand} s</b> to select files before leeching
<br><br>
<b>/{BotCommands.ZipLeechCommand}</b> [download_url][magnet_link]: Start leeching to Telegram and upload it as (.zip)
<br><br>
<b>/{BotCommands.UnzipLeechCommand}</b> [download_url][magnet_link]: Start leeching to Telegram and if downloaded file is any archive, extracts it to Telegram
<br><br>
<b>/{BotCommands.QbLeechCommand}</b> [magnet_link]: Start leeching to Telegram using qBittorrent, Use <b>/{BotCommands.QbLeechCommand} s</b> to select files before leeching
<br><br>
<b>/{BotCommands.QbZipLeechCommand}</b> [magnet_link]: Start leeching to Telegram using qBittorrent and upload it as (.zip)
<br><br>
<b>/{BotCommands.QbUnzipLeechCommand}</b> [magnet_link]: Start leeching to Telegram using qBittorrent and if downloaded file is any archive, extracts it to Telegram
<br><br>
<b>/{BotCommands.CloneCommand}</b> [drive_url]: Copy file/folder to Google Drive
<br><br>
<b>/{BotCommands.CountCommand}</b> [drive_url]: Count file/folder of Google Drive Links
<br><br>
<b>/{BotCommands.DeleteCommand}</b> [drive_url]: Delete file from Google Drive (Only Owner & Sudo)
<br><br>
<b>/{BotCommands.WatchCommand}</b> [youtube-dl supported link]: Mirror through youtube-dl. Click <b>/{BotCommands.WatchCommand}</b> for more help
<br><br>
<b>/{BotCommands.ZipWatchCommand}</b> [youtube-dl supported link]: Mirror through youtube-dl and zip before uploading
<br><br>
<b>/{BotCommands.LeechWatchCommand}</b> [youtube-dl supported link]: Leech through youtube-dl
<br><br>
<b>/{BotCommands.LeechZipWatchCommand}</b> [youtube-dl supported link]: Leech through youtube-dl and zip before uploading
<br><br>
<b>/{BotCommands.LeechSetCommand}</b>: Leech Settings
<br><br>
<b>/{BotCommands.SetThumbCommand}</b>: Reply photo to set it as Thumbnail
<br><br>
<b>/{BotCommands.CancelMirror}</b>: Reply to the message by which the download was initiated and that download will be cancelled
<br><br>
<b>/{BotCommands.CancelAllCommand}</b>: Cancel all running tasks
<br><br>
<b>/{BotCommands.ListCommand}</b> [query]: Search in Google Drive
<br><br>
<b>/{BotCommands.SearchCommand}</b> [site](optional) [query]: Search for torrents with API
<br>sites: <code>rarbg, 1337x, yts, eztv, tgx, torlock, piratebay, nyaasi, ettv</code><br><br>
<b>/{BotCommands.StatusCommand}</b>: Shows a status of all the downloads
<br><br>
<b>/{BotCommands.StatsCommand}</b>: Show Stats of the machine the bot is hosted on
'''
help = telegraph.create_page(
title='Mirror-Leech-Bot Help',
content=help_string_telegraph,
)["path"]
help_string = f'''
/{BotCommands.PingCommand}: Check how long it takes to Ping the Bot
/{BotCommands.AuthorizeCommand}: Authorize a chat or a user to use the bot (Can only be invoked by Owner & Sudo of the bot)
/{BotCommands.UnAuthorizeCommand}: Unauthorize a chat or a user to use the bot (Can only be invoked by Owner & Sudo of the bot)
/{BotCommands.AuthorizedUsersCommand}: Show authorized users (Only Owner & Sudo)
/{BotCommands.AddSudoCommand}: Add sudo user (Only Owner)
/{BotCommands.RmSudoCommand}: Remove sudo users (Only Owner)
/{BotCommands.RestartCommand}: Restart and update the bot
/{BotCommands.LogCommand}: Get a log file of the bot. Handy for getting crash reports
/{BotCommands.SpeedCommand}: Check Internet Speed of the Host
/{BotCommands.ShellCommand}: Run commands in Shell (Only Owner)
/{BotCommands.ExecHelpCommand}: Get help for Executor module (Only Owner)
'''
def bot_help(update, context):
button = button_build.ButtonMaker()
button.buildbutton("Other Commands", f"https://telegra.ph/{help}")
reply_markup = InlineKeyboardMarkup(button.build_menu(1))
sendMarkup(help_string, context.bot, update, reply_markup)
'''
botcmds = [
(f'{BotCommands.MirrorCommand}', 'Start Mirroring'),
(f'{BotCommands.ZipMirrorCommand}','Start mirroring and upload as .zip'),
(f'{BotCommands.UnzipMirrorCommand}','Extract files'),
(f'{BotCommands.QbMirrorCommand}','Start Mirroring using qBittorrent'),
(f'{BotCommands.QbZipMirrorCommand}','Start mirroring and upload as .zip using qb'),
(f'{BotCommands.QbUnzipMirrorCommand}','Extract files using qBitorrent'),
(f'{BotCommands.CloneCommand}','Copy file/folder to Drive'),
(f'{BotCommands.CountCommand}','Count file/folder of Drive link'),
(f'{BotCommands.DeleteCommand}','Delete file from Drive'),
(f'{BotCommands.WatchCommand}','Mirror Youtube-dl support link'),
(f'{BotCommands.ZipWatchCommand}','Mirror Youtube playlist link as .zip'),
(f'{BotCommands.CancelMirror}','Cancel a task'),
(f'{BotCommands.CancelAllCommand}','Cancel all tasks'),
(f'{BotCommands.ListCommand}','Searches files in Drive'),
(f'{BotCommands.StatusCommand}','Get Mirror Status message'),
(f'{BotCommands.StatsCommand}','Bot Usage Stats'),
(f'{BotCommands.PingCommand}','Ping the Bot'),
(f'{BotCommands.RestartCommand}','Restart the bot [owner/sudo only]'),
(f'{BotCommands.LogCommand}','Get the Bot Log [owner/sudo only]'),
(f'{BotCommands.HelpCommand}','Get Detailed Help')
]
'''
def main():
fs_utils.start_cleanup()
if IS_VPS:
asyncio.new_event_loop().run_until_complete(start_server_async(PORT))
# Check if the bot is restarting
if os.path.isfile(".restartmsg"):
with open(".restartmsg") as f:
chat_id, msg_id = map(int, f)
bot.edit_message_text("Restarted successfully!", chat_id, msg_id)
os.remove(".restartmsg")
elif OWNER_ID:
try:
text = "<b>Bot Restarted!</b>"
bot.sendMessage(chat_id=OWNER_ID, text=text, parse_mode=ParseMode.HTML)
if AUTHORIZED_CHATS:
for i in AUTHORIZED_CHATS:
bot.sendMessage(chat_id=i, text=text, parse_mode=ParseMode.HTML)
except Exception as e:
LOGGER.warning(e)
# bot.set_my_commands(botcmds)
start_handler = CommandHandler(BotCommands.StartCommand, start, run_async=True)
ping_handler = CommandHandler(BotCommands.PingCommand, ping,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
restart_handler = CommandHandler(BotCommands.RestartCommand, restart,
filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
help_handler = CommandHandler(BotCommands.HelpCommand,
bot_help, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
stats_handler = CommandHandler(BotCommands.StatsCommand,
stats, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
log_handler = CommandHandler(BotCommands.LogCommand, log, filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
dispatcher.add_handler(start_handler)
dispatcher.add_handler(ping_handler)
dispatcher.add_handler(restart_handler)
dispatcher.add_handler(help_handler)
dispatcher.add_handler(stats_handler)
dispatcher.add_handler(log_handler)
updater.start_polling(drop_pending_updates=IGNORE_PENDING_REQUESTS)
LOGGER.info("Bot Started!")
signal.signal(signal.SIGINT, fs_utils.exit_clean_up)
app.start()
main()
idle()

1
bot/helper/__init__.py Normal file
View File

@ -0,0 +1 @@

View File

@ -0,0 +1 @@

View File

@ -0,0 +1,262 @@
import re
import threading
import time
import math
import psutil
import shutil
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot import dispatcher, download_dict, download_dict_lock, STATUS_LIMIT, botStartTime
from telegram import InlineKeyboardMarkup
from telegram.ext import CallbackQueryHandler
from bot.helper.telegram_helper import button_build, message_utils
MAGNET_REGEX = r"magnet:\?xt=urn:btih:[a-zA-Z0-9]*"
URL_REGEX = r"(?:(?:https?|ftp):\/\/)?[\w/\-?=%.]+\.[\w/\-?=%.]+"
COUNT = 0
PAGE_NO = 1
class MirrorStatus:
STATUS_UPLOADING = "Uploading...📤"
STATUS_DOWNLOADING = "Downloading...📥"
STATUS_CLONING = "Cloning...♻️"
STATUS_WAITING = "Queued...📝"
STATUS_FAILED = "Failed 🚫. Cleaning Download..."
STATUS_PAUSE = "Paused...⛔️"
STATUS_ARCHIVING = "Archiving...🔐"
STATUS_EXTRACTING = "Extracting...📂"
STATUS_SPLITTING = "Splitting...✂️"
SIZE_UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
class setInterval:
def __init__(self, interval, action):
self.interval = interval
self.action = action
self.stopEvent = threading.Event()
thread = threading.Thread(target=self.__setInterval)
thread.start()
def __setInterval(self):
nextTime = time.time() + self.interval
while not self.stopEvent.wait(nextTime - time.time()):
nextTime += self.interval
self.action()
def cancel(self):
self.stopEvent.set()
def get_readable_file_size(size_in_bytes) -> str:
if size_in_bytes is None:
return '0B'
index = 0
while size_in_bytes >= 1024:
size_in_bytes /= 1024
index += 1
try:
return f'{round(size_in_bytes, 2)}{SIZE_UNITS[index]}'
except IndexError:
return 'File too large'
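A quick worked example of the size formatter, restated from the function above so the snippet is self-contained:

```python
# Repeatedly divide by 1024, walking up the unit list, then round to two
# decimal places for display.
SIZE_UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']

def get_readable_file_size(size_in_bytes) -> str:
    if size_in_bytes is None:
        return '0B'
    index = 0
    while size_in_bytes >= 1024:
        size_in_bytes /= 1024
        index += 1
    try:
        return f'{round(size_in_bytes, 2)}{SIZE_UNITS[index]}'
    except IndexError:
        return 'File too large'
```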
def getDownloadByGid(gid):
with download_dict_lock:
for dl in download_dict.values():
status = dl.status()
if (
status
not in [
MirrorStatus.STATUS_ARCHIVING,
MirrorStatus.STATUS_EXTRACTING,
MirrorStatus.STATUS_SPLITTING,
]
and dl.gid() == gid
):
return dl
return None
def getAllDownload():
with download_dict_lock:
for dlDetails in download_dict.values():
status = dlDetails.status()
if (
status
not in [
MirrorStatus.STATUS_ARCHIVING,
MirrorStatus.STATUS_EXTRACTING,
MirrorStatus.STATUS_SPLITTING,
MirrorStatus.STATUS_CLONING,
MirrorStatus.STATUS_UPLOADING,
]
and dlDetails
):
return dlDetails
return None
def get_progress_bar_string(status):
completed = status.processed_bytes() / 8
total = status.size_raw() / 8
p = 0 if total == 0 else round(completed * 100 / total)
p = min(max(p, 0), 100)
cFull = p // 8
p_str = '■' * cFull
p_str += '□' * (12 - cFull)
p_str = f"[{p_str}]"
return p_str
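The bar above maps the clamped 0-100 percentage onto 12 cells via `p // 8`; note the `/ 8` scaling of `completed` and `total` cancels in the ratio. A standalone restatement using ASCII fill characters for portability:

```python
# Same math as get_progress_bar_string: clamp the percentage, then
# p // 8 selects how many of the 12 cells are filled.
def progress_bar(completed: float, total: float, width: int = 12) -> str:
    p = 0 if total == 0 else round(completed * 100 / total)
    p = min(max(p, 0), 100)
    filled = p // 8
    return '[' + '#' * filled + '-' * (width - filled) + ']'
```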
def get_readable_message():
with download_dict_lock:
msg = ""
START = 0
dlspeed_bytes = 0
uldl_bytes = 0
if STATUS_LIMIT is not None:
dick_no = len(download_dict)
global pages
pages = math.ceil(dick_no/STATUS_LIMIT)
if pages != 0 and PAGE_NO > pages:
globals()['COUNT'] -= STATUS_LIMIT
globals()['PAGE_NO'] -= 1
START = COUNT
for index, download in enumerate(list(download_dict.values())[START:], start=1):
msg += f"<b>Name:</b> <code>{download.name()}</code>"
msg += f"\n<b>Status:</b> <i>{download.status()}</i>"
if download.status() not in [
MirrorStatus.STATUS_ARCHIVING,
MirrorStatus.STATUS_EXTRACTING,
MirrorStatus.STATUS_SPLITTING,
]:
msg += f"\n{get_progress_bar_string(download)} {download.progress()}"
if download.status() == MirrorStatus.STATUS_CLONING:
msg += f"\n<b>Cloned:</b> {get_readable_file_size(download.processed_bytes())} of {download.size()}"
elif download.status() == MirrorStatus.STATUS_UPLOADING:
msg += f"\n<b>Uploaded:</b> {get_readable_file_size(download.processed_bytes())} of {download.size()}"
else:
msg += f"\n<b>Downloaded:</b> {get_readable_file_size(download.processed_bytes())} of {download.size()}"
msg += f"\n<b>Speed:</b> {download.speed()} | <b>ETA:</b> {download.eta()}"
try:
msg += f"\n<b>Seeders:</b> {download.aria_download().num_seeders}" \
f" | <b>Peers:</b> {download.aria_download().connections}"
except:
pass
try:
msg += f"\n<b>Seeders:</b> {download.torrent_info().num_seeds}" \
f" | <b>Leechers:</b> {download.torrent_info().num_leechs}"
except:
pass
msg += f"\n<code>/{BotCommands.CancelMirror} {download.gid()}</code>"
else:
msg += f"\n<b>Size: </b>{download.size()}"
msg += "\n\n"
if STATUS_LIMIT is not None and index == STATUS_LIMIT:
break
total, used, free = shutil.disk_usage('.')
free = get_readable_file_size(free)
currentTime = get_readable_time(time.time() - botStartTime)
bmsg = f"<b>CPU:</b> {psutil.cpu_percent()}% | <b>FREE:</b> {free}"
for download in list(download_dict.values()):
speedy = download.speed()
if download.status() == MirrorStatus.STATUS_DOWNLOADING:
if 'K' in speedy:
dlspeed_bytes += float(speedy.split('K')[0]) * 1024
elif 'M' in speedy:
dlspeed_bytes += float(speedy.split('M')[0]) * 1048576
if download.status() == MirrorStatus.STATUS_UPLOADING:
if 'KB/s' in speedy:
uldl_bytes += float(speedy.split('K')[0]) * 1024
elif 'MB/s' in speedy:
uldl_bytes += float(speedy.split('M')[0]) * 1048576
dlspeed = get_readable_file_size(dlspeed_bytes)
ulspeed = get_readable_file_size(uldl_bytes)
bmsg += f"\n<b>RAM:</b> {psutil.virtual_memory().percent}% | <b>UPTIME:</b> {currentTime}" \
f"\n<b>DL:</b> {dlspeed}/s | <b>UL:</b> {ulspeed}/s"
if STATUS_LIMIT is not None and dick_no > STATUS_LIMIT:
msg += f"<b>Page:</b> {PAGE_NO}/{pages} | <b>Tasks:</b> {dick_no}\n"
buttons = button_build.ButtonMaker()
buttons.sbutton("Previous", "pre")
buttons.sbutton("Next", "nex")
button = InlineKeyboardMarkup(buttons.build_menu(2))
return msg + bmsg, button
return msg + bmsg, ""
def turn(update, context):
query = update.callback_query
query.answer()
global COUNT, PAGE_NO
if query.data == "nex":
if PAGE_NO == pages:
COUNT = 0
PAGE_NO = 1
else:
COUNT += STATUS_LIMIT
PAGE_NO += 1
elif query.data == "pre":
if PAGE_NO == 1:
COUNT = STATUS_LIMIT * (pages - 1)
PAGE_NO = pages
else:
COUNT -= STATUS_LIMIT
PAGE_NO -= 1
message_utils.update_all_messages()
def get_readable_time(seconds: int) -> str:
result = ''
(days, remainder) = divmod(seconds, 86400)
days = int(days)
if days != 0:
result += f'{days}d'
(hours, remainder) = divmod(remainder, 3600)
hours = int(hours)
if hours != 0:
result += f'{hours}h'
(minutes, seconds) = divmod(remainder, 60)
minutes = int(minutes)
if minutes != 0:
result += f'{minutes}m'
seconds = int(seconds)
result += f'{seconds}s'
return result
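A few checks of the duration formatter, restated from the function above so the snippet runs on its own:

```python
# Peel off days, hours, and minutes with divmod, appending each non-zero
# component; seconds are always appended.
def get_readable_time(seconds: int) -> str:
    result = ''
    (days, remainder) = divmod(seconds, 86400)
    days = int(days)
    if days != 0:
        result += f'{days}d'
    (hours, remainder) = divmod(remainder, 3600)
    hours = int(hours)
    if hours != 0:
        result += f'{hours}h'
    (minutes, seconds) = divmod(remainder, 60)
    minutes = int(minutes)
    if minutes != 0:
        result += f'{minutes}m'
    seconds = int(seconds)
    result += f'{seconds}s'
    return result
```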
def is_url(url: str):
url = re.findall(URL_REGEX, url)
return bool(url)
def is_gdrive_link(url: str):
return "drive.google.com" in url
def is_mega_link(url: str):
return "mega.nz" in url or "mega.co.nz" in url
def get_mega_link_type(url: str):
if "folder" in url:
return "folder"
elif "file" in url:
return "file"
elif "/#F!" in url:
return "folder"
return "file"
def is_magnet(url: str):
magnet = re.findall(MAGNET_REGEX, url)
return bool(magnet)
def new_thread(fn):
"""To use as decorator to make a function call threaded.
Needs import
from threading import Thread"""
def wrapper(*args, **kwargs):
thread = threading.Thread(target=fn, args=args, kwargs=kwargs)
thread.start()
return thread
return wrapper
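A usage sketch for the `new_thread` decorator (restated so this snippet runs on its own): the wrapper returns the `Thread` object, so callers that need the result can still `join`.

```python
import threading

def new_thread(fn):
    def wrapper(*args, **kwargs):
        thread = threading.Thread(target=fn, args=args, kwargs=kwargs)
        thread.start()
        return thread
    return wrapper

results = []

@new_thread
def worker(x):
    # runs on its own thread; appending to a list is thread-safe in CPython
    results.append(x * 2)

t = worker(21)
t.join()
```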
next_handler = CallbackQueryHandler(turn, pattern="nex", run_async=True)
previous_handler = CallbackQueryHandler(turn, pattern="pre", run_async=True)
dispatcher.add_handler(next_handler)
dispatcher.add_handler(previous_handler)

View File

@ -0,0 +1,71 @@
import psycopg2
from psycopg2 import Error
from bot import AUTHORIZED_CHATS, SUDO_USERS, DB_URI, LOGGER
class DbManger:
def __init__(self):
self.err = False
def connect(self):
try:
self.conn = psycopg2.connect(DB_URI)
self.cur = self.conn.cursor()
except psycopg2.DatabaseError as error:
LOGGER.error("Error in DbManger: %s", error)
self.err = True
def disconnect(self):
self.cur.close()
self.conn.close()
def db_auth(self,chat_id: int):
self.connect()
if self.err:
return "There's some error check log for details"
sql = 'INSERT INTO users VALUES ({});'.format(chat_id)
self.cur.execute(sql)
self.conn.commit()
self.disconnect()
AUTHORIZED_CHATS.add(chat_id)
return 'Authorized successfully'
def db_unauth(self,chat_id: int):
self.connect()
if self.err:
return "There's some error check log for details"
sql = 'DELETE from users where uid = {};'.format(chat_id)
self.cur.execute(sql)
self.conn.commit()
self.disconnect()
AUTHORIZED_CHATS.remove(chat_id)
return 'Unauthorized successfully'
def db_addsudo(self,chat_id: int):
self.connect()
if self.err:
return "There's some error check log for details"
if chat_id in AUTHORIZED_CHATS:
sql = 'UPDATE users SET sudo = TRUE where uid = {};'.format(chat_id)
self.cur.execute(sql)
self.conn.commit()
self.disconnect()
SUDO_USERS.add(chat_id)
return 'Successfully promoted as Sudo'
else:
sql = 'INSERT INTO users VALUES ({},TRUE);'.format(chat_id)
self.cur.execute(sql)
self.conn.commit()
self.disconnect()
SUDO_USERS.add(chat_id)
return 'Successfully Authorized and promoted as Sudo'
def db_rmsudo(self,chat_id: int):
self.connect()
if self.err:
return "There's some error check log for details"
sql = 'UPDATE users SET sudo = FALSE where uid = {};'.format(chat_id)
self.cur.execute(sql)
self.conn.commit()
self.disconnect()
SUDO_USERS.remove(chat_id)
return 'Successfully removed from Sudo'
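The queries above interpolate `chat_id` with `str.format()`; since `chat_id` is an int this is safe, but psycopg2-style parameter binding (`%s` placeholders plus a params tuple) is the idiomatic form. A sketch with a mocked cursor so it runs without a database; `FakeCursor` is a stand-in, not part of the bot:

```python
# Hedged sketch: parameterized variants of the statements above, recorded
# by a fake cursor instead of executed against PostgreSQL.
class FakeCursor:
    def __init__(self):
        self.executed = []
    def execute(self, sql, params=None):
        self.executed.append((sql, params))

cur = FakeCursor()
chat_id = 12345
cur.execute('INSERT INTO users VALUES (%s);', (chat_id,))
cur.execute('UPDATE users SET sudo = TRUE WHERE uid = %s;', (chat_id,))
```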

View File

@ -0,0 +1,8 @@
class DirectDownloadLinkException(Exception):
"""Not method found for extracting direct download link from the http link"""
pass
class NotSupportedExtractionArchive(Exception):
"""The archive format use is trying to extract is not supported"""
pass

View File

@ -0,0 +1,222 @@
import sys
import shutil
import os
import pathlib
import magic
import tarfile
import subprocess
import time
import math
import json
from PIL import Image
from .exceptions import NotSupportedExtractionArchive
from bot import aria2, LOGGER, DOWNLOAD_DIR, get_client, TG_SPLIT_SIZE, EQUAL_SPLITS
VIDEO_SUFFIXES = ("M4V", "MP4", "MOV", "FLV", "WMV", "3GP", "MPG", "WEBM", "MKV", "AVI")
def clean_download(path: str):
if os.path.exists(path):
LOGGER.info(f"Cleaning Download: {path}")
shutil.rmtree(path)
def start_cleanup():
try:
shutil.rmtree(DOWNLOAD_DIR)
except FileNotFoundError:
pass
def clean_all():
aria2.remove_all(True)
get_client().torrents_delete(torrent_hashes="all", delete_files=True)
try:
shutil.rmtree(DOWNLOAD_DIR)
except FileNotFoundError:
pass
def exit_clean_up(signal, frame):
try:
LOGGER.info("Please wait, while we clean up the downloads and stop running downloads")
clean_all()
sys.exit(0)
except KeyboardInterrupt:
LOGGER.warning("Force Exiting before the cleanup finishes!")
sys.exit(1)
def get_path_size(path):
if os.path.isfile(path):
return os.path.getsize(path)
total_size = 0
for root, dirs, files in os.walk(path):
for f in files:
abs_path = os.path.join(root, f)
total_size += os.path.getsize(abs_path)
return total_size
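A quick check of the directory-size walk above, restated with a temporary tree so it runs standalone:

```python
import os
import tempfile

def get_path_size(path):
    # single file: report its size directly
    if os.path.isfile(path):
        return os.path.getsize(path)
    # directory: sum every file found by the recursive walk
    total_size = 0
    for root, dirs, files in os.walk(path):
        for f in files:
            total_size += os.path.getsize(os.path.join(root, f))
    return total_size

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "a.bin"), "wb") as fh:
        fh.write(b"x" * 10)
    os.mkdir(os.path.join(d, "sub"))
    with open(os.path.join(d, "sub", "b.bin"), "wb") as fh:
        fh.write(b"y" * 5)
    size = get_path_size(d)
```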
"""
def tar(org_path):
tar_path = org_path + ".tar"
path = pathlib.PurePath(org_path)
LOGGER.info(f'Tar: orig_path: {org_path}, tar_path: {tar_path}')
tar = tarfile.open(tar_path, "w")
tar.add(org_path, arcname=path.name)
tar.close()
return tar_path
"""
def get_base_name(orig_path: str):
if orig_path.endswith(".tar.bz2"):
return orig_path.rsplit(".tar.bz2", 1)[0]
elif orig_path.endswith(".tar.gz"):
return orig_path.rsplit(".tar.gz", 1)[0]
elif orig_path.endswith(".bz2"):
return orig_path.rsplit(".bz2", 1)[0]
elif orig_path.endswith(".gz"):
return orig_path.rsplit(".gz", 1)[0]
elif orig_path.endswith(".tar.xz"):
return orig_path.rsplit(".tar.xz", 1)[0]
elif orig_path.endswith(".tar"):
return orig_path.rsplit(".tar", 1)[0]
elif orig_path.endswith(".tbz2"):
return orig_path.rsplit("tbz2", 1)[0]
elif orig_path.endswith(".tgz"):
return orig_path.rsplit(".tgz", 1)[0]
elif orig_path.endswith(".zip"):
return orig_path.rsplit(".zip", 1)[0]
elif orig_path.endswith(".7z"):
return orig_path.rsplit(".7z", 1)[0]
elif orig_path.endswith(".Z"):
return orig_path.rsplit(".Z", 1)[0]
elif orig_path.endswith(".rar"):
return orig_path.rsplit(".rar", 1)[0]
elif orig_path.endswith(".iso"):
return orig_path.rsplit(".iso", 1)[0]
elif orig_path.endswith(".wim"):
return orig_path.rsplit(".wim", 1)[0]
elif orig_path.endswith(".cab"):
return orig_path.rsplit(".cab", 1)[0]
elif orig_path.endswith(".apm"):
return orig_path.rsplit(".apm", 1)[0]
elif orig_path.endswith(".arj"):
return orig_path.rsplit(".arj", 1)[0]
elif orig_path.endswith(".chm"):
return orig_path.rsplit(".chm", 1)[0]
elif orig_path.endswith(".cpio"):
return orig_path.rsplit(".cpio", 1)[0]
elif orig_path.endswith(".cramfs"):
return orig_path.rsplit(".cramfs", 1)[0]
elif orig_path.endswith(".deb"):
return orig_path.rsplit(".deb", 1)[0]
elif orig_path.endswith(".dmg"):
return orig_path.rsplit(".dmg", 1)[0]
elif orig_path.endswith(".fat"):
return orig_path.rsplit(".fat", 1)[0]
elif orig_path.endswith(".hfs"):
return orig_path.rsplit(".hfs", 1)[0]
elif orig_path.endswith(".lzh"):
return orig_path.rsplit(".lzh", 1)[0]
elif orig_path.endswith(".lzma"):
return orig_path.rsplit(".lzma", 1)[0]
elif orig_path.endswith(".lzma2"):
return orig_path.rsplit(".lzma2", 1)[0]
elif orig_path.endswith(".mbr"):
return orig_path.rsplit(".mbr", 1)[0]
elif orig_path.endswith(".msi"):
return orig_path.rsplit(".msi", 1)[0]
elif orig_path.endswith(".mslz"):
return orig_path.rsplit(".mslz", 1)[0]
elif orig_path.endswith(".nsis"):
return orig_path.rsplit(".nsis", 1)[0]
elif orig_path.endswith(".ntfs"):
return orig_path.rsplit(".ntfs", 1)[0]
elif orig_path.endswith(".rpm"):
return orig_path.rsplit(".rpm", 1)[0]
elif orig_path.endswith(".squashfs"):
return orig_path.rsplit(".squashfs", 1)[0]
elif orig_path.endswith(".udf"):
return orig_path.rsplit(".udf", 1)[0]
elif orig_path.endswith(".vhd"):
return orig_path.rsplit(".vhd", 1)[0]
elif orig_path.endswith(".xar"):
return orig_path.rsplit(".xar", 1)[0]
else:
raise NotSupportedExtractionArchive('File format not supported for extraction')
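The elif chain above is one branch per suffix; the same behavior can be sketched as a data-driven loop (function and exception names here are illustrative stand-ins, and the tuple mirrors only the suffixes visible above):

```python
# Hypothetical table-driven equivalent of the suffix-stripping chain above.
ARCHIVE_SUFFIXES = (
    ".tar", ".tbz2", ".tgz", ".zip", ".7z", ".Z", ".rar", ".iso",
    ".wim", ".cab", ".apm", ".arj", ".chm", ".cpio", ".cramfs",
    ".deb", ".dmg", ".fat", ".hfs", ".lzh", ".lzma", ".lzma2",
    ".mbr", ".msi", ".mslz", ".nsis", ".ntfs", ".rpm", ".squashfs",
    ".udf", ".vhd", ".xar",
)

def get_base_name(orig_path: str) -> str:
    for suffix in ARCHIVE_SUFFIXES:
        if orig_path.endswith(suffix):
            # rsplit on the suffix keeps everything before its final occurrence
            return orig_path.rsplit(suffix, 1)[0]
    raise ValueError('File format not supported for extraction')
```

A loop like this also makes the `.tbz2`-style typo (stripping without the leading dot) impossible, since each suffix appears only once.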
def get_mime_type(file_path):
mime = magic.Magic(mime=True)
mime_type = mime.from_file(file_path)
mime_type = mime_type or "text/plain"
return mime_type
def take_ss(video_file):
des_dir = 'Thumbnails'
if not os.path.exists(des_dir):
os.mkdir(des_dir)
des_dir = os.path.join(des_dir, f"{time.time()}.jpg")
duration = get_media_info(video_file)[0]
if duration == 0:
duration = 3
duration = duration // 2
try:
subprocess.run(["ffmpeg", "-hide_banner", "-loglevel", "error", "-ss", str(duration),
"-i", video_file, "-vframes", "1", des_dir])
except:
return None
if not os.path.lexists(des_dir):
return None
Image.open(des_dir).convert("RGB").save(des_dir)
img = Image.open(des_dir)
img = img.resize((480, 320))
img.save(des_dir, "JPEG")
return des_dir
def split(path, size, filee, dirpath, split_size, start_time=0, i=1, inLoop=False):
parts = math.ceil(size/TG_SPLIT_SIZE)
if EQUAL_SPLITS and not inLoop:
split_size = math.ceil(size/parts)
if filee.upper().endswith(VIDEO_SUFFIXES):
base_name, extension = os.path.splitext(filee)
split_size = split_size - 2500000
while i <= parts:
parted_name = "{}.part{}{}".format(str(base_name), str(i).zfill(3), str(extension))
out_path = os.path.join(dirpath, parted_name)
subprocess.run(["ffmpeg", "-hide_banner", "-loglevel", "error", "-i",
path, "-ss", str(start_time), "-fs", str(split_size),
"-async", "1", "-strict", "-2", "-c", "copy", out_path])
out_size = get_path_size(out_path)
if out_size > 2097152000:
dif = out_size - 2097152000
split_size = split_size - dif + 2400000
os.remove(out_path)
return split(path, size, filee, dirpath, split_size, start_time, i, inLoop=True)
lpd = get_media_info(out_path)[0]
start_time += lpd - 3
i = i + 1
else:
out_path = os.path.join(dirpath, filee + ".")
subprocess.run(["split", "--numeric-suffixes=1", "--suffix-length=3", f"--bytes={split_size}", path, out_path])
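The part-count and equal-split arithmetic at the top of split() can be exercised in isolation; a minimal sketch (the TG_SPLIT_SIZE value below is an assumption standing in for the configured limit):

```python
import math

TG_SPLIT_SIZE = 2097152000  # assumed ~2 GB Telegram cap, mirroring the limit checked above

def plan_splits(size: int, equal_splits: bool, split_size: int):
    """Derive (parts, per-part size) the same way split() does."""
    parts = math.ceil(size / TG_SPLIT_SIZE)
    if equal_splits:
        # EQUAL_SPLITS mode: divide the total evenly across all parts
        split_size = math.ceil(size / parts)
    return parts, split_size
```

For a 5 GB input this yields 3 parts, each sized to roughly a third of the total when equal splits are enabled.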
def get_media_info(path):
try:
result = subprocess.check_output(["ffprobe", "-hide_banner", "-loglevel", "error", "-print_format",
"json", "-show_format", path]).decode('utf-8')
fields = json.loads(result)['format']
except Exception as e:
return 0, None, None
try:
duration = round(float(fields['duration']))
except:
duration = 0
try:
artist = str(fields['tags']['artist'])
except:
artist = None
try:
title = str(fields['tags']['title'])
except:
title = None
return duration, artist, title
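get_media_info() only reads the `format` object of ffprobe's JSON output; the parsing step can be checked against a canned payload without running ffprobe (sample values below are made up):

```python
import json

def parse_format(ffprobe_json: str):
    """Replicates the duration/artist/title extraction from get_media_info()."""
    fields = json.loads(ffprobe_json)['format']
    try:
        duration = round(float(fields['duration']))
    except (KeyError, ValueError):
        duration = 0
    tags = fields.get('tags', {})
    artist = str(tags['artist']) if 'artist' in tags else None
    title = str(tags['title']) if 'title' in tags else None
    return duration, artist, title

sample = '{"format": {"duration": "12.66", "tags": {"artist": "x", "title": "y"}}}'
```

Missing keys fall back to `0`/`None`, matching the try/except ladder above.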

View File

@ -0,0 +1,31 @@
# Implemented by https://github.com/junedkh
import requests
import random
import base64
import pyshorteners
from urllib.parse import quote
from urllib3 import disable_warnings
from bot import SHORTENER, SHORTENER_API
def short_url(longurl):
if "shorte.st" in SHORTENER:
disable_warnings()
return requests.get(f'http://api.shorte.st/stxt/{SHORTENER_API}/{longurl}', verify=False).text
elif "linkvertise" in SHORTENER:
url = quote(base64.b64encode(longurl.encode("utf-8")))
linkvertise = [
f"https://link-to.net/{SHORTENER_API}/{random.random() * 1000}/dynamic?r={url}",
f"https://up-to-down.net/{SHORTENER_API}/{random.random() * 1000}/dynamic?r={url}",
f"https://direct-link.net/{SHORTENER_API}/{random.random() * 1000}/dynamic?r={url}",
f"https://file-link.net/{SHORTENER_API}/{random.random() * 1000}/dynamic?r={url}"]
return random.choice(linkvertise)
elif "bitly.com" in SHORTENER:
s = pyshorteners.Shortener(api_key=SHORTENER_API)
return s.bitly.short(longurl)
elif "ouo.io" in SHORTENER:
disable_warnings()
return requests.get(f'http://ouo.io/api/{SHORTENER_API}?s={longurl}', verify=False).text
else:
return requests.get(f'https://{SHORTENER}/api?api={SHORTENER_API}&url={longurl}&format=text').text
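The linkvertise branch base64-encodes the target URL and percent-encodes the result before embedding it; that encoding step on its own (account id is a placeholder, and only one of the four mirror domains is shown):

```python
import base64
import random
from urllib.parse import quote

def linkvertise_url(account_id: str, longurl: str) -> str:
    # Same encoding as short_url(): base64 of the raw URL, then percent-encoded
    encoded = quote(base64.b64encode(longurl.encode("utf-8")))
    return f"https://link-to.net/{account_id}/{random.random() * 1000}/dynamic?r={encoded}"
```

`quote()` accepts the bytes returned by `b64encode` directly, which is why no intermediate `.decode()` appears here or in short_url().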

View File

@ -0,0 +1,62 @@
# Implement By - @VarnaX-279
import time
import string
import random
import logging
from telegraph import Telegraph
from telegraph.exceptions import RetryAfterError
from bot import LOGGER
class TelegraphHelper:
def __init__(self, author_name=None, author_url=None):
self.telegraph = Telegraph()
self.short_name = ''.join(random.SystemRandom().choices(string.ascii_letters, k=8))
self.access_token = None
self.author_name = author_name
self.author_url = author_url
self.create_account()
def create_account(self):
self.telegraph.create_account(
short_name=self.short_name,
author_name=self.author_name,
author_url=self.author_url
)
self.access_token = self.telegraph.get_access_token()
LOGGER.info(f"Created Telegraph account with short name '{self.short_name}'")
def create_page(self, title, content):
try:
result = self.telegraph.create_page(
title = title,
author_name=self.author_name,
author_url=self.author_url,
html_content=content
)
return result
except RetryAfterError as st:
LOGGER.warning(f'Telegraph Flood control exceeded. I will sleep for {st.retry_after} seconds.')
time.sleep(st.retry_after)
return self.create_page(title, content)
def edit_page(self, path, title, content):
try:
result = self.telegraph.edit_page(
path = path,
title = title,
author_name=self.author_name,
author_url=self.author_url,
html_content=content
)
return result
except RetryAfterError as st:
LOGGER.warning(f'Telegraph Flood control exceeded. I will sleep for {st.retry_after} seconds.')
time.sleep(st.retry_after)
return self.edit_page(path, title, content)
telegraph = TelegraphHelper('Mirror-Leech-Telegram-Bot', 'https://github.com/anasty17/mirror-leech-telegram-bot')

View File

@ -0,0 +1 @@

View File

@ -0,0 +1 @@

View File

@ -0,0 +1,100 @@
from bot import aria2, download_dict_lock, download_dict, STOP_DUPLICATE, TORRENT_DIRECT_LIMIT, ZIP_UNZIP_LIMIT, LOGGER
from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
from bot.helper.ext_utils.bot_utils import is_magnet, getDownloadByGid, new_thread, get_readable_file_size
from bot.helper.mirror_utils.status_utils.aria_download_status import AriaDownloadStatus
from bot.helper.telegram_helper.message_utils import sendMarkup
from time import sleep
import threading
class AriaDownloadHelper:
def __init__(self):
super().__init__()
@new_thread
def __onDownloadStarted(self, api, gid):
if STOP_DUPLICATE or TORRENT_DIRECT_LIMIT is not None or ZIP_UNZIP_LIMIT is not None:
sleep(1)
dl = getDownloadByGid(gid)
download = api.get_download(gid)
if STOP_DUPLICATE and dl is not None and not dl.getListener().isLeech:
LOGGER.info('Checking File/Folder if already in Drive...')
sname = download.name
if dl.getListener().isZip:
sname = sname + ".zip"
if not dl.getListener().extract:
gdrive = GoogleDriveHelper()
smsg, button = gdrive.drive_list(sname, True)
if smsg:
dl.getListener().onDownloadError('File/Folder already available in Drive.\n\n')
api.remove([download], force=True)
sendMarkup("Here are the search results:", dl.getListener().bot, dl.getListener().update, button)
return
if dl is not None and (ZIP_UNZIP_LIMIT is not None or TORRENT_DIRECT_LIMIT is not None):
limit = None
if ZIP_UNZIP_LIMIT is not None and (dl.getListener().isZip or dl.getListener().extract):
mssg = f'Zip/Unzip limit is {ZIP_UNZIP_LIMIT}GB'
limit = ZIP_UNZIP_LIMIT
elif TORRENT_DIRECT_LIMIT is not None:
mssg = f'Torrent/Direct limit is {TORRENT_DIRECT_LIMIT}GB'
limit = TORRENT_DIRECT_LIMIT
if limit is not None:
LOGGER.info('Checking File/Folder Size...')
sleep(1)
size = dl.size_raw()
if size > limit * 1024**3:
dl.getListener().onDownloadError(f'{mssg}.\nYour File/Folder size is {get_readable_file_size(size)}')
api.remove([download], force=True)
return
def __onDownloadComplete(self, api, gid):
dl = getDownloadByGid(gid)
download = api.get_download(gid)
if download.followed_by_ids:
new_gid = download.followed_by_ids[0]
new_download = api.get_download(new_gid)
if dl is None:
dl = getDownloadByGid(new_gid)
with download_dict_lock:
download_dict[dl.uid()] = AriaDownloadStatus(new_gid, dl.getListener())
LOGGER.info(f'Changed gid from {gid} to {new_gid}')
elif dl:
threading.Thread(target=dl.getListener().onDownloadComplete).start()
@new_thread
def __onDownloadStopped(self, api, gid):
sleep(4)
dl = getDownloadByGid(gid)
if dl:
dl.getListener().onDownloadError('Dead torrent!')
@new_thread
def __onDownloadError(self, api, gid):
LOGGER.info(f"onDownloadError: {gid}")
sleep(0.5)
dl = getDownloadByGid(gid)
download = api.get_download(gid)
error = download.error_message
LOGGER.info(f"Download Error: {error}")
if dl:
dl.getListener().onDownloadError(error)
def start_listener(self):
aria2.listen_to_notifications(threaded=True, on_download_start=self.__onDownloadStarted,
on_download_error=self.__onDownloadError,
on_download_stop=self.__onDownloadStopped,
on_download_complete=self.__onDownloadComplete,
timeout=30)
def add_download(self, link: str, path, listener, filename):
if is_magnet(link):
download = aria2.add_magnet(link, {'dir': path, 'out': filename})
else:
download = aria2.add_uris([link], {'dir': path, 'out': filename})
if download.error_message:
listener.onDownloadError(download.error_message)
return
with download_dict_lock:
download_dict[listener.uid] = AriaDownloadStatus(download.gid, listener)
LOGGER.info(f"Started: {download.gid} DIR:{download.dir} ")
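The limit check in __onDownloadStarted compares raw bytes against a GB figure; a standalone sketch of that comparison plus a minimal readable-size formatter (the formatter is an assumption here, the bot's real helper is `get_readable_file_size` in bot_utils):

```python
def exceeds_limit(size_bytes: int, limit_gb: float) -> bool:
    # Mirrors the check above: size > limit * 1024**3
    return size_bytes > limit_gb * 1024 ** 3

def readable_size(size_bytes: int) -> str:
    # Hypothetical stand-in for get_readable_file_size()
    units = ['B', 'KiB', 'MiB', 'GiB', 'TiB']
    size = float(size_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.2f}{unit}"
        size /= 1024
```

So a 3 GiB torrent trips a 2 GB ZIP_UNZIP_LIMIT, and the error message reports the offending size in human-readable form.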

View File

@ -0,0 +1,443 @@
# Copyright (C) 2019 The Raphielscape Company LLC.
#
# Licensed under the Raphielscape Public License, Version 1.c (the "License");
# you may not use this file except in compliance with the License.
#
""" Helper Module containing various sites direct links generators. This module is copied and modified as per need
from https://github.com/AvinashReddy3108/PaperplaneExtended . I hereby take no credit of the following code other
than the modifications. See https://github.com/AvinashReddy3108/PaperplaneExtended/commits/master/userbot/modules/direct_links.py
for original authorship. """
from bot import LOGGER, UPTOBOX_TOKEN
import json
import math
import re
import urllib.parse
from os import popen
from random import choice
from urllib.parse import urlparse
import lk21
import requests, cfscrape
from bs4 import BeautifulSoup
from js2py import EvalJs
from lk21.extractors.bypasser import Bypass
from base64 import standard_b64encode
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.ext_utils.exceptions import DirectDownloadLinkException
def direct_link_generator(link: str):
""" direct links generator """
if not link:
raise DirectDownloadLinkException("No links found!")
elif 'youtube.com' in link or 'youtu.be' in link:
raise DirectDownloadLinkException(f"Use /{BotCommands.WatchCommand} to mirror Youtube link\nUse /{BotCommands.ZipWatchCommand} to make zip of Youtube playlist")
elif 'zippyshare.com' in link:
return zippy_share(link)
elif 'yadi.sk' in link:
return yandex_disk(link)
elif 'mediafire.com' in link:
return mediafire(link)
elif 'uptobox.com' in link:
return uptobox(link)
elif 'osdn.net' in link:
return osdn(link)
elif 'github.com' in link:
return github(link)
elif 'hxfile.co' in link:
return hxfile(link)
elif 'anonfiles.com' in link:
return anonfiles(link)
elif 'letsupload.io' in link:
return letsupload(link)
elif any(x in link for x in ['fembed.net', 'fembed.com', 'femax20.com', 'fcdn.stream', 'feurl.com', 'naniplay.nanime.in', 'naniplay.nanime.biz', 'naniplay.com', 'layarkacaxxi.icu']):
return fembed(link)
elif any(x in link for x in ['sbembed.com', 'streamsb.net', 'sbplay.org']):
return sbembed(link)
elif '1drv.ms' in link:
return onedrive(link)
elif 'pixeldrain.com' in link:
return pixeldrain(link)
elif 'antfiles.com' in link:
return antfiles(link)
elif 'streamtape.com' in link:
return streamtape(link)
elif 'bayfiles.com' in link:
return anonfiles(link)
elif 'racaty.net' in link:
return racaty(link)
elif '1fichier.com' in link:
return fichier(link)
elif 'solidfiles.com' in link:
return solidfiles(link)
elif 'krakenfiles.com' in link:
return krakenfiles(link)
else:
raise DirectDownloadLinkException(f'No Direct link function found for {link}')
def zippy_share(url: str) -> str:
""" ZippyShare direct links generator
Based on https://github.com/KenHV/Mirror-Bot
https://github.com/jovanzers/WinTenCermin """
try:
link = re.findall(r'\bhttps?://.*zippyshare\.com\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No Zippyshare links found")
try:
base_url = re.search('http.+.zippyshare.com', link).group()
response = requests.get(link).content
pages = BeautifulSoup(response, "lxml")
try:
js_script = pages.find("div", {"class": "center"}).find_all("script")[1]
except IndexError:
js_script = pages.find("div", {"class": "right"}).find_all("script")[0]
js_content = re.findall(r'\.href.=."/(.*?)";', str(js_script))
js_content = 'var x = "/' + js_content[0] + '"'
evaljs = EvalJs()
setattr(evaljs, "x", None)
evaljs.execute(js_content)
js_content = getattr(evaljs, "x")
return base_url + js_content
except IndexError:
raise DirectDownloadLinkException("ERROR: Can't find download button")
def yandex_disk(url: str) -> str:
""" Yandex.Disk direct links generator
Based on https://github.com/wldhx/yadisk-direct """
try:
link = re.findall(r'\bhttps?://.*yadi\.sk\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No Yandex.Disk links found\n")
api = 'https://cloud-api.yandex.net/v1/disk/public/resources/download?public_key={}'
try:
return requests.get(api.format(link)).json()['href']
except KeyError:
raise DirectDownloadLinkException("ERROR: File not found/Download limit reached\n")
def uptobox(url: str) -> str:
""" Uptobox direct links generator
based on https://github.com/jovanzers/WinTenCermin """
try:
link = re.findall(r'\bhttps?://.*uptobox\.com\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No Uptobox links found\n")
if UPTOBOX_TOKEN is None:
LOGGER.error('UPTOBOX_TOKEN not provided!')
dl_url = link
else:
try:
link = re.findall(r'\bhttps?://.*uptobox\.com/dl\S+', url)[0]
dl_url = link
except:
file_id = re.findall(r'\bhttps?://.*uptobox\.com/(\w+)', url)[0]
file_link = 'https://uptobox.com/api/link?token=%s&file_code=%s' % (UPTOBOX_TOKEN, file_id)
req = requests.get(file_link)
result = req.json()
dl_url = result['data']['dlLink']
return dl_url
def mediafire(url: str) -> str:
""" MediaFire direct links generator """
try:
link = re.findall(r'\bhttps?://.*mediafire\.com\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No MediaFire links found\n")
page = BeautifulSoup(requests.get(link).content, 'lxml')
info = page.find('a', {'aria-label': 'Download file'})
return info.get('href')
def osdn(url: str) -> str:
""" OSDN direct links generator """
osdn_link = 'https://osdn.net'
try:
link = re.findall(r'\bhttps?://.*osdn\.net\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No OSDN links found\n")
page = BeautifulSoup(
requests.get(link, allow_redirects=True).content, 'lxml')
info = page.find('a', {'class': 'mirror_link'})
link = urllib.parse.unquote(osdn_link + info['href'])
mirrors = page.find('form', {'id': 'mirror-select-form'}).findAll('tr')
urls = []
for data in mirrors[1:]:
mirror = data.find('input')['value']
urls.append(re.sub(r'm=(.*)&f', f'm={mirror}&f', link))
return urls[0]
def github(url: str) -> str:
""" GitHub direct links generator """
try:
re.findall(r'\bhttps?://.*github\.com.*releases\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No GitHub Releases links found\n")
download = requests.get(url, stream=True, allow_redirects=False)
try:
return download.headers["location"]
except KeyError:
raise DirectDownloadLinkException("ERROR: Can't extract the link\n")
def hxfile(url: str) -> str:
""" Hxfile direct link generator
Based on https://github.com/zevtyardt/lk21
"""
bypasser = lk21.Bypass()
return bypasser.bypass_filesIm(url)
def anonfiles(url: str) -> str:
""" Anonfiles direct link generator
Based on https://github.com/zevtyardt/lk21
"""
bypasser = lk21.Bypass()
return bypasser.bypass_anonfiles(url)
def letsupload(url: str) -> str:
""" Letsupload direct link generator
Based on https://github.com/zevtyardt/lk21
"""
dl_url = ''
try:
link = re.findall(r'\bhttps?://.*letsupload\.io\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No Letsupload links found\n")
bypasser = lk21.Bypass()
dl_url=bypasser.bypass_url(link)
return dl_url
def fembed(link: str) -> str:
""" Fembed direct link generator
Based on https://github.com/zevtyardt/lk21
"""
bypasser = lk21.Bypass()
dl_url=bypasser.bypass_fembed(link)
count = len(dl_url)
lst_link = [dl_url[i] for i in dl_url]
return lst_link[count-1]
def sbembed(link: str) -> str:
""" Sbembed direct link generator
Based on https://github.com/zevtyardt/lk21
"""
bypasser = lk21.Bypass()
dl_url=bypasser.bypass_sbembed(link)
count = len(dl_url)
lst_link = [dl_url[i] for i in dl_url]
return lst_link[count-1]
def onedrive(link: str) -> str:
""" Onedrive direct link generator
Based on https://github.com/UsergeTeam/Userge """
link_without_query = urlparse(link)._replace(query=None).geturl()
direct_link_encoded = str(standard_b64encode(bytes(link_without_query, "utf-8")), "utf-8")
direct_link1 = f"https://api.onedrive.com/v1.0/shares/u!{direct_link_encoded}/root/content"
resp = requests.head(direct_link1)
if resp.status_code != 302:
raise DirectDownloadLinkException("ERROR: Unauthorized link, the link may be private")
dl_link = resp.next.url
return dl_link
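onedrive() builds the shares-API URL from a base64 of the share link with its query string stripped; that encoding step on its own (the sample link below is made up):

```python
from base64 import standard_b64encode
from urllib.parse import urlparse

def onedrive_api_url(share_link: str) -> str:
    # Strip the query string, base64 the remainder, wrap in the shares endpoint
    link_without_query = urlparse(share_link)._replace(query=None).geturl()
    encoded = str(standard_b64encode(bytes(link_without_query, "utf-8")), "utf-8")
    return f"https://api.onedrive.com/v1.0/shares/u!{encoded}/root/content"
```

The resulting URL is only a candidate; as above, a HEAD request that does not answer 302 means the share is private.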
def pixeldrain(url: str) -> str:
""" Based on https://github.com/yash-dk/TorToolkit-Telegram """
url = url.strip("/ ")
file_id = url.split("/")[-1]
info_link = f"https://pixeldrain.com/api/file/{file_id}/info"
dl_link = f"https://pixeldrain.com/api/file/{file_id}"
resp = requests.get(info_link).json()
if resp["success"]:
return dl_link
else:
raise DirectDownloadLinkException("ERROR: Can't download due to {}.".format(resp["value"]))
def antfiles(url: str) -> str:
""" Antfiles direct link generator
Based on https://github.com/zevtyardt/lk21
"""
bypasser = lk21.Bypass()
return bypasser.bypass_antfiles(url)
def streamtape(url: str) -> str:
""" Streamtape direct link generator
Based on https://github.com/zevtyardt/lk21
"""
bypasser = lk21.Bypass()
return bypasser.bypass_streamtape(url)
def racaty(url: str) -> str:
""" Racaty direct links generator
based on https://github.com/SlamDevs/slam-mirrorbot"""
dl_url = ''
try:
link = re.findall(r'\bhttps?://.*racaty\.net\S+', url)[0]
except IndexError:
raise DirectDownloadLinkException("No Racaty links found\n")
scraper = cfscrape.create_scraper()
r = scraper.get(url)
soup = BeautifulSoup(r.text, "lxml")
op = soup.find("input", {"name": "op"})["value"]
ids = soup.find("input", {"name": "id"})["value"]
rpost = scraper.post(url, data = {"op": op, "id": ids})
rsoup = BeautifulSoup(rpost.text, "lxml")
dl_url = rsoup.find("a", {"id": "uniqueExpirylink"})["href"].replace(" ", "%20")
return dl_url
def fichier(link: str) -> str:
""" 1Fichier direct links generator
Based on https://github.com/Maujar
"""
regex = r"^(https?:\/\/)?.*1fichier\.com\/\?.+"
gan = re.match(regex, link)
if not gan:
raise DirectDownloadLinkException("ERROR: The link you entered is wrong!")
if "::" in link:
pswd = link.split("::")[-1]
url = link.split("::")[-2]
else:
pswd = None
url = link
try:
if pswd is None:
req = requests.post(url)
else:
pw = {"pass": pswd}
req = requests.post(url, data=pw)
except:
raise DirectDownloadLinkException("ERROR: Unable to reach 1fichier server!")
if req.status_code == 404:
raise DirectDownloadLinkException("ERROR: File not found/The link you entered is wrong!")
soup = BeautifulSoup(req.content, 'lxml')
if soup.find("a", {"class": "ok btn-general btn-orange"}) is not None:
dl_url = soup.find("a", {"class": "ok btn-general btn-orange"})["href"]
if dl_url is None:
raise DirectDownloadLinkException("ERROR: Unable to generate Direct Link 1fichier!")
else:
return dl_url
elif len(soup.find_all("div", {"class": "ct_warn"})) == 2:
str_2 = soup.find_all("div", {"class": "ct_warn"})[-1]
if "you must wait" in str(str_2).lower():
numbers = [int(word) for word in str(str_2).split() if word.isdigit()]
if not numbers:
raise DirectDownloadLinkException("ERROR: 1fichier is on a limit. Please wait a few minutes/hour.")
else:
raise DirectDownloadLinkException(f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.")
elif "protect access" in str(str_2).lower():
raise DirectDownloadLinkException(f"ERROR: This link requires a password!\n\n<b>This link requires a password!</b>\n- Insert sign <b>::</b> after the link and write the password after the sign.\n\n<b>Example:</b>\n<code>/{BotCommands.MirrorCommand} https://1fichier.com/?smmtd8twfpm66awbqz04::love you</code>\n\n* No spaces between the signs <b>::</b>\n* For the password, you can use a space!")
else:
raise DirectDownloadLinkException("ERROR: Error trying to generate Direct Link from 1fichier!")
elif len(soup.find_all("div", {"class": "ct_warn"})) == 3:
str_1 = soup.find_all("div", {"class": "ct_warn"})[-2]
str_3 = soup.find_all("div", {"class": "ct_warn"})[-1]
if "you must wait" in str(str_1).lower():
numbers = [int(word) for word in str(str_1).split() if word.isdigit()]
if not numbers:
raise DirectDownloadLinkException("ERROR: 1fichier is on a limit. Please wait a few minutes/hour.")
else:
raise DirectDownloadLinkException(f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.")
elif "bad password" in str(str_3).lower():
raise DirectDownloadLinkException("ERROR: The password you entered is wrong!")
else:
raise DirectDownloadLinkException("ERROR: Error trying to generate Direct Link from 1fichier!")
else:
raise DirectDownloadLinkException("ERROR: Error trying to generate Direct Link from 1fichier!")
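The `url::password` convention parsed at the top of fichier() can be isolated into a small helper (the function name is hypothetical):

```python
def split_link_password(link: str):
    """Parse the 'url::password' convention used by fichier()."""
    if "::" in link:
        # everything after the last '::' is the password; spaces are allowed in it
        pswd = link.split("::")[-1]
        url = link.split("::")[-2]
    else:
        pswd = None
        url = link
    return url, pswd
```

This matches the usage hint in the error message above: `::` separates link and password, with no spaces around the separator itself.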
def solidfiles(url: str) -> str:
""" Solidfiles direct links generator
Based on https://github.com/Xonshiz/SolidFiles-Downloader
By https://github.com/Jusidama18 """
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36'
}
pageSource = requests.get(url, headers = headers).text
mainOptions = str(re.search(r'viewerOptions\'\,\ (.*?)\)\;', pageSource).group(1))
return json.loads(mainOptions)["downloadUrl"]
def krakenfiles(page_link: str) -> str:
""" krakenfiles direct links generator
Based on https://github.com/tha23rd/py-kraken
By https://github.com/junedkh """
page_resp = requests.session().get(page_link)
soup = BeautifulSoup(page_resp.text, "lxml")
try:
token = soup.find("input", id="dl-token")["value"]
except:
raise DirectDownloadLinkException(f"Page link is wrong: {page_link}")
hashes = [
item["data-file-hash"]
for item in soup.find_all("div", attrs={"data-file-hash": True})
]
if len(hashes) < 1:
raise DirectDownloadLinkException(
f"Hash not found for : {page_link}")
dl_hash = hashes[0]
payload = f'------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name="token"\r\n\r\n{token}\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW--'
headers = {
"content-type": "multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW",
"cache-control": "no-cache",
"hash": dl_hash,
}
dl_link_resp = requests.session().post(
f"https://krakenfiles.com/download/{dl_hash}", data=payload, headers=headers)
dl_link_json = dl_link_resp.json()
if "url" in dl_link_json:
return dl_link_json["url"]
else:
raise DirectDownloadLinkException(
f"Failed to acquire download URL from kraken for : {page_link}")
def useragent():
"""
Returns a random Android user-agent string
"""
useragents = BeautifulSoup(
requests.get(
'https://developers.whatismybrowser.com/'
'useragents/explore/operating_system_name/android/').content,
'lxml').findAll('td', {'class': 'useragent'})
user_agent = choice(useragents)
return user_agent.text

View File

@ -0,0 +1,82 @@
RAPHIELSCAPE PUBLIC LICENSE
Version 1.c, June 2019
Copyright (C) 2019 Raphielscape LLC.
Copyright (C) 2019 Devscapes Open Source Holding GmbH.
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
RAPHIELSCAPE PUBLIC LICENSE
A-1. DEFINITIONS
0. “This License” refers to version 1.c of the Raphielscape Public License.
1. “Copyright” also means copyright-like laws that apply to other kinds of works.
2. “The Work" refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”.
“Licensees” and “recipients” may be individuals or organizations.
3. To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission,
other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work
or a work “based on” the earlier work.
4. Source Form. The “source form” for a work means the preferred form of the work for making modifications to it.
“Object code” means any non-source form of a work.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and
(for an executable work) run the object code and to modify the work, including scripts to control those activities.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
5. "The author" refers to "author" of the code, which is the one that made the particular code which exists inside of
the Corresponding Source.
6. "Owner" refers to any parties which is made the early form of the Corresponding Source.
A-2. TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You must give any other recipients of the Work or Derivative Works a copy of this License; and
1. You must cause any modified files to carry prominent notices stating that You changed the files; and
2. You must retain, in the Source form of any Derivative Works that You distribute,
this license, all copyright, patent, trademark, authorships and attribution notices
from the Source form of the Work; and
3. Respecting the author and owner of works that are distributed in any way.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction,
or distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
B. DISCLAIMER OF WARRANTY
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C. REVISED VERSION OF THIS LICENSE
The Devscapes Open Source Holding GmbH. may publish revised and/or new versions of the
Raphielscape Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a
certain numbered version of the Raphielscape Public License "or any later version" applies to it,
you have the option of following the terms and conditions either of that numbered version or of
any later version published by the Devscapes Open Source Holding GmbH. If the Program does not specify a
version number of the Raphielscape Public License, you may choose any version ever published
by the Devscapes Open Source Holding GmbH.
END OF LICENSE

View File

@ -0,0 +1,27 @@
# An abstract class which will be inherited by the tool specific classes like aria2_helper or mega_download_helper
import threading
class MethodNotImplementedError(NotImplementedError):
def __init__(self):
super().__init__('Method not implemented')
class DownloadHelper:
def __init__(self):
self.name = '' # Name of the download; empty string if no download has been started
self.size = 0.0 # Size of the download
self.downloaded_bytes = 0.0 # Bytes downloaded
self.speed = 0.0 # Download speed in bytes per second
self.progress = 0.0
self.progress_string = '0.00%'
self.eta = 0 # Estimated time of download complete
self.eta_string = '0s' # Formatted ETA string
self._resource_lock = threading.Lock()
def add_download(self, link: str, path):
raise MethodNotImplementedError
def cancel_download(self):
# Returns None if successfully cancelled, else error string
raise MethodNotImplementedError
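A concrete helper is expected to subclass DownloadHelper and fill in the two methods; a toy subclass showing the contract (every name below is illustrative, not a real helper from this repo):

```python
import threading

class DummyDownloadHelper:
    """Illustrative helper shaped like DownloadHelper above."""
    def __init__(self):
        self.name = ''
        self.size = 0.0
        self.downloaded_bytes = 0.0
        self.progress = 0.0
        self.progress_string = '0.00%'
        self._resource_lock = threading.Lock()

    def add_download(self, link: str, path):
        with self._resource_lock:
            # Use the last path segment as a stand-in download name
            self.name = link.rsplit('/', 1)[-1]

    def update_progress(self, done: float, total: float):
        with self._resource_lock:
            self.downloaded_bytes = done
            self.size = total
            self.progress = done / total * 100 if total else 0.0
            self.progress_string = f"{self.progress:.2f}%"
```

The `_resource_lock` guards the status fields because status readers and the download thread touch them concurrently.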

View File

@ -0,0 +1,198 @@
from bot import LOGGER, MEGA_API_KEY, download_dict_lock, download_dict, MEGA_EMAIL_ID, MEGA_PASSWORD
import threading
from mega import (MegaApi, MegaListener, MegaRequest, MegaTransfer, MegaError)
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, sendStatusMessage
import os
from bot.helper.ext_utils.bot_utils import new_thread, get_mega_link_type, get_readable_file_size
from bot.helper.mirror_utils.status_utils.mega_download_status import MegaDownloadStatus
from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
from bot import MEGA_LIMIT, STOP_DUPLICATE, ZIP_UNZIP_LIMIT
import random
import string
class MegaAppListener(MegaListener):
_NO_EVENT_ON = (MegaRequest.TYPE_LOGIN,MegaRequest.TYPE_FETCH_NODES)
NO_ERROR = "no error"
def __init__(self, continue_event: threading.Event, listener):
self.continue_event = continue_event
self.node = None
self.public_node = None
self.listener = listener
self.uid = listener.uid
self.__bytes_transferred = 0
self.is_cancelled = False
self.__speed = 0
self.__name = ''
self.__size = 0
self.error = None
self.gid = ""
super(MegaAppListener, self).__init__()
@property
def speed(self):
"""Returns speed of the download in bytes/second"""
return self.__speed
@property
def name(self):
"""Returns name of the download"""
return self.__name
def setValues(self, name, size, gid):
self.__name = name
self.__size = size
self.gid = gid
@property
def size(self):
"""Size of download in bytes"""
return self.__size
@property
def downloaded_bytes(self):
return self.__bytes_transferred
def onRequestStart(self, api, request):
pass
def onRequestFinish(self, api, request, error):
if str(error).lower() != "no error":
self.error = error.copy()
return
request_type = request.getType()
if request_type == MegaRequest.TYPE_LOGIN:
api.fetchNodes()
elif request_type == MegaRequest.TYPE_GET_PUBLIC_NODE:
self.public_node = request.getPublicMegaNode()
elif request_type == MegaRequest.TYPE_FETCH_NODES:
LOGGER.info("Fetching Root Node.")
self.node = api.getRootNode()
LOGGER.info(f"Node Name: {self.node.getName()}")
if request_type not in self._NO_EVENT_ON or self.node and "cloud drive" not in self.node.getName().lower():
self.continue_event.set()
def onRequestTemporaryError(self, api, request, error: MegaError):
LOGGER.error(f'Mega Request error in {error}')
if not self.is_cancelled:
self.is_cancelled = True
self.listener.onDownloadError("RequestTempError: " + error.toString())
self.error = error.toString()
self.continue_event.set()
def onTransferStart(self, api: MegaApi, transfer: MegaTransfer):
pass
def onTransferUpdate(self, api: MegaApi, transfer: MegaTransfer):
if self.is_cancelled:
api.cancelTransfer(transfer, None)
return
self.__speed = transfer.getSpeed()
self.__bytes_transferred = transfer.getTransferredBytes()
def onTransferFinish(self, api: MegaApi, transfer: MegaTransfer, error):
try:
if self.is_cancelled:
self.continue_event.set()
elif transfer.isFinished() and (transfer.isFolderTransfer() or transfer.getFileName() == self.name):
self.listener.onDownloadComplete()
self.continue_event.set()
except Exception as e:
LOGGER.error(e)
def onTransferTemporaryError(self, api, transfer, error):
filen = transfer.getFileName()
state = transfer.getState()
errStr = error.toString()
LOGGER.error(f'Mega download error for file {filen} (state {state}): {errStr}')
if state in [1, 4]:
# Sometimes even the official MEGA client can't stream a node and raises a temporary failure.
# Don't break the transfer queue if the transfer is in the queued (1) or retrying (4) state [causes seg fault]
return
self.error = errStr
if not self.is_cancelled:
self.is_cancelled = True
self.listener.onDownloadError(f"TransferTempError: {errStr} ({filen})")
def cancel_download(self):
self.is_cancelled = True
self.listener.onDownloadError("Download cancelled by user")
class AsyncExecutor:
def __init__(self):
self.continue_event = threading.Event()
def do(self, function, args):
self.continue_event.clear()
function(*args)
self.continue_event.wait()
listeners = []
class MegaDownloadHelper:
def __init__(self):
pass
@staticmethod
@new_thread
def add_download(mega_link: str, path: str, listener):
executor = AsyncExecutor()
api = MegaApi(MEGA_API_KEY, None, None, 'telegram-mirror-bot')
global listeners
mega_listener = MegaAppListener(executor.continue_event, listener)
listeners.append(mega_listener)
api.addListener(mega_listener)
if MEGA_EMAIL_ID is not None and MEGA_PASSWORD is not None:
executor.do(api.login, (MEGA_EMAIL_ID, MEGA_PASSWORD))
link_type = get_mega_link_type(mega_link)
if link_type == "file":
LOGGER.info("File link. If the download doesn't start, check whether the link is still available")
executor.do(api.getPublicNode, (mega_link,))
node = mega_listener.public_node
else:
LOGGER.info("Folder link. If the download doesn't start, check whether the link is still available")
folder_api = MegaApi(MEGA_API_KEY, None, None, 'TgBot')
folder_api.addListener(mega_listener)
executor.do(folder_api.loginToFolder, (mega_link,))
node = folder_api.authorizeNode(mega_listener.node)
if mega_listener.error is not None:
return sendMessage(str(mega_listener.error), listener.bot, listener.update)
if STOP_DUPLICATE and not listener.isLeech:
LOGGER.info('Checking if file/folder already exists in Drive')
mname = node.getName()
if listener.isZip:
mname = mname + ".zip"
if not listener.extract:
gd = GoogleDriveHelper()
smsg, button = gd.drive_list(mname, True)
if smsg:
msg1 = "File/Folder is already available in Drive.\nHere are the search results:"
sendMarkup(msg1, listener.bot, listener.update, button)
executor.continue_event.set()
return
limit = None
if ZIP_UNZIP_LIMIT is not None and (listener.isZip or listener.extract):
msg3 = f'Failed, Zip/Unzip limit is {ZIP_UNZIP_LIMIT}GB.\nYour File/Folder size is {get_readable_file_size(api.getSize(node))}.'
limit = ZIP_UNZIP_LIMIT
elif MEGA_LIMIT is not None:
msg3 = f'Failed, Mega limit is {MEGA_LIMIT}GB.\nYour File/Folder size is {get_readable_file_size(api.getSize(node))}.'
limit = MEGA_LIMIT
if limit is not None:
LOGGER.info('Checking File/Folder Size...')
size = api.getSize(node)
if size > limit * 1024**3:
sendMessage(msg3, listener.bot, listener.update)
executor.continue_event.set()
return
with download_dict_lock:
download_dict[listener.uid] = MegaDownloadStatus(mega_listener, listener)
os.makedirs(path, exist_ok=True)
gid = ''.join(random.SystemRandom().choices(string.ascii_letters + string.digits, k=8))
mega_listener.setValues(node.getName(), api.getSize(node), gid)
sendStatusMessage(listener.update, listener.bot)
executor.do(api.startDownload,(node,path))
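The `AsyncExecutor` pattern above turns the MEGA SDK's callback-driven API into blocking calls: `do()` clears a `threading.Event`, fires the asynchronous request, and waits until a listener callback sets the event. A self-contained sketch of the same idea (the fake async API below is illustrative, not part of the SDK):

```python
import threading

class SyncExecutor:
    """Block the caller until a callback signals completion via an Event."""
    def __init__(self):
        self.continue_event = threading.Event()

    def do(self, function, args):
        self.continue_event.clear()
        function(*args)              # kicks off async work; a callback sets the event
        self.continue_event.wait()   # block until that happens

# A fake asynchronous API: does its work in a thread, then fires the callback.
def fake_async_login(result_box, on_finish):
    def work():
        result_box['logged_in'] = True
        on_finish()                  # the listener would call continue_event.set()
    threading.Thread(target=work).start()

executor = SyncExecutor()
result = {}
executor.do(fake_async_login, (result, executor.continue_event.set))
print(result['logged_in'])  # True once do() returns
```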


@ -0,0 +1,267 @@
# Implement By - @anasty17 (https://github.com/SlamDevs/slam-mirrorbot/commit/0bfba523f095ab1dccad431d72561e0e002e7a59)
# (c) https://github.com/SlamDevs/slam-mirrorbot
# All rights reserved
import os
import random
import string
import time
import logging
import shutil
import re
import qbittorrentapi as qba
from torrentool.api import Torrent
from telegram import InlineKeyboardMarkup
from telegram.ext import CallbackQueryHandler
from bot import download_dict, download_dict_lock, BASE_URL, dispatcher, get_client, TORRENT_DIRECT_LIMIT, ZIP_UNZIP_LIMIT, STOP_DUPLICATE
from bot.helper.mirror_utils.status_utils.qbit_download_status import QbDownloadStatus
from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, deleteMessage, sendStatusMessage
from bot.helper.ext_utils.bot_utils import setInterval, MirrorStatus, getDownloadByGid, get_readable_file_size, new_thread
from bot.helper.telegram_helper import button_build
LOGGER = logging.getLogger(__name__)
logging.getLogger('qbittorrentapi').setLevel(logging.ERROR)
logging.getLogger('requests').setLevel(logging.ERROR)
logging.getLogger('urllib3').setLevel(logging.ERROR)
class QbitTorrent:
def __init__(self):
self.update_interval = 4
self.client = get_client()
self.meta_time = time.time()
self.stalled_time = time.time()
self.sizeChecked = False
self.dupChecked = False
self.is_file = False
self.pincode = ""
self.get_info = 0
@new_thread
def add_torrent(self, link, dire, listener, qbitsel):
self.listener = listener
self.dire = dire
self.qbitsel = qbitsel
try:
if os.path.exists(link):
self.is_file = True
self.ext_hash = get_hash_file(link)
else:
self.ext_hash = get_hash_magnet(link)
tor_info = self.client.torrents_info(torrent_hashes=self.ext_hash)
if len(tor_info) > 0:
sendMessage("This torrent is already in the download list.", listener.bot, listener.update)
self.client.auth_log_out()
return
if self.is_file:
op = self.client.torrents_add(torrent_files=[link], save_path=dire)
os.remove(link)
else:
op = self.client.torrents_add(link, save_path=dire)
time.sleep(0.3)
if op.lower() == "ok.":
tor_info = self.client.torrents_info(torrent_hashes=self.ext_hash)
if len(tor_info) == 0:
while True:
if time.time() - self.meta_time >= 20:
sendMessage("The torrent was not added. Report this if you see the error again", listener.bot, listener.update)
self.client.torrents_delete(torrent_hashes=self.ext_hash, delete_files=True)
self.client.auth_log_out()
return False
tor_info = self.client.torrents_info(torrent_hashes=self.ext_hash)
if len(tor_info) > 0:
break
else:
sendMessage("This is an unsupported/invalid link.", listener.bot, listener.update)
self.client.torrents_delete(torrent_hashes=self.ext_hash, delete_files=True)
self.client.auth_log_out()
return
tor_info = tor_info[0]
self.ext_hash = tor_info.hash
gid = ''.join(random.SystemRandom().choices(string.ascii_letters + string.digits, k=14))
with download_dict_lock:
download_dict[listener.uid] = QbDownloadStatus(gid, listener, self.ext_hash, self.client)
LOGGER.info(f"QbitDownload started: {tor_info.name} {self.ext_hash}")
self.updater = setInterval(self.update_interval, self.update)
if BASE_URL is not None and qbitsel:
if not self.is_file:
meta = sendMessage("Downloading metadata. Wait, then you can select files, or mirror the torrent file instead if it has few seeders", listener.bot, listener.update)
while True:
tor_info = self.client.torrents_info(torrent_hashes=self.ext_hash)
if len(tor_info) == 0:
deleteMessage(listener.bot, meta)
return False
try:
tor_info = tor_info[0]
if tor_info.state in ("metaDL", "checkingResumeData"):
time.sleep(1)
else:
deleteMessage(listener.bot, meta)
break
except Exception:
deleteMessage(listener.bot, meta)
return False
self.client.torrents_pause(torrent_hashes=self.ext_hash)
for n in str(self.ext_hash):
if n.isdigit():
self.pincode += str(n)
if len(self.pincode) == 4:
break
buttons = button_build.ButtonMaker()
buttons.buildbutton("Select Files", f"{BASE_URL}/app/files/{self.ext_hash}")
buttons.sbutton("Pincode", f"pin {gid} {self.pincode}")
buttons.sbutton("Done Selecting", f"done {gid} {self.ext_hash}")
QBBUTTONS = InlineKeyboardMarkup(buttons.build_menu(2))
msg = "Your download is paused. Choose files, then press the Done Selecting button to start downloading."
sendMarkup(msg, listener.bot, listener.update, QBBUTTONS)
else:
sendStatusMessage(listener.update, listener.bot)
except qba.UnsupportedMediaType415Error as e:
LOGGER.error(str(e))
sendMessage(f"This is an unsupported/invalid link: {str(e)}", listener.bot, listener.update)
self.client.auth_log_out()
except Exception as e:
sendMessage(str(e), listener.bot, listener.update)
self.client.auth_log_out()
def update(self):
tor_info = self.client.torrents_info(torrent_hashes=self.ext_hash)
if len(tor_info) == 0:
self.get_info += 1
if self.get_info > 10:
self.client.auth_log_out()
self.updater.cancel()
return
try:
tor_info = tor_info[0]
if tor_info.state == "metaDL":
self.stalled_time = time.time()
if time.time() - self.meta_time >= 999999999: # timeout while downloading metadata
self.client.torrents_pause(torrent_hashes=self.ext_hash)
time.sleep(0.3)
self.listener.onDownloadError("Dead Torrent!")
self.client.torrents_delete(torrent_hashes=self.ext_hash)
self.client.auth_log_out()
self.updater.cancel()
elif tor_info.state == "downloading":
self.stalled_time = time.time()
if STOP_DUPLICATE and not self.listener.isLeech and not self.dupChecked and os.path.isdir(f'{self.dire}'):
LOGGER.info('Checking if file/folder already exists in Drive')
qbname = str(os.listdir(f'{self.dire}')[0])
if qbname.endswith('.!qB'):
qbname = os.path.splitext(qbname)[0]
if self.listener.isZip:
qbname = qbname + ".zip"
if not self.listener.extract:
gd = GoogleDriveHelper()
qbmsg, button = gd.drive_list(qbname, True)
if qbmsg:
msg = "File/Folder is already available in Drive."
self.client.torrents_pause(torrent_hashes=self.ext_hash)
time.sleep(0.3)
self.listener.onDownloadError(msg)
sendMarkup("Here are the search results:", self.listener.bot, self.listener.update, button)
self.client.torrents_delete(torrent_hashes=self.ext_hash)
self.client.auth_log_out()
self.updater.cancel()
return
self.dupChecked = True
if not self.sizeChecked:
limit = None
if ZIP_UNZIP_LIMIT is not None and (self.listener.isZip or self.listener.extract):
mssg = f'Zip/Unzip limit is {ZIP_UNZIP_LIMIT}GB'
limit = ZIP_UNZIP_LIMIT
elif TORRENT_DIRECT_LIMIT is not None:
mssg = f'Torrent limit is {TORRENT_DIRECT_LIMIT}GB'
limit = TORRENT_DIRECT_LIMIT
if limit is not None:
LOGGER.info('Checking File/Folder Size...')
time.sleep(1)
size = tor_info.size
if size > limit * 1024**3:
self.client.torrents_pause(torrent_hashes=self.ext_hash)
time.sleep(0.3)
self.listener.onDownloadError(f"{mssg}.\nYour File/Folder size is {get_readable_file_size(size)}")
self.client.torrents_delete(torrent_hashes=self.ext_hash)
self.client.auth_log_out()
self.updater.cancel()
self.sizeChecked = True
elif tor_info.state == "stalledDL":
if time.time() - self.stalled_time >= 999999999: # timeout after downloading metadata
self.client.torrents_pause(torrent_hashes=self.ext_hash)
time.sleep(0.3)
self.listener.onDownloadError("Dead Torrent!")
self.client.torrents_delete(torrent_hashes=self.ext_hash)
self.client.auth_log_out()
self.updater.cancel()
elif tor_info.state == "error":
self.client.torrents_pause(torrent_hashes=self.ext_hash)
time.sleep(0.3)
self.listener.onDownloadError("Not enough space on the device for this torrent")
self.client.torrents_delete(torrent_hashes=self.ext_hash)
self.client.auth_log_out()
self.updater.cancel()
elif tor_info.state != "checkingUP" and (tor_info.state == "uploading" or \
tor_info.state.lower().endswith("up")):
self.client.torrents_pause(torrent_hashes=self.ext_hash)
if self.qbitsel:
for dirpath, subdir, files in os.walk(f"{self.dire}", topdown=False):
for filee in files:
if filee.endswith(".!qB") or (filee.endswith('.parts') and filee.startswith('.')):
os.remove(os.path.join(dirpath, filee))
for folder in subdir:
if folder == ".unwanted":
shutil.rmtree(os.path.join(dirpath, folder))
for dirpath, subdir, files in os.walk(f"{self.dire}", topdown=False):
if not os.listdir(dirpath):
os.rmdir(dirpath)
self.listener.onDownloadComplete()
self.client.torrents_delete(torrent_hashes=self.ext_hash)
self.client.auth_log_out()
self.updater.cancel()
except (IndexError, NameError):
self.get_info += 1
if self.get_info > 10:
self.client.auth_log_out()
self.updater.cancel()
def get_confirm(update, context):
query = update.callback_query
user_id = query.from_user.id
data = query.data
data = data.split(" ")
qbdl = getDownloadByGid(data[1])
if qbdl is None:
query.answer(text="This task has been cancelled!", show_alert=True)
query.message.delete()
elif user_id != qbdl.listener.message.from_user.id:
query.answer(text="Don't waste your time!", show_alert=True)
elif data[0] == "pin":
query.answer(text=data[2], show_alert=True)
elif data[0] == "done":
query.answer()
qbdl.client.torrents_resume(torrent_hashes=data[2])
sendStatusMessage(qbdl.listener.update, qbdl.listener.bot)
query.message.delete()
def get_hash_magnet(mgt):
if mgt.startswith('magnet:'):
try:
mHash = re.search(r'xt=urn:btih:(.*)&dn=', mgt).group(1)
except AttributeError:
mHash = re.search(r'xt=urn:btih:(.*)', mgt).group(1)
return mHash.lower()
def get_hash_file(path):
tr = Torrent.from_file(path)
mgt = tr.magnet_link
return get_hash_magnet(mgt)
pin_handler = CallbackQueryHandler(get_confirm, pattern="pin", run_async=True)
done_handler = CallbackQueryHandler(get_confirm, pattern="done", run_async=True)
dispatcher.add_handler(pin_handler)
dispatcher.add_handler(done_handler)
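`get_hash_magnet` pulls the info-hash out of a magnet URI with two regexes, preferring the variant bounded by `&dn=`. A quick standalone illustration of that extraction (the sample link is made up):

```python
import re

def hash_from_magnet(mgt):
    # Prefer the pattern bounded by "&dn=", fall back to everything after btih:
    m = re.search(r'xt=urn:btih:(.*)&dn=', mgt) or re.search(r'xt=urn:btih:(.*)', mgt)
    return m.group(1).lower()

link = 'magnet:?xt=urn:btih:C12FE1C06BBA254A9DC9F519B335AA7C1367A88A&dn=example'
print(hash_from_magnet(link))  # c12fe1c06bba254a9dc9f519b335aa7c1367a88a
```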


@ -0,0 +1,120 @@
import logging
import threading
import time
from bot import LOGGER, download_dict, download_dict_lock, app, STOP_DUPLICATE
from .download_helper import DownloadHelper
from ..status_utils.telegram_download_status import TelegramDownloadStatus
from bot.helper.telegram_helper.message_utils import sendMarkup, sendStatusMessage
from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
global_lock = threading.Lock()
GLOBAL_GID = set()
logging.getLogger("pyrogram").setLevel(logging.WARNING)
class TelegramDownloadHelper(DownloadHelper):
def __init__(self, listener):
super().__init__()
self.__listener = listener
self.__resource_lock = threading.RLock()
self.__name = ""
self.__start_time = time.time()
self.__gid = ""
self._bot = app
self.__is_cancelled = False
@property
def gid(self):
with self.__resource_lock:
return self.__gid
@property
def download_speed(self):
with self.__resource_lock:
return self.downloaded_bytes / (time.time() - self.__start_time)
def __onDownloadStart(self, name, size, file_id):
with download_dict_lock:
download_dict[self.__listener.uid] = TelegramDownloadStatus(self, self.__listener)
with global_lock:
GLOBAL_GID.add(file_id)
with self.__resource_lock:
self.name = name
self.size = size
self.__gid = file_id
self.__listener.onDownloadStarted()
def __onDownloadProgress(self, current, total):
if self.__is_cancelled:
self.__onDownloadError('Cancelled by user!')
self._bot.stop_transmission()
return
with self.__resource_lock:
self.downloaded_bytes = current
try:
self.progress = current / self.size * 100
except ZeroDivisionError:
self.progress = 0
def __onDownloadError(self, error):
with global_lock:
try:
GLOBAL_GID.remove(self.gid)
except KeyError:
pass
self.__listener.onDownloadError(error)
def __onDownloadComplete(self):
with global_lock:
GLOBAL_GID.remove(self.gid)
self.__listener.onDownloadComplete()
def __download(self, message, path):
download = self._bot.download_media(
message,
progress = self.__onDownloadProgress,
file_name = path
)
if download is not None:
self.__onDownloadComplete()
elif not self.__is_cancelled:
self.__onDownloadError('Internal error occurred')
def add_download(self, message, path, filename):
_message = self._bot.get_messages(message.chat.id, reply_to_message_ids=message.message_id)
media = None
media_array = [_message.document, _message.video, _message.audio]
for i in media_array:
if i is not None:
media = i
break
if media is not None:
with global_lock:
# Avoid holding the global lock longer than necessary
download = media.file_id not in GLOBAL_GID
if filename == "":
name = media.file_name
else:
name = filename
path = path + name
if download:
if STOP_DUPLICATE and not self.__listener.isLeech:
LOGGER.info('Checking if file/folder already exists in Drive...')
gd = GoogleDriveHelper()
smsg, button = gd.drive_list(name, True, True)
if smsg:
sendMarkup("File/Folder is already available in Drive.\nHere are the search results:", self.__listener.bot, self.__listener.update, button)
return
sendStatusMessage(self.__listener.update, self.__listener.bot)
self.__onDownloadStart(name, media.file_size, media.file_id)
LOGGER.info(f'Downloading Telegram file with id: {media.file_id}')
threading.Thread(target=self.__download, args=(_message, path)).start()
else:
self.__onDownloadError('File already being downloaded!')
else:
self.__onDownloadError('No document in the replied message')
def cancel_download(self):
LOGGER.info(f'Cancelling download on user request: {self.gid}')
self.__is_cancelled = True
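The `GLOBAL_GID` set plus `global_lock` above deduplicate concurrent downloads of the same Telegram file. The helper checks membership under the lock but registers the id later in `__onDownloadStart`; a sketch that performs the check-and-register atomically instead (names here are illustrative):

```python
import threading

global_lock = threading.Lock()
ACTIVE_IDS = set()

def try_start(file_id):
    """Atomically register file_id; return False if it is already downloading."""
    with global_lock:
        if file_id in ACTIVE_IDS:
            return False
        ACTIVE_IDS.add(file_id)
        return True

print(try_start('abc'))  # True: first request wins
print(try_start('abc'))  # False: a download with this id is already running
```

Doing the check and the insert inside a single `with global_lock:` block closes the window in which two threads could both see the id as absent.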


@ -0,0 +1,178 @@
import random
import string
import time
import logging
import re
import threading
from .download_helper import DownloadHelper
from yt_dlp import YoutubeDL, DownloadError
from bot import download_dict_lock, download_dict
from bot.helper.telegram_helper.message_utils import sendStatusMessage
from ..status_utils.youtube_dl_download_status import YoutubeDLDownloadStatus
LOGGER = logging.getLogger(__name__)
class MyLogger:
def __init__(self, obj):
self.obj = obj
def debug(self, msg):
# Hack to fix changing extension
match = re.search(r'.Merger..Merging formats into..(.*?).$', msg) # To mkv
if not match and not self.obj.is_playlist:
match = re.search(r'.ExtractAudio..Destination..(.*?)$', msg) # To mp3
if match and not self.obj.is_playlist:
newname = match.group(1)
newname = newname.split("/")[-1]
self.obj.name = newname
@staticmethod
def warning(msg):
LOGGER.warning(msg)
@staticmethod
def error(msg):
LOGGER.error(msg)
class YoutubeDLHelper(DownloadHelper):
def __init__(self, listener):
super().__init__()
self.name = ""
self.__start_time = time.time()
self.__listener = listener
self.__gid = ""
self.__download_speed = 0
self.downloaded_bytes = 0
self.size = 0
self.is_playlist = False
self.last_downloaded = 0
self.is_cancelled = False
self.downloading = False
self.__resource_lock = threading.RLock()
self.opts = {'progress_hooks': [self.__onDownloadProgress],
'logger': MyLogger(self),
'usenetrc': True,
'continuedl': True,
'embedsubtitles': True,
'prefer_ffmpeg': True,
'skip_playlist_after_errors': 10,
'cookiefile': 'cookies.txt' }
@property
def download_speed(self):
with self.__resource_lock:
return self.__download_speed
@property
def gid(self):
with self.__resource_lock:
return self.__gid
def __onDownloadProgress(self, d):
self.downloading = True
if self.is_cancelled:
raise ValueError("Cancelling Download..")
if d['status'] == "finished":
if self.is_playlist:
self.last_downloaded = 0
elif d['status'] == "downloading":
with self.__resource_lock:
self.__download_speed = d['speed']
try:
tbyte = d['total_bytes']
except KeyError:
tbyte = d['total_bytes_estimate']
if self.is_playlist:
downloadedBytes = d['downloaded_bytes']
chunk_size = downloadedBytes - self.last_downloaded
self.last_downloaded = downloadedBytes
self.downloaded_bytes += chunk_size
else:
self.size = tbyte
self.downloaded_bytes = d['downloaded_bytes']
try:
self.progress = (self.downloaded_bytes / self.size) * 100
except ZeroDivisionError:
pass
def __onDownloadStart(self):
with download_dict_lock:
download_dict[self.__listener.uid] = YoutubeDLDownloadStatus(self, self.__listener)
def __onDownloadComplete(self):
self.__listener.onDownloadComplete()
def onDownloadError(self, error):
self.__listener.onDownloadError(error)
def extractMetaData(self, link, name, get_info=False):
if get_info:
self.opts['playlist_items'] = '0'
with YoutubeDL(self.opts) as ydl:
try:
result = ydl.extract_info(link, download=False)
if get_info:
return result
realName = ydl.prepare_filename(result)
except DownloadError as e:
if get_info:
raise e
self.onDownloadError(str(e))
return
if 'entries' in result:
for v in result['entries']:
try:
self.size += v['filesize_approx']
except KeyError:
pass
self.is_playlist = True
if name == "":
self.name = str(realName).split(f" [{result['id']}]")[0]
else:
self.name = name
else:
ext = realName.split('.')[-1]
if name == "":
self.name = str(realName).split(f" [{result['id']}]")[0] + '.' + ext
else:
self.name = f"{name}.{ext}"
def __download(self, link):
try:
with YoutubeDL(self.opts) as ydl:
ydl.download([link])
self.__onDownloadComplete()
except DownloadError as e:
self.onDownloadError(str(e))
except ValueError:
self.onDownloadError("Download Cancelled by User!")
def add_download(self, link, path, name, qual):
if "hotstar" in link or "sonyliv" in link:
self.opts['geo_bypass_country'] = 'IN'
self.__gid = ''.join(random.SystemRandom().choices(string.ascii_letters + string.digits, k=10))
self.__onDownloadStart()
sendStatusMessage(self.__listener.update, self.__listener.bot)
self.opts['format'] = qual
if qual == 'ba/b':
self.opts['postprocessors'] = [{'key': 'FFmpegExtractAudio','preferredcodec': 'mp3','preferredquality': '320'}]
LOGGER.info(f"Downloading with YT-DL: {link}")
self.extractMetaData(link, name)
if self.is_cancelled:
return
if not self.is_playlist:
self.opts['outtmpl'] = f"{path}/{self.name}"
else:
self.opts['outtmpl'] = f"{path}/{self.name}/%(title)s.%(ext)s"
self.__download(link)
def cancel_download(self):
self.is_cancelled = True
if not self.downloading:
self.onDownloadError("Download Cancelled by User!")
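For playlists, `__onDownloadProgress` cannot rely on `downloaded_bytes` alone, because yt-dlp resets that counter for each entry; the helper therefore accumulates the delta since the previous hook call. An isolated sketch of that bookkeeping (the dicts below stand in for yt-dlp's hook payload):

```python
class PlaylistProgress:
    def __init__(self):
        self.downloaded_bytes = 0   # running total across all entries
        self.last_downloaded = 0    # bytes reported at the previous hook call

    def hook(self, d):
        if d['status'] == 'finished':
            self.last_downloaded = 0            # next entry starts counting from zero
        elif d['status'] == 'downloading':
            chunk = d['downloaded_bytes'] - self.last_downloaded
            self.last_downloaded = d['downloaded_bytes']
            self.downloaded_bytes += chunk

p = PlaylistProgress()
p.hook({'status': 'downloading', 'downloaded_bytes': 100})
p.hook({'status': 'downloading', 'downloaded_bytes': 250})
p.hook({'status': 'finished'})
p.hook({'status': 'downloading', 'downloaded_bytes': 50})   # second playlist entry
print(p.downloaded_bytes)  # 300
```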


@ -0,0 +1 @@


@ -0,0 +1,98 @@
from bot import aria2, DOWNLOAD_DIR, LOGGER
from bot.helper.ext_utils.bot_utils import MirrorStatus
from .status import Status
def get_download(gid):
return aria2.get_download(gid)
class AriaDownloadStatus(Status):
def __init__(self, gid, listener):
super().__init__()
self.__gid = gid
self.__download = get_download(self.__gid)
self.__uid = listener.uid
self.__listener = listener
self.message = listener.message
def __update(self):
self.__download = get_download(self.__gid)
download = self.__download
if download.followed_by_ids:
self.__gid = download.followed_by_ids[0]
def progress(self):
"""
Calculates the progress of the mirror (upload or download)
:return: returns progress in percentage
"""
self.__update()
return self.__download.progress_string()
def size_raw(self):
"""
Gets total size of the mirror file/folder
:return: total size of mirror
"""
return self.aria_download().total_length
def processed_bytes(self):
return self.aria_download().completed_length
def speed(self):
return self.aria_download().download_speed_string()
def name(self):
return self.aria_download().name
def path(self):
return f"{DOWNLOAD_DIR}{self.__uid}"
def size(self):
return self.aria_download().total_length_string()
def eta(self):
return self.aria_download().eta_string()
def status(self):
download = self.aria_download()
if download.is_waiting:
return MirrorStatus.STATUS_WAITING
elif download.has_failed:
return MirrorStatus.STATUS_FAILED
else:
return MirrorStatus.STATUS_DOWNLOADING
def aria_download(self):
self.__update()
return self.__download
def download(self):
return self
def getListener(self):
return self.__listener
def uid(self):
return self.__uid
def gid(self):
self.__update()
return self.__gid
def cancel_download(self):
LOGGER.info(f"Cancelling Download: {self.name()}")
download = self.aria_download()
if download.is_waiting:
self.__listener.onDownloadError("Cancelled by user")
aria2.remove([download], force=True)
return
if len(download.followed_by_ids) != 0:
downloads = aria2.get_downloads(download.followed_by_ids)
self.__listener.onDownloadError('Download stopped by user!')
aria2.remove(downloads, force=True)
aria2.remove([download], force=True)
return
self.__listener.onDownloadError('Download stopped by user!')
aria2.remove([download], force=True)


@ -0,0 +1,60 @@
# Implement By - @anasty17 (https://github.com/SlamDevs/slam-mirrorbot/commit/80d33430715b4296cd253f62cefc089a81937ebf)
# (c) https://github.com/SlamDevs/slam-mirrorbot
# All rights reserved
from .status import Status
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
class CloneStatus(Status):
def __init__(self, obj, size, update, gid):
self.cobj = obj
self.__csize = size
self.message = update.message
self.__cgid = gid
def processed_bytes(self):
return self.cobj.transferred_size
def size_raw(self):
return self.__csize
def size(self):
return get_readable_file_size(self.__csize)
def status(self):
return MirrorStatus.STATUS_CLONING
def name(self):
return self.cobj.name
def gid(self) -> str:
return self.__cgid
def progress_raw(self):
try:
return self.cobj.transferred_size / self.__csize * 100
except ZeroDivisionError:
return 0
def progress(self):
return f'{round(self.progress_raw(), 2)}%'
def speed_raw(self):
"""
:return: Download speed in Bytes/Seconds
"""
return self.cobj.cspeed()
def speed(self):
return f'{get_readable_file_size(self.speed_raw())}/s'
def eta(self):
try:
seconds = (self.__csize - self.cobj.transferred_size) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def download(self):
return self.cobj


@ -0,0 +1,36 @@
from .status import Status
from bot.helper.ext_utils.bot_utils import get_readable_file_size, MirrorStatus
class ExtractStatus(Status):
def __init__(self, name, path, size):
self.__name = name
self.__path = path
self.__size = size
# The progress of the extract function cannot be tracked, so we just return dummy values.
# If this becomes possible in the future, we should implement it
def progress(self):
return '0'
def speed(self):
return '0'
def name(self):
return self.__name
def path(self):
return self.__path
def size(self):
return get_readable_file_size(self.__size)
def eta(self):
return '0s'
def status(self):
return MirrorStatus.STATUS_EXTRACTING
def processed_bytes(self):
return 0


@ -0,0 +1,65 @@
# Implement By - @anasty17 (https://github.com/SlamDevs/slam-mirrorbot/pull/220)
# (c) https://github.com/SlamDevs/slam-mirrorbot
# All rights reserved
from .status import Status
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
from bot import DOWNLOAD_DIR
class DownloadStatus(Status):
def __init__(self, obj, size, listener, gid):
self.dobj = obj
self.__dsize = size
self.uid = listener.uid
self.message = listener.message
self.__dgid = gid
def path(self):
return f"{DOWNLOAD_DIR}{self.uid}"
def processed_bytes(self):
return self.dobj.downloaded_bytes
def size_raw(self):
return self.__dsize
def size(self):
return get_readable_file_size(self.__dsize)
def status(self):
return MirrorStatus.STATUS_DOWNLOADING
def name(self):
return self.dobj.name
def gid(self) -> str:
return self.__dgid
def progress_raw(self):
try:
return self.dobj.downloaded_bytes / self.__dsize * 100
except ZeroDivisionError:
return 0
def progress(self):
return f'{round(self.progress_raw(), 2)}%'
def speed_raw(self):
"""
:return: Download speed in Bytes/Seconds
"""
return self.dobj.dspeed()
def speed(self):
return f'{get_readable_file_size(self.speed_raw())}/s'
def eta(self):
try:
seconds = (self.__dsize - self.dobj.downloaded_bytes) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def download(self):
return self.dobj


@ -0,0 +1,30 @@
class MirrorListeners:
def __init__(self, context, update):
self.bot = context
self.update = update
self.message = update.message
self.uid = self.message.message_id
def onDownloadStarted(self):
raise NotImplementedError
def onDownloadProgress(self):
raise NotImplementedError
def onDownloadComplete(self):
raise NotImplementedError
def onDownloadError(self, error: str):
raise NotImplementedError
def onUploadStarted(self):
raise NotImplementedError
def onUploadProgress(self):
raise NotImplementedError
def onUploadComplete(self, link: str):
raise NotImplementedError
def onUploadError(self, error: str):
raise NotImplementedError


@ -0,0 +1,62 @@
from bot.helper.ext_utils.bot_utils import get_readable_file_size, MirrorStatus, get_readable_time
from bot import DOWNLOAD_DIR
from .status import Status
class MegaDownloadStatus(Status):
def __init__(self, obj, listener):
self.uid = obj.uid
self.listener = listener
self.obj = obj
self.message = listener.message
def name(self) -> str:
return self.obj.name
def progress_raw(self):
try:
return round(self.processed_bytes() / self.obj.size * 100,2)
except ZeroDivisionError:
return 0.0
def progress(self):
"""Progress of download in percentage"""
return f"{self.progress_raw()}%"
def status(self) -> str:
return MirrorStatus.STATUS_DOWNLOADING
def processed_bytes(self):
return self.obj.downloaded_bytes
def eta(self):
try:
seconds = (self.size_raw() - self.processed_bytes()) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def size_raw(self):
return self.obj.size
def size(self) -> str:
return get_readable_file_size(self.size_raw())
def downloaded(self) -> str:
return get_readable_file_size(self.obj.downloaded_bytes)
def speed_raw(self):
return self.obj.speed
def speed(self) -> str:
return f'{get_readable_file_size(self.speed_raw())}/s'
def gid(self) -> str:
return self.obj.gid
def path(self) -> str:
return f"{DOWNLOAD_DIR}{self.uid}"
def download(self):
return self.obj


@ -0,0 +1,83 @@
# Implement By - @anasty17 (https://github.com/SlamDevs/slam-mirrorbot/commit/0bfba523f095ab1dccad431d72561e0e002e7a59)
# (c) https://github.com/SlamDevs/slam-mirrorbot
# All rights reserved
from bot import DOWNLOAD_DIR, LOGGER
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
from .status import Status
from time import sleep
class QbDownloadStatus(Status):
def __init__(self, gid, listener, qbhash, client):
super().__init__()
self.__gid = gid
self.__hash = qbhash
self.client = client
self.__uid = listener.uid
self.listener = listener
self.message = listener.message
def progress(self):
"""
Calculates the progress of the mirror (upload or download)
:return: returns progress in percentage
"""
return f'{round(self.torrent_info().progress*100,2)}%'
def size_raw(self):
"""
Gets total size of the mirror file/folder
:return: total size of mirror
"""
return self.torrent_info().size
def processed_bytes(self):
return self.torrent_info().downloaded
def speed(self):
return f"{get_readable_file_size(self.torrent_info().dlspeed)}/s"
def name(self):
return self.torrent_info().name
def path(self):
return f"{DOWNLOAD_DIR}{self.__uid}"
def size(self):
return get_readable_file_size(self.torrent_info().size)
def eta(self):
return get_readable_time(self.torrent_info().eta)
def status(self):
download = self.torrent_info().state
if download == "queuedDL":
return MirrorStatus.STATUS_WAITING
elif download in ["metaDL", "checkingResumeData"]:
return MirrorStatus.STATUS_DOWNLOADING + " (Metadata)"
elif download == "pausedDL":
return MirrorStatus.STATUS_PAUSE
else:
return MirrorStatus.STATUS_DOWNLOADING
def torrent_info(self):
return self.client.torrents_info(torrent_hashes=self.__hash)[0]
def download(self):
return self
def uid(self):
return self.__uid
def gid(self):
return self.__gid
def cancel_download(self):
LOGGER.info(f"Cancelling Download: {self.name()}")
self.client.torrents_pause(torrent_hashes=self.__hash)
sleep(0.3)
self.listener.onDownloadError('Download stopped by user!')
self.client.torrents_delete(torrent_hashes=self.__hash)


@ -0,0 +1,33 @@
from .status import Status
from bot.helper.ext_utils.bot_utils import get_readable_file_size, MirrorStatus
class SplitStatus(Status):
def __init__(self, name, path, size):
self.__name = name
self.__path = path
self.__size = size
def progress(self):
return '0'
def speed(self):
return '0'
def name(self):
return self.__name
def path(self):
return self.__path
def size(self):
return get_readable_file_size(self.__size)
def eta(self):
return '0s'
def status(self):
return MirrorStatus.STATUS_SPLITTING
def processed_bytes(self):
return 0

@@ -0,0 +1,40 @@
# Generic status class. All other status classes must inherit this class
class Status:
def progress(self):
"""
Calculates the progress of the mirror (upload or download)
:return: progress in percentage
"""
raise NotImplementedError
def speed(self):
""":return: speed in bytes per second"""
raise NotImplementedError
def name(self):
""":return: name of the file/directory being processed"""
raise NotImplementedError
def path(self):
""":return: path of the file/directory"""
raise NotImplementedError
def size(self):
""":return: human-readable size of the file/folder"""
raise NotImplementedError
def eta(self):
""":return: ETA for the process to complete"""
raise NotImplementedError
def status(self):
""":return: string describing what this status object is tracking (upload/download/something else)"""
raise NotImplementedError
def processed_bytes(self):
""":return: size of the file that has been processed (downloaded/uploaded/archived)"""
raise NotImplementedError
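Each concrete status class fills in this contract with real transfer numbers. As a minimal sketch of how a subclass implements it (assuming a fixed-size transfer; `DummyStatus` is a hypothetical illustration, not part of the bot):

```python
# Hypothetical minimal implementation of the Status contract, shown without
# the real base class so the sketch is self-contained.
class DummyStatus:
    def __init__(self, done, total):
        self._done = done    # bytes processed so far
        self._total = total  # total bytes

    def processed_bytes(self):
        return self._done

    def progress(self):
        # Same f-string style as the bot's concrete status classes.
        return f'{round(self._done / self._total * 100, 2)}%'

s = DummyStatus(512, 2048)
```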

@@ -0,0 +1,56 @@
from bot import DOWNLOAD_DIR
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
from .status import Status
class TelegramDownloadStatus(Status):
def __init__(self, obj, listener):
self.obj = obj
self.uid = listener.uid
self.message = listener.message
def gid(self):
return self.obj.gid
def path(self):
return f"{DOWNLOAD_DIR}{self.uid}"
def processed_bytes(self):
return self.obj.downloaded_bytes
def size_raw(self):
return self.obj.size
def size(self):
return get_readable_file_size(self.size_raw())
def status(self):
return MirrorStatus.STATUS_DOWNLOADING
def name(self):
return self.obj.name
def progress_raw(self):
return self.obj.progress
def progress(self):
return f'{round(self.progress_raw(), 2)}%'
def speed_raw(self):
"""
:return: Download speed in Bytes/Seconds
"""
return self.obj.download_speed
def speed(self):
return f'{get_readable_file_size(self.speed_raw())}/s'
def eta(self):
try:
seconds = (self.size_raw() - self.processed_bytes()) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def download(self):
return self.obj
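The `eta` method above follows a pattern repeated across these status classes: remaining bytes divided by current speed, guarded against division by zero. A standalone sketch of that arithmetic (`eta_seconds` is an illustrative name, not a bot helper):

```python
def eta_seconds(size_raw, processed, speed):
    # Remaining bytes over current speed; None when the transfer is stalled.
    try:
        return (size_raw - processed) / speed
    except ZeroDivisionError:
        return None
```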

@@ -0,0 +1,65 @@
# Implemented by @anasty17 (https://github.com/SlamDevs/slam-mirrorbot/commit/d888a1e7237f4633c066f7c2bbfba030b83ad616)
# (c) https://github.com/SlamDevs/slam-mirrorbot
# All rights reserved
from .status import Status
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
from bot import DOWNLOAD_DIR
class TgUploadStatus(Status):
def __init__(self, obj, size, gid, listener):
self.obj = obj
self.__size = size
self.uid = listener.uid
self.message = listener.message
self.__gid = gid
def path(self):
return f"{DOWNLOAD_DIR}{self.uid}"
def processed_bytes(self):
return self.obj.uploaded_bytes
def size_raw(self):
return self.__size
def size(self):
return get_readable_file_size(self.__size)
def status(self):
return MirrorStatus.STATUS_UPLOADING
def name(self):
return self.obj.name
def progress_raw(self):
try:
return self.obj.uploaded_bytes / self.__size * 100
except ZeroDivisionError:
return 0
def progress(self):
return f'{round(self.progress_raw(), 2)}%'
def speed_raw(self):
"""
:return: Upload speed in Bytes/Seconds
"""
return self.obj.speed()
def speed(self):
return f'{get_readable_file_size(self.speed_raw())}/s'
def eta(self):
try:
seconds = (self.__size - self.obj.uploaded_bytes) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def gid(self) -> str:
return self.__gid
def download(self):
return self.obj

@@ -0,0 +1,61 @@
from .status import Status
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
from bot import DOWNLOAD_DIR
class UploadStatus(Status):
def __init__(self, obj, size, gid, listener):
self.obj = obj
self.__size = size
self.uid = listener.uid
self.message = listener.message
self.__gid = gid
def path(self):
return f"{DOWNLOAD_DIR}{self.uid}"
def processed_bytes(self):
return self.obj.uploaded_bytes
def size_raw(self):
return self.__size
def size(self):
return get_readable_file_size(self.__size)
def status(self):
return MirrorStatus.STATUS_UPLOADING
def name(self):
return self.obj.name
def progress_raw(self):
try:
return self.obj.uploaded_bytes / self.__size * 100
except ZeroDivisionError:
return 0
def progress(self):
return f'{round(self.progress_raw(), 2)}%'
def speed_raw(self):
"""
:return: Upload speed in Bytes/Seconds
"""
return self.obj.speed()
def speed(self):
return f'{get_readable_file_size(self.speed_raw())}/s'
def eta(self):
try:
seconds = (self.__size - self.obj.uploaded_bytes) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def gid(self) -> str:
return self.__gid
def download(self):
return self.obj

@@ -0,0 +1,59 @@
from bot import DOWNLOAD_DIR
from bot.helper.ext_utils.bot_utils import MirrorStatus, get_readable_file_size, get_readable_time
from .status import Status
from bot.helper.ext_utils.fs_utils import get_path_size
class YoutubeDLDownloadStatus(Status):
def __init__(self, obj, listener):
self.obj = obj
self.uid = listener.uid
self.message = listener.message
def gid(self):
return self.obj.gid
def path(self):
return f"{DOWNLOAD_DIR}{self.uid}"
def processed_bytes(self):
if self.obj.downloaded_bytes != 0:
return self.obj.downloaded_bytes
else:
return get_path_size(f"{DOWNLOAD_DIR}{self.uid}")
def size_raw(self):
return self.obj.size
def size(self):
return get_readable_file_size(self.size_raw())
def status(self):
return MirrorStatus.STATUS_DOWNLOADING
def name(self):
return self.obj.name
def progress_raw(self):
return self.obj.progress
def progress(self):
return f'{round(self.progress_raw(), 2)}%'
def speed_raw(self):
"""
:return: Download speed in Bytes/Seconds
"""
return self.obj.download_speed
def speed(self):
return f'{get_readable_file_size(self.speed_raw())}/s'
def eta(self):
try:
seconds = (self.size_raw() - self.processed_bytes()) / self.speed_raw()
return f'{get_readable_time(seconds)}'
except ZeroDivisionError:
return '-'
def download(self):
return self.obj

@@ -0,0 +1,36 @@
from .status import Status
from bot.helper.ext_utils.bot_utils import get_readable_file_size, MirrorStatus
class ZipStatus(Status):
def __init__(self, name, path, size):
self.__name = name
self.__path = path
self.__size = size
# The progress of the zip function cannot be tracked, so dummy values are returned.
# If this becomes possible in the future, it should be implemented.
def progress(self):
return '0'
def speed(self):
return '0'
def name(self):
return self.__name
def path(self):
return self.__path
def size(self):
return get_readable_file_size(self.__size)
def eta(self):
return '0s'
def status(self):
return MirrorStatus.STATUS_ARCHIVING
def processed_bytes(self):
return 0

@@ -0,0 +1 @@

File diff suppressed because it is too large

@@ -0,0 +1,174 @@
import os
import logging
import time
import threading
from pyrogram.errors import FloodWait
from bot import app, DOWNLOAD_DIR, AS_DOCUMENT, AS_DOC_USERS, AS_MEDIA_USERS, CUSTOM_FILENAME
from bot.helper.ext_utils.fs_utils import take_ss, get_media_info
LOGGER = logging.getLogger(__name__)
logging.getLogger("pyrogram").setLevel(logging.ERROR)
VIDEO_SUFFIXES = ("MKV", "MP4", "MOV", "WMV", "3GP", "MPG", "WEBM", "AVI", "FLV", "M4V")
AUDIO_SUFFIXES = ("MP3", "M4A", "M4B", "FLAC", "WAV", "AIF", "OGG", "AAC", "DTS", "MID", "AMR", "MKA")
IMAGE_SUFFIXES = ("JPG", "JPX", "PNG", "GIF", "WEBP", "CR2", "TIF", "BMP", "JXR", "PSD", "ICO", "HEIC", "JPEG")
class TgUploader:
def __init__(self, name=None, listener=None):
self.__listener = listener
self.name = name
self.__app = app
self.total_bytes = 0
self.uploaded_bytes = 0
self.last_uploaded = 0
self.start_time = time.time()
self.__resource_lock = threading.RLock()
self.is_cancelled = False
self.chat_id = listener.message.chat.id
self.message_id = listener.uid
self.user_id = listener.message.from_user.id
self.as_doc = AS_DOCUMENT
self.thumb = f"Thumbnails/{self.user_id}.jpg"
self.sent_msg = self.__app.get_messages(self.chat_id, self.message_id)
self.msgs_dict = {}
self.corrupted = 0
def upload(self):
path = f"{DOWNLOAD_DIR}{self.message_id}"
self.user_settings()
for dirpath, subdir, files in sorted(os.walk(path)):
for filee in sorted(files):
if self.is_cancelled:
return
if filee.endswith('.torrent'):
continue
up_path = os.path.join(dirpath, filee)
fsize = os.path.getsize(up_path)
if fsize == 0:
self.corrupted += 1
continue
self.upload_file(up_path, filee, dirpath)
if self.is_cancelled:
return
self.msgs_dict[filee] = self.sent_msg.message_id
self.last_uploaded = 0
time.sleep(1.5)
LOGGER.info(f"Leech Done: {self.name}")
self.__listener.onUploadComplete(self.name, None, self.msgs_dict, None, self.corrupted)
def upload_file(self, up_path, filee, dirpath):
if CUSTOM_FILENAME is not None:
cap_mono = f"{CUSTOM_FILENAME} <code>{filee}</code>"
filee = f"{CUSTOM_FILENAME} {filee}"
new_path = os.path.join(dirpath, filee)
os.rename(up_path, new_path)
up_path = new_path
else:
cap_mono = f"<code>{filee}</code>"
notMedia = False
thumb = self.thumb
try:
if not self.as_doc:
duration = 0
if filee.upper().endswith(VIDEO_SUFFIXES):
duration = get_media_info(up_path)[0]
if thumb is None:
thumb = take_ss(up_path)
if self.is_cancelled:
if self.thumb is None and thumb is not None and os.path.lexists(thumb):
os.remove(thumb)
return
if not filee.upper().endswith(("MKV", "MP4")):
filee = os.path.splitext(filee)[0] + '.mp4'
new_path = os.path.join(dirpath, filee)
os.rename(up_path, new_path)
up_path = new_path
self.sent_msg = self.sent_msg.reply_video(video=up_path,
quote=True,
caption=cap_mono,
parse_mode="html",
duration=duration,
width=480,
height=320,
thumb=thumb,
supports_streaming=True,
disable_notification=True,
progress=self.upload_progress)
elif filee.upper().endswith(AUDIO_SUFFIXES):
duration, artist, title = get_media_info(up_path)
self.sent_msg = self.sent_msg.reply_audio(audio=up_path,
quote=True,
caption=cap_mono,
parse_mode="html",
duration=duration,
performer=artist,
title=title,
thumb=thumb,
disable_notification=True,
progress=self.upload_progress)
elif filee.upper().endswith(IMAGE_SUFFIXES):
self.sent_msg = self.sent_msg.reply_photo(photo=up_path,
quote=True,
caption=cap_mono,
parse_mode="html",
disable_notification=True,
progress=self.upload_progress)
else:
notMedia = True
if self.as_doc or notMedia:
if filee.upper().endswith(VIDEO_SUFFIXES) and thumb is None:
thumb = take_ss(up_path)
if self.is_cancelled:
if self.thumb is None and thumb is not None and os.path.lexists(thumb):
os.remove(thumb)
return
self.sent_msg = self.sent_msg.reply_document(document=up_path,
quote=True,
thumb=thumb,
caption=cap_mono,
parse_mode="html",
disable_notification=True,
progress=self.upload_progress)
except FloodWait as f:
LOGGER.info(f)
time.sleep(f.x)
except Exception as e:
LOGGER.error(str(e))
self.is_cancelled = True
self.__listener.onUploadError(str(e))
if self.thumb is None and thumb is not None and os.path.lexists(thumb):
os.remove(thumb)
if not self.is_cancelled:
os.remove(up_path)
def upload_progress(self, current, total):
if self.is_cancelled:
self.__app.stop_transmission()
return
with self.__resource_lock:
chunk_size = current - self.last_uploaded
self.last_uploaded = current
self.uploaded_bytes += chunk_size
def user_settings(self):
if self.user_id in AS_DOC_USERS:
self.as_doc = True
elif self.user_id in AS_MEDIA_USERS:
self.as_doc = False
if not os.path.lexists(self.thumb):
self.thumb = None
def speed(self):
try:
return self.uploaded_bytes / (time.time() - self.start_time)
except ZeroDivisionError:
return 0
def cancel_download(self):
self.is_cancelled = True
LOGGER.info(f"Cancelling Upload: {self.name}")
self.__listener.onUploadError('Your upload has been stopped!')
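Pyrogram's progress callback reports `current` as a cumulative byte count per file, which is why `upload_progress` stores `last_uploaded` and accumulates only the delta, and why `upload` resets `last_uploaded` to 0 between files. A self-contained sketch of that bookkeeping (the class name is illustrative):

```python
class ProgressTracker:
    def __init__(self):
        self.last = 0   # cumulative count reported for the current file
        self.total = 0  # bytes uploaded across all files

    def update(self, current):
        # 'current' is cumulative within one file; add only the new chunk.
        self.total += current - self.last
        self.last = current

    def next_file(self):
        # Mirrors 'self.last_uploaded = 0' between files in TgUploader.upload.
        self.last = 0
```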

@@ -0,0 +1 @@

@@ -0,0 +1,44 @@
class _BotCommands:
def __init__(self):
self.StartCommand = 'start'
self.MirrorCommand = 'mirror'
self.UnzipMirrorCommand = 'unzipmirror'
self.ZipMirrorCommand = 'zipmirror'
self.CancelMirror = 'cancel'
self.CancelAllCommand = 'cancelall'
self.ListCommand = 'list'
self.SearchCommand = 'search'
self.StatusCommand = 'status'
self.AuthorizedUsersCommand = 'users'
self.AuthorizeCommand = 'authorize'
self.UnAuthorizeCommand = 'unauthorize'
self.AddSudoCommand = 'addsudo'
self.RmSudoCommand = 'rmsudo'
self.PingCommand = 'ping'
self.RestartCommand = 'restart'
self.StatsCommand = 'stats'
self.HelpCommand = 'help'
self.LogCommand = 'log'
self.SpeedCommand = 'speedtest'
self.CloneCommand = 'clone'
self.CountCommand = 'count'
self.WatchCommand = 'watch'
self.ZipWatchCommand = 'zipwatch'
self.QbMirrorCommand = 'qbmirror'
self.QbUnzipMirrorCommand = 'qbunzipmirror'
self.QbZipMirrorCommand = 'qbzipmirror'
self.DeleteCommand = 'del'
self.ShellCommand = 'shell'
self.ExecHelpCommand = 'exechelp'
self.LeechSetCommand = 'leechset'
self.SetThumbCommand = 'setthumb'
self.LeechCommand = 'leech'
self.UnzipLeechCommand = 'unzipleech'
self.ZipLeechCommand = 'zipleech'
self.QbLeechCommand = 'qbleech'
self.QbUnzipLeechCommand = 'qbunzipleech'
self.QbZipLeechCommand = 'qbzipleech'
self.LeechWatchCommand = 'leechwatch'
self.LeechZipWatchCommand = 'leechzipwatch'
BotCommands = _BotCommands()

@@ -0,0 +1,20 @@
from telegram import InlineKeyboardButton
class ButtonMaker:
def __init__(self):
self.button = []
def buildbutton(self, key, link):
self.button.append(InlineKeyboardButton(text = key, url = link))
def sbutton(self, key, data):
self.button.append(InlineKeyboardButton(text = key, callback_data = data))
def build_menu(self, n_cols, footer_buttons=None, header_buttons=None):
menu = [self.button[i:i + n_cols] for i in range(0, len(self.button), n_cols)]
if header_buttons:
menu.insert(0, header_buttons)
if footer_buttons:
menu.append(footer_buttons)
return menu
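`build_menu` arranges the collected buttons into rows of `n_cols` using simple list slicing. The same chunking, shown with plain strings instead of `InlineKeyboardButton` objects so it runs without the telegram package:

```python
buttons = ['A', 'B', 'C', 'D', 'E']
n_cols = 2
# Identical slicing to ButtonMaker.build_menu: one sublist per keyboard row.
menu = [buttons[i:i + n_cols] for i in range(0, len(buttons), n_cols)]
```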

@@ -0,0 +1,51 @@
from telegram.ext import MessageFilter
from telegram import Message
from bot import AUTHORIZED_CHATS, SUDO_USERS, OWNER_ID, download_dict, download_dict_lock
class CustomFilters:
class _OwnerFilter(MessageFilter):
def filter(self, message):
return bool(message.from_user.id == OWNER_ID)
owner_filter = _OwnerFilter()
class _AuthorizedUserFilter(MessageFilter):
def filter(self, message):
uid = message.from_user.id
return bool(uid in AUTHORIZED_CHATS or uid in SUDO_USERS or uid == OWNER_ID)
authorized_user = _AuthorizedUserFilter()
class _AuthorizedChat(MessageFilter):
def filter(self, message):
return bool(message.chat.id in AUTHORIZED_CHATS)
authorized_chat = _AuthorizedChat()
class _SudoUser(MessageFilter):
def filter(self, message):
return bool(message.from_user.id in SUDO_USERS)
sudo_user = _SudoUser()
class _MirrorOwner(MessageFilter):
def filter(self, message: Message):
user_id = message.from_user.id
if user_id == OWNER_ID:
return True
args = str(message.text).split(' ')
if len(args) > 1:
# Cancelling by gid
with download_dict_lock:
for message_id, status in download_dict.items():
if status.gid() == args[1] and status.message.from_user.id == user_id:
return True
else:
return False
elif not message.reply_to_message:
return True
# Cancelling by replying to original mirror message
reply_user = message.reply_to_message.from_user.id
return bool(reply_user == user_id)
mirror_owner_filter = _MirrorOwner()

@@ -0,0 +1,110 @@
import time
from telegram import InlineKeyboardMarkup
from telegram.message import Message
from telegram.update import Update
from telegram.error import TimedOut, BadRequest, RetryAfter
from bot import AUTO_DELETE_MESSAGE_DURATION, LOGGER, bot, status_reply_dict, status_reply_dict_lock, \
Interval, DOWNLOAD_STATUS_UPDATE_INTERVAL
from bot.helper.ext_utils.bot_utils import get_readable_message, setInterval
def sendMessage(text: str, bot, update: Update):
try:
return bot.send_message(update.message.chat_id,
reply_to_message_id=update.message.message_id,
text=text, allow_sending_without_reply=True, parse_mode='HTML', disable_web_page_preview=True)
except RetryAfter as r:
LOGGER.error(str(r))
time.sleep(r.retry_after)
return sendMessage(text, bot, update)
except Exception as e:
LOGGER.error(str(e))
def sendMarkup(text: str, bot, update: Update, reply_markup: InlineKeyboardMarkup):
try:
return bot.send_message(update.message.chat_id,
reply_to_message_id=update.message.message_id,
text=text, reply_markup=reply_markup, allow_sending_without_reply=True,
parse_mode='HTML', disable_web_page_preview=True)
except RetryAfter as r:
LOGGER.error(str(r))
time.sleep(r.retry_after)
return sendMarkup(text, bot, update, reply_markup)
except Exception as e:
LOGGER.error(str(e))
def editMessage(text: str, message: Message, reply_markup=None):
try:
bot.edit_message_text(text=text, message_id=message.message_id,
chat_id=message.chat.id,reply_markup=reply_markup,
parse_mode='HTML', disable_web_page_preview=True)
except RetryAfter as r:
LOGGER.error(str(r))
time.sleep(r.retry_after)
return editMessage(text, message, reply_markup)
except Exception as e:
LOGGER.error(str(e))
def deleteMessage(bot, message: Message):
try:
bot.delete_message(chat_id=message.chat.id,
message_id=message.message_id)
except Exception as e:
LOGGER.error(str(e))
def sendLogFile(bot, update: Update):
with open('log.txt', 'rb') as f:
bot.send_document(document=f, filename=f.name,
reply_to_message_id=update.message.message_id,
chat_id=update.message.chat_id)
def auto_delete_message(bot, cmd_message: Message, bot_message: Message):
if AUTO_DELETE_MESSAGE_DURATION != -1:
time.sleep(AUTO_DELETE_MESSAGE_DURATION)
try:
# Skip if None is passed, meaning we don't want to delete the bot or cmd message
deleteMessage(bot, cmd_message)
deleteMessage(bot, bot_message)
except AttributeError:
pass
def delete_all_messages():
with status_reply_dict_lock:
for message in list(status_reply_dict.values()):
try:
deleteMessage(bot, message)
del status_reply_dict[message.chat.id]
except Exception as e:
LOGGER.error(str(e))
def update_all_messages():
msg, buttons = get_readable_message()
with status_reply_dict_lock:
for chat_id in list(status_reply_dict.keys()):
if status_reply_dict[chat_id] and msg != status_reply_dict[chat_id].text:
if buttons == "":
editMessage(msg, status_reply_dict[chat_id])
else:
editMessage(msg, status_reply_dict[chat_id], buttons)
status_reply_dict[chat_id].text = msg
def sendStatusMessage(msg, bot):
if len(Interval) == 0:
Interval.append(setInterval(DOWNLOAD_STATUS_UPDATE_INTERVAL, update_all_messages))
progress, buttons = get_readable_message()
with status_reply_dict_lock:
if msg.message.chat.id in list(status_reply_dict.keys()):
try:
message = status_reply_dict[msg.message.chat.id]
deleteMessage(bot, message)
del status_reply_dict[msg.message.chat.id]
except Exception as e:
LOGGER.error(str(e))
del status_reply_dict[msg.message.chat.id]
if buttons == "":
message = sendMessage(progress, bot, msg)
else:
message = sendMarkup(progress, bot, msg, buttons)
status_reply_dict[msg.message.chat.id] = message
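`sendMessage`, `sendMarkup`, and `editMessage` all share one retry strategy: on a rate-limit error, sleep for the server-advised delay and call themselves again. A self-contained sketch of that pattern (the exception class here is a stand-in for `telegram.error.RetryAfter`):

```python
import time

class RetryAfter(Exception):
    """Stand-in for telegram.error.RetryAfter, for illustration only."""
    def __init__(self, retry_after):
        self.retry_after = retry_after

def send_with_retry(send):
    # Same shape as the helpers above: on a rate limit, wait the advised
    # number of seconds, then retry the call recursively.
    try:
        return send()
    except RetryAfter as r:
        time.sleep(r.retry_after)
        return send_with_retry(send)
```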

bot/modules/__init__.py
@@ -0,0 +1 @@

bot/modules/authorize.py
@@ -0,0 +1,187 @@
from bot.helper.telegram_helper.message_utils import sendMessage
from bot import AUTHORIZED_CHATS, SUDO_USERS, dispatcher, DB_URI
from telegram.ext import CommandHandler
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.ext_utils.db_handler import DbManger
def authorize(update, context):
reply_message = update.message.reply_to_message
message_ = update.message.text.split(' ')
if len(message_) == 2:
user_id = int(message_[1])
if user_id in AUTHORIZED_CHATS:
msg = 'User Already Authorized'
elif DB_URI is not None:
msg = DbManger().db_auth(user_id)
else:
with open('authorized_chats.txt', 'a') as file:
file.write(f'{user_id}\n')
AUTHORIZED_CHATS.add(user_id)
msg = 'User Authorized'
elif reply_message is None:
# Trying to authorize a chat
chat_id = update.effective_chat.id
if chat_id in AUTHORIZED_CHATS:
msg = 'Chat Already Authorized'
elif DB_URI is not None:
msg = DbManger().db_auth(chat_id)
else:
with open('authorized_chats.txt', 'a') as file:
file.write(f'{chat_id}\n')
AUTHORIZED_CHATS.add(chat_id)
msg = 'Chat Authorized'
else:
# Trying to authorize someone by replying
user_id = reply_message.from_user.id
if user_id in AUTHORIZED_CHATS:
msg = 'User Already Authorized'
elif DB_URI is not None:
msg = DbManger().db_auth(user_id)
else:
with open('authorized_chats.txt', 'a') as file:
file.write(f'{user_id}\n')
AUTHORIZED_CHATS.add(user_id)
msg = 'User Authorized'
sendMessage(msg, context.bot, update)
def unauthorize(update, context):
reply_message = update.message.reply_to_message
message_ = update.message.text.split(' ')
if len(message_) == 2:
user_id = int(message_[1])
if user_id in AUTHORIZED_CHATS:
if DB_URI is not None:
msg = DbManger().db_unauth(user_id)
else:
AUTHORIZED_CHATS.remove(user_id)
msg = 'User Unauthorized'
else:
msg = 'User Already Unauthorized'
elif reply_message is None:
# Trying to unauthorize a chat
chat_id = update.effective_chat.id
if chat_id in AUTHORIZED_CHATS:
if DB_URI is not None:
msg = DbManger().db_unauth(chat_id)
else:
AUTHORIZED_CHATS.remove(chat_id)
msg = 'Chat Unauthorized'
else:
msg = 'Chat Already Unauthorized'
else:
# Trying to authorize someone by replying
user_id = reply_message.from_user.id
if user_id in AUTHORIZED_CHATS:
if DB_URI is not None:
msg = DbManger().db_unauth(user_id)
else:
AUTHORIZED_CHATS.remove(user_id)
msg = 'User Unauthorized'
else:
msg = 'User Already Unauthorized'
if DB_URI is None:
with open('authorized_chats.txt', 'w') as file:
for i in AUTHORIZED_CHATS:
file.write(f'{i}\n')
sendMessage(msg, context.bot, update)
def addSudo(update, context):
reply_message = update.message.reply_to_message
message_ = update.message.text.split(' ')
if len(message_) == 2:
user_id = int(message_[1])
if user_id in SUDO_USERS:
msg = 'Already Sudo'
elif DB_URI is not None:
msg = DbManger().db_addsudo(user_id)
else:
with open('sudo_users.txt', 'a') as file:
file.write(f'{user_id}\n')
SUDO_USERS.add(user_id)
msg = 'Promoted as Sudo'
elif reply_message is None:
msg = "Give an ID or reply to a message of the user you want to promote"
else:
# Trying to authorize someone by replying
user_id = reply_message.from_user.id
if user_id in SUDO_USERS:
msg = 'Already Sudo'
elif DB_URI is not None:
msg = DbManger().db_addsudo(user_id)
else:
with open('sudo_users.txt', 'a') as file:
file.write(f'{user_id}\n')
SUDO_USERS.add(user_id)
msg = 'Promoted as Sudo'
sendMessage(msg, context.bot, update)
def removeSudo(update, context):
reply_message = update.message.reply_to_message
message_ = update.message.text.split(' ')
if len(message_) == 2:
user_id = int(message_[1])
if user_id in SUDO_USERS:
if DB_URI is not None:
msg = DbManger().db_rmsudo(user_id)
else:
SUDO_USERS.remove(user_id)
msg = 'Demoted'
else:
msg = 'Not a Sudo'
elif reply_message is None:
msg = "Give an ID or reply to a message of the user you want to remove from sudo"
else:
user_id = reply_message.from_user.id
if user_id in SUDO_USERS:
if DB_URI is not None:
msg = DbManger().db_rmsudo(user_id)
else:
SUDO_USERS.remove(user_id)
msg = 'Demoted'
else:
msg = 'Not a Sudo'
if DB_URI is None:
with open('sudo_users.txt', 'w') as file:
for i in SUDO_USERS:
file.write(f'{i}\n')
sendMessage(msg, context.bot, update)
def sendAuthChats(update, context):
user = sudo = ''
user += '\n'.join(str(id) for id in AUTHORIZED_CHATS)
sudo += '\n'.join(str(id) for id in SUDO_USERS)
sendMessage(f'<b><u>Authorized Chats</u></b>\n<code>{user}</code>\n<b><u>Sudo Users</u></b>\n<code>{sudo}</code>', context.bot, update)
send_auth_handler = CommandHandler(command=BotCommands.AuthorizedUsersCommand, callback=sendAuthChats,
filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
authorize_handler = CommandHandler(command=BotCommands.AuthorizeCommand, callback=authorize,
filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
unauthorize_handler = CommandHandler(command=BotCommands.UnAuthorizeCommand, callback=unauthorize,
filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
addsudo_handler = CommandHandler(command=BotCommands.AddSudoCommand, callback=addSudo,
filters=CustomFilters.owner_filter, run_async=True)
removesudo_handler = CommandHandler(command=BotCommands.RmSudoCommand, callback=removeSudo,
filters=CustomFilters.owner_filter, run_async=True)
dispatcher.add_handler(send_auth_handler)
dispatcher.add_handler(authorize_handler)
dispatcher.add_handler(unauthorize_handler)
dispatcher.add_handler(addsudo_handler)
dispatcher.add_handler(removesudo_handler)

@@ -0,0 +1,69 @@
from telegram.ext import CommandHandler
from bot import download_dict, dispatcher, download_dict_lock, DOWNLOAD_DIR
from bot.helper.ext_utils.fs_utils import clean_download
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.message_utils import sendMessage
from time import sleep
from bot.helper.ext_utils.bot_utils import getDownloadByGid, MirrorStatus, getAllDownload
def cancel_mirror(update, context):
args = update.message.text.split(" ", maxsplit=1)
mirror_message = None
if len(args) > 1:
gid = args[1]
dl = getDownloadByGid(gid)
if not dl:
sendMessage(f"GID: <code>{gid}</code> Not Found.", context.bot, update)
return
mirror_message = dl.message
elif update.message.reply_to_message:
mirror_message = update.message.reply_to_message
with download_dict_lock:
keys = list(download_dict.keys())
try:
dl = download_dict[mirror_message.message_id]
except:
pass
if len(args) == 1:
if not mirror_message or mirror_message.message_id not in keys:
msg = f"Reply to active <code>/{BotCommands.MirrorCommand}</code> message which was used to start the download or send <code>/{BotCommands.CancelMirror} GID</code> to cancel it!"
sendMessage(msg, context.bot, update)
return
if dl.status() == MirrorStatus.STATUS_ARCHIVING:
sendMessage("Archival in Progress, You Can't Cancel It.", context.bot, update)
elif dl.status() == MirrorStatus.STATUS_EXTRACTING:
sendMessage("Extract in Progress, You Can't Cancel It.", context.bot, update)
elif dl.status() == MirrorStatus.STATUS_SPLITTING:
sendMessage("Split in Progress, You Can't Cancel It.", context.bot, update)
else:
dl.download().cancel_download()
sleep(3)  # in case of any error with the onDownloadError listener
clean_download(f'{DOWNLOAD_DIR}{mirror_message.message_id}')
def cancel_all(update, context):
count = 0
gid = 0
while True:
dl = getAllDownload()
if dl:
if dl.gid() != gid:
gid = dl.gid()
dl.download().cancel_download()
count += 1
sleep(0.3)
else:
break
sendMessage(f'{count} download(s) have been cancelled!', context.bot, update)
cancel_mirror_handler = CommandHandler(BotCommands.CancelMirror, cancel_mirror,
filters=(CustomFilters.authorized_chat | CustomFilters.authorized_user) & CustomFilters.mirror_owner_filter | CustomFilters.sudo_user, run_async=True)
cancel_all_handler = CommandHandler(BotCommands.CancelAllCommand, cancel_all,
filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
dispatcher.add_handler(cancel_all_handler)
dispatcher.add_handler(cancel_mirror_handler)

bot/modules/clone.py
@@ -0,0 +1,80 @@
import random
import string
from telegram.ext import CommandHandler
from bot.helper.mirror_utils.upload_utils import gdriveTools
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, deleteMessage, delete_all_messages, update_all_messages, sendStatusMessage
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.mirror_utils.status_utils.clone_status import CloneStatus
from bot import dispatcher, LOGGER, CLONE_LIMIT, STOP_DUPLICATE, download_dict, download_dict_lock, Interval
from bot.helper.ext_utils.bot_utils import get_readable_file_size, is_gdrive_link
def cloneNode(update, context):
args = update.message.text.split(" ", maxsplit=1)
reply_to = update.message.reply_to_message
if len(args) > 1:
link = args[1]
elif reply_to is not None:
link = reply_to.text
else:
link = ''
if is_gdrive_link(link):
gd = gdriveTools.GoogleDriveHelper()
res, size, name, files = gd.helper(link)
if res != "":
sendMessage(res, context.bot, update)
return
if STOP_DUPLICATE:
LOGGER.info('Checking if File/Folder already exists in Drive...')
smsg, button = gd.drive_list(name, True, True)
if smsg:
msg3 = "File/Folder is already available in Drive.\nHere are the search results:"
sendMarkup(msg3, context.bot, update, button)
return
if CLONE_LIMIT is not None:
LOGGER.info('Checking File/Folder Size...')
if size > CLONE_LIMIT * 1024**3:
msg2 = f'Failed, Clone limit is {CLONE_LIMIT}GB.\nYour File/Folder size is {get_readable_file_size(size)}.'
sendMessage(msg2, context.bot, update)
return
if files <= 10:
msg = sendMessage(f"Cloning: <code>{link}</code>", context.bot, update)
result, button = gd.clone(link)
deleteMessage(context.bot, msg)
else:
drive = gdriveTools.GoogleDriveHelper(name)
gid = ''.join(random.SystemRandom().choices(string.ascii_letters + string.digits, k=12))
clone_status = CloneStatus(drive, size, update, gid)
with download_dict_lock:
download_dict[update.message.message_id] = clone_status
sendStatusMessage(update, context.bot)
result, button = drive.clone(link)
with download_dict_lock:
del download_dict[update.message.message_id]
count = len(download_dict)
try:
if count == 0:
Interval[0].cancel()
del Interval[0]
delete_all_messages()
else:
update_all_messages()
except IndexError:
pass
if update.message.from_user.username:
uname = f'@{update.message.from_user.username}'
else:
uname = f'<a href="tg://user?id={update.message.from_user.id}">{update.message.from_user.first_name}</a>'
if uname is not None:
cc = f'\n\n<b>cc: </b>{uname}'
men = f'{uname} '
if button in ["cancelled", ""]:
sendMessage(men + result, context.bot, update)
else:
sendMarkup(result + cc, context.bot, update, button)
else:
sendMessage('Send a Gdrive link along with the command, or reply to a link with the command', context.bot, update)
clone_handler = CommandHandler(BotCommands.CloneCommand, cloneNode, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
dispatcher.add_handler(clone_handler)

bot/modules/count.py
@@ -0,0 +1,36 @@
from telegram.ext import CommandHandler
from bot import dispatcher
from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
from bot.helper.telegram_helper.message_utils import deleteMessage, sendMessage
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.ext_utils.bot_utils import is_gdrive_link
def countNode(update, context):
args = update.message.text.split(" ", maxsplit=1)
reply_to = update.message.reply_to_message
if len(args) > 1:
link = args[1]
elif reply_to is not None:
link = reply_to.text
else:
link = ''
if is_gdrive_link(link):
msg = sendMessage(f"Counting: <code>{link}</code>", context.bot, update)
gd = GoogleDriveHelper()
result = gd.count(link)
deleteMessage(context.bot, msg)
if update.message.from_user.username:
uname = f'@{update.message.from_user.username}'
else:
uname = f'<a href="tg://user?id={update.message.from_user.id}">{update.message.from_user.first_name}</a>'
if uname is not None:
cc = f'\n\n<b>cc: </b>{uname}'
sendMessage(result + cc, context.bot, update)
else:
sendMessage('Send a Gdrive link along with the command, or reply to a link with the command', context.bot, update)
count_handler = CommandHandler(BotCommands.CountCommand, countNode, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
dispatcher.add_handler(count_handler)
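Both `countNode` and `deletefile` resolve the target link the same way: first from the command's argument, then from the replied-to message, else an empty string. A minimal sketch of that pattern as a pure helper — the function name is hypothetical, not part of the repo:

```python
def resolve_link(message_text, reply_text=None):
    """Return the link from command args, else from the replied message, else ''.

    Hypothetical helper mirroring the pattern in countNode/deletefile.
    """
    args = message_text.split(" ", maxsplit=1)
    if len(args) > 1:
        return args[1]
    if reply_text is not None:
        return reply_text
    return ""
```

Factoring this out would keep the argument/reply fallback identical across the command modules.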

34
bot/modules/delete.py Normal file
@@ -0,0 +1,34 @@
import threading
from telegram import Update
from telegram.ext import CommandHandler
from bot import dispatcher, LOGGER
from bot.helper.telegram_helper.message_utils import auto_delete_message, sendMessage
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.mirror_utils.upload_utils import gdriveTools
from bot.helper.ext_utils.bot_utils import is_gdrive_link
def deletefile(update, context):
args = update.message.text.split(" ", maxsplit=1)
reply_to = update.message.reply_to_message
if len(args) > 1:
link = args[1]
elif reply_to is not None:
link = reply_to.text
else:
link = ''
if is_gdrive_link(link):
LOGGER.info(link)
drive = gdriveTools.GoogleDriveHelper()
msg = drive.deletefile(link)
LOGGER.info(f"Delete Result: {msg}")
else:
msg = 'Send a Gdrive link along with the command or reply to a message containing the link'
reply_message = sendMessage(msg, context.bot, update)
threading.Thread(target=auto_delete_message, args=(context.bot, update.message, reply_message)).start()
delete_handler = CommandHandler(command=BotCommands.DeleteCommand, callback=deletefile, filters=CustomFilters.owner_filter | CustomFilters.sudo_user, run_async=True)
dispatcher.add_handler(delete_handler)
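`deletefile` hands both the command message and the bot's reply to `auto_delete_message` on a background thread. The helper's body is not shown in this diff, so here is a hedged, self-contained sketch of the delayed-cleanup pattern (function name and signature are assumptions for illustration):

```python
import threading
import time

def auto_delete(delete_fn, messages, delay=20):
    # Sketch of the auto-delete pattern: wait `delay` seconds, then delete
    # both the command message and the bot's reply via the given callable.
    def worker():
        time.sleep(delay)
        for m in messages:
            delete_fn(m)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Running the deletion on a daemon thread keeps the handler itself non-blocking.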

141
bot/modules/eval.py Normal file
@@ -0,0 +1,141 @@
import io
import os
import textwrap
import traceback
from contextlib import redirect_stdout
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.message_utils import sendMessage
from bot import LOGGER, dispatcher
from telegram import ParseMode
from telegram.ext import CommandHandler
namespaces = {}
def namespace_of(chat, update, bot):
if chat not in namespaces:
namespaces[chat] = {
'__builtins__': globals()['__builtins__'],
'bot': bot,
'effective_message': update.effective_message,
'effective_user': update.effective_user,
'effective_chat': update.effective_chat,
'update': update
}
return namespaces[chat]
def log_input(update):
user = update.effective_user.id
chat = update.effective_chat.id
LOGGER.info(
f"IN: {update.effective_message.text} (user={user}, chat={chat})")
def send(msg, bot, update):
if len(str(msg)) > 2000:
with io.BytesIO(str.encode(msg)) as out_file:
out_file.name = "output.txt"
bot.send_document(
chat_id=update.effective_chat.id, document=out_file)
else:
LOGGER.info(f"OUT: '{msg}'")
bot.send_message(
chat_id=update.effective_chat.id,
text=f"`{msg}`",
parse_mode=ParseMode.MARKDOWN)
def evaluate(update, context):
bot = context.bot
send(do(eval, bot, update), bot, update)
def execute(update, context):
bot = context.bot
send(do(exec, bot, update), bot, update)
def cleanup_code(code):
if code.startswith('```') and code.endswith('```'):
return '\n'.join(code.split('\n')[1:-1])
return code.strip('` \n')
def do(func, bot, update):
log_input(update)
content = update.message.text.split(' ', 1)[-1]
body = cleanup_code(content)
env = namespace_of(update.message.chat_id, update, bot)
os.chdir(os.getcwd())
with open(
os.path.join(os.getcwd(),
'bot/modules/temp.txt'),
'w') as temp:
temp.write(body)
stdout = io.StringIO()
to_compile = f'def func():\n{textwrap.indent(body, " ")}'
try:
exec(to_compile, env)
except Exception as e:
return f'{e.__class__.__name__}: {e}'
func = env['func']
try:
with redirect_stdout(stdout):
func_return = func()
except Exception as e:
value = stdout.getvalue()
return f'{value}{traceback.format_exc()}'
else:
value = stdout.getvalue()
result = None
if func_return is None:
if value:
result = f'{value}'
else:
try:
result = f'{repr(eval(body, env))}'
except:
pass
else:
result = f'{value}{func_return}'
if result:
return result
def clear(update, context):
bot = context.bot
log_input(update)
global namespaces
if update.message.chat_id in namespaces:
del namespaces[update.message.chat_id]
send("Cleared locals.", bot, update)
def exechelp(update, context):
help_string = '''
<b>Executor</b>
/eval <i>Run a line or block of Python code</i>
/exec <i>Run statements with exec</i>
/clearlocals <i>Clear locals</i>
'''
sendMessage(help_string, context.bot, update)
EVAL_HANDLER = CommandHandler(('eval'), evaluate, filters=CustomFilters.owner_filter, run_async=True)
EXEC_HANDLER = CommandHandler(('exec'), execute, filters=CustomFilters.owner_filter, run_async=True)
CLEAR_HANDLER = CommandHandler('clearlocals', clear, filters=CustomFilters.owner_filter, run_async=True)
EXECHELP_HANDLER = CommandHandler(BotCommands.ExecHelpCommand, exechelp, filters=CustomFilters.owner_filter, run_async=True)
dispatcher.add_handler(EVAL_HANDLER)
dispatcher.add_handler(EXEC_HANDLER)
dispatcher.add_handler(CLEAR_HANDLER)
dispatcher.add_handler(EXECHELP_HANDLER)
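`cleanup_code` is what lets `/eval` accept code pasted inside a Markdown fence: a surrounding ``` block is unwrapped (dropping the first line, which may carry a language tag), and otherwise stray backticks and whitespace are trimmed. The same logic, reproduced standalone:

```python
def cleanup_code(code):
    # Same logic as eval.py: drop a surrounding ``` fence (first and last
    # lines), else trim stray backticks and whitespace.
    if code.startswith('```') and code.endswith('```'):
        return '\n'.join(code.split('\n')[1:-1])
    return code.strip('` \n')
```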

@@ -0,0 +1,125 @@
# Implement By - @anasty17 (https://github.com/SlamDevs/slam-mirrorbot/commit/d888a1e7237f4633c066f7c2bbfba030b83ad616)
# Leech Settings V2 Implement By - @VarnaX-279
# (c) https://github.com/SlamDevs/slam-mirrorbot
# All rights reserved
import os
import threading
from PIL import Image
from telegram.ext import CommandHandler, CallbackQueryHandler
from telegram import InlineKeyboardMarkup
from bot import AS_DOC_USERS, AS_MEDIA_USERS, dispatcher, AS_DOCUMENT, app, AUTO_DELETE_MESSAGE_DURATION
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, editMessage, auto_delete_message
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper import button_build
def getleechinfo(from_user):
user_id = from_user.id
name = from_user.full_name
buttons = button_build.ButtonMaker()
thumbpath = f"Thumbnails/{user_id}.jpg"
if (
user_id in AS_DOC_USERS
or user_id not in AS_MEDIA_USERS
and AS_DOCUMENT
):
ltype = "DOCUMENT"
buttons.sbutton("Send As Media", f"med {user_id}")
else:
ltype = "MEDIA"
buttons.sbutton("Send As Document", f"doc {user_id}")
if os.path.exists(thumbpath):
thumbmsg = "Exists"
buttons.sbutton("Delete Thumbnail", f"thumb {user_id}")
else:
thumbmsg = "Not Exists"
if AUTO_DELETE_MESSAGE_DURATION == -1:
buttons.sbutton("Close", f"closeset {user_id}")
button = InlineKeyboardMarkup(buttons.build_menu(1))
text = f"<u>Leech Settings for <a href='tg://user?id={user_id}'>{name}</a></u>\n"\
f"Leech Type <b>{ltype}</b>\n"\
f"Custom Thumbnail <b>{thumbmsg}</b>"
return text, button
def editLeechType(message, query):
msg, button = getleechinfo(query.from_user)
editMessage(msg, message, button)
def leechSet(update, context):
msg, button = getleechinfo(update.message.from_user)
choose_msg = sendMarkup(msg, context.bot, update, button)
threading.Thread(target=auto_delete_message, args=(context.bot, update.message, choose_msg)).start()
def setLeechType(update, context):
query = update.callback_query
message = query.message
user_id = query.from_user.id
data = query.data
data = data.split(" ")
if user_id != int(data[1]):
query.answer(text="Not Yours!", show_alert=True)
elif data[0] == "doc":
if user_id in AS_MEDIA_USERS:
AS_MEDIA_USERS.remove(user_id)
AS_DOC_USERS.add(user_id)
query.answer(text="Your File Will Be Sent As a Document!", show_alert=True)
editLeechType(message, query)
elif data[0] == "med":
if user_id in AS_DOC_USERS:
AS_DOC_USERS.remove(user_id)
AS_MEDIA_USERS.add(user_id)
query.answer(text="Your File Will Be Sent As Media!", show_alert=True)
editLeechType(message, query)
elif data[0] == "thumb":
path = f"Thumbnails/{user_id}.jpg"
if os.path.lexists(path):
os.remove(path)
query.answer(text="Thumbnail Removed!", show_alert=True)
editLeechType(message, query)
else:
query.answer(text="Old Settings", show_alert=True)
elif data[0] == "closeset":
try:
query.message.delete()
query.message.reply_to_message.delete()
except:
pass
def setThumb(update, context):
user_id = update.message.from_user.id
reply_to = update.message.reply_to_message
if reply_to is not None and reply_to.photo:
path = "Thumbnails/"
if not os.path.isdir(path):
os.mkdir(path)
photo_msg = app.get_messages(update.message.chat.id, reply_to_message_ids=update.message.message_id)
photo_dir = app.download_media(photo_msg, file_name=path)
des_dir = os.path.join(path, str(user_id) + ".jpg")
img = Image.open(photo_dir)
img.thumbnail((480, 320))
img.save(des_dir, "JPEG")
os.remove(photo_dir)
sendMessage(f"Custom thumbnail saved for <a href='tg://user?id={user_id}'>{update.message.from_user.full_name}</a>.", context.bot, update)
else:
sendMessage("Reply to a photo to save custom thumbnail.", context.bot, update)
leech_set_handler = CommandHandler(BotCommands.LeechSetCommand, leechSet, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
set_thumbnail_handler = CommandHandler(BotCommands.SetThumbCommand, setThumb, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
as_doc_handler = CallbackQueryHandler(setLeechType, pattern="doc", run_async=True)
as_media_handler = CallbackQueryHandler(setLeechType, pattern="med", run_async=True)
del_thumb_handler = CallbackQueryHandler(setLeechType, pattern="thumb", run_async=True)
close_set_handler = CallbackQueryHandler(setLeechType, pattern="closeset", run_async=True)
dispatcher.add_handler(leech_set_handler)
dispatcher.add_handler(as_doc_handler)
dispatcher.add_handler(as_media_handler)
dispatcher.add_handler(close_set_handler)
dispatcher.add_handler(set_thumbnail_handler)
dispatcher.add_handler(del_thumb_handler)
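The document/media decision in `getleechinfo` relies on Python operator precedence: `and` binds tighter than `or`, so the unparenthesized condition reads "user opted into documents, OR (no media opt-in AND the global default is document)". A self-contained sketch of that logic with the grouping made explicit — the function name and parameters are hypothetical:

```python
def leech_type(user_id, as_doc_users, as_media_users, as_document_default):
    # Mirrors getleechinfo's check; `and` binds tighter than `or`, i.e.
    # A or (B and C): explicit doc opt-in, else default-to-document when
    # the user never opted into media.
    if user_id in as_doc_users or (
        user_id not in as_media_users and as_document_default
    ):
        return "DOCUMENT"
    return "MEDIA"
```

A media opt-in overrides the global `AS_DOCUMENT` default; a document opt-in always wins.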

70
bot/modules/list.py Normal file
@@ -0,0 +1,70 @@
from telegram import InlineKeyboardMarkup
from telegram.ext import CommandHandler, CallbackQueryHandler
from bot.helper.mirror_utils.upload_utils.gdriveTools import GoogleDriveHelper
from bot import LOGGER, dispatcher
from bot.helper.telegram_helper.message_utils import sendMessage, editMessage, sendMarkup
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper import button_build
def list_buttons(update, context):
user_id = update.message.from_user.id
try:
key = update.message.text.split(" ", maxsplit=1)[1]
except IndexError:
return sendMessage('Send a search key along with the command', context.bot, update)
buttons = button_build.ButtonMaker()
buttons.sbutton("Drive Root", f"types {user_id} root")
buttons.sbutton("Recursive", f"types {user_id} recu")
buttons.sbutton("Cancel", f"types {user_id} cancel")
button = InlineKeyboardMarkup(buttons.build_menu(2))
sendMarkup('Choose option to list.', context.bot, update, button)
def select_type(update, context):
query = update.callback_query
user_id = query.from_user.id
msg = query.message
key = msg.reply_to_message.text.split(" ", maxsplit=1)[1]
data = query.data
data = data.split(" ")
if user_id != int(data[1]):
query.answer(text="Not Yours!", show_alert=True)
elif data[2] == "root" or data[2] == "recu":
query.answer()
buttons = button_build.ButtonMaker()
buttons.sbutton("Folders", f"types {user_id} folders {data[2]}")
buttons.sbutton("Files", f"types {user_id} files {data[2]}")
buttons.sbutton("Both", f"types {user_id} both {data[2]}")
buttons.sbutton("Cancel", f"types {user_id} cancel")
button = InlineKeyboardMarkup(buttons.build_menu(2))
editMessage('Choose option to list.', msg, button)
elif data[2] == "files" or data[2] == "folders" or data[2] == "both":
query.answer()
list_method = data[3]
item_type = data[2]
editMessage(f"<b>Searching for <i>{key}</i></b>", msg)
list_drive(key, msg, list_method, item_type)
else:
query.answer()
editMessage("List has been canceled!", msg)
def list_drive(key, bmsg, list_method, item_type):
LOGGER.info(f"listing: {key}")
if list_method == "recu":
list_method = True
else:
list_method = False
gdrive = GoogleDriveHelper()
msg, button = gdrive.drive_list(key, isRecursive=list_method, itemType=item_type)
if button:
editMessage(msg, bmsg, button)
else:
editMessage(f'No result found for <i>{key}</i>', bmsg)
list_handler = CommandHandler(BotCommands.ListCommand, list_buttons, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
list_type_handler = CallbackQueryHandler(select_type, pattern="types", run_async=True)
dispatcher.add_handler(list_handler)
dispatcher.add_handler(list_type_handler)
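`select_type` routes every inline-button press through one callback whose `query.data` packs the fields into a space-separated string ("types {user_id} {action} [{method}]"). A hypothetical parser illustrating that packing (the function name is not part of the repo):

```python
def parse_types_callback(data):
    # Callback data such as "types 123 files recu" -> (123, "files", "recu");
    # the trailing method is absent for the first menu and for "cancel".
    parts = data.split(" ")
    user_id = int(parts[1])
    action = parts[2]
    method = parts[3] if len(parts) > 3 else None
    return user_id, action, method
```

Comparing the parsed `user_id` against `query.from_user.id` is what enforces the "Not Yours!" check.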

547
bot/modules/mirror.py Normal file
@@ -0,0 +1,547 @@
import requests
import urllib
import pathlib
import os
import subprocess
import threading
import re
import random
import string
import time
import shutil
from telegram.ext import CommandHandler
from telegram import InlineKeyboardMarkup
from bot import Interval, INDEX_URL, BUTTON_FOUR_NAME, BUTTON_FOUR_URL, BUTTON_FIVE_NAME, BUTTON_FIVE_URL, \
BUTTON_SIX_NAME, BUTTON_SIX_URL, BLOCK_MEGA_FOLDER, BLOCK_MEGA_LINKS, VIEW_LINK, aria2, \
dispatcher, DOWNLOAD_DIR, download_dict, download_dict_lock, SHORTENER, SHORTENER_API, \
ZIP_UNZIP_LIMIT, TG_SPLIT_SIZE, LOGGER
from bot.helper.ext_utils import fs_utils, bot_utils
from bot.helper.ext_utils.shortenurl import short_url
from bot.helper.ext_utils.exceptions import DirectDownloadLinkException, NotSupportedExtractionArchive
from bot.helper.mirror_utils.download_utils.aria2_download import AriaDownloadHelper
from bot.helper.mirror_utils.download_utils.mega_downloader import MegaDownloadHelper
from bot.helper.mirror_utils.download_utils.qbit_downloader import QbitTorrent
from bot.helper.mirror_utils.download_utils.direct_link_generator import direct_link_generator
from bot.helper.mirror_utils.download_utils.telegram_downloader import TelegramDownloadHelper
from bot.helper.mirror_utils.status_utils import listeners
from bot.helper.mirror_utils.status_utils.extract_status import ExtractStatus
from bot.helper.mirror_utils.status_utils.zip_status import ZipStatus
from bot.helper.mirror_utils.status_utils.split_status import SplitStatus
from bot.helper.mirror_utils.status_utils.upload_status import UploadStatus
from bot.helper.mirror_utils.status_utils.tg_upload_status import TgUploadStatus
from bot.helper.mirror_utils.status_utils.gdownload_status import DownloadStatus
from bot.helper.mirror_utils.upload_utils import gdriveTools, pyrogramEngine
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup, delete_all_messages, update_all_messages, sendStatusMessage
from bot.helper.telegram_helper import button_build
ariaDlManager = AriaDownloadHelper()
ariaDlManager.start_listener()
class MirrorListener(listeners.MirrorListeners):
def __init__(self, bot, update, isZip=False, extract=False, isQbit=False, isLeech=False, pswd=None):
super().__init__(bot, update)
self.extract = extract
self.isZip = isZip
self.isQbit = isQbit
self.isLeech = isLeech
self.pswd = pswd
def onDownloadStarted(self):
pass
def onDownloadProgress(self):
# We are handling this on our own!
pass
def clean(self):
try:
aria2.purge()
Interval[0].cancel()
del Interval[0]
delete_all_messages()
except IndexError:
pass
def onDownloadComplete(self):
with download_dict_lock:
LOGGER.info(f"Download completed: {download_dict[self.uid].name()}")
download = download_dict[self.uid]
name = str(download.name()).replace('/', '')
gid = download.gid()
size = download.size_raw()
if name == "None" or self.isQbit:
name = os.listdir(f'{DOWNLOAD_DIR}{self.uid}')[-1]
m_path = f'{DOWNLOAD_DIR}{self.uid}/{name}'
if self.isZip:
try:
with download_dict_lock:
download_dict[self.uid] = ZipStatus(name, m_path, size)
pswd = self.pswd
path = m_path + ".zip"
LOGGER.info(f'Zip: orig_path: {m_path}, zip_path: {path}')
if pswd is not None:
if self.isLeech and int(size) > TG_SPLIT_SIZE:
subprocess.run(["7z", f"-v{TG_SPLIT_SIZE}b", "a", "-mx=0", f"-p{pswd}", path, m_path])
else:
subprocess.run(["7z", "a", "-mx=0", f"-p{pswd}", path, m_path])
else:
if self.isLeech and int(size) > TG_SPLIT_SIZE:
subprocess.run(["7z", f"-v{TG_SPLIT_SIZE}b", "a", "-mx=0", path, m_path])
else:
subprocess.run(["7z", "a", "-mx=0", path, m_path])
except FileNotFoundError:
LOGGER.info('File to archive not found!')
self.onUploadError('Internal error occurred!!')
return
try:
shutil.rmtree(m_path)
except:
os.remove(m_path)
elif self.extract:
try:
if os.path.isfile(m_path):
path = fs_utils.get_base_name(m_path)
LOGGER.info(f"Extracting: {name}")
with download_dict_lock:
download_dict[self.uid] = ExtractStatus(name, m_path, size)
pswd = self.pswd
if os.path.isdir(m_path):
for dirpath, subdir, files in os.walk(m_path, topdown=False):
for filee in files:
if re.search(r'\.part0*1.rar$', filee) or re.search(r'\.7z.0*1$', filee) \
or (filee.endswith(".rar") and not re.search(r'\.part\d+.rar$', filee)) \
or filee.endswith(".zip") or re.search(r'\.zip.0*1$', filee):
m_path = os.path.join(dirpath, filee)
if pswd is not None:
result = subprocess.run(["7z", "x", f"-p{pswd}", m_path, f"-o{dirpath}"])
else:
result = subprocess.run(["7z", "x", m_path, f"-o{dirpath}"])
if result.returncode != 0:
LOGGER.warning('Unable to extract archive!')
break
for filee in files:
if filee.endswith(".rar") or re.search(r'\.r\d+$', filee) \
or re.search(r'\.7z.\d+$', filee) or re.search(r'\.z\d+$', filee) \
or re.search(r'\.zip.\d+$', filee) or filee.endswith(".zip"):
del_path = os.path.join(dirpath, filee)
os.remove(del_path)
path = f'{DOWNLOAD_DIR}{self.uid}/{name}'
else:
if pswd is not None:
result = subprocess.run(["bash", "pextract", m_path, pswd])
else:
result = subprocess.run(["bash", "extract", m_path])
if result.returncode == 0:
LOGGER.info(f"Extract Path: {path}")
os.remove(m_path)
LOGGER.info(f"Deleting archive: {m_path}")
else:
LOGGER.warning('Unable to extract archive! Uploading anyway')
path = f'{DOWNLOAD_DIR}{self.uid}/{name}'
except NotSupportedExtractionArchive:
LOGGER.info("No valid archive found, uploading the file as is.")
path = f'{DOWNLOAD_DIR}{self.uid}/{name}'
else:
path = f'{DOWNLOAD_DIR}{self.uid}/{name}'
up_name = pathlib.PurePath(path).name
up_path = f'{DOWNLOAD_DIR}{self.uid}/{up_name}'
size = fs_utils.get_path_size(f'{DOWNLOAD_DIR}{self.uid}')
if self.isLeech and not self.isZip:
checked = False
for dirpath, subdir, files in os.walk(f'{DOWNLOAD_DIR}{self.uid}', topdown=False):
for filee in files:
f_path = os.path.join(dirpath, filee)
f_size = os.path.getsize(f_path)
if int(f_size) > TG_SPLIT_SIZE:
if not checked:
checked = True
with download_dict_lock:
download_dict[self.uid] = SplitStatus(up_name, up_path, size)
LOGGER.info(f"Splitting: {up_name}")
fs_utils.split(f_path, f_size, filee, dirpath, TG_SPLIT_SIZE)
os.remove(f_path)
if self.isLeech:
LOGGER.info(f"Leech Name: {up_name}")
tg = pyrogramEngine.TgUploader(up_name, self)
tg_upload_status = TgUploadStatus(tg, size, gid, self)
with download_dict_lock:
download_dict[self.uid] = tg_upload_status
update_all_messages()
tg.upload()
else:
LOGGER.info(f"Upload Name: {up_name}")
drive = gdriveTools.GoogleDriveHelper(up_name, self)
upload_status = UploadStatus(drive, size, gid, self)
with download_dict_lock:
download_dict[self.uid] = upload_status
update_all_messages()
drive.upload(up_name)
def onDownloadError(self, error):
error = error.replace('<', ' ')
error = error.replace('>', ' ')
with download_dict_lock:
try:
download = download_dict[self.uid]
del download_dict[self.uid]
fs_utils.clean_download(download.path())
except Exception as e:
LOGGER.error(str(e))
count = len(download_dict)
if self.message.from_user.username:
uname = f"@{self.message.from_user.username}"
else:
uname = f'<a href="tg://user?id={self.message.from_user.id}">{self.message.from_user.first_name}</a>'
msg = f"{uname} your download has been stopped due to: {error}"
sendMessage(msg, self.bot, self.update)
if count == 0:
self.clean()
else:
update_all_messages()
def onUploadStarted(self):
pass
def onUploadProgress(self):
pass
def onUploadComplete(self, link: str, size, files, folders, typ):
if self.isLeech:
if self.message.from_user.username:
uname = f"@{self.message.from_user.username}"
else:
uname = f'<a href="tg://user?id={self.message.from_user.id}">{self.message.from_user.first_name}</a>'
count = len(files)
msg = f'<b>Name: </b><code>{link}</code>\n\n'
msg += f'<b>Total Files: </b>{count}'
if typ != 0:
msg += f'\n<b>Corrupted Files: </b>{typ}'
if self.message.chat.type == 'private':
sendMessage(msg, self.bot, self.update)
else:
chat_id = str(self.message.chat.id)[4:]
msg += f'\n<b>cc: </b>{uname}\n\n'
fmsg = ''
for index, item in enumerate(list(files), start=1):
msg_id = files[item]
link = f"https://t.me/c/{chat_id}/{msg_id}"
fmsg += f"{index}. <a href='{link}'>{item}</a>\n"
if len(fmsg.encode('utf-8') + msg.encode('utf-8')) > 4000:
time.sleep(1.5)
sendMessage(msg + fmsg, self.bot, self.update)
fmsg = ''
if fmsg != '':
time.sleep(1.5)
sendMessage(msg + fmsg, self.bot, self.update)
with download_dict_lock:
try:
fs_utils.clean_download(download_dict[self.uid].path())
except FileNotFoundError:
pass
del download_dict[self.uid]
count = len(download_dict)
if count == 0:
self.clean()
else:
update_all_messages()
return
with download_dict_lock:
msg = f'<b>Name: </b><code>{download_dict[self.uid].name()}</code>\n\n<b>Size: </b>{size}'
if os.path.isdir(f'{DOWNLOAD_DIR}/{self.uid}/{download_dict[self.uid].name()}'):
msg += '\n\n<b>Type: </b>Folder'
msg += f'\n<b>SubFolders: </b>{folders}'
msg += f'\n<b>Files: </b>{files}'
else:
msg += f'\n\n<b>Type: </b>{typ}'
buttons = button_build.ButtonMaker()
if SHORTENER is not None and SHORTENER_API is not None:
surl = short_url(link)
buttons.buildbutton("☁️ Drive Link", surl)
else:
buttons.buildbutton("☁️ Drive Link", link)
LOGGER.info(f'Done Uploading {download_dict[self.uid].name()}')
if INDEX_URL is not None:
url_path = requests.utils.quote(f'{download_dict[self.uid].name()}')
share_url = f'{INDEX_URL}/{url_path}'
if os.path.isdir(f'{DOWNLOAD_DIR}/{self.uid}/{download_dict[self.uid].name()}'):
share_url += '/'
if SHORTENER is not None and SHORTENER_API is not None:
siurl = short_url(share_url)
buttons.buildbutton("⚡ Index Link", siurl)
else:
buttons.buildbutton("⚡ Index Link", share_url)
else:
share_urls = f'{INDEX_URL}/{url_path}?a=view'
if SHORTENER is not None and SHORTENER_API is not None:
siurl = short_url(share_url)
buttons.buildbutton("⚡ Index Link", siurl)
if VIEW_LINK:
siurls = short_url(share_urls)
buttons.buildbutton("🌐 View Link", siurls)
else:
buttons.buildbutton("⚡ Index Link", share_url)
if VIEW_LINK:
buttons.buildbutton("🌐 View Link", share_urls)
if BUTTON_FOUR_NAME is not None and BUTTON_FOUR_URL is not None:
buttons.buildbutton(f"{BUTTON_FOUR_NAME}", f"{BUTTON_FOUR_URL}")
if BUTTON_FIVE_NAME is not None and BUTTON_FIVE_URL is not None:
buttons.buildbutton(f"{BUTTON_FIVE_NAME}", f"{BUTTON_FIVE_URL}")
if BUTTON_SIX_NAME is not None and BUTTON_SIX_URL is not None:
buttons.buildbutton(f"{BUTTON_SIX_NAME}", f"{BUTTON_SIX_URL}")
if self.message.from_user.username:
uname = f"@{self.message.from_user.username}"
else:
uname = f'<a href="tg://user?id={self.message.from_user.id}">{self.message.from_user.first_name}</a>'
if uname is not None:
msg += f'\n\n<b>cc: </b>{uname}'
try:
fs_utils.clean_download(download_dict[self.uid].path())
except FileNotFoundError:
pass
del download_dict[self.uid]
count = len(download_dict)
sendMarkup(msg, self.bot, self.update, InlineKeyboardMarkup(buttons.build_menu(2)))
if count == 0:
self.clean()
else:
update_all_messages()
def onUploadError(self, error):
e_str = error.replace('<', '').replace('>', '')
with download_dict_lock:
try:
fs_utils.clean_download(download_dict[self.uid].path())
except FileNotFoundError:
pass
del download_dict[self.message.message_id]
count = len(download_dict)
if self.message.from_user.username:
uname = f"@{self.message.from_user.username}"
else:
uname = f'<a href="tg://user?id={self.message.from_user.id}">{self.message.from_user.first_name}</a>'
if uname is not None:
men = f'{uname} '
sendMessage(men + e_str, self.bot, self.update)
if count == 0:
self.clean()
else:
update_all_messages()
def _mirror(bot, update, isZip=False, extract=False, isQbit=False, isLeech=False, pswd=None):
mesg = update.message.text.split('\n')
message_args = mesg[0].split(' ', maxsplit=1)
name_args = mesg[0].split('|', maxsplit=1)
qbitsel = False
try:
link = message_args[1]
if link.startswith("s ") or link == "s":
qbitsel = True
message_args = mesg[0].split(' ', maxsplit=2)
link = message_args[2].strip()
if link.startswith("|") or link.startswith("pswd: "):
link = ''
except IndexError:
link = ''
try:
name = name_args[1]
name = name.split(' pswd: ')[0]
name = name.strip()
except IndexError:
name = ''
link = re.split(r"pswd:|\|", link)[0]
link = link.strip()
pswdMsg = mesg[0].split(' pswd: ')
if len(pswdMsg) > 1:
pswd = pswdMsg[1]
listener = MirrorListener(bot, update, isZip, extract, isQbit, isLeech, pswd)
reply_to = update.message.reply_to_message
if reply_to is not None:
file = None
media_array = [reply_to.document, reply_to.video, reply_to.audio]
for i in media_array:
if i is not None:
file = i
break
if (
not bot_utils.is_url(link)
and not bot_utils.is_magnet(link)
or len(link) == 0
):
if file is None:
reply_text = reply_to.text
if bot_utils.is_url(reply_text) or bot_utils.is_magnet(reply_text):
link = reply_text.strip()
elif isQbit:
file_name = str(time.time()).replace(".", "") + ".torrent"
link = file.get_file().download(custom_path=file_name)
elif file.mime_type != "application/x-bittorrent":
tg_downloader = TelegramDownloadHelper(listener)
ms = update.message
tg_downloader.add_download(ms, f'{DOWNLOAD_DIR}{listener.uid}/', name)
return
else:
link = file.get_file().file_path
if len(mesg) > 1:
try:
ussr = urllib.parse.quote(mesg[1], safe='')
pssw = urllib.parse.quote(mesg[2], safe='')
link = link.split("://", maxsplit=1)
link = f'{link[0]}://{ussr}:{pssw}@{link[1]}'
except IndexError:
pass
LOGGER.info(link)
if not bot_utils.is_url(link) and not bot_utils.is_magnet(link) and not os.path.exists(link):
help_msg = "Send a link along with the command or by replying to a link\n"
help_msg += "<b>Examples:</b> \n<code>/command</code> link |newname(TG files or Direct links) pswd: mypassword(zip/unzip)"
help_msg += "\nBy replying to a link: <code>/command</code> |newname(TG files or Direct links) pswd: mypassword(zip/unzip)"
help_msg += "\nFor Direct Links Authorization: <code>/command</code> link |newname pswd: mypassword\nusername\npassword (same when replying)"
return sendMessage(help_msg, bot, update)
elif bot_utils.is_url(link) and not bot_utils.is_magnet(link) and not os.path.exists(link) and isQbit:
try:
resp = requests.get(link)
if resp.status_code == 200:
file_name = str(time.time()).replace(".", "") + ".torrent"
open(file_name, "wb").write(resp.content)
link = f"{file_name}"
else:
sendMessage(f"ERROR: link got HTTP response: {resp.status_code}", bot, update)
return
except Exception as e:
LOGGER.error(str(e))
return
elif not os.path.exists(link) and not bot_utils.is_mega_link(link) and not bot_utils.is_gdrive_link(link) and not bot_utils.is_magnet(link):
try:
link = direct_link_generator(link)
except DirectDownloadLinkException as e:
LOGGER.info(e)
if "ERROR:" in str(e):
sendMessage(f"{e}", bot, update)
return
if "Youtube" in str(e):
sendMessage(f"{e}", bot, update)
return
if bot_utils.is_gdrive_link(link):
if not isZip and not extract and not isLeech:
sendMessage(f"Use /{BotCommands.CloneCommand} to clone a Google Drive file/folder\nUse /{BotCommands.ZipMirrorCommand} to make a zip of a Google Drive folder\nUse /{BotCommands.UnzipMirrorCommand} to extract an archived Google Drive file", bot, update)
return
res, size, name, files = gdriveTools.GoogleDriveHelper().helper(link)
if res != "":
sendMessage(res, bot, update)
return
if ZIP_UNZIP_LIMIT is not None:
LOGGER.info('Checking File/Folder Size...')
if size > ZIP_UNZIP_LIMIT * 1024**3:
msg = f'Failed, Zip/Unzip limit is {ZIP_UNZIP_LIMIT}GB.\nYour File/Folder size is {bot_utils.get_readable_file_size(size)}.'
sendMessage(msg, bot, update)
return
LOGGER.info(f"Download Name: {name}")
drive = gdriveTools.GoogleDriveHelper(name, listener)
gid = ''.join(random.SystemRandom().choices(string.ascii_letters + string.digits, k=12))
download_status = DownloadStatus(drive, size, listener, gid)
with download_dict_lock:
download_dict[listener.uid] = download_status
sendStatusMessage(update, bot)
drive.download(link)
elif bot_utils.is_mega_link(link):
if BLOCK_MEGA_LINKS:
sendMessage("Mega links are blocked!", bot, update)
return
link_type = bot_utils.get_mega_link_type(link)
if link_type == "folder" and BLOCK_MEGA_FOLDER:
sendMessage("Mega folders are blocked!", bot, update)
else:
mega_dl = MegaDownloadHelper()
mega_dl.add_download(link, f'{DOWNLOAD_DIR}{listener.uid}/', listener)
elif isQbit and (bot_utils.is_magnet(link) or os.path.exists(link)):
qbit = QbitTorrent()
qbit.add_torrent(link, f'{DOWNLOAD_DIR}{listener.uid}/', listener, qbitsel)
else:
ariaDlManager.add_download(link, f'{DOWNLOAD_DIR}{listener.uid}/', listener, name)
sendStatusMessage(update, bot)
def mirror(update, context):
_mirror(context.bot, update)
def unzip_mirror(update, context):
_mirror(context.bot, update, extract=True)
def zip_mirror(update, context):
_mirror(context.bot, update, True)
def qb_mirror(update, context):
_mirror(context.bot, update, isQbit=True)
def qb_unzip_mirror(update, context):
_mirror(context.bot, update, extract=True, isQbit=True)
def qb_zip_mirror(update, context):
_mirror(context.bot, update, True, isQbit=True)
def leech(update, context):
_mirror(context.bot, update, isLeech=True)
def unzip_leech(update, context):
_mirror(context.bot, update, extract=True, isLeech=True)
def zip_leech(update, context):
_mirror(context.bot, update, True, isLeech=True)
def qb_leech(update, context):
_mirror(context.bot, update, isQbit=True, isLeech=True)
def qb_unzip_leech(update, context):
_mirror(context.bot, update, extract=True, isQbit=True, isLeech=True)
def qb_zip_leech(update, context):
_mirror(context.bot, update, True, isQbit=True, isLeech=True)
mirror_handler = CommandHandler(BotCommands.MirrorCommand, mirror,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
unzip_mirror_handler = CommandHandler(BotCommands.UnzipMirrorCommand, unzip_mirror,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
zip_mirror_handler = CommandHandler(BotCommands.ZipMirrorCommand, zip_mirror,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
qb_mirror_handler = CommandHandler(BotCommands.QbMirrorCommand, qb_mirror,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
qb_unzip_mirror_handler = CommandHandler(BotCommands.QbUnzipMirrorCommand, qb_unzip_mirror,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
qb_zip_mirror_handler = CommandHandler(BotCommands.QbZipMirrorCommand, qb_zip_mirror,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
leech_handler = CommandHandler(BotCommands.LeechCommand, leech,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
unzip_leech_handler = CommandHandler(BotCommands.UnzipLeechCommand, unzip_leech,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
zip_leech_handler = CommandHandler(BotCommands.ZipLeechCommand, zip_leech,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
qb_leech_handler = CommandHandler(BotCommands.QbLeechCommand, qb_leech,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
qb_unzip_leech_handler = CommandHandler(BotCommands.QbUnzipLeechCommand, qb_unzip_leech,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
qb_zip_leech_handler = CommandHandler(BotCommands.QbZipLeechCommand, qb_zip_leech,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
dispatcher.add_handler(mirror_handler)
dispatcher.add_handler(unzip_mirror_handler)
dispatcher.add_handler(zip_mirror_handler)
dispatcher.add_handler(qb_mirror_handler)
dispatcher.add_handler(qb_unzip_mirror_handler)
dispatcher.add_handler(qb_zip_mirror_handler)
dispatcher.add_handler(leech_handler)
dispatcher.add_handler(unzip_leech_handler)
dispatcher.add_handler(zip_leech_handler)
dispatcher.add_handler(qb_leech_handler)
dispatcher.add_handler(qb_unzip_leech_handler)
dispatcher.add_handler(qb_zip_leech_handler)


@@ -0,0 +1,36 @@
import threading
import time
import psutil, shutil
from telegram.ext import CommandHandler
from bot import dispatcher, status_reply_dict, status_reply_dict_lock, download_dict, download_dict_lock, botStartTime
from bot.helper.telegram_helper.message_utils import sendMessage, deleteMessage, auto_delete_message, sendStatusMessage
from bot.helper.ext_utils.bot_utils import get_readable_file_size, get_readable_time
from telegram.error import BadRequest
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
def mirror_status(update, context):
with download_dict_lock:
if len(download_dict) == 0:
currentTime = get_readable_time(time.time() - botStartTime)
total, used, free = shutil.disk_usage('.')
free = get_readable_file_size(free)
message = 'No Active Downloads!\n___________________________'
message += f"\n<b>CPU:</b> {psutil.cpu_percent()}% | <b>FREE:</b> {free}" \
f"\n<b>RAM:</b> {psutil.virtual_memory().percent}% | <b>UPTIME:</b> {currentTime}"
reply_message = sendMessage(message, context.bot, update)
threading.Thread(target=auto_delete_message, args=(context.bot, update.message, reply_message)).start()
return
index = update.effective_chat.id
with status_reply_dict_lock:
if index in status_reply_dict.keys():
deleteMessage(context.bot, status_reply_dict[index])
del status_reply_dict[index]
sendStatusMessage(update, context.bot)
deleteMessage(context.bot, update.message)
mirror_status_handler = CommandHandler(BotCommands.StatusCommand, mirror_status,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
dispatcher.add_handler(mirror_status_handler)

164
bot/modules/search.py Normal file

@@ -0,0 +1,164 @@
import requests
import itertools
import time
from urllib.parse import quote
from telegram import InlineKeyboardMarkup
from telegram.ext import CommandHandler, CallbackQueryHandler
from bot import dispatcher, LOGGER, SEARCH_API_LINK
from bot.helper.ext_utils.telegraph_helper import telegraph
from bot.helper.telegram_helper.message_utils import editMessage, sendMessage, sendMarkup
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper import button_build
SITES = {
"1337x": "1337x",
"nyaasi": "NyaaSi",
"yts": "YTS",
"piratebay": "PirateBay",
"torlock": "Torlock",
"eztv": "EzTvio",
"tgx": "TorrentGalaxy",
"rarbg": "Rarbg",
"ettv": "Ettv",
"all": "All"
}
SEARCH_LIMIT = 250
def torser(update, context):
user_id = update.message.from_user.id
if SEARCH_API_LINK is None:
return sendMessage("No Torrent Search API Link. Check the README variables", context.bot, update)
try:
key = update.message.text.split(" ", maxsplit=1)[1]
except IndexError:
return sendMessage("Send a search key along with the command", context.bot, update)
buttons = button_build.ButtonMaker()
for data, name in SITES.items():
buttons.sbutton(name, f"torser {user_id} {data}")
buttons.sbutton("Cancel", f"torser {user_id} cancel")
button = InlineKeyboardMarkup(buttons.build_menu(2))
sendMarkup('Choose site to search.', context.bot, update, button)
def torserbut(update, context):
query = update.callback_query
user_id = query.from_user.id
message = query.message
key = message.reply_to_message.text.split(" ", maxsplit=1)[1]
data = query.data
data = data.split(" ")
if user_id != int(data[1]):
query.answer(text="Not Yours!", show_alert=True)
elif data[2] != "cancel":
query.answer()
site = data[2]
editMessage(f"<b>Searching for <i>{key}</i> Torrent Site:- <i>{SITES.get(site)}</i></b>", message)
search(key, site, message)
else:
query.answer()
editMessage("Search has been canceled!", message)
def search(key, site, message):
LOGGER.info(f"Searching: {key} from {site}")
api = f"{SEARCH_API_LINK}/api/{site}/{key}"
try:
resp = requests.get(api)
search_results = resp.json()
if site == "all":
search_results = list(itertools.chain.from_iterable(search_results))
if isinstance(search_results, list):
link = getResult(search_results, key, message)
buttons = button_build.ButtonMaker()
buttons.buildbutton("🔎 VIEW", link)
msg = f"<b>Found {SEARCH_LIMIT if len(search_results) > SEARCH_LIMIT else len(search_results)}</b>"
msg += f" <b>results for <i>{key}</i> Torrent Site:- <i>{SITES.get(site)}</i></b>"
button = InlineKeyboardMarkup(buttons.build_menu(1))
editMessage(msg, message, button)
else:
editMessage(f"No results found for <i>{key}</i> Torrent Site:- <i>{SITES.get(site)}</i>", message)
except Exception as e:
editMessage(str(e), message)
def getResult(search_results, key, message):
telegraph_content = []
path = []
msg = f"<h4>Search Result For {key}</h4><br><br>"
for index, result in enumerate(search_results, start=1):
try:
msg += f"<code><a href='{result['Url']}'>{result['Name']}</a></code><br>"
if "Files" in result.keys():
for subres in result['Files']:
msg += f"<b>Quality: </b>{subres['Quality']} | <b>Size: </b>{subres['Size']}<br>"
try:
msg += f"<b>Share link to</b> <a href='http://t.me/share/url?url={subres['Torrent']}'>Telegram</a><br>"
msg += f"<b>Link: </b><code>{subres['Torrent']}</code><br>"
except KeyError:
msg += f"<b>Share Magnet to</b> <a href='http://t.me/share/url?url={subres['Magnet']}'>Telegram</a><br>"
msg += f"<b>Magnet: </b><code>{quote(subres['Magnet'])}</code><br>"
else:
msg += f"<b>Size: </b>{result['Size']}<br>"
msg += f"<b>Seeders: </b>{result['Seeders']} | <b>Leechers: </b>{result['Leechers']}<br>"
except KeyError:
pass
try:
msg += f"<b>Share Magnet to</b> <a href='http://t.me/share/url?url={quote(result['Magnet'])}'>Telegram</a><br>"
msg += f"<b>Magnet: </b><code>{result['Magnet']}</code><br><br>"
except KeyError:
msg += "<br>"
if len(msg.encode('utf-8')) > 40000:
telegraph_content.append(msg)
msg = ""
if index == SEARCH_LIMIT:
break
if msg != "":
telegraph_content.append(msg)
editMessage(f"<b>Creating</b> {len(telegraph_content)} <b>Telegraph pages.</b>", message)
for content in telegraph_content:
path.append(
telegraph.create_page(
title='Mirror-leech-bot Torrent Search',
content=content
)["path"]
)
time.sleep(0.5)
if len(path) > 1:
editMessage(f"<b>Editing</b> {len(telegraph_content)} <b>Telegraph pages.</b>", message)
edit_telegraph(path, telegraph_content)
return f"https://telegra.ph/{path[0]}"
def edit_telegraph(path, telegraph_content):
nxt_page = 1
prev_page = 0
num_of_path = len(path)
for content in telegraph_content:
if nxt_page == 1:
content += f'<b><a href="https://telegra.ph/{path[nxt_page]}">Next</a></b>'
nxt_page += 1
else:
if prev_page <= num_of_path:
content += f'<b><a href="https://telegra.ph/{path[prev_page]}">Prev</a></b>'
prev_page += 1
if nxt_page < num_of_path:
content += f'<b> | <a href="https://telegra.ph/{path[nxt_page]}">Next</a></b>'
nxt_page += 1
telegraph.edit_page(
path=path[prev_page],
title='Mirror-leech-bot Torrent Search',
content=content
)
time.sleep(0.5)
return
torser_handler = CommandHandler(BotCommands.SearchCommand, torser, filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
torserbut_handler = CallbackQueryHandler(torserbut, pattern="torser", run_async=True)
dispatcher.add_handler(torser_handler)
dispatcher.add_handler(torserbut_handler)

43
bot/modules/shell.py Normal file

@@ -0,0 +1,43 @@
import subprocess
from bot import LOGGER, dispatcher
from telegram import ParseMode
from telegram.ext import CommandHandler
from bot.helper.telegram_helper.filters import CustomFilters
from bot.helper.telegram_helper.bot_commands import BotCommands
def shell(update, context):
message = update.effective_message
cmd = message.text.split(' ', 1)
if len(cmd) == 1:
message.reply_text('No command to execute was given.')
return
cmd = cmd[1]
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = process.communicate()
reply = ''
stderr = stderr.decode()
stdout = stdout.decode()
if stdout:
reply += f"*Stdout*\n`{stdout}`\n"
LOGGER.info(f"Shell - {cmd} - {stdout}")
if stderr:
reply += f"*Stderr*\n`{stderr}`\n"
LOGGER.error(f"Shell - {cmd} - {stderr}")
if len(reply) > 3000:
with open('shell_output.txt', 'w') as file:
file.write(reply)
with open('shell_output.txt', 'rb') as doc:
context.bot.send_document(
document=doc,
filename=doc.name,
reply_to_message_id=message.message_id,
chat_id=message.chat_id)
else:
message.reply_text(reply, parse_mode=ParseMode.MARKDOWN)
SHELL_HANDLER = CommandHandler(BotCommands.ShellCommand, shell,
filters=CustomFilters.owner_filter, run_async=True)
dispatcher.add_handler(SHELL_HANDLER)

47
bot/modules/speedtest.py Normal file

@@ -0,0 +1,47 @@
from speedtest import Speedtest
from bot.helper.telegram_helper.filters import CustomFilters
from bot import dispatcher
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.message_utils import sendMessage, editMessage
from telegram.ext import CommandHandler
def speedtest(update, context):
speed = sendMessage("Running Speed Test . . . ", context.bot, update)
test = Speedtest()
test.get_best_server()
test.download()
test.upload()
test.results.share()
result = test.results.dict()
string_speed = f'''
<b>Server</b>
<b>Name:</b> <code>{result['server']['name']}</code>
<b>Country:</b> <code>{result['server']['country']}, {result['server']['cc']}</code>
<b>Sponsor:</b> <code>{result['server']['sponsor']}</code>
<b>ISP:</b> <code>{result['client']['isp']}</code>
<b>SpeedTest Results</b>
<b>Upload:</b> <code>{speed_convert(result['upload'] / 8)}</code>
<b>Download:</b> <code>{speed_convert(result['download'] / 8)}</code>
<b>Ping:</b> <code>{result['ping']} ms</code>
<b>ISP Rating:</b> <code>{result['client']['isprating']}</code>
'''
editMessage(string_speed, speed)
def speed_convert(size):
"""Hi human, you can't read bytes?"""
power = 2 ** 10
zero = 0
units = {0: "B/s", 1: "KB/s", 2: "MB/s", 3: "GB/s", 4: "TB/s"}
while size > power:
size /= power
zero += 1
return f"{round(size, 2)} {units[zero]}"
SPEED_HANDLER = CommandHandler(BotCommands.SpeedCommand, speedtest,
filters=CustomFilters.owner_filter | CustomFilters.authorized_user, run_async=True)
dispatcher.add_handler(SPEED_HANDLER)

153
bot/modules/watch.py Normal file

@@ -0,0 +1,153 @@
import threading
import re
from telegram.ext import CommandHandler, CallbackQueryHandler
from telegram import InlineKeyboardMarkup
from bot import DOWNLOAD_DIR, dispatcher
from bot.helper.telegram_helper.message_utils import sendMessage, sendMarkup
from bot.helper.telegram_helper import button_build
from bot.helper.ext_utils.bot_utils import is_url
from bot.helper.ext_utils.bot_utils import get_readable_file_size
from bot.helper.mirror_utils.download_utils.youtube_dl_download_helper import YoutubeDLHelper
from bot.helper.telegram_helper.bot_commands import BotCommands
from bot.helper.telegram_helper.filters import CustomFilters
from .mirror import MirrorListener
listener_dict = {}
def _watch(bot, update, isZip=False, isLeech=False, pswd=None):
mssg = update.message.text
message_args = mssg.split(' ', maxsplit=2)
name_args = mssg.split('|', maxsplit=1)
user_id = update.message.from_user.id
msg_id = update.message.message_id
try:
link = message_args[1].strip()
if link.startswith("|") or link.startswith("pswd: "):
link = ''
except IndexError:
link = ''
link = re.split(r"pswd:|\|", link)[0]
link = link.strip()
try:
name = name_args[1]
name = name.split(' pswd: ')[0]
name = name.strip()
except IndexError:
name = ''
pswdMsg = mssg.split(' pswd: ')
if len(pswdMsg) > 1:
pswd = pswdMsg[1]
reply_to = update.message.reply_to_message
if reply_to is not None:
link = reply_to.text.strip()
if not is_url(link):
help_msg = "Send a link along with the command or by replying to a link\n"
help_msg += "<b>Examples:</b> \n<code>/command</code> link |newname pswd: mypassword(zip)"
help_msg += "\nBy replying to a link: <code>/command</code> |newname pswd: mypassword(zip)"
return sendMessage(help_msg, bot, update)
listener = MirrorListener(bot, update, isZip, isLeech=isLeech, pswd=pswd)
listener_dict[msg_id] = listener, user_id, link, name
buttons = button_build.ButtonMaker()
best_video = "bv*+ba/b"
best_audio = "ba/b"
ydl = YoutubeDLHelper(listener)
try:
result = ydl.extractMetaData(link, name, True)
except Exception as e:
return sendMessage(str(e), bot, update)
if 'entries' in result:
for i in ['144', '240', '360', '480', '720', '1080', '1440', '2160']:
video_format = f"bv*[height<={i}]+ba/b"
buttons.sbutton(str(i), f"quality {msg_id} {video_format}")
buttons.sbutton("Best Videos", f"quality {msg_id} {best_video}")
buttons.sbutton("Best Audios", f"quality {msg_id} {best_audio}")
else:
formats = result['formats']
formats_dict = {}
for frmt in formats:
if not frmt.get('tbr') or not frmt.get('height'):
continue
if frmt.get('fps'):
quality = f"{frmt['height']}p{frmt['fps']}-{frmt['ext']}"
else:
quality = f"{frmt['height']}p-{frmt['ext']}"
if quality not in formats_dict or formats_dict[quality][1] < frmt['tbr']:
if frmt.get('filesize'):
size = frmt['filesize']
elif frmt.get('filesize_approx'):
size = frmt['filesize_approx']
else:
size = 0
formats_dict[quality] = [size, frmt['tbr']]
for forDict in formats_dict:
qual_fps_ext = re.split(r'p|-', forDict, maxsplit=2)
if qual_fps_ext[1] != '':
video_format = f"bv*[height={qual_fps_ext[0]}][fps={qual_fps_ext[1]}][ext={qual_fps_ext[2]}]+ba/b"
else:
video_format = f"bv*[height={qual_fps_ext[0]}][ext={qual_fps_ext[2]}]+ba/b"
buttonName = f"{forDict} ({get_readable_file_size(formats_dict[forDict][0])})"
buttons.sbutton(str(buttonName), f"quality {msg_id} {video_format}")
buttons.sbutton("Best Video", f"quality {msg_id} {best_video}")
buttons.sbutton("Best Audio", f"quality {msg_id} {best_audio}")
buttons.sbutton("Cancel", f"quality {msg_id} cancel")
YTBUTTONS = InlineKeyboardMarkup(buttons.build_menu(2))
sendMarkup('Choose video/playlist quality', bot, update, YTBUTTONS)
def select_format(update, context):
query = update.callback_query
user_id = query.from_user.id
data = query.data
data = data.split(" ")
task_id = int(data[1])
listener, uid, link, name = listener_dict[task_id]
if user_id != uid:
return query.answer(text="Don't waste your time!", show_alert=True)
elif data[2] != "cancel":
query.answer()
qual = data[2]
ydl = YoutubeDLHelper(listener)
threading.Thread(target=ydl.add_download,args=(link, f'{DOWNLOAD_DIR}{task_id}', name, qual)).start()
del listener_dict[task_id]
query.message.delete()
def watch(update, context):
_watch(context.bot, update)
def watchZip(update, context):
_watch(context.bot, update, True)
def leechWatch(update, context):
_watch(context.bot, update, isLeech=True)
def leechWatchZip(update, context):
_watch(context.bot, update, True, True)
watch_handler = CommandHandler(BotCommands.WatchCommand, watch,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
zip_watch_handler = CommandHandler(BotCommands.ZipWatchCommand, watchZip,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
leech_watch_handler = CommandHandler(BotCommands.LeechWatchCommand, leechWatch,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
leech_zip_watch_handler = CommandHandler(BotCommands.LeechZipWatchCommand, leechWatchZip,
filters=CustomFilters.authorized_chat | CustomFilters.authorized_user, run_async=True)
quality_handler = CallbackQueryHandler(select_format, pattern="quality", run_async=True)
dispatcher.add_handler(watch_handler)
dispatcher.add_handler(zip_watch_handler)
dispatcher.add_handler(leech_watch_handler)
dispatcher.add_handler(leech_zip_watch_handler)
dispatcher.add_handler(quality_handler)

4
captain-definition Normal file

@@ -0,0 +1,4 @@
{
"schemaVersion": 2,
"dockerfilePath": "./Dockerfile"
}

61
config_sample.env Normal file

@@ -0,0 +1,61 @@
# Remove this line before deploying
_____REMOVE_THIS_LINE_____=True
# REQUIRED CONFIG
BOT_TOKEN = ""
GDRIVE_FOLDER_ID = ""
OWNER_ID =
DOWNLOAD_DIR = "/usr/src/app/downloads"
DOWNLOAD_STATUS_UPDATE_INTERVAL = 7
AUTO_DELETE_MESSAGE_DURATION = 20
IS_TEAM_DRIVE = ""
TELEGRAM_API =
TELEGRAM_HASH = ""
BASE_URL_OF_BOT = "" # Web link. Required on Heroku to keep the app from sleeping; use a worker dyno instead if you don't want to use the web server (your choice)
# OPTIONAL CONFIG
DATABASE_URL = ""
AUTHORIZED_CHATS = "" # Split by space
SUDO_USERS = "" # Split by space
IGNORE_PENDING_REQUESTS = ""
USE_SERVICE_ACCOUNTS = ""
INDEX_URL = ""
STATUS_LIMIT = "" # Recommended limit is 4
UPTOBOX_TOKEN = ""
MEGA_API_KEY = ""
MEGA_EMAIL_ID = ""
MEGA_PASSWORD = ""
BLOCK_MEGA_FOLDER = ""
BLOCK_MEGA_LINKS = ""
STOP_DUPLICATE = ""
SHORTENER = ""
SHORTENER_API = ""
SEARCH_API_LINK = ""
UPSTREAM_REPO = ""
# Leech
TG_SPLIT_SIZE = "" # leave it empty for max size (2GB), or add size in bytes
AS_DOCUMENT = ""
EQUAL_SPLITS = ""
CUSTOM_FILENAME = ""
# qBittorrent
IS_VPS = "" # Don't set this to True even if you're using a VPS, unless you face errors with the web server
SERVER_PORT = "80" # Only for VPS, even if IS_VPS is False
# If you want to use Credentials externally from Index Links, fill these vars with the direct links
# These are optional, if you don't know about them, simply leave them empty
ACCOUNTS_ZIP_URL = ""
TOKEN_PICKLE_URL = ""
MULTI_SEARCH_URL = "" # You can use a gist raw link (remove the commit id from the link, like the config raw link; check the Heroku guide)
# To use a limit, don't add a unit. The default unit is GB.
TORRENT_DIRECT_LIMIT = ""
ZIP_UNZIP_LIMIT = ""
CLONE_LIMIT = ""
MEGA_LIMIT = ""
# View Link button to open the file's Index Link in the browser instead of the direct download link
# To check whether it's compatible with your Index code, open any video from your Index and check if its URL ends with ?a=view; if yes, set this to True (compatible with Bhadoo Drive Index)
VIEW_LINK = ""
# Add more buttons (three buttons are already added: Drive Link, Index Link, and View Link; you can add extra optional buttons too)
# If you don't know what the entries below are, simply leave them empty
BUTTON_FOUR_NAME = ""
BUTTON_FOUR_URL = ""
BUTTON_FIVE_NAME = ""
BUTTON_FIVE_URL = ""
BUTTON_SIX_NAME = ""
BUTTON_SIX_URL = ""

9
docker-compose.yml Normal file

@@ -0,0 +1,9 @@
version: "3.3"
services:
app:
build: .
command: bash start.sh
restart: on-failure
ports:
- "80:80"

47
driveid.py Normal file

@@ -0,0 +1,47 @@
import os
import re
print("\n\n"\
" Bot can search files recursively, but you have to add the list of drives you want to search.\n"\
" Use the following format: (You can use 'root' as the ID in case you want to use the main drive.)\n"\
" teamdrive NAME --> anything you like\n"\
" teamdrive ID --> id of the teamdrive in which you would like to search ('root' for main drive)\n"\
" teamdrive INDEX URL --> enter the index url for this drive.\n" \
" go to the respective drive and copy the url from the address bar\n")
msg = ''
if os.path.exists('drive_folder'):
with open('drive_folder', 'r+') as f:
lines = f.read()
if not re.match(r'^\s*$', lines):
print(lines)
print("\n\n"\
" DO YOU WISH TO KEEP THE ABOVE DETAILS THAT YOU PREVIOUSLY ADDED???? ENTER (y/n)\n"\
" IF NOTHING SHOWS ENTER n")
while 1:
choice = input()
if choice in ['y', 'Y']:
msg = f'{lines}'
break
elif choice in ['n', 'N']:
break
else:
print("\n\n DO YOU WISH TO KEEP THE ABOVE DETAILS ???? y/n <=== this is option ..... OPEN YOUR EYES & READ...")
num = int(input(" How many drives/folders would you like to add : "))
for count in range(1, num + 1):
print(f"\n > DRIVE - {count}\n")
name = input(" Enter Drive NAME (anything) : ")
id = input(" Enter Drive ID : ")
index = input(" Enter Drive INDEX URL (optional) : ")
if not name or not id:
print("\n\n ERROR : Don't leave the name/id empty.")
exit(1)
name=name.replace(" ", "_")
if index:
if index[-1] == "/":
index = index[:-1]
else:
index = ''
msg += f"{name} {id} {index}\n"
with open('drive_folder', 'w') as file:
file.truncate(0)
file.write(msg)
print("\n\n Done!")

199
extract Executable file

@@ -0,0 +1,199 @@
#!/bin/bash
if [ $# -lt 1 ]; then
echo "Usage: $(basename "$0") FILES"
exit 1
fi
extract() {
arg="$1"
cd "$(dirname "$arg")" || exit
case "$arg" in
*.tar.bz2)
tar xjf "$arg" --one-top-level
local code=$?
;;
*.tar.gz)
tar xzf "$arg" --one-top-level
local code=$?
;;
*.bz2)
bunzip2 "$arg"
local code=$?
;;
*.gz)
gunzip "$arg"
local code=$?
;;
*.tar)
tar xf "$arg" --one-top-level
local code=$?
;;
*.tbz2)
(tar xjf "$arg" --one-top-level)
local code=$?
;;
*.tgz)
tar xzf "$arg" --one-top-level
local code=$?
;;
*.tar.xz)
a_dir=$(expr "$arg" : '\(.*\).tar.xz')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.zip)
a_dir=$(expr "$arg" : '\(.*\).zip')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.7z)
a_dir=$(expr "$arg" : '\(.*\).7z')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.Z)
uncompress "$arg"
local code=$?
;;
*.rar)
a_dir=$(expr "$arg" : '\(.*\).rar')
mkdir "$a_dir"
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.iso)
a_dir=$(expr "$arg" : '\(.*\).iso')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.wim)
a_dir=$(expr "$arg" : '\(.*\).wim')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.cab)
a_dir=$(expr "$arg" : '\(.*\).cab')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.apm)
a_dir=$(expr "$arg" : '\(.*\).apm')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.arj)
a_dir=$(expr "$arg" : '\(.*\).arj')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.chm)
a_dir=$(expr "$arg" : '\(.*\).chm')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.cpio)
a_dir=$(expr "$arg" : '\(.*\).cpio')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.cramfs)
a_dir=$(expr "$arg" : '\(.*\).cramfs')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.deb)
a_dir=$(expr "$arg" : '\(.*\).deb')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.dmg)
a_dir=$(expr "$arg" : '\(.*\).dmg')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.fat)
a_dir=$(expr "$arg" : '\(.*\).fat')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.hfs)
a_dir=$(expr "$arg" : '\(.*\).hfs')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.lzh)
a_dir=$(expr "$arg" : '\(.*\).lzh')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.lzma)
a_dir=$(expr "$arg" : '\(.*\).lzma')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.lzma2)
a_dir=$(expr "$arg" : '\(.*\).lzma2')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.mbr)
a_dir=$(expr "$arg" : '\(.*\).mbr')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.msi)
a_dir=$(expr "$arg" : '\(.*\).msi')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.mslz)
a_dir=$(expr "$arg" : '\(.*\).mslz')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.nsis)
a_dir=$(expr "$arg" : '\(.*\).nsis')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.ntfs)
a_dir=$(expr "$arg" : '\(.*\).ntfs')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.rpm)
a_dir=$(expr "$arg" : '\(.*\).rpm')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.squashfs)
a_dir=$(expr "$arg" : '\(.*\).squashfs')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.udf)
a_dir=$(expr "$arg" : '\(.*\).udf')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.vhd)
a_dir=$(expr "$arg" : '\(.*\).vhd')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*.xar)
a_dir=$(expr "$arg" : '\(.*\).xar')
7z x "$arg" -o"$a_dir"
local code=$?
;;
*)
echo "'$arg' cannot be extracted via extract()" 1>&2
exit 1
;;
esac
cd - || exit $?
exit $code
}
extract "$1"

351
gen_sa_accounts.py Normal file

@@ -0,0 +1,351 @@
import errno
import os
import pickle
import sys
from argparse import ArgumentParser
from base64 import b64decode
from glob import glob
from json import loads
from random import choice
from time import sleep
from google.auth.transport.requests import Request
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/cloud-platform',
'https://www.googleapis.com/auth/iam']
project_create_ops = []
current_key_dump = []
sleep_time = 30
# Create count SAs in project
def _create_accounts(service, project, count):
batch = service.new_batch_http_request(callback=_def_batch_resp)
for _ in range(count):
aid = _generate_id('mfc-')
batch.add(service.projects().serviceAccounts().create(name='projects/' + project, body={'accountId': aid,
'serviceAccount': {
'displayName': aid}}))
batch.execute()
# Create accounts needed to fill project
def _create_remaining_accounts(iam, project):
print('Creating accounts in %s' % project)
sa_count = len(_list_sas(iam, project))
while sa_count != 100:
_create_accounts(iam, project, 100 - sa_count)
sa_count = len(_list_sas(iam, project))
# Generate a random id
def _generate_id(prefix='saf-'):
chars = '-abcdefghijklmnopqrstuvwxyz1234567890'
return prefix + ''.join(choice(chars) for _ in range(25)) + choice(chars[1:])
# List projects using service
def _get_projects(service):
return [i['projectId'] for i in service.projects().list().execute()['projects']]
# Default batch callback handler
def _def_batch_resp(id, resp, exception):
if exception is not None:
if str(exception).startswith('<HttpError 429'):
sleep(sleep_time / 100)
else:
print(str(exception))
# Project Creation Batch Handler
def _pc_resp(id, resp, exception):
global project_create_ops
if exception is not None:
print(str(exception))
else:
for i in resp.values():
project_create_ops.append(i)
# Project Creation
def _create_projects(cloud, count):
global project_create_ops
batch = cloud.new_batch_http_request(callback=_pc_resp)
new_projs = []
for _ in range(count):
new_proj = _generate_id()
new_projs.append(new_proj)
batch.add(cloud.projects().create(body={'project_id': new_proj}))
batch.execute()
for i in project_create_ops:
while True:
resp = cloud.operations().get(name=i).execute()
if 'done' in resp and resp['done']:
break
sleep(3)
return new_projs
# Enable the services listed in ste for each project in projects
def _enable_services(service, projects, ste):
batch = service.new_batch_http_request(callback=_def_batch_resp)
for i in projects:
for j in ste:
batch.add(service.services().enable(name='projects/%s/services/%s' % (i, j)))
batch.execute()
# List SAs in project
def _list_sas(iam, project):
resp = iam.projects().serviceAccounts().list(name='projects/' + project, pageSize=100).execute()
if 'accounts' in resp:
return resp['accounts']
return []
# Create Keys Batch Handler
def _batch_keys_resp(id, resp, exception):
global current_key_dump
if exception is not None:
current_key_dump = None
sleep(sleep_time / 100)
elif current_key_dump is None:
sleep(sleep_time / 100)
else:
current_key_dump.append((
resp['name'][resp['name'].rfind('/'):],
b64decode(resp['privateKeyData']).decode('utf-8')
))
# Create Keys
def _create_sa_keys(iam, projects, path):
global current_key_dump
for i in projects:
current_key_dump = []
print('Downloading keys from %s' % i)
while current_key_dump is None or len(current_key_dump) != 100:
batch = iam.new_batch_http_request(callback=_batch_keys_resp)
total_sas = _list_sas(iam, i)
for j in total_sas:
batch.add(iam.projects().serviceAccounts().keys().create(
name='projects/%s/serviceAccounts/%s' % (i, j['uniqueId']),
body={
'privateKeyType': 'TYPE_GOOGLE_CREDENTIALS_FILE',
'keyAlgorithm': 'KEY_ALG_RSA_2048'
}
))
batch.execute()
if current_key_dump is None:
print('Redownloading keys from %s' % i)
current_key_dump = []
else:
for index, j in enumerate(current_key_dump):
with open(f'{path}/{index}.json', 'w+') as f:
f.write(j[1])
# Delete Service Accounts
def _delete_sas(iam, project):
sas = _list_sas(iam, project)
batch = iam.new_batch_http_request(callback=_def_batch_resp)
for i in sas:
batch.add(iam.projects().serviceAccounts().delete(name=i['name']))
batch.execute()
def serviceaccountfactory(
credentials='credentials.json',
token='token_sa.pickle',
path=None,
list_projects=False,
list_sas=None,
create_projects=None,
max_projects=12,
enable_services=None,
services=['iam', 'drive'],
create_sas=None,
delete_sas=None,
download_keys=None
):
selected_projects = []
proj_id = loads(open(credentials, 'r').read())['installed']['project_id']
creds = None
if os.path.exists(token):
with open(token, 'rb') as t:
creds = pickle.load(t)
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(credentials, SCOPES)
# creds = flow.run_local_server(port=0)
creds = flow.run_console()
with open(token, 'wb') as t:
pickle.dump(creds, t)
cloud = build('cloudresourcemanager', 'v1', credentials=creds)
iam = build('iam', 'v1', credentials=creds)
serviceusage = build('serviceusage', 'v1', credentials=creds)
projs = None
while projs is None:
try:
projs = _get_projects(cloud)
except HttpError as e:
if loads(e.content.decode('utf-8'))['error']['status'] == 'PERMISSION_DENIED':
try:
serviceusage.services().enable(
name='projects/%s/services/cloudresourcemanager.googleapis.com' % proj_id).execute()
except HttpError as e:
print(e._get_reason())
input('Press Enter to retry.')
if list_projects:
return _get_projects(cloud)
if list_sas:
return _list_sas(iam, list_sas)
if create_projects:
print("create projects: {}".format(create_projects))
if create_projects > 0:
current_count = len(_get_projects(cloud))
if current_count + create_projects <= max_projects:
print('Creating %d projects' % (create_projects))
nprjs = _create_projects(cloud, create_projects)
selected_projects = nprjs
else:
sys.exit('No, you cannot create %d new project(s).\n'
'Please reduce the value of --quick-setup.\n'
'Remember that you can create a total of %d projects (%d already exist).\n'
'Please do not delete existing projects unless you know what you are doing' % (
create_projects, max_projects, current_count))
else:
print('Will overwrite all service accounts in existing projects.\n'
'So make sure you have some projects already.')
input("Press Enter to continue...")
if enable_services:
ste = [enable_services]
if enable_services == '~':
ste = selected_projects
elif enable_services == '*':
ste = _get_projects(cloud)
services = [i + '.googleapis.com' for i in services]
print('Enabling services')
_enable_services(serviceusage, ste, services)
if create_sas:
stc = [create_sas]
if create_sas == '~':
stc = selected_projects
elif create_sas == '*':
stc = _get_projects(cloud)
for i in stc:
_create_remaining_accounts(iam, i)
if download_keys:
try:
os.mkdir(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
std = [download_keys]
if download_keys == '~':
std = selected_projects
elif download_keys == '*':
std = _get_projects(cloud)
_create_sa_keys(iam, std, path)
if delete_sas:
std = []
std.append(delete_sas)
if delete_sas == '~':
std = selected_projects
elif delete_sas == '*':
std = _get_projects(cloud)
for i in std:
print('Deleting service accounts in %s' % i)
_delete_sas(iam, i)
if __name__ == '__main__':
parse = ArgumentParser(description='A tool to create Google service accounts.')
parse.add_argument('--path', '-p', default='accounts',
help='Specify an alternate directory to output the credential files.')
parse.add_argument('--token', default='token_sa.pickle', help='Specify the pickle token file path.')
parse.add_argument('--credentials', default='credentials.json', help='Specify the credentials file path.')
parse.add_argument('--list-projects', default=False, action='store_true',
help='List projects viewable by the user.')
parse.add_argument('--list-sas', default=False, help='List service accounts in a project.')
parse.add_argument('--create-projects', type=int, default=None, help='Creates up to N projects.')
parse.add_argument('--max-projects', type=int, default=12, help='Max amount of project allowed. Default: 12')
parse.add_argument('--enable-services', default=None,
help='Enables services on the project. Default: IAM and Drive')
parse.add_argument('--services', nargs='+', default=['iam', 'drive'],
help='Specify a different set of services to enable. Overrides the default.')
parse.add_argument('--create-sas', default=None, help='Create service accounts in a project.')
parse.add_argument('--delete-sas', default=None, help='Delete service accounts in a project.')
parse.add_argument('--download-keys', default=None, help='Download keys for all the service accounts in a project.')
    parse.add_argument('--quick-setup', default=None, type=int,
                       help='Create projects, enable services, create service accounts and download keys.')
    parse.add_argument('--new-only', default=False, action='store_true', help='Do not use existing projects.')
args = parse.parse_args()
# If credentials file is invalid, search for one.
if not os.path.exists(args.credentials):
options = glob('*.json')
print('No credentials found at %s. Please enable the Drive API in:\n'
'https://developers.google.com/drive/api/v3/quickstart/python\n'
'and save the json file as credentials.json' % args.credentials)
if len(options) < 1:
exit(-1)
else:
print('Select a credentials file below.')
inp_options = [str(i) for i in list(range(1, len(options) + 1))] + options
for i in range(len(options)):
print(' %d) %s' % (i + 1, options[i]))
inp = None
while True:
inp = input('> ')
if inp in inp_options:
break
args.credentials = inp if inp in options else options[int(inp) - 1]
print('Use --credentials %s next time to use this credentials file.' % args.credentials)
if args.quick_setup:
opt = '~' if args.new_only else '*'
args.services = ['iam', 'drive']
args.create_projects = args.quick_setup
args.enable_services = opt
args.create_sas = opt
args.download_keys = opt
resp = serviceaccountfactory(
path=args.path,
token=args.token,
credentials=args.credentials,
list_projects=args.list_projects,
list_sas=args.list_sas,
create_projects=args.create_projects,
max_projects=args.max_projects,
create_sas=args.create_sas,
delete_sas=args.delete_sas,
enable_services=args.enable_services,
services=args.services,
download_keys=args.download_keys
)
if resp is not None:
if args.list_projects:
if resp:
print('Projects (%d):' % len(resp))
for i in resp:
print(' ' + i)
else:
print('No projects.')
elif args.list_sas:
if resp:
print('Service accounts in %s (%d):' % (args.list_sas, len(resp)))
for i in resp:
print(' %s (%s)' % (i['email'], i['uniqueId']))
else:
print('No service accounts.')

26
generate_drive_token.py Normal file

@ -0,0 +1,26 @@
import pickle
import os
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
credentials = None
__G_DRIVE_TOKEN_FILE = "token.pickle"
__OAUTH_SCOPE = ["https://www.googleapis.com/auth/drive"]
if os.path.exists(__G_DRIVE_TOKEN_FILE):
with open(__G_DRIVE_TOKEN_FILE, 'rb') as f:
credentials = pickle.load(f)
if not credentials or not credentials.valid:
    if credentials and credentials.expired and credentials.refresh_token:
        credentials.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            'credentials.json', __OAUTH_SCOPE)
        credentials = flow.run_console(port=0)
    # Save the credentials for the next run
    with open(__G_DRIVE_TOKEN_FILE, 'wb') as token:
        pickle.dump(credentials, token)
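Later runs reuse the saved token.pickle instead of repeating the OAuth flow. A minimal sketch of that load step (`load_saved_credentials` is an illustrative helper, not part of this file; building a Drive service additionally needs `google-api-python-client` from requirements.txt):

```python
import os
import pickle

def load_saved_credentials(token_file='token.pickle'):
    """Return the credentials pickled by this script, or None if absent."""
    if not os.path.exists(token_file):
        return None
    with open(token_file, 'rb') as f:
        return pickle.load(f)

# creds = load_saved_credentials()
# service = googleapiclient.discovery.build('drive', 'v3', credentials=creds)
```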

273
helper.sh Normal file

@ -0,0 +1,273 @@
#! /bin/bash
# Made with ❤ by @SpeedIndeed - Telegram
printf "This is an interactive script that will help you in deploying almost any mirrorbot. What do you want to do?
1) Deploying first time
2) Redeploying but already have credentials.json, token.pickle and SA folder (optional)
3) Check if appname is available
4) Just committing changes to existing repo\n"
while true; do
read -p "Select one of the following: " choice
case $choice in
"1")
echo -e "First, we will create credentials.json"
echo -e "For that, follow TUTORIAL 2 given in this post: https://telegra.ph/Deploying-your-own-Mirrorbot-10-19#TUTORIAL-2"
echo -e "If this script closes in between, just re-run it. \n"
for (( ; ; ))
do
read -p "After adding credentials.json, Press y : " cred
if [ "$cred" = y -o "$cred" = Y ] ; then
break
else
echo -e "Then do it first! \n"
fi
done
echo -e "\nNow we will log in to Heroku"
echo
for (( ; ; ))
do
echo -e "Enter your Heroku credentials: \n"
heroku login -i
status=$?
if test $status -eq 0; then
echo -e "Signed in successfully \n"
break
fi
echo -e "Invalid credentials, try again \n"
done
for (( ; ; ))
do
read -p "Enter unique appname for your bot: " bname
heroku create $bname
status=$?
if test $status -eq 0; then
echo -e "App created successfully \n"
break
fi
echo -e "Appname is already taken, choose another one \n"
done
heroku git:remote -a $bname
heroku stack:set container -a $bname
pip3 install -r requirements-cli.txt
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
echo -e "\nNow we will create token.pickle. Follow the instructions given. \n"
sleep 5
python -m pip install google-auth-oauthlib
python3 generate_drive_token.py
sleep 5
echo -e "\nService Accounts (SA) help you bypass daily 750GB limit when you want to upload to Shared Drive/Team Drive (TD). Keeping this in mind, select one of the following: \n"
echo -e "1) You don't have SA but want to use them? \n"
echo -e "2) You already have SA and want to use them? \n"
echo -e "3) You don't want to add SA \n"
read -p "Enter your choice: " sa
sleep 3
if [ "$sa" = 1 ] ; then
python -m pip install progress
python3 gen_sa_accounts.py --list-projects
echo -e "Choose the project id which contains credentials.json, that way you can avoid the mess of multiple projects \n"
echo
read -p "Project id: " pid
python3 gen_sa_accounts.py --enable-services $pid
python3 gen_sa_accounts.py --create-sas $pid
python3 gen_sa_accounts.py --download-keys $pid
echo
fi
if [ "$sa" = 2 ] ; then
python3 gen_sa_accounts.py --list-projects
echo -e "Choose the project id which contains SA \n"
echo
read -p "Project id: " pid
python3 gen_sa_accounts.py --download-keys $pid
echo
fi
if [ "$sa" = 1 -o "$sa" = 2 ] ; then
echo -e "As you can see, a folder named 'accounts' has been created and contains 100 SA. Now, how do you want to add these SA to your TD? \n"
echo -e "1) Directly add them to the TD \n"
echo -e "2) Make a Google Group and add all SA to it \n"
while true ; do
read -p "Enter your choice: " way
case $way in
"1")
echo "Enter your Team Drive id"
echo -e "(HINT- If your TD link is like 'https://drive.google.com/drive/folders/0ACYsMW75QbTSUk9PVA' then your TD id = 0ACYsMW75QbTSUk9PVA) \n"
read -p "TD id: " id
python3 add_to_team_drive.py -d $id
echo -e "Now you can go to your TD and see that 100 SA have been added \n"
echo -e "Don't forget to set USE_SERVICE_ACCOUNTS to 'True' \n"
break
;;
"2")
cd accounts
grep -oPh '"client_email": "\K[^"]+' *.json > emails.txt
cd -
echo -e "For that, follow TUTORIAL 3 given in this post: https://telegra.ph/Deploying-your-own-Mirrorbot-10-19#TUTORIAL-3 \n"
for (( ; ; ))
do
read -p "After completing the Tutorial, delete emails.txt and Press y : " tut
if [ "$tut" = y -o "$tut" = Y ] ; then
break
else
echo -e "Then complete it first! \n"
fi
done
break
;;
*)
echo -e "Invalid choice \n"
;;
esac
done
fi
if [ "$sa" = 3 ] ; then
echo -e "\nNo problem, let's proceed further \n"
fi
for (( ; ; ))
do
read -p "Confirm that you have filled all required vars in config.env by pressing y : " conf
if [ "$conf" = y -o "$conf" = Y ] ; then
echo -e "\nSo let's proceed further \n"
echo
echo -e "Now we will push this repo to Heroku. For that: \n"
read -p "Enter the email you used for your Heroku account: " mail
read -p "Enter your name: " name
echo -e "\nIt is suggested to deploy the bot more than once, as this ensures that Heroku does not suspend the app."
echo -e "For safety, the app will be deployed 2 times. \n"
sleep 3
heroku git:remote -a $bname
heroku stack:set container -a $bname
git add -f .
git config --global user.email "$mail"
git config --global user.name "$name"
git commit -m "Deploy number 1"
git push heroku master --force
heroku apps:destroy -c $bname
echo -e "\nDeploy number 2"
sleep 3
heroku create $bname
heroku git:remote -a $bname
heroku stack:set container -a $bname
git add -f .
git config --global user.email "$mail"
git config --global user.name "$name"
git commit -m "Deploy number 2"
git push heroku master --force
heroku ps:scale web=0 -a $bname
heroku ps:scale web=1 -a $bname
break
else
echo -e "Then do it first! \n"
fi
done
break
;;
"2")
echo -e "First, we will log in to Heroku \n"
echo
for (( ; ; ))
do
echo -e "Enter your Heroku credentials: \n"
heroku login -i
status=$?
if test $status -eq 0; then
echo -e "Signed in successfully \n"
break
fi
echo -e "Invalid credentials, try again \n"
done
for (( ; ; ))
do
read -p "After adding credentials.json, token.pickle, SA folder (optional) and all necessary vars in config.env, press y: " req
if [ "$req" = y -o "$req" = Y ] ; then
for (( ; ; ))
do
read -p "Enter unique appname for your bot: " bname
heroku create $bname
status=$?
if test $status -eq 0; then
echo -e "App created successfully \n"
break
fi
echo -e "Appname is already taken, choose another one \n"
done
echo -e "Now we will push this repo to Heroku. For that: \n"
read -p "Enter the email you used for your Heroku account: " mail
read -p "Enter your name: " name
echo -e "\nIt is suggested to deploy the bot more than once, as this ensures that Heroku does not suspend the app."
echo -e "For safety, the app will be deployed 2 times. \n"
sleep 3
heroku git:remote -a $bname
heroku stack:set container -a $bname
git add -f .
git config --global user.email "$mail"
git config --global user.name "$name"
git commit -m "Deploy number 1"
git push heroku master --force
heroku apps:destroy -c $bname
echo -e "\nDeploy number 2"
sleep 3
heroku create $bname
heroku git:remote -a $bname
heroku stack:set container -a $bname
git add -f .
git config --global user.email "$mail"
git config --global user.name "$name"
git commit -m "Deploy number 2"
git push heroku master --force
heroku ps:scale web=0 -a $bname
heroku ps:scale web=1 -a $bname
break
else
echo -e "Then add it first! \n"
fi
done
break
;;
"3")
echo -e "\nFirst, we will log in to Heroku"
echo
for (( ; ; ))
do
echo -e "Enter your Heroku credentials: \n"
heroku login -i
status=$?
if test $status -eq 0; then
echo -e "Signed in successfully \n"
break
fi
echo -e "Invalid credentials, try again \n"
done
for (( ; ; ))
do
read -p "Enter unique appname for your bot: " bname
heroku create $bname
status=$?
if test $status -eq 0; then
echo -e "App created successfully \n"
break
fi
echo -e "Appname is already taken, choose another one \n"
done
heroku apps:destroy -c $bname
echo -e "Now use this appname in BASE_URL_OF_BOT var like https://appname.herokuapp.com"
break
;;
"4")
read -p "Enter commit description in one line: " c_des
git add -f .
git commit -m "$c_des"
git push heroku master --force
break
;;
*)
echo -e "Invalid Choice \n"
;;
esac
done
echo "Task completed successfully"

5
heroku.yml Normal file

@ -0,0 +1,5 @@
build:
docker:
web: Dockerfile
run:
web: bash start.sh

119
nodes.py Normal file

@ -0,0 +1,119 @@
# -*- coding: utf-8 -*-
# (c) YashDK [yash-dk@github]
from anytree import NodeMixin, RenderTree
SIZE_UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
class TorNode(NodeMixin):
def __init__(self, name, is_folder=False, is_file=False, parent=None, progress=None, size=None, priority=None, file_id=None):
super().__init__()
self.name = name
self.is_folder = is_folder
self.is_file = is_file
if parent is not None:
self.parent = parent
if progress is not None:
self.progress = progress
if size is not None:
self.size = size
if priority is not None:
self.priority = priority
if file_id is not None:
self.file_id = file_id
def get_folders(path):
    path_separator = "/"
    return path.split(path_separator)
def make_tree(res):
    """This function takes the list of all the torrent files. The files are named hierarchically.
    Felt a need to document this to save time.
    Args:
        res (list): Torrent files list.
    Returns:
        TorNode: Root node of the constructed tree, which can be used further.
    """
parent = TorNode("Torrent")
for l, i in enumerate(res):
# Get the hierarchy of the folders by splitting based on '/'
folders = get_folders(i.name)
        # Check if the file is alone or if it's inside a folder
if len(folders) > 1:
# Enter here if in folder
# Set the parent
previous_node = parent
# Traverse till second last assuming the last is a file.
for j in range(len(folders)-1):
current_node = None
# As we are traversing the folder from top to bottom we are searching
# the first folder (folders list) under the parent node in first iteration.
# If the node is found then it becomes the current node else the current node
# is left None.
for k in previous_node.children:
if k.name == folders[j]:
current_node = k
break
# if the node is not found then create the folder node
# if the node is found then use it as base for the next
if current_node is None:
                    previous_node = TorNode(folders[j], parent=previous_node, is_folder=True)
else:
previous_node = current_node
# at this point the previous_node will contain the deepest folder in it so add the file to it
            TorNode(folders[-1], is_file=True, parent=previous_node, progress=i.progress, size=i.size, priority=i.priority, file_id=l)
else:
            # add the file to the parent if no folders are there
            TorNode(folders[-1], is_file=True, parent=parent, progress=i.progress, size=i.size, priority=i.priority, file_id=l)
return parent
def print_tree(parent):
for pre, _, node in RenderTree(parent):
treestr = u"%s%s" % (pre, node.name)
print(treestr.ljust(8), node.is_folder, node.is_file)
def create_list(par, msg):
if par.name != ".unwanted":
msg[0] += "<ul>"
for i in par.children:
if i.is_folder:
msg[0] += "<li>"
if i.name != ".unwanted":
msg[0] += f"<input type=\"checkbox\" name=\"foldernode_{msg[1]}\"> <label for=\"{i.name}\">{i.name}</label>"
create_list(i,msg)
msg[0] += "</li>"
msg[1] += 1
else:
msg[0] += "<li>"
if i.priority == 0:
msg[0] += f"<input type=\"checkbox\" name=\"filenode_{i.file_id}\"> <label for=\"filenode_{i.file_id}\">{i.name} - {get_readable_file_size(i.size)}</label>"
else:
msg[0] += f"<input type=\"checkbox\" checked name=\"filenode_{i.file_id}\"> <label for=\"filenode_{i.file_id}\">{i.name} - {get_readable_file_size(i.size)}</label>"
msg[0] += f"<input type=\"hidden\" value=\"off\" name=\"filenode_{i.file_id}\">"
msg[0] += "</li>"
if par.name != ".unwanted":
msg[0] += "</ul>"
def get_readable_file_size(size_in_bytes) -> str:
if size_in_bytes is None:
return '0B'
index = 0
while size_in_bytes >= 1024:
size_in_bytes /= 1024
index += 1
try:
return f'{round(size_in_bytes, 2)}{SIZE_UNITS[index]}'
except IndexError:
return 'File too large'
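The hierarchy handling in make_tree() boils down to splitting each path on '/' and nesting one level per folder. A stdlib-only sketch of the same idea (`build_tree` is illustrative and does not use anytree or the TorNode class above):

```python
def build_tree(paths):
    """Nest dicts per folder; a leaf value of None marks a file."""
    root = {}
    for path in paths:
        node = root
        parts = path.split('/')
        for folder in parts[:-1]:
            node = node.setdefault(folder, {})
        node[parts[-1]] = None
    return root

tree = build_tree(['Show/S01/E01.mkv', 'Show/S01/E02.mkv', 'readme.txt'])
print(tree)  # {'Show': {'S01': {'E01.mkv': None, 'E02.mkv': None}}, 'readme.txt': None}
```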

200
pextract Normal file

@ -0,0 +1,200 @@
#!/bin/bash
if [ $# -lt 1 ]; then
echo "Usage: $(basename $0) FILES"
exit 1
fi
extract() {
arg="$1"
pswd="$2"
cd "$(dirname "$arg")" || exit
case "$arg" in
*.tar.bz2)
tar xjf "$arg" --one-top-level
local code=$?
;;
*.tar.gz)
tar xzf "$arg" --one-top-level
local code=$?
;;
*.bz2)
bunzip2 "$arg"
local code=$?
;;
*.gz)
gunzip "$arg"
local code=$?
;;
*.tar)
tar xf "$arg" --one-top-level
local code=$?
;;
*.tbz2)
tar xjf "$arg" --one-top-level
local code=$?
;;
*.tgz)
tar xzf "$arg" --one-top-level
local code=$?
;;
*.tar.xz)
a_dir="${arg%.tar.xz}"
7z x "$arg" -o"$a_dir" -p"$pswd"
local code=$?
;;
*.Z)
uncompress "$arg"
local code=$?
;;
*.rar)
a_dir="${arg%.rar}"
mkdir "$a_dir"
7z x "$arg" -o"$a_dir" -p"$pswd"
local code=$?
;;
# Every remaining single-extension format is handled identically:
# 7z extracts into a directory named after the archive, passing the password.
*.zip|*.7z|*.iso|*.wim|*.cab|*.apm|*.arj|*.chm|*.cpio|*.cramfs|*.deb|*.dmg|*.fat|*.hfs|*.lzh|*.lzma|*.lzma2|*.mbr|*.msi|*.mslz|*.nsis|*.ntfs|*.rpm|*.squashfs|*.udf|*.vhd|*.xar)
a_dir="${arg%.*}"
7z x "$arg" -o"$a_dir" -p"$pswd"
local code=$?
;;
*)
echo "'$arg' cannot be extracted via extract()" 1>&2
exit 1
;;
esac
cd - || exit $?
exit $code
}
extract "$1" "$2"


@ -0,0 +1,36 @@
[LegalNotice]
Accepted=true
[BitTorrent]
Session\AsyncIOThreadsCount=8
Session\SlowTorrentsDownloadRate=100
Session\SlowTorrentsInactivityTimer=600
[Preferences]
Advanced\AnnounceToAllTrackers=true
Advanced\AnonymousMode=false
Advanced\IgnoreLimitsLAN=true
Advanced\RecheckOnCompletion=false
Advanced\LtTrackerExchange=true
Bittorrent\AddTrackers=false
Bittorrent\MaxConnecs=-1
Bittorrent\MaxConnecsPerTorrent=-1
Bittorrent\MaxUploads=-1
Bittorrent\MaxUploadsPerTorrent=-1
Bittorrent\DHT=true
Bittorrent\DHTPort=6881
Bittorrent\PeX=true
Bittorrent\LSD=true
Bittorrent\sameDHTPortAsBT=true
Downloads\DiskWriteCacheSize=32
Downloads\PreAllocation=true
Downloads\UseIncompleteExtension=true
General\PreventFromSuspendWhenDownloading=true
Queueing\IgnoreSlowTorrents=true
Queueing\MaxActiveDownloads=100
Queueing\MaxActiveTorrents=50
Queueing\MaxActiveUploads=50
Queueing\QueueingEnabled=false
WebUI\Enabled=true
WebUI\Port=8090
WebUI\LocalHostAuth=false

7
requirements-cli.txt Normal file

@ -0,0 +1,7 @@
oauth2client
google-api-python-client
progress
progressbar2
httplib2shim
google_auth_oauthlib
pyrogram

33
requirements.txt Normal file

@ -0,0 +1,33 @@
aiohttp
anytree
aria2p
appdirs
attrdict
beautifulsoup4
cloudscraper
feedparser
google-api-python-client
google-auth-httplib2
google-auth-oauthlib
gunicorn
js2py
lk21
lxml
pillow
psutil
psycopg2-binary
pybase64
pyrogram
pyshorteners
python-dotenv
python-magic
python-telegram-bot
qbittorrent-api
requests
speedtest-cli
telegraph
tenacity
TgCrypto
torrentool==1.1.0
urllib3
yt_dlp

1
start.sh Executable file

@ -0,0 +1 @@
python3 update.py && python3 -m bot

64
update.py Normal file

@ -0,0 +1,64 @@
import os
import subprocess
import requests
import logging
from dotenv import load_dotenv
if os.path.exists('log.txt'):
with open('log.txt', 'r+') as f:
f.truncate(0)
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[logging.FileHandler('log.txt'), logging.StreamHandler()],
level=logging.INFO)
CONFIG_FILE_URL = os.environ.get('CONFIG_FILE_URL', None)
try:
if len(CONFIG_FILE_URL) == 0:
raise TypeError
except TypeError:
CONFIG_FILE_URL = None
if CONFIG_FILE_URL is not None:
try:
res = requests.get(CONFIG_FILE_URL)
if res.status_code == 200:
            with open('config.env', 'wb+') as f:
                f.write(res.content)
else:
logging.error(f"Failed to download config.env {res.status_code}")
except Exception as e:
logging.error(str(e))
load_dotenv('config.env', override=True)
UPSTREAM_REPO = os.environ.get('UPSTREAM_REPO', None)
try:
if len(UPSTREAM_REPO) == 0:
raise TypeError
except TypeError:
UPSTREAM_REPO = None
if UPSTREAM_REPO is not None:
if not os.path.exists('.git'):
        subprocess.run(f"git init -q \
                     && git config --global user.email e.anastayyar@gmail.com \
                     && git config --global user.name mltb \
                     && git add . \
                     && git commit -sm update -q \
                     && git remote add origin {UPSTREAM_REPO} \
                     && git fetch origin -q \
                     && git reset --hard origin/master -q", shell=True)
else:
        subprocess.run(f"git init -q \
                     && git config --global user.email e.anastayyar@gmail.com \
                     && git config --global user.name mltb \
                     && git add . \
                     && git commit -m update -q \
                     && git remote rm origin \
                     && git remote add origin {UPSTREAM_REPO} \
                     && git fetch origin -q \
                     && git reset --hard origin/master -q", shell=True)
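Both environment variables above go through the same "empty string counts as unset" dance via len()/TypeError. The check can be written directly; a small sketch (`env_or_none` is an illustrative helper, not part of update.py):

```python
import os

def env_or_none(name):
    """Return the variable's value, or None when it is unset or empty."""
    value = os.environ.get(name)
    return value if value else None

os.environ['DEMO_URL'] = ''
print(env_or_none('DEMO_URL'))  # None
```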

716
wserver.py Normal file

@ -0,0 +1,716 @@
# -*- coding: utf-8 -*-
# (c) YashDK [yash-dk@github]
# Redesigned By - @bipuldey19 (https://github.com/SlamDevs/slam-mirrorbot/commit/1e572f4fa3625ecceb953ce6d3e7cf7334a4d542#diff-c3d91f56f4c5d8b5af3d856d15a76bd5f00aa38d712691b91501734940761bdd)
import logging
import qbittorrentapi as qba
import asyncio
from aiohttp import web
import nodes
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[logging.FileHandler('log.txt'), logging.StreamHandler()],
level=logging.INFO)
LOGGER = logging.getLogger(__name__)
routes = web.RouteTableDef()
page = """
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Torrent File Selector</title>
<link rel="icon" href="https://telegra.ph/file/cc06d0c613491080cc174.png" type="image/jpg">
<script
src="https://code.jquery.com/jquery-3.5.1.slim.min.js"
integrity="sha256-4+XzXVhsDmqanXGHaHvgh1gMQKX40OUvDEBTu8JcmNs="
crossorigin="anonymous"
></script>
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link
href="https://fonts.googleapis.com/css2?family=Ubuntu:ital,wght@0,300;0,400;0,500;0,700;1,300;1,400;1,500;1,700&display=swap"
rel="stylesheet"
/>
<link
rel="stylesheet"
href="https://pro.fontawesome.com/releases/v5.10.0/css/all.css"
integrity="sha384-AYmEC3Yw5cVb3ZcuHtOA93w35dYTsvhLPVnYs9eStHfGJvOvKxVfELGroGkvsg+p"
crossorigin="anonymous"
/>
<style>
*{
margin: 0;
padding: 0;
box-sizing: border-box;
font-family: "Ubuntu", sans-serif;
list-style: none;
text-decoration: none;
outline: none !important;
color: white;
}
body{
background-color: #0D1117;
}
header{
margin: 3vh 1vw;
padding: 0.5rem 1rem 0.5rem 1rem;
display: flex;
align-items: center;
justify-content: space-between;
border-bottom: #161B22;
border-radius: 30px;
background-color: #161B22;
border: 2px solid rgba(255, 255, 255, 0.11);
}
header:hover, section:hover{
box-shadow: 0px 0px 15px black;
}
.brand{
display: flex;
align-items: center;
}
img{
width: 2.5rem;
height: 2.5rem;
border: 2px solid black;
border-radius: 50%;
}
.name{
margin-left: 1vw;
font-size: 1.5rem;
}
.intro{
text-align: center;
margin-bottom: 2vh;
margin-top: 1vh;
}
.social a{
font-size: 1.5rem;
padding-left: 1vw;
}
.social a:hover, .brand:hover{
filter: invert(0.3);
}
section{
margin: 0vh 1vw;
margin-bottom: 10vh;
padding: 1vh 3vw;
display: flex;
flex-direction: column;
border: 2px solid rgba(255, 255, 255, 0.11);
border-radius: 20px;
background-color: #161B22 ;
}
li:nth-child(1){
padding: 1rem 1rem 0.5rem 1rem;
}
li:nth-child(n+1){
padding-left: 1rem;
}
li label{
padding-left: 0.5rem;
}
li{
padding-bottom: 0.5rem;
}
span{
margin-right: 0.5rem;
cursor: pointer;
user-select: none;
transition: transform 200ms ease-out;
}
span.active{
transform: rotate(90deg);
-ms-transform: rotate(90deg); /* for IE */
-webkit-transform: rotate(90deg);/* for browsers supporting webkit (such as chrome, firefox, safari etc.). */
display: inline-block;
}
ul{
margin: 1vh 1vw 1vh 1vw;
padding: 0 0 0.5rem 0;
border: 2px solid black;
border-radius: 20px;
background-color: #1c2129;
overflow: hidden;
}
input[type="checkbox"]{
cursor: pointer;
user-select: none;
}
input[type="submit"] {
border-radius: 20px;
margin: 2vh auto 1vh auto;
width: 50%;
display: block;
height: 5.5vh;
border: 2px solid rgba(255, 255, 255, 0.11);
background-color: #0D1117;
font-size: 16px;
font-weight: 500;
}
input[type="submit"]:hover, input[type="submit"]:focus{
background-color: rgba(255, 255, 255, 0.068);
cursor: pointer;
}
@media (max-width: 768px){
input[type="submit"]{
width: 100%;
}
}
#treeview .parent {
position: relative;
}
#treeview .parent > ul {
display: none;
}
</style>
</head>
<body>
<!--© Designed and coded by @bipuldey19-Telegram-->
<header>
<div class="brand">
<img
src="https://telegra.ph/file/cc06d0c613491080cc174.png"
alt="logo"
/>
<a href="https://t.me/mirrorLeechGroup">
<h2 class="name">Qbittorrent Selection</h2>
</a>
</div>
<div class="social">
<a href="https://www.github.com/anasty17/mirror-leech-telegram-bot"><i class="fab fa-github"></i></a>
<a href="https://t.me/mirrorLeechGroup"><i class="fab fa-telegram"></i></a>
</div>
</header>
<section>
<h2 class="intro">Select the files you want to download</h2>
<form action="{form_url}" method="POST">
{My_content}
<input type="submit" name="Select these files ;)">
</form>
</section>
<script>
$(document).ready(function () {
var tags = $("li").filter(function () {
return $(this).find("ul").length !== 0;
});
tags.each(function () {
$(this).addClass("parent");
});
$("body").find("ul:first-child").attr("id", "treeview");
$(".parent").prepend("<span>▶</span>");
$("span").click(function (e) {
e.stopPropagation();
e.stopImmediatePropagation();
$(this).parent( ".parent" ).find(">ul").toggle("slow");
if ($(this).hasClass("active")) $(this).removeClass("active");
else $(this).addClass("active");
});
});
if(document.getElementsByTagName("ul").length >= 10){
var labels = document.querySelectorAll("label");
//Shorting the file/folder names
labels.forEach(function (label) {
if (label.innerText.toString().split(" ").length >= 6) {
let FirstPart = label.innerText
.toString()
.split(" ")
.slice(0, 3)
.join(" ");
let SecondPart = label.innerText
.toString()
.split(" ")
.splice(-3)
.join(" ");
label.innerText = `${FirstPart}... ${SecondPart}`;
}
if (label.innerText.toString().split(".").length >= 6) {
let first = label.innerText
.toString()
.split(".")
.slice(0, 3)
.join(" ");
let second = label.innerText
.toString()
.split(".")
.splice(-3)
.join(".");
label.innerText = `${first}... ${second}`;
}
});
}
</script>
<script>
$('input[type="checkbox"]').change(function(e) {
var checked = $(this).prop("checked"),
container = $(this).parent(),
siblings = container.siblings();
/*
$(this).attr('value', function(index, attr){
return attr == 'yes' ? 'noo' : 'yes';
});
*/
container.find('input[type="checkbox"]').prop({
indeterminate: false,
checked: checked
});
function checkSiblings(el) {
var parent = el.parent().parent(),
all = true;
el.siblings().each(function() {
let returnValue = all = ($(this).children('input[type="checkbox"]').prop("checked") === checked);
return returnValue;
});
if (all && checked) {
parent.children('input[type="checkbox"]').prop({
indeterminate: false,
checked: checked
});
checkSiblings(parent);
} else if (all && !checked) {
parent.children('input[type="checkbox"]').prop("checked", checked);
parent.children('input[type="checkbox"]').prop("indeterminate", (parent.find('input[type="checkbox"]:checked').length > 0));
checkSiblings(parent);
} else {
el.parents("li").children('input[type="checkbox"]').prop({
indeterminate: true,
checked: false
});
}
}
checkSiblings(container);
});
</script>
</body>
</html>
"""
code_page = """
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Torrent Code Checker</title>
<link rel="icon" href="https://telegra.ph/file/cc06d0c613491080cc174.png" type="image/jpg">
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link
href="https://fonts.googleapis.com/css2?family=Ubuntu:ital,wght@0,300;0,400;0,500;0,700;1,300;1,400;1,500;1,700&display=swap"
rel="stylesheet"
/>
<link
rel="stylesheet"
href="https://pro.fontawesome.com/releases/v5.10.0/css/all.css"
integrity="sha384-AYmEC3Yw5cVb3ZcuHtOA93w35dYTsvhLPVnYs9eStHfGJvOvKxVfELGroGkvsg+p"
crossorigin="anonymous"
/>
<style>
*{
margin: 0;
padding: 0;
box-sizing: border-box;
font-family: "Ubuntu", sans-serif;
list-style: none;
text-decoration: none;
color: white;
}
body{
background-color: #0D1117;
}
header{
margin: 3vh 1vw;
padding: 0.5rem 1rem 0.5rem 1rem;
display: flex;
align-items: center;
justify-content: space-between;
border-bottom: #161B22;
border-radius: 30px;
background-color: #161B22;
border: 2px solid rgba(255, 255, 255, 0.11);
}
header:hover, section:hover{
box-shadow: 0px 0px 15px black;
}
.brand{
display: flex;
align-items: center;
}
img{
width: 2.5rem;
height: 2.5rem;
border: 2px solid black;
border-radius: 50%;
}
.name{
color: white;
margin-left: 1vw;
font-size: 1.5rem;
}
.intro{
text-align: center;
margin-bottom: 2vh;
margin-top: 1vh;
}
.social a{
font-size: 1.5rem;
color: white;
padding-left: 1vw;
}
.social a:hover, .brand:hover{
filter: invert(0.3);
}
section{
margin: 0vh 1vw;
margin-bottom: 10vh;
padding: 1vh 3vw;
display: flex;
flex-direction: column;
border: 2px solid rgba(255, 255, 255, 0.11);
border-radius: 20px;
background-color: #161B22 ;
color: white;
}
section form{
display: flex;
margin-left: auto;
margin-right: auto;
flex-direction: column;
}
section div{
background-color: #0D1117;
border-radius: 20px;
max-width: fit-content;
padding: 0.7rem;
margin-top: 2vh;
}
section label{
font-size: larger;
font-weight: 500;
margin: 0 0 0.5vh 1.5vw;
display: block;
}
section input[type="text"]{
border-radius: 20px;
outline: none;
width: 50vw;
height: 4vh;
padding: 1rem;
margin: 0.5vh;
border: 2px solid rgba(255, 255, 255, 0.11);
background-color: #3e475531;
box-shadow: inset 0px 0px 10px black;
}
section input[type="text"]:focus{
border-color: rgba(255, 255, 255, 0.404);
}
section button{
border-radius: 20px;
margin-top: 1vh;
width: 100%;
height: 5.5vh;
border: 2px solid rgba(255, 255, 255, 0.11);
background-color: #0D1117;
color: white;
font-size: 16px;
font-weight: 500;
cursor: pointer;
transition: background-color 200ms ease;
}
section button:hover, section button:focus{
background-color: rgba(255, 255, 255, 0.068);
}
section span{
display: block;
font-size: x-small;
margin: 1vh;
font-weight: 100;
font-style: italic;
margin-left: 23%;
margin-right: auto;
margin-bottom: 2vh;
}
@media (max-width: 768px) {
section form{
flex-direction: column;
width: 90vw;
}
section div{
max-width: 100%;
margin-bottom: 1vh;
}
section label{
margin-left: 3vw;
margin-top: 1vh;
}
section input[type="text"]{
width: calc(100% - 0.3rem);
}
section button{
width: 100%;
height: 5vh;
display: block;
margin-left: auto;
margin-right: auto;
}
section span{
margin-left: 5%;
}
}
</style>
</head>
<body>
<!--© Designed and coded by @bipuldey19-Telegram-->
<header>
<div class="brand">
<img
src="https://telegra.ph/file/cc06d0c613491080cc174.png"
alt="logo"
/>
<a href="https://t.me/mirrorLeechGroup">
<h2 class="name">Qbittorrent Selection</h2>
</a>
</div>
<div class="social">
<a href="https://www.github.com/anasty17/mirror-leech-telegram-bot"><i class="fab fa-github"></i></a>
<a href="https://t.me/mirrorLeechGroup"><i class="fab fa-telegram"></i></a>
</div>
</header>
<section>
<form action="{form_url}">
<div>
<label for="pin_code">Pin Code :</label>
<input
type="text"
name="pin_code"
placeholder="Enter the pin code you received on Telegram to access the torrent"
/>
</div>
<button type="submit" class="btn btn-primary">Submit</button>
</form>
<span>* Don't mess around. Your download will get messed up.</span>
</section>
</body>
</html>
"""
@routes.get('/app/files/{hash_id}')
async def list_torrent_contents(request):
torr = request.match_info["hash_id"]
gets = request.query
    if "pin_code" not in gets:
rend_page = code_page.replace("{form_url}", f"/app/files/{torr}")
return web.Response(text=rend_page, content_type='text/html')
client = qba.Client(host="localhost", port="8090")
try:
res = client.torrents_files(torrent_hash=torr)
except qba.NotFound404Error:
raise web.HTTPNotFound()
    # Derive the pin code from the first four digits of the torrent hash
    passw = ""
    for n in torr:
        if n.isdigit():
            passw += n
        if len(passw) == 4:
            break
    if not passw:
        raise web.HTTPNotFound()
    pincode = passw
if gets["pin_code"] != pincode:
return web.Response(text="Incorrect pin code")
par = nodes.make_tree(res)
cont = ["", 0]
nodes.create_list(par, cont)
rend_page = page.replace("{My_content}", cont[0])
rend_page = rend_page.replace("{form_url}", f"/app/files/{torr}?pin_code={pincode}")
client.auth_log_out()
return web.Response(text=rend_page, content_type='text/html')
async def re_verfiy(paused, resumed, client, torr):
    """Re-check that the requested file priorities were applied, retrying up to 5 times."""
    paused = paused.strip()
    resumed = resumed.strip()
    paused = paused.split("|") if paused else []
    resumed = resumed.split("|") if resumed else []
k = 0
while True:
res = client.torrents_files(torrent_hash=torr)
verify = True
for i in res:
if str(i.id) in paused and i.priority != 0:
verify = False
break
if str(i.id) in resumed and i.priority == 0:
verify = False
break
if verify:
break
LOGGER.info("Reverification Failed: correcting stuff...")
client.auth_log_out()
await asyncio.sleep(1)
client = qba.Client(host="localhost", port="8090")
        try:
            client.torrents_file_priority(torrent_hash=torr, file_ids=paused, priority=0)
        except Exception as e:
            LOGGER.error(f"Error in reverification of paused files: {e}")
        try:
            client.torrents_file_priority(torrent_hash=torr, file_ids=resumed, priority=1)
        except Exception as e:
            LOGGER.error(f"Error in reverification of resumed files: {e}")
k += 1
if k > 5:
return False
client.auth_log_out()
LOGGER.info("Verified")
return True
@routes.post('/app/files/{hash_id}')
async def set_priority(request):
torr = request.match_info["hash_id"]
client = qba.Client(host="localhost", port="8090")
data = await request.post()
resume = ""
pause = ""
data = dict(data)
for i, value in data.items():
        if "filenode" in i:
node_no = i.split("_")[-1]
if value == "on":
resume += f"{node_no}|"
else:
pause += f"{node_no}|"
pause = pause.strip("|")
resume = resume.strip("|")
    try:
        client.torrents_file_priority(torrent_hash=torr, file_ids=pause, priority=0)
    except qba.NotFound404Error:
        raise web.HTTPNotFound()
    except Exception as e:
        LOGGER.error(f"Error in pausing files: {e}")
    try:
        client.torrents_file_priority(torrent_hash=torr, file_ids=resume, priority=1)
    except qba.NotFound404Error:
        raise web.HTTPNotFound()
    except Exception as e:
        LOGGER.error(f"Error in resuming files: {e}")
await asyncio.sleep(2)
if not await re_verfiy(pause, resume, client, torr):
LOGGER.error("Verification Failed")
return await list_torrent_contents(request)
@routes.get('/')
async def homepage(request):
return web.Response(text="<h1>See mirror-leech-telegram-bot <a href='https://www.github.com/anasty17/mirror-leech-telegram-bot'>@GitHub</a> By <a href='https://github.com/anasty17'>Anas</a></h1>", content_type="text/html")
async def e404_middleware(app, handler):
async def middleware_handler(request):
try:
response = await handler(request)
if response.status == 404:
            return web.Response(text="<h1>404: Page not found</h1><br><h3>mirror-leech-telegram-bot</h3>", content_type="text/html")
return response
except web.HTTPException as ex:
if ex.status == 404:
                return web.Response(text="<h1>404: Page not found</h1><br><h3>mirror-leech-telegram-bot</h3>", content_type="text/html")
raise
return middleware_handler
async def start_server():
app = web.Application(middlewares=[e404_middleware])
app.add_routes(routes)
return app
async def start_server_async(port=80):
app = web.Application(middlewares=[e404_middleware])
app.add_routes(routes)
runner = web.AppRunner(app)
await runner.setup()
await web.TCPSite(runner, "0.0.0.0", port).start()