diff --git a/spaces/0x7194633/mbrat-ru-sum/README.md b/spaces/0x7194633/mbrat-ru-sum/README.md
deleted file mode 100644
index 3be7e55dd99d461d88df8763f1af8a1fcaa40155..0000000000000000000000000000000000000000
--- a/spaces/0x7194633/mbrat-ru-sum/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mbrat Ru Sum
-emoji: 🦀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.1.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1 The Best Bluetooth Software for Windows and Mobile Devices.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1 The Best Bluetooth Software for Windows and Mobile Devices.md
deleted file mode 100644
index 85c0afb5decf993193749684c506dda38699cbec..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1 The Best Bluetooth Software for Windows and Mobile Devices.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-
What is BlueSoleil 6.4.275.0WithMobile?
-
BlueSoleil is a Bluetooth driver and software that allows you to easily connect to your Bluetooth devices, such as headsets, mobile phones, mice and GPS.
-
BlueSoleil 6.4.275.0WithMobile is a special version of BlueSoleil that comes with a mobile phone management software called Mobile Phone Tool.
-
With this version, you can not only connect your Bluetooth devices, but also manage your mobile phone data, such as contacts, messages, photos, music and videos.
-
You can also use your mobile phone as a remote control for your computer, or transfer files between your phone and computer via Bluetooth.
-
In this article, we will show you how to download, install, use and activate BlueSoleil 6.4.275.0WithMobile, as well as some tips and tricks for troubleshooting common problems.
-
How to download and install BlueSoleil 6.4.275.0WithMobile?
-
To download and install BlueSoleil 6.4.275.0WithMobile, follow these steps:
Download the installer from the official BlueSoleil website and save the file "BlueSoleil_6_4_275_0_with_Mobile.zip" on your computer.
-
Extract the file using a zip extractor program, such as WinZip or WinRAR.
-
Open the folder "BlueSoleil_6_4_275_0_with_Mobile" and double-click on "setup.exe" file.
-
Follow the instructions on the screen to complete the installation process.
-
Restart your computer after the installation is finished.
-
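If you prefer to script the extraction step, it can be sketched with Python's built-in zipfile module (the archive name is the one given in this article; the destination folder is an assumption):

```python
import zipfile


def extract_installer(archive: str, dest: str) -> list[str]:
    """Extract a .zip installer archive and return the names of the extracted files."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()


# Hypothetical usage with the file name from this article:
# files = extract_installer("BlueSoleil_6_4_275_0_with_Mobile.zip",
#                           "BlueSoleil_6_4_275_0_with_Mobile")
```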
-
What are the benefits of using BlueSoleil 6.4.275.0WithMobile?
-
Using BlueSoleil 6.4.275.0WithMobile has many benefits, such as:
-
-
It is compatible with most Bluetooth devices and supports various Bluetooth profiles, such as A2DP, AVRCP, HFP, HSP, OPP, FTP and DUN.
-
It is easy to use and has a user-friendly interface that shows all your Bluetooth devices and services in one place.
-
It allows you to connect up to 17 Bluetooth devices at the same time and switch between them easily.
-
It enables you to manage your mobile phone data from your computer and use your phone as a remote control for your computer.
-
It enhances your wireless experience and reduces the clutter of wires and cables on your desk.
-
-
What are the drawbacks of using BlueSoleil 6.4.275.0WithMobile?
-
Using BlueSoleil 6.4.275.0WithMobile also has some drawbacks, such as:
-
-
-
It requires a serial number to activate the software and unlock all the features.
-
It may not work well with some Bluetooth devices or drivers that are not compatible with BlueSoleil.
-
It may cause some interference or latency issues with other wireless devices or networks in your vicinity.
-
It may consume more battery power or CPU resources than other Bluetooth software.
-
-
However, these drawbacks can be overcome by following some tips and tricks that we will share in the next sections.
-
How to get a serial number for BlueSoleil 6.4.275.0WithMobile?
-
To get a serial number for BlueSoleil 6.4.275.0WithMobile, follow these steps:
Go to the official BlueSoleil website and place an order for the software. Select the payment method and fill in the required information.
-
Confirm your order and complete the payment process.
-
You will receive an email with your serial number and a download link for the software.
-
Copy the serial number and paste it in the activation window of the software.
-
-
Congratulations! You have successfully activated BlueSoleil 6.4.275.0WithMobile and unlocked all the features.
-
Why do you need a serial number for BlueSoleil 6.4.275.0WithMobile?
-
You need a serial number for BlueSoleil 6.4.275.0WithMobile because:
-
-
It is a way of verifying that you have purchased a legitimate copy of the software and supporting the developers.
-
It is a way of unlocking all the features and functions of the software that are otherwise limited or disabled in the trial version.
-
It is a way of ensuring that you have access to the latest updates and customer support from the official website.
-
-
Where can you find a serial number for BlueSoleil 6.4.275.0WithMobile?
-
You can find a serial number for BlueSoleil 6.4.275.0WithMobile in these places:
-
-
The official website of BlueSoleil, where you can buy a serial number online and receive it by email.
-
The CD-ROM or DVD-ROM that comes with the software package, where you can find a serial number printed on the disc or the cover.
-
The online forums or websites that offer free or discounted serial numbers for BlueSoleil, where you can find a serial number posted by other users or generated by a keygen program.
-
-
However, we recommend that you only use the first option, as it is the safest and most reliable way of getting a serial number for BlueSoleil 6.4.275.0WithMobile.
-
The second option may not work if you have lost or damaged your disc or cover, or if you have bought a pirated copy of the software.
-
The third option may not work if the serial number is invalid, expired, blocked or already used by someone else, or if the keygen program contains viruses or malware that can harm your computer.
-
How to enter a serial number for BlueSoleil 6.4.275.0WithMobile?
-
To enter a serial number for BlueSoleil 6.4.275.0WithMobile, you need to follow these steps:
-
-
Launch BlueSoleil from your desktop or start menu.
-
Click on "Help" menu and select "Activate BlueSoleil".
-
A new window will open asking you to enter your serial number.
-
Copy and paste your serial number in the text box and click on "Activate" button.
-
Wait for the activation process to complete.
-
A message will appear confirming that your activation is successful.
-
-
Congratulations! You have successfully entered your serial number for BlueSoleil 6.4.275.0WithMobile and activated the software.
-
How to troubleshoot common problems with BlueSoleil 6.4.275.0WithMobile?
-
Sometimes, you may encounter some problems with BlueSoleil 6.4.275.0WithMobile, such as:
-
-
Your Bluetooth device is not detected or paired by the software.
-
Your Bluetooth device is connected but not working properly with the software.
-
Your Bluetooth device is disconnected or interrupted by the software.
-
Your software is not activated or shows an error message.
-
-
Don't worry, these problems can be fixed by following some tips and tricks, such as:
-
-
Make sure your Bluetooth device is turned on and discoverable, and has enough battery power and signal strength.
-
Make sure your Bluetooth device is compatible with BlueSoleil and supports the service you want to use.
-
Make sure your computer has a Bluetooth adapter that is compatible with BlueSoleil and has the latest driver installed.
-
Make sure your computer and your Bluetooth device are within the Bluetooth range (usually 10 meters) and free from any interference or obstruction.
-
Make sure your software is updated to the latest version and has a valid serial number entered.
-
Restart your computer and your Bluetooth device and try to connect them again.
-
Uninstall and reinstall your software and try to activate it again.
-
-
If these tips and tricks do not work, you can also contact customer support for BlueSoleil 6.4.275.0WithMobile for further assistance.
-
How to contact customer support for BlueSoleil 6.4.275.0WithMobile?
-
If you have any questions or feedback about BlueSoleil 6.4.275.0WithMobile, you can contact customer support in these ways:
Visit the support section of the official BlueSoleil website. There you can find various resources, such as FAQs, manuals, tutorials, forums and online chat.
-
You can also submit a ticket or send an email to support@bluesoleil.com with your query or feedback.
-
You can also call the customer service hotline at +86-10-6297-8515 from Monday to Friday, 9:00 AM to 6:00 PM (GMT+8).
-
-
The customer support team of BlueSoleil is friendly and professional, and will try to help you as soon as possible.
-
Conclusion
-
BlueSoleil 6.4.275.0WithMobile is a powerful and versatile Bluetooth driver and software that allows you to connect and manage your Bluetooth devices with ease.
-
With this software, you can enjoy wireless audio, file transfer, mobile phone management and remote control functions with your Bluetooth devices.
-
You can also activate the software with a serial number and unlock all the features and functions.
-
If you encounter any problems with the software, you can follow some tips and tricks or contact customer support for help.
-
If you are looking for a Bluetooth solution that is easy to use and has a lot of features, BlueSoleil 6.4.275.0WithMobile is a great choice for you.
-
So what are you waiting for? Download and install BlueSoleil 6.4.275.0WithMobile today and enjoy the wireless freedom!
-
FAQs
-
Q: What are the system requirements for BlueSoleil 6.4.275.0WithMobile?
-
A: The system requirements for BlueSoleil 6.4.275.0WithMobile are:
-
-
Operating system: Windows XP/Vista/7/8/10
-
CPU: Intel Pentium IV or higher
-
RAM: 128 MB or more
-
Disk space: 500 MB or more
-
Bluetooth adapter: Any Bluetooth dongle or built-in Bluetooth device that supports BlueSoleil
-
-
Q: How many Bluetooth devices can I connect with BlueSoleil 6.4.275.0WithMobile?
-
A: You can connect up to 17 Bluetooth devices at the same time with BlueSoleil 6.4.275.0WithMobile.
-
Q: How long is the trial period for BlueSoleil 6.4.275.0WithMobile?
-
A: The trial period for BlueSoleil 6.4.275.0WithMobile is 30 days. During the trial period, you can use all the features and functions of the software, but you will see a watermark on the screen and hear a voice reminder every few minutes.
-
Q: How much does BlueSoleil 6.4.275.0WithMobile cost?
-
A: BlueSoleil 6.4.275.0WithMobile costs $27.99 USD for a single license. You can buy it online from the official website of BlueSoleil or from other authorized resellers.
-
Q: Is BlueSoleil 6.4.275.0WithMobile safe and reliable?
-
A: Yes, BlueSoleil 6.4.275.0WithMobile is safe and reliable. It has been tested and certified by various organizations, such as Microsoft, Intel, Broadcom and IVT Corporation. It has also received positive reviews and ratings from many users and experts.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Darkstalkers Collection (PC) Download Everything You Need to Know About the Legendary Fighting Game.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Darkstalkers Collection (PC) Download Everything You Need to Know About the Legendary Fighting Game.md
deleted file mode 100644
index 10f484eadd5177aa55da270c00e10a9f9566228c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Darkstalkers Collection (PC) Download Everything You Need to Know About the Legendary Fighting Game.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
Darkstalkers Collection (PC) Download: How to Play the Classic Capcom Fighting Games on Your Computer
-
If you are a fan of 2D fighting games, you have probably heard of Darkstalkers, the iconic series by Capcom that features a cast of monstrous and supernatural characters. From vampires and werewolves to zombies and mummies, Darkstalkers has something for everyone who loves dark fantasy and horror themes.
-
Darkstalkers was first released in arcades in 1994, and since then it has spawned several sequels, spin-offs, comics, anime, and merchandise. However, despite its popularity and cult status, Darkstalkers has not seen a new game in over a decade. The last official release was Darkstalkers Resurrection, a compilation of two classic titles that came out in 2013 for PlayStation 3 and Xbox 360.
But don't despair, because there is still a way to enjoy Darkstalkers on your PC. In fact, there are two options that you can choose from depending on your preference and budget. In this article, we will show you how to download Darkstalkers Collection on PC and how to play it like a pro.
-
How to Download Darkstalkers Collection on PC
-
Darkstalkers Collection is not an official name, but rather a term that we use to refer to any compilation of Darkstalkers games that you can play on your PC. There are two main options that you can choose from:
-
Option 1: Buy Capcom Fighting Collection on Steam
-
If you want the most convenient and legal way to play Darkstalkers on your PC, you can buy Capcom Fighting Collection on Steam. This is a bundle of ten arcade games by Capcom that includes four titles from the Darkstalkers series:
-
-
Darkstalkers: The Night Warriors: The first game in the series that introduced the basic gameplay mechanics and 10 playable characters.
-
Night Warriors: Darkstalkers' Revenge: The second game in the series that added four new characters, improved the graphics and sound, and introduced new features such as air blocking, chain combos, and super moves.
-
Vampire Savior: The Lord of Vampire: The third game in the series that added four more characters, revamped the graphics and music, and changed the gameplay system to be faster and more dynamic.
-
Vampire Hunter 2: Darkstalkers' Revenge and Vampire Savior 2: The Lord of Vampire: Two updated versions of Night Warriors and Vampire Savior respectively that swapped some characters and tweaked some balance issues.
-
-
To buy and install Capcom Fighting Collection on Steam, you need to follow these steps:
-
-
Create a Steam account if you don't have one already.
Buy Capcom Fighting Collection from the Steam store, then download and install it on your PC.
-
Launch Capcom Fighting Collection from your Steam library.
-
-
To switch between different games in Capcom Fighting Collection, you need to follow these steps:
-
-
-
Select "Game Select" from the main menu.
-
Select the game that you want to play from the list.
-
Select "Play Game" or "Online Play" depending on whether you want to play offline or online.
-
Select your character and mode from the game menu.
-
Enjoy playing Darkstalkers!
-
-
To play online and access the museum mode in Capcom Fighting Collection, you need to follow these steps:
-
-
Select "Online Play" from the main menu or the game select menu.
-
Select "Ranked Match" or "Lobby Match" depending on whether you want to play competitively or casually with other players.
-
Select your region, game title, character, mode, and other settings.
-
Wait for an opponent or join an existing lobby.
-
Have fun playing online!
-
Select "Museum" from the main menu or the game select menu.
-
Select "Gallery" or "Sound Player" depending on whether you want to view illustrations or listen to music from the games.
-
Browse through hundreds of artworks and tracks from the arcade versions of each title.
-
-
Option 2: Download Darkstalkers Resurrection from Internet Archive
-
If you don't want to spend money or if you prefer a more retro experience, you can download Darkstalkers Resurrection from Internet Archive. This is a compilation of two classic titles that was released in 2013 for PlayStation 3 and Xbox 360:
-
-
Night Warriors: Darkstalkers' Revenge: The second game in the series that added four new characters, improved the graphics and sound, and introduced new features such as air blocking, chain combos, and super moves.
-
-Darkstalkers 3: The third game in the series (released in Japan as Vampire Savior) that added five more characters (plus two secret ones), enhanced the graphics and music further, and modified the gameplay system with elements such as dark force activation, pursuit attacks, throw escapes, tech hits, etc.
-
-
To download and extract Darkstalkers Resurrection from Internet Archive, you need to follow these steps:
-
-
Create an Internet Archive account if you don't have one already.
Find the Darkstalkers Resurrection page on Internet Archive and download darkstalkers-ressurection.rar (4.7 GB) using your preferred download manager.
-
Extract darkstalkers-ressurection.rar using WinRAR or any other software that can handle RAR files.
-
You will get two files: DARKSTALKERS_RESSURECTION.iso (4.7 GB) and DARKSTALKERS_RESSURECTION.dvd (4 KB).
-
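If you want to sanity-check the extracted files before firing up an emulator, a minimal sketch in Python (the file names come from this article; the ~4.3 GB lower bound is an assumption based on the 4.7 GB size quoted above):

```python
import os


def check_extracted(folder: str) -> bool:
    """Confirm the ISO and .dvd stub are present and the ISO is roughly full-size."""
    iso = os.path.join(folder, "DARKSTALKERS_RESSURECTION.iso")
    dvd = os.path.join(folder, "DARKSTALKERS_RESSURECTION.dvd")
    if not (os.path.isfile(iso) and os.path.isfile(dvd)):
        return False
    # 4.3 GB lower bound is an assumption; a truncated download will fall short.
    return os.path.getsize(iso) > 4.3 * 1024**3
```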
-
To run Darkstalkers Resurrection on your PC using an emulator, you need to follow these steps:
-
-
Download Xenia, an emulator that can run Xbox 360 games on PC.
-
Extract xenia-master.zip (14 MB) using WinRAR or any other software that can handle ZIP files.
-
You will get a folder called xenia-master with several files inside it.
Run the Xenia executable and open the DARKSTALKERS_RESSURECTION.iso file with it. The game will boot to the game select menu.
-
Select "Play Game" or "Online Play" from the game select menu.
-
Select "Arcade Mode" or "Versus Mode" from the game menu.
-
Select your character from the character select screen. You can also select a different color scheme by pressing different buttons.
-
Notice that some characters are different from their original versions in Night Warriors or Vampire Savior. For example, Morrigan has a new move called Soul Eraser, and Jedah has a new move called Prova di Servo.
-
Try out their new moves and see how they affect their gameplay and strategies.
-
-
Conclusion
-
Darkstalkers is one of the most beloved and influential 2D fighting games of all time. It has a unique and diverse roster of characters, a fast and fluid gameplay system, and a dark and stylish aesthetic. If you want to experience this classic series on your PC, you have two options: buy Capcom Fighting Collection on Steam or download Darkstalkers Resurrection from Internet Archive.
-
-Both options have their pros and cons. Capcom Fighting Collection includes Darkstalkers: The Night Warriors, Night Warriors: Darkstalkers' Revenge, Vampire Savior: The Lord of Vampire, and Vampire Hunter 2: Darkstalkers' Revenge/Vampire Savior 2: The Lord of Vampire, while Darkstalkers Resurrection includes Night Warriors: Darkstalkers' Revenge and Darkstalkers 3. You can also play online with other players and access the museum mode with hundreds of artworks and tracks from the games.
-
Whether you are a beginner or an expert, you can enjoy Darkstalkers on your PC by learning the basics and the advanced techniques of the gameplay. You can also master the unique abilities of each character and their variants by reading their profiles and trying out their moves. Darkstalkers is a game that rewards skill, creativity, and experimentation.
-
If you are ready to enter the world of Darkstalkers, don't hesitate to download Darkstalkers Collection on PC today. You won't regret it!
-
FAQs
-
-
Q: What is the difference between Vampire Savior and Darkstalkers 3?
-
A: Vampire Savior is the original Japanese name of Darkstalkers 3. They are essentially the same game, except for some minor differences in localization and censorship.
-
Q: How many characters are there in Darkstalkers?
-
A: There are 18 playable characters (plus two secret ones) in Darkstalkers. They are: Anakaris, B.B. Hood, Bishamon, Demitri Maximoff, Donovan Baine, Felicia, Hsien-Ko, Huitzil, Jedah Dohma, Jon Talbain, Lilith, Lord Raptor, Morrigan Aensland, Pyron, Q-Bee, Rikuo, Sasquatch, and Victor von Gerdenheim.
-
Q: Who is the strongest character in Darkstalkers?
-
A: There is no definitive answer to this question, as different characters have different strengths and weaknesses. However, some of the characters that are generally considered to be very strong are Jedah Dohma, Q-Bee, Sasquatch, Morrigan Aensland, and B.B. Hood.
-
Q: Who is the main protagonist of Darkstalkers?
-
A: There is no clear-cut main protagonist of Darkstalkers, as each character has their own story and motivation. However, some of the characters that are more prominent in the plot and lore are Demitri Maximoff, Morrigan Aensland, Donovan Baine, Anita, Pyron, and Jedah Dohma.
-
Q: Will there be a new Darkstalkers game?
-
A: There is no official confirmation or announcement of a new Darkstalkers game as of now. However, there have been some hints and rumors that Capcom might be interested in reviving the series in the future. For example, in 2018 Capcom released a survey asking fans about their interest in various franchises including Darkstalkers. In 2020 Capcom registered new trademarks for several titles including Darkstalkers. In 2021 Capcom released a teaser trailer for Project Battle featuring Morrigan Aensland as one of the playable characters.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Update Kostum Pes 6 Menjadi Pes 13 Langkah-Langkah Instalasi dan Konfigurasi Update Jersey Terbaru untuk PES 6.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Update Kostum Pes 6 Menjadi Pes 13 Langkah-Langkah Instalasi dan Konfigurasi Update Jersey Terbaru untuk PES 6.md
deleted file mode 100644
index b487f3702cf80aa66cc86e4ae971325b8e420a78..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Update Kostum Pes 6 Menjadi Pes 13 Langkah-Langkah Instalasi dan Konfigurasi Update Jersey Terbaru untuk PES 6.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
How to Download and Install the Latest Costume Update for PES 6 to PES 13
-
Introduction
-
Pro Evolution Soccer (PES) is a popular soccer video game series that has been around since 2001. The game features realistic graphics, gameplay, and physics, as well as licensed teams, players, and stadiums from various leagues and competitions around the world.
-
One of the aspects that makes PES stand out from other soccer games is its customization options. You can edit and create your own teams, players, stadiums, logos, balls, boots, and more. You can also download and install updates and mods from other users that enhance or change various aspects of the game.
One of the most common updates that PES fans look for is costume updates. Costumes are the outfits that players wear on the field, such as jerseys, shorts, socks, gloves, etc. Costume updates change the appearance of these outfits to match the latest designs and trends of real-life soccer teams.
-
Updating costumes can make your game look more realistic and up-to-date. It can also make your game more fun and enjoyable by adding variety and diversity to your teams and players. You can choose from different styles, colors, patterns, logos, sponsors, etc.
-
In this article, we will show you how to download and install the latest costume update for PES 6 to PES 13. This update will transform your old PES 6 costumes into new PES 13 costumes. You will be able to play with updated costumes for over 200 teams from various leagues and competitions around the world.
-
Before we start, you will need some requirements for updating costumes. You will need:
-
-
A PC with Windows XP or higher
-
A copy of PES 6 installed on your PC
-
An internet connection
-
A file extractor program such as WinRAR or 7-Zip
-
A file manager program such as Windows Explorer or Total Commander
-
-
How to Download the Update File
-
The first step is to download the update file that contains the new costumes for PES 6. The update file is a large file that weighs about 1 GB. You can find it on various websites that offer PES 6 updates and mods.
-
One of these websites is tribe54.com. Tribe54.com is a community platform that allows users to share their passion for soccer games. You can find many updates and mods for different versions of PES on this website.
-
-
To download the update file from tribe54.com, follow these steps:
Open the update's page on tribe54.com and click on the "Download" button at the bottom of the page.
-
Wait for a few seconds until a new page opens.
-
Click on the "Download" button again at the top right corner of the page.
-
Choose a location on your PC where you want to save the file.
-
Wait for the download to finish.
-
-
Alternatively, you can also download the update file from other websites such as pes-patch.com or pesnewupdate.com. Just make sure you download the correct file that matches your version of PES 6.
-
After downloading the file, you should check its size and integrity. The file size should be around 1 GB. The file name should be "Update Kostum Pes 6 Menjadi Pes 13.rar". The file type should be RAR.
-
To check these details, you can right-click on the file icon and select "Properties". A window will pop up that shows you these information.
-
If everything looks fine, you can proceed to extract the file. If not, you may have downloaded a corrupted or incomplete file. In that case, you should delete it and try downloading it again from another source.
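Beyond the Properties window, this size-and-type check can also be scripted. A minimal Python sketch (the ~900 MB lower bound is an assumption based on the ~1 GB size quoted above; RAR archives do begin with the "Rar!" signature bytes):

```python
import os


def looks_like_rar(path: str, min_bytes: int = 900 * 1024**2) -> bool:
    """Cheap sanity check: plausible size and the RAR magic bytes at the start."""
    if os.path.getsize(path) < min_bytes:
        return False  # likely an incomplete download
    with open(path, "rb") as f:
        return f.read(4) == b"Rar!"  # every RAR archive starts with this signature
```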
-
How to Extract the File
-
The next step is to extract the file that you have downloaded. The file is compressed in RAR format. This means that it contains multiple files inside it that are packed together to reduce its size.
-
To extract these files, you will need a file extractor program such as WinRAR or 7-Zip. These programs allow you to open and decompress RAR files easily.
-
To extract the file using WinRAR, follow these steps:
-
-
Right-click on the file icon and select "Extract Here".
-
Wait for WinRAR to extract all files into a new folder named "Update Kostum Pes 6 Menjadi Pes 13".
-
Open this folder and check its contents. You should see several subfolders named "0_text", "e_text", "0_sound", etc., as well as some files named "PES6.exe", "settings.exe", etc.
-
-
To extract the file using 7-Zip instead of WinRAR, follow these steps:
-
-
-Right-click on the file icon, select "7-Zip", then select "Extract Here".
-
-Wait for 7-Zip to extract all files into a new folder named "Update Kostum Pes 6 Menjadi Pes 13".
-
-Open this folder and check its contents the same way as above.
-
-
How to Install the Update File
-
-The final step is to install the update file that you have extracted into your PES 6 folder. This will overwrite the original files with new ones that contain the updated costumes.
-
-Before you do this, though, make sure you back up your original files in case something goes wrong or you want to revert to the old costumes later.
-
-To back up your original files, follow these steps:
-
-
-Navigate to your PES 6 folder, where the game is installed on your PC, usually located at C:\Program Files\KONAMI\Pro Evolution Soccer 6\.
-
-Select all files and folders inside, then copy them to another location on your PC, such as your Desktop or Documents folder.
-
-Rename the copied folder something like "PES 6 Backup" so you know what it is later.
-
-
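The backup steps above can be sketched in a few lines of Python. The install path in the example comment is the article's default and is an assumption; change both paths if your setup differs:

```python
import shutil

def backup_game(game_dir, backup_dir):
    """Copy the entire game folder so the original files can be restored later."""
    shutil.copytree(game_dir, backup_dir)  # fails if backup_dir already exists

# Example with the article's default install path (adjust if yours differs):
# backup_game(r"C:\Program Files\KONAMI\Pro Evolution Soccer 6",
#             r"C:\Users\you\Desktop\PES 6 Backup")
```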
To install the update file, follow these steps:
-
-
Navigate to the folder where you extracted the update file earlier, named "Update Kostum Pes 6 Menjadi Pes 13".
-
Select all the files and folders inside, then copy them to the location where the game is installed, overwriting the existing files when prompted to confirm the replacement, and wait for the copy to finish.
-
Run the game and enjoy the new costumes.
-
-
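If you would rather copy the update over the game folder from a script than from Explorer, a minimal Python sketch (function name is illustrative; `dirs_exist_ok` requires Python 3.8+) could be:

```python
import shutil

def install_update(update_dir, game_dir):
    """Copy the extracted update over the game folder, replacing existing files."""
    # dirs_exist_ok=True lets copytree merge into an existing folder (Python 3.8+)
    shutil.copytree(update_dir, game_dir, dirs_exist_ok=True)
```

Note that this replaces files silently, which is exactly why the backup step above matters.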
Conclusion
-
Congratulations! You have successfully downloaded and installed the latest costume update for PES 6 to PES 13. You can now play with updated costumes for over 200 teams from various leagues and competitions around the world.
-
Updating costumes can make your game look more realistic and up-to-date. It can also make your game more fun and enjoyable by adding variety and diversity to your teams and players. You can choose from different styles, colors, patterns, logos, sponsors, etc.
-
Here are some tips and tricks for using the update:
-
-
You can change the language of the game by running the "settings.exe" file in your PES 6 folder and selecting your preferred language.
-
You can adjust the graphics, sound, and controller settings of the game by running the "settings.exe" file in your PES 6 folder and customizing your options.
-
You can switch between different camera angles and zoom levels by pressing the F5 key during gameplay.
-
You can pause the game by pressing the ESC key during gameplay.
-
You can take screenshots of the game by pressing the Print Screen key during gameplay. The screenshots will be saved in your PES 6 folder as BMP files.
-
-
We hope you enjoyed this article and found it helpful. If you have any feedback or questions, please feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
Q: Can I use this update for other versions of PES?
-
A: No, this update is only compatible with PES 6. If you try to use it for other versions of PES, you may encounter errors or bugs that may damage your game or PC.
-
Q: Will this update affect my saved games or online play?
-
A: No, this update only changes the appearance of the costumes, not the gameplay or data. Your saved games and online play will not be affected by this update.
-
Q: What if I encounter any errors or bugs after installing the update?
-
A: You can try to reinstall the update or restore your original files from the backup. If that does not work, you can contact the creator of the update or visit some websites and forums that offer support and solutions for PES 6 issues.
-
Q: Where can I find more updates and mods for PES 6?
-
A: You can visit some popular websites and forums that offer PES 6 updates and mods, such as pes-patch.com, pesnewupdate.com, or evo-web.co.uk. You can find updates and mods for various aspects of the game, such as teams, players, stadiums, balls, boots, logos, etc.
-
Q: How can I create my own costumes for PES 6?
-
A: You can use some tools and software that allow you to edit and create costumes for PES 6, such as Kitserver, GDB Manager, or Photoshop. You can find tutorials and guides on how to use these tools and software on some websites and forums that offer PES 6 updates and mods.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download manley massive passive eq plugin.rar 16 and master your tracks with the synthesis of the best passive equalizers of the last 70 years.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download manley massive passive eq plugin.rar 16 and master your tracks with the synthesis of the best passive equalizers of the last 70 years.md
deleted file mode 100644
index 0ab4442ec55dd33328803f1c6cb4acb4a02e16e3..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download manley massive passive eq plugin.rar 16 and master your tracks with the synthesis of the best passive equalizers of the last 70 years.md
+++ /dev/null
@@ -1,195 +0,0 @@
-
-
Download manley massive passive eq plugin.rar 16: How to Get the Best Tube EQ for Mixing and Mastering
-
If you are looking for a high-end tube EQ that can shape your tracks and masters with musical curves and unparalleled clarity, you might want to consider downloading manley massive passive eq plugin.rar 16. This is a plugin emulation of one of the most popular and sought-after passive EQs in the audio industry, the Manley Massive Passive EQ. In this article, we will explain what this EQ is, why you should download it, and how to use it effectively.
-
What is Manley Massive Passive EQ?
-
The Manley Massive Passive EQ is a two-channel, four-band tube EQ that was designed by Manley Labs in 1998. It is based on the design strengths of various classic EQs, such as console, parametric, graphic, and Pultec EQs. It uses only passive components, such as resistors, inductors, and capacitors, to create all frequency changes. This gives it a natural and organic sound that is different from active or digital EQs.
The Manley Massive Passive EQ was created by EveAnna Manley and Hutch Hutchison, who wanted to make a versatile and musical EQ that could handle any source material. They combined elements from different types of EQs, such as shelving filters, bell curves, resonant filters, and cut filters. They also added some unique features, such as:
-
-
The ability to switch between two modes: standard and mastering. The standard mode offers continuous bandwidth adjustment, while the mastering mode offers 16 steps of recallable bandwidth selections.
-
The ability to switch between two filter types: normal and bandpass. The normal type offers a conventional boost or cut response, while the bandpass type offers a narrower and steeper response that can create resonant peaks or notches.
-
The ability to switch between two curves: bell and shelf. The bell curve offers a symmetrical boost or cut around a center frequency, while the shelf curve offers an asymmetrical boost or cut that affects all frequencies above or below a corner frequency.
-
The ability to switch between two phases: normal and reverse. The normal phase offers a positive phase response that preserves the original phase relationships of the signal, while the reverse phase offers a negative phase response that flips the phase relationships of the signal.
-
-
All these features allow the user to create complex and musical EQ shapes that can enhance or transform any sound source.
-
The benefits and drawbacks of the hardware EQ
-
The Manley Massive Passive EQ has been praised by many engineers and producers for its sound quality, flexibility, and character. Some of the benefits of using this hardware EQ are:
-
-
It can add warmth, color, and harmonics to the signal due to its tube circuitry and transformer-coupled output.
-
It can create smooth and natural frequency changes due to its passive components and gentle filter slopes.
-
It can handle high levels of input without clipping or distorting due to its high headroom and low noise.
-
It can interact with other bands in a musical way due to its interdependent gain and bandwidth controls.
-
-
However, like any hardware device, it also has some drawbacks that might limit its usability or availability. Some of these drawbacks are:
-
-
It is expensive and rare to find due to its high-quality components and craftsmanship.
-
It is large and heavy due to its dual-mono design and robust chassis.
-
It requires maintenance and calibration due to its tube components and sensitive controls.
-
It has limited recallability due to its analog nature and stepped controls.
-
-
These drawbacks might make it difficult or impractical for some users to own or use this hardware EQ in their studios or projects.
-
The official UAD plugin emulation of the hardware EQ
-
To address these drawbacks and make this hardware EQ more accessible and convenient for users, Universal Audio (UAD) developed an official plugin emulation of the Manley Massive Passive EQ in 2010. This plugin was modeled by UAD engineers with the help of Manley Labs, who provided them with schematics, measurements, samples, and feedback. The plugin captures every aspect of the hardware's behavior, from its unique filter curves, to its multiple band interdependencies, right down to the tube amplifier distortion, and all-important transformer/inductor hysteresis.
Downloading manley massive passive eq plugin.rar 16 is a great way to get the best of both worlds: the sound of the hardware EQ and the convenience of the plugin format. Here are some reasons why you should download this plugin:
-
The advantages of using the plugin version over the hardware version
-
While the hardware version of the Manley Massive Passive EQ is undoubtedly a masterpiece of audio engineering, it also has some limitations that might make it less suitable for some users or situations. The plugin version, on the other hand, offers some advantages that can overcome these limitations, such as:
-
-
-
It is more affordable and accessible than the hardware version, which costs thousands of dollars and is hard to find.
-
It is more portable and flexible than the hardware version, which requires a lot of space and power.
-
It is more consistent and reliable than the hardware version, which can vary in sound quality and performance due to aging or environmental factors.
-
It is more versatile and scalable than the hardware version, which can only process two channels at a time.
-
It is more compatible and integrable than the hardware version, which requires additional hardware and cables to connect to your audio system.
-
-
These advantages make the plugin version more suitable for users who want to use the Manley Massive Passive EQ in different settings, such as home studios, mobile rigs, or live performances.
-
The compatibility and requirements of the plugin version
-
The plugin version of the Manley Massive Passive EQ is available for both Windows and Mac operating systems. It supports VST, AU, AAX, and RTAS plugin formats. It can be used in any DAW that supports these formats, such as Pro Tools, Logic Pro, Cubase, Ableton Live, FL Studio, Reaper, etc. However, there are some requirements that you need to meet in order to use this plugin:
-
-
You need to have a UAD account and a UAD device (such as an Apollo interface or a UAD-2 card) to run this plugin. This is because this plugin uses UAD's proprietary DSP technology to emulate the hardware EQ with high accuracy and low latency.
-
You need to have enough DSP power on your UAD device to run this plugin. This plugin consumes a lot of DSP resources due to its complex modeling algorithms. Depending on your UAD device model and configuration, you might be able to run only one or a few instances of this plugin at a time.
-
You need to have enough disk space on your computer to install this plugin. This plugin requires about 300 MB of disk space for installation.
-
-
These requirements might make it difficult or impossible for some users to use this plugin if they don't have a UAD device or enough DSP power or disk space.
-
The best sources and methods to download the plugin version
-
If you meet the requirements and want to download manley massive passive eq plugin.rar 16, you have a few options to choose from. Here are some of the best sources and methods to download this plugin:
-
The official UAD website
-
The most reliable and secure way to download manley massive passive eq plugin.rar 16 is to get it from the official UAD website. This way, you can be sure that you are getting the latest and most authentic version of the plugin, as well as the best customer support and updates. To download the plugin from the UAD website, you need to follow these steps:
-
-
Log into your UAD account or create one if you don't have one already.
-
Go to the Manley Massive Passive EQ product page and click on the "Add to Cart" button.
-
Proceed to checkout and complete your payment. The plugin costs $299, but you might be able to get it for a lower price if there is a promotion or a coupon available.
-
After your payment is confirmed, go to the "My Products" section of your account and click on the "Download" button for the plugin.
-
Save the file to your computer and open it to start the installation process.
-
Follow the instructions on the screen to install the plugin and authorize it with your UAD device.
-
-
Note: You can also download a 14-day free trial of the plugin from the UAD website if you want to test it before buying it.
-
The torrent websites
-
Another way to download manley massive passive eq plugin.rar 16 is to use torrent websites. These are websites that allow users to share files with each other using a peer-to-peer network. Torrent websites can offer some advantages over the official UAD website, such as:
-
-
They can provide faster download speeds due to multiple sources of the file.
-
They can offer free access to the plugin without paying for it.
-
They can have older or modified versions of the plugin that might suit your preferences better.
-
-
However, torrent websites also have some disadvantages and risks that you should be aware of, such as:
-
-
They can be illegal or unethical depending on your location and the copyright laws.
-
They can be unsafe or harmful due to viruses, malware, or spyware that might be hidden in the file.
-
They can be unreliable or incompatible due to corrupted, incomplete, or outdated files.
-
They can be unsupported or unupdated due to lack of customer service or official updates from UAD.
-
-
If you decide to use torrent websites to download manley massive passive eq plugin.rar 16, you need to follow these steps:
-
-
Find a reputable and trustworthy torrent website that has the file you are looking for. Some of the most popular torrent websites are The Pirate Bay, 1337x, RARBG, etc.
-
Search for "manley massive passive eq plugin.rar 16" on the website and look for a file that has a high number of seeders (sources) and leechers (downloaders), as well as positive comments and ratings from other users.
-
Download a torrent client software that can open and manage torrent files. Some of the most popular torrent clients are uTorrent, BitTorrent, qBittorrent, etc.
-
Open the torrent file with your torrent client and choose a location to save the file on your computer.
-
Wait for the download to finish and then open the file to start the installation process.
-
Follow the instructions on the screen to install the plugin and crack it if necessary.
-
-
Note: You might need a VPN service or a proxy server to access some torrent websites or files if they are blocked or restricted in your location.
-
How to use manley massive passive eq plugin.rar 16 effectively?
-
Now that you have downloaded manley massive passive eq plugin.rar 16, you might be wondering how to use it effectively. The Manley Massive Passive EQ plugin is a powerful and versatile tool that can help you shape your sounds in various ways. However, it also requires some knowledge and skill to use it properly. Here are some tips and tricks on how to use manley massive passive eq plugin.rar 16 effectively:
-
The basic controls and functions of the plugin
-
The plugin interface of the Manley Massive Passive EQ is very similar to the hardware version, except for some minor differences. The plugin has two channels: left and right. Each channel has four bands: low, low-mid, high-mid, and high. Each band has four controls: frequency, gain, bandwidth, and filter type. There are also some global controls: input level, output level, phase invert, link mode, and bypass.
-
The frequency control allows you to select the center or corner frequency of each band. You can choose from 11 fixed frequencies for each band, ranging from 22 Hz to 27 kHz. The gain control allows you to boost or cut the selected frequency by up to 20 dB. The bandwidth control allows you to adjust the width or slope of the filter curve. You can choose from 5 fixed values for each band, ranging from narrow to wide. The filter type control allows you to switch between two modes: normal and bandpass. The normal mode offers a conventional boost or cut response, while the bandpass mode offers a narrower and steeper response that can create resonant peaks or notches.
-
The input level control allows you to adjust the level of the incoming signal before it reaches the EQ section. You can boost or attenuate the input level by up to 12 dB. The output level control allows you to adjust the level of the outgoing signal after it passes through the EQ section. You can boost or attenuate the output level by up to 12 dB. The phase invert control allows you to flip the polarity of the signal for each channel. This can help you correct phase issues or create interesting effects. The link mode control allows you to link the left and right channels for stereo operation. You can choose from three modes: off, L/R, and M/S. The off mode allows you to adjust each channel independently. The L/R mode allows you to adjust both channels simultaneously with the same settings. The M/S mode allows you to adjust the mid and side signals separately with different settings. The bypass control allows you to bypass the EQ section for each channel. This can help you compare the processed and unprocessed signals.
-
The tips and tricks to get the most out of the plugin
-
The Manley Massive Passive EQ plugin is a very flexible and musical EQ that can be used for various purposes and genres. However, it also has some quirks and characteristics that you need to be aware of and take advantage of. Here are some tips and tricks to get the most out of this plugin:
-
-
Use the standard version for mixing and the mastering version for mastering. The standard version offers continuous bandwidth adjustment, while the mastering version offers 16 steps of recallable bandwidth selections. The standard version is more suitable for fine-tuning individual tracks or buses, while the mastering version is more suitable for applying broad strokes to a final mix or master.
-
Use the normal filter type for subtle or transparent changes and the bandpass filter type for drastic or creative changes. The normal filter type offers a conventional boost or cut response that preserves the natural tone of the signal, while the bandpass filter type offers a narrower and steeper response that alters the tone of the signal significantly.
-
Use the bell curve for precise or surgical changes and the shelf curve for broad or gentle changes. The bell curve offers a symmetrical boost or cut around a center frequency that affects only a specific range of frequencies, while the shelf curve offers an asymmetrical boost or cut that affects all frequencies above or below a corner frequency.
-
Use the normal phase for clean or accurate changes and the reverse phase for dirty or experimental changes. The normal phase offers a clean and accurate response that preserves the original phase relationships of the signal, while the reverse phase offers a dirty and experimental response that flips the phase relationships of the signal.
-
Use the link mode to process stereo or mid/side signals. The link mode allows you to process stereo or mid/side signals with different settings for each channel. You can use the L/R mode to process stereo signals with the same settings for both channels, or use the M/S mode to process mid/side signals with different settings for each channel.
-
Use the input and output level controls to balance the gain staging. The input and output level controls allow you to adjust the level of the signal before and after the EQ section. You can use these controls to balance the gain staging and avoid clipping or losing headroom.
-
-
The examples and presets of using the plugin on different sources
-
The Manley Massive Passive EQ plugin can be used on different sources, such as vocals, drums, guitars, bass, keyboards, synths, etc. Depending on the source and the desired result, you can use different settings and techniques to achieve various effects. Here are some examples and presets of using the plugin on different sources:
-
Vocals
-
The Manley Massive Passive EQ plugin can be used to enhance or correct vocals in various ways. You can use it to add warmth, presence, brightness, airiness, or smoothness to vocals. You can also use it to remove harshness, sibilance, muddiness, or boominess from vocals. Here are some presets for vocal processing:
-
-
-
Preset Name
-
Description
-
Settings
-
-
-
Vocal Warmth
-
This preset adds some warmth and body to vocals by boosting some low-mid frequencies.
-
Low band: 220 Hz / +6 dB / wide / normal / bell; Low-mid band: 390 Hz / +3 dB / wide / normal / bell; High-mid band: off; High band: off
-
-
-
Vocal Presence
-
This preset adds some presence and clarity to vocals by boosting some high-mid frequencies.
-
Low band: off; Low-mid band: off; High-mid band: 3.9 kHz / +6 dB / narrow / normal / bell; High band: off
-
-
-
Vocal Brightness
-
This preset adds some brightness and sparkle to vocals by boosting some high frequencies.
-
Low band: off; Low-mid band: off; High-mid band: off; High band: 16 kHz / +6 dB / wide / normal / shelf
-
-
-
Vocal Airiness
-
This preset adds some airiness and openness to vocals by boosting some very high frequencies.
-
Low band: off; Low-mid band: off; High-mid band: off; High band: 27 kHz / +6 dB / wide / normal / shelf
-
-
-
Vocal Smoothness
-
This preset adds some smoothness and silkiness to vocals by cutting some harsh frequencies.
-
Low band: off; Low-mid band: 1.5 kHz / -6 dB / narrow / normal / bell; High-mid band: 6.8 kHz / -6 dB / narrow / normal / bell; High band: off
-
-
-
Vocal De-Esser
-
This preset reduces sibilance and harshness from vocals by cutting some high frequencies with a bandpass filter.
-
Low band: off; Low-mid band: off; High-mid band: 8.2 kHz / -12 dB / narrow / bandpass / bell; High band: off
-
-
-
Vocal De-Mud
-
This preset removes muddiness and boominess from vocals by cutting some low frequencies with a shelf filter.
-
Low band: 82 Hz / -12 dB / wide / normal / shelf; Low-mid band: off; High-mid band: off; High band: off
-
-
-
Vocal De-Boom
-
This preset removes boominess and plosives from vocals by cutting some low frequencies with a bell filter.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forex Hacked Pro Free Download ((LINK)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forex Hacked Pro Free Download ((LINK)).md
deleted file mode 100644
index a999532803e4f067e8f7c6fca0af42c9fd49e69d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forex Hacked Pro Free Download ((LINK)).md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
Forex Hacked Pro Free Download: A Powerful EA for Scalping and Hedging
-
Forex Hacked Pro is an expert advisor (EA) that can help you make money from forex trading. It is designed for the MetaTrader 4 platform and works with any broker that supports it. In this article, we will show you how to download Forex Hacked Pro for free and how to use it to boost your forex profits.
Forex Hacked Pro is a modified version of the original Forex Hacked EA that was released in 2009. It has more features and options than the basic version, such as the ability to trade on multiple currency pairs, use different strategies, and optimize the settings for each pair. Forex Hacked Pro uses a combination of martingale and hedging techniques to increase the chances of winning trades. It also has a built-in news filter that avoids trading during high-impact news events. Forex Hacked Pro is a very profitable EA, but it also comes with a high risk of losing your account if not used properly. Therefore, it is recommended to use it with caution and withdraw your profits regularly.
-
How to Download Forex Hacked Pro for Free?
-
You can download Forex Hacked Pro for free from various websites that offer cracked versions of the EA. However, these versions may not be reliable or safe to use, as they may contain viruses, malware, or hidden codes that can harm your computer or forex account. Therefore, it is better to download Forex Hacked Pro from the official website of Forex Hacked, where you can get the latest and updated version of the EA for a one-time fee of $329.99. This fee includes both the basic and pro versions of Forex Hacked, as well as lifetime support and updates. You can also get access to the members area where you can find detailed guides, tutorials, and optimized settings for each currency pair.
-
How to Use Forex Hacked Pro?
-
To use Forex Hacked Pro, you need to follow these steps:
-
-
-
Install MetaTrader 4 on your computer and create an account with a broker that supports MT4.
-
Download Forex Hacked Pro from the official website or any other source that you trust.
-
Extract the zip file and copy the Forex Hacked Pro.ex4 file to the Experts folder of your MT4 installation directory.
-
Copy the .set files for each currency pair that you want to trade to the Presets folder of your MT4 installation directory.
-
Connect your MT4 account to your broker and make sure you have enough balance to trade.
-
Open MT4 and go to Tools > Options > Expert Advisors. Check the boxes that allow automated trading and DLL imports.
-
Go to the Navigator window and drag the Forex Hacked Pro EA to the chart of the currency pair that you want to trade.
-
A pop-up window will appear with the input parameters of the EA. You can either use the default settings or load the .set file for that pair from the Presets folder.
-
Click OK and make sure there is a smiley face on the top right corner of the chart. This means that the EA is activated and ready to trade.
-
-
You have successfully installed and activated Forex Hacked Pro on your MT4 account. Now you can sit back and watch it trade for you.
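The copy steps above (placing the .ex4 in the Experts folder and the .set files in the Presets folder) can also be scripted. A minimal Python sketch, assuming the classic MT4 layout with Experts and Presets folders directly under the install directory (newer MT4 builds keep them under an MQL4 subfolder instead); the function name and example paths are illustrative:

```python
import shutil
from pathlib import Path

def install_ea(ea_file, preset_files, mt4_dir):
    """Copy the EA into MT4's Experts folder and its .set files into Presets."""
    mt4_dir = Path(mt4_dir)
    shutil.copy2(ea_file, mt4_dir / "Experts")
    for preset in preset_files:
        shutil.copy2(preset, mt4_dir / "Presets")

# Example (the MT4 path is an assumption; point it at your own install):
# install_ea("Forex Hacked Pro.ex4", ["eurusd.set", "gbpusd.set"],
#            r"C:\Program Files (x86)\MetaTrader 4")
```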
-
Tips and Warnings
-
-
Forex Hacked Pro is a high-risk EA that can double your account in a short time but also wipe it out if not used carefully. Therefore, it is advisable to use it with a small lot size, a low leverage, and a stop loss.
-
Forex Hacked Pro works best on EURUSD, GBPUSD, EURCHF, USDCHF, EURJPY, USDJPY, EURGBP, AUDUSD, and USDCAD pairs. You can use it on any time frame but H1 and M15 are recommended.
-
Forex Hacked Pro is not a plug-and-play EA.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Chhota Bheem And The Throne Of Bali ) Experience the Magic and Mystery of Bali with Bheem and Arjun.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Chhota Bheem And The Throne Of Bali ) Experience the Magic and Mystery of Bali with Bheem and Arjun.md
deleted file mode 100644
index 259336fb401f83ebbf969dc795c88c6502f129c1..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Chhota Bheem And The Throne Of Bali ) Experience the Magic and Mystery of Bali with Bheem and Arjun.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
Chhota Bheem and the Throne of Bali: A Fun-Filled Adventure for Kids
-
Do you love watching animated movies that take you to a different world? Do you enjoy following the adventures of brave heroes who fight evil villains? Do you like laughing at hilarious jokes and singing along to catchy songs? If you answered yes to any of these questions, then you will love Chhota Bheem and the Throne of Bali, a movie that has all these elements and more.
-
Chhota Bheem and the Throne of Bali is a 2013 Indian animated film based on the popular TV series Chhota Bheem. It is the sixteenth movie in the series and the second one to be released in theatres. It tells the story of how Chhota Bheem, a young boy with superhuman strength, and his friends go to Bali to attend a coronation ceremony, but end up saving the island from an evil witch named Rangda. It is a movie that will keep you entertained from start to finish with its thrilling action, charming characters, beautiful scenery, and heartwarming message.
-
Why is Chhota Bheem and the Throne of Bali a good movie for kids? Because it is not only fun to watch, but also educational. It teaches kids about a different culture, history, mythology, and geography. It also teaches them important values such as friendship, courage, loyalty, honesty, and respect. It is a movie that will make kids laugh, cry, cheer, and learn.
-
The Story of Chhota Bheem and the Throne of Bali
-
The Invitation from Bali
-
The movie begins with Chhota Bheem and his friends Chutki, Raju, Jaggu, Kalia, Dholu, and Bholu living happily in their village Dholakpur. One day, they receive an invitation from King Indravarma's nephew Arjun, who is going to be crowned as the prince of Bali. They are excited to go to Bali and meet Arjun.
-
They board a ship with King Indravarma and sail to Bali. On their way, they encounter a storm that almost sinks their ship. But thanks to Bheem's strength and bravery, they manage to reach Bali safely. There they are welcomed by Arjun's uncle Rajguru Bahula, who is a wise scholar.
-
Bheem and his friends are amazed by the beauty and culture of Bali. They see temples, statues, dances, festivals, food, animals, plants, and people. They also meet Arjun who is friendly and kind. They become good friends with him.
-
The Attack of Rangda
-
However, their happiness is short-lived. On the day of Arjun's coronation ceremony, Rangda attacks Bali with her army of Leyaks. Leyaks are magical creatures that can fly, shape-shift, cast spells, and cause havoc. Rangda is an evil witch who wants to rule over Bali.
-
Rangda captures King Indravarma's ship along with most of his soldiers. She also imprisons Arjun's parents, King Agung Prabu Sakti Wira Kertha Wisesa Adi Prabu Agung Ngurah Rai Wira Nagara Ida Dalem Di Made Karangasem III (phew!) and Queen Dewi Sita, in her palace. She tries to kill Arjun too, but he escapes with Bheem's help.
-
-
Bheem and his friends run away from Rangda's Leyaks. They hide in a village where they meet two sisters named Aci and Ayu, who offer them shelter and food and tell them about Rangda's history and power.
-
The Quest for Barong
-
Bheem learns that there is only one way to defeat Rangda: getting Barong's blessing and power. Barong is the supreme god of Bali who protects the island from evil forces. He lives on the sacred mountain of Agung, where only the pure of heart can reach him.
-
Bheem decides to go find Barong and ask for his help to save Bali. He takes Aci and Ayu with him, as they know the way to the mountain, and leaves his friends behind to take care of Arjun and Rajguru Bahula.
-
Bheem faces many challenges and dangers on the way to Barong. He encounters wild animals such as tigers and snakes; natural obstacles such as rivers and cliffs; and supernatural enemies such as ghosts and demons. He also meets some friendly creatures, such as monkeys and birds, who help him along the way.
-
The Final Battle
-
Bheem finally reaches Barong's temple, where he meets the god himself. Barong is impressed by Bheem's courage and purity and grants him his blessing and power. He gives him a magical sword called Kris that can cut through anything and a magical shield called Perisai that can protect him from anything.
-
Bheem returns to the village, where he reunites with his friends and Arjun. He tells them about Barong's gift and prepares to fight Rangda and her Leyaks. He leads them to Rangda's palace, where they face her army.
-
Bheem fights Rangda in a fierce battle, using his Kris and Perisai to counter her spells and attacks. He manages to free King Indravarma's ship and soldiers; he frees King Agung Prabu Sakti Wira Kertha Wisesa Adi Prabu Agung Ngurah Rai Wira Nagara Ida Dalem Di Made Karangasem III (phew, again!) and Queen Dewi Sita from their prison; and he destroys Rangda's Leyaks one by one.
-
Bheem finally defeats Rangda by cutting off her hair, which is the source of her power. He then throws her into the volcano, where she burns up and dies.
-
Bheem frees Bali from Rangda's tyranny and restores peace and happiness to the island. He celebrates with his friends and Arjun, who thanks him for saving his life and kingdom. Bheem also thanks Barong for his help and returns the Kris and Perisai to him.
-
The Features of Chhota Bheem and the Throne of Bali
-
The Animation and Music
-
One of the features that makes Chhota Bheem and the Throne of Bali a great movie is the animation and music. The movie has colorful and lively animation that captures the essence of Bali. The movie shows the beauty and diversity of Bali's nature, culture, and architecture. The movie also has realistic and expressive characters that make you feel their emotions and personalities.
-
The movie also has catchy and upbeat songs that enhance the mood and theme of the movie. The movie has songs that are sung by the characters and songs that are played in the background. The songs are in different languages such as Hindi, Tamil, Telugu, and English. The songs are also in different genres such as pop, rock, folk, and classical. The songs are fun to listen to and sing along with.
-
The movie also has realistic sound effects that add to the excitement and fun of the movie. The movie has sound effects that match the actions and events of the movie. The movie has sound effects such as thunder, waves, fire, wind, animals, weapons, magic, and more. The sound effects make you feel like you are in the movie.
-
The Humor and Emotion
-
Another feature that makes Chhota Bheem and the Throne of Bali a great movie is the humor and emotion. The movie has funny moments that make kids laugh. The movie has jokes that are based on the characters' personalities, situations, dialogues, and actions. The movie also has funny scenes that involve slapstick comedy, wordplay, puns, and references.
-
The movie also has emotional moments that make kids feel for the characters. The movie has scenes that show the characters' feelings such as happiness, sadness, anger, fear, love, and more. The movie also has scenes that show the characters' relationships such as friendship, family, loyalty, trust, and more. The movie also has scenes that show the characters' growth such as learning, changing, overcoming, and more.
-
The movie also has positive messages that inspire kids to be brave, loyal, and kind. The movie shows how Bheem and his friends face their challenges with courage and determination. The movie shows how Bheem and his friends help each other with loyalty and teamwork. The movie shows how Bheem and his friends treat others with kindness and respect.
-
Conclusion
-
Chhota Bheem and the Throne of Bali is a fun-filled adventure that will keep kids entertained and educated. It has a thrilling story, charming characters, beautiful scenery, a heartwarming message, colorful animation, catchy music, realistic sound effects, and plenty of humor and emotion. It is a movie that will make kids laugh, cry, cheer, and learn.
-
If you are looking for a movie that will take you to a different world, where you can enjoy adventure, fantasy, and comedy with your favorite hero Chhota Bheem and his friends, then you should watch Chhota Bheem and the Throne of Bali. It is a movie that you will not regret watching.
-
So what are you waiting for? Grab your popcorn and get ready to watch Chhota Bheem and the Throne of Bali, a movie that will make you say "Wow!"
-
Frequently Asked Questions
Q: Where can I watch Chhota Bheem and the Throne of Bali?
A: You can watch it on Prime Video or on YouTube.

Q: Who are the voice actors of Chhota Bheem and the Throne of Bali?
A: The voice cast includes Jigna Bharadwaj as Chutki; Rupa Bhimani as Indumati; Rajesh Kava as Jaggu; Vatsal Dubey as Bheem; Julie Tejwani as Kalia; Sabina Malik as Dholu and Bholu; Sonal Kaushal as Raju; Arun Shekhar as King Indravarma; Rishi Khurana as Arjun; Anamaya Verma as Rajguru Bahula; Nandita Sharma as Rangda; Shanoor Mirza as Aci; Pinky Rajput as Ayu; Sanket Mhatre as Barong; and others.

Q: What were the budget and box office collection of Chhota Bheem and the Throne of Bali?
A: According to Wikipedia, the budget was ₹5 crore (US$630,000) and the box office collection was ₹5.38 crore (US$670,000).

Q: Is Chhota Bheem and the Throne of Bali based on a true story?
A: No, it is a fictional story inspired by Balinese mythology and culture.

Q: Is there a sequel to Chhota Bheem and the Throne of Bali?
A: Not yet, but there are other movies in the Chhota Bheem series, such as Chhota Bheem and Krishna in Mayanagari, Chhota Bheem: Himalayan Adventure, Chhota Bheem: Kung Fu Dhamaka, and others.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/((FULL)) Download Just Dance Now MOD APK V3.2.0 (Unlimited Money) Free ((FULL)) Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/((FULL)) Download Just Dance Now MOD APK V3.2.0 (Unlimited Money) Free ((FULL)) Download.md
deleted file mode 100644
index 55231a50c45c91543203bef25f4f05d5a3cbba05..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/((FULL)) Download Just Dance Now MOD APK V3.2.0 (Unlimited Money) Free ((FULL)) Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Download Just Dance Now MOD APK v3.2.0 (Unlimited Money) Free Download
-
-Description of Home Makeover: My Perfect House free download .apk file. Welcome to ... Then this ultimate home design makeover game is just perfect for you.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CINEMA 4D Studio R19.024 Multilingual 2017 Full Version [PORTABLE].md b/spaces/1gistliPinn/ChatGPT4/Examples/CINEMA 4D Studio R19.024 Multilingual 2017 Full Version [PORTABLE].md
deleted file mode 100644
index a39be9963772033b37ceed9f6fc6fa991acf4134..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/CINEMA 4D Studio R19.024 Multilingual 2017 Full Version [PORTABLE].md
+++ /dev/null
@@ -1,26 +0,0 @@
-
CINEMA 4D Studio R19.024 Multilingual 2017 full version
-
-exe […]
-
-If you like to create video game-quality videos, the video editing tool VideoStudio Ultimate 2017 is useful. It comes with lots of tools, and you will get a chance to play with the different video editing functions. Besides, you will get ready-to-use templates that you can drag […]
-
-PDF3D Converter is a useful application that will convert PDF files into 3D files. The 3D files can be read by any 3D software. You will be able to convert standard PDF files into 3D. This will make your PDF documents more interesting and interactive. The original PDF files will […]
-
-If you like to create 3D videos, you will need to use the 3D Movie Maker for Windows. The 3D movie maker will allow you to make various 3D video effects. You can also use the other 3D tools for the creation of 3D videos. Besides, you will get a chance to […]
-
-DXF Converter is a handy tool which will help you convert your AutoCAD drawings into DXF drawings. In fact, it will allow you to convert AutoCAD DXF files into AutoCAD drawings. Also, you will get a chance to convert a DXF drawing into AutoCAD DXF format. DXF […]
-
-Indy Converter Deluxe for PC is a useful converter which will convert between different video formats. You will get a chance to convert between popular formats such as AVI, WMV, MP4, MOV, MKV, MPG, MPEG, MP3, WMA, FLV, ASF, and more. The video converter will provide you with […]
-
-What are the best Video To DVD Converter for Mac? Since Video To DVD Converter is available on Mac, the developers had to create a compatible application to convert video to DVD on Mac. I’ve tested all the best Video To DVD Converter for Mac that are available online […]
-
-Windows 8 is already launched and people have to live with Windows 8 and the desktop environment. However, there are still quite a lot of programs which are not ready for Windows 8. Most of the Windows 8 compatible program is available in the Windows 7 version. Therefore, if you are […]
-
-Download to Computer:
-
-About us:
-
-SoftShare - Your Download Link for Software & PC Games Free is a website like other websites. We work day and night to serve you with as many links as are available.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Essl Time Track Lite 6.5 [BETTER] Cracked.md b/spaces/1gistliPinn/ChatGPT4/Examples/Essl Time Track Lite 6.5 [BETTER] Cracked.md
deleted file mode 100644
index 1f6ab2c7a7c59a95cea0d518817b9af127ea9c49..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Essl Time Track Lite 6.5 [BETTER] Cracked.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
essl time track lite 6.5 cracked: How to Download and Use It for Free
-
-
If you are looking for a web-based software that can help you manage your time and attendance, you may have heard of essl time track lite 6.5. This software is designed to simplify the time and attendance process for various organizations. It allows you to track employee attendance accurately against the approved leaves and allocated shifts. It also provides reports and logs to better maintain access to a certain area.
However, essl time track lite 6.5 is not free software. You need to purchase a license key from essl security, the developer and provider of essl time track lite 6.5, to use it without any limitations. The license key is a unique code that verifies that you have a legitimate copy of the software.
-
-
That's why many people are looking for essl time track lite 6.5 cracked, a version that bypasses the license key verification and allows you to use essl time track lite 6.5 for free without purchasing a license key.
-
-
What is essl time track lite 6.5 cracked?
-
-
essl time track lite 6.5 cracked is a modified build that bypasses the licensing checks of essl time track lite 6.5 and enables you to use it for free without purchasing a license key. Software of this kind is commonly called a crack: a program that modifies or disables the security features of another application or game.
-
-
-
essl time track lite 6.5 cracked works by using an algorithm that mimics the one used by the official license key generator of essl time track lite 6.5. It produces license keys that match the format and criteria of the official ones. By using essl time track lite 6.5 cracked, you can generate as many license keys as you want for essl time track lite 6.5.
-
-
essl time track lite 6.5 cracked is created by hackers who crack and distribute pirated software and games. They have cracked and released many popular applications and games, such as Microsoft Office, Adobe Photoshop, GTA V, FIFA 21, and more.
-
-
How to Download essl time track lite 6.5 cracked?
-
-
There are many websites that claim to offer essl time track lite 6.5 cracked for free download, but not all of them are reliable or safe. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing where to download essl time track lite 6.5 cracked.
-
-
One of the best sources to download essl time track lite 6.5 cracked is YouTube, a video-sharing platform that provides verified and high-quality videos for various topics, such as software, games, movies, music, and more. You can find essl time track lite 6.5 cracked by searching for it on YouTube or by following this link: https://www.youtube.com/watch?v=YYIweRiLpI0
-
-
To download essl time track lite 6.5 cracked from YouTube, you need to have a YouTube downloader installed on your computer, such as YTD Video Downloader or 4K Video Downloader. A YouTube downloader is a software that allows you to download videos from YouTube and other websites.
-
-
Once you have a YouTube downloader installed, you can follow these steps:
-
-
-
Open YouTube and search for essl time track lite 6.5 cracked or click on the link above.
-
Select the video that shows how to download and use essl time track lite 6.5 cracked.
-
Copy the URL of the video from the address bar.
-
Open your YouTube downloader and paste the URL into the input field.
-
Select the format and quality of the video that you want to download.
-
Click on the download button and wait for the video to be downloaded.
-
Open the downloaded video and follow the instructions on how to download and use essl time track lite 6.5 cracked.
-
-
-
How to Use essl time track lite 6.5 cracked?
-
-
After you have downloaded essl time track lite 6.5 cracked from YouTube, you need to extract it using a software like WinRAR or 7-Zip.
-
-
You will find two files inside the extracted folder: License Key Generator.exe and eTimeTrackLite.exe.
-
-
The License Key Generator.exe file is the crack software itself that will generate license keys for essl time track lite 6.5.
-
-
The eTimeTrackLite.exe file is the original software that will be modified by the crack software.
-
-
To use essl time track lite 6.5 cracked to generate license keys for essl time track lite 6.5, follow these steps:
-
-
-
Run License Key Generator.exe as administrator.
-
A window will pop up with two fields: Hardware ID and License Key.
-
The Hardware ID field will show your computer's unique identification code.
-
The License Key field will be empty at first.
-
Copy your Hardware ID and paste it into License Key Generator.exe.
-
Click on Generate button.
-
A license key will appear in the License Key field.
-
Copy your License Key and paste it into eTimeTrackLite.exe activation window.
-
Click on Activate button.
-
You can now use essl time track lite 6.5 for free without any limitations.
-
-
-
-
What are the Risks of Using essl time track lite 6.5 cracked?
-
-
While using essl time track lite 6.5 cracked may seem tempting and convenient, it comes with dangers and disadvantages that you should be aware of before you decide to use it.
-
-
Some of these risks and drawbacks are:
-
-
-
It is illegal and unethical. Using essl time track lite 6.5 cracked is a form of software piracy, which is a crime that violates the intellectual property rights of essl security, the developer and provider of essl time track lite 6.5. You may face legal consequences or penalties if you are caught using essl time track lite 6.5 cracked.
-
It is insecure and unreliable. Using essl time track lite 6.5 cracked may expose your computer to security risks and performance issues as cracked software may contain viruses or malware that can damage your system or steal your data. You may also experience errors or crashes while using essl time track lite 6.5 cracked as it may not be compatible with your operating system or hardware.
-
It is unsupported and outdated. Using essl time track lite 6.5 cracked may prevent you from accessing official support and updates from essl security, which may affect the quality and functionality of essl time track lite 6.5. You may miss out on important bug fixes, security patches, or new features that are available in the official version of essl time track lite 6.5.
-
-
-
How to Uninstall essl time track lite 6.5 cracked?
-
-
If you have used essl time track lite 6.5 cracked and want to uninstall it from your computer, you can follow these steps:
-
-
-
Open Control Panel and select Programs and Features.
-
Find eTimeTrackLite in the list of installed programs and click on Uninstall.
-
Follow the on-screen instructions to complete the uninstallation process.
-
Delete the License Key Generator.exe and eTimeTrackLite.exe files from your computer.
-
Scan your computer with an antivirus or anti-malware software to remove any traces of viruses or malware that may have been installed by essl time track lite 6.5 cracked.
-
-
-
Conclusion
-
-
In this article, we have shown you how to download, use, and uninstall essl time track lite 6.5 cracked to use essl time track lite 6.5 for free without buying a license key. We have also discussed the benefits and drawbacks of using essl time track lite 6.5 cracked for this purpose.
-
-
However, using essl time track lite 6.5 cracked is illegal and unethical as it violates the terms and conditions of essl security, the developer and provider of essl time track lite 6.5. You may face legal consequences or penalties if you are caught using essl time track lite 6.5 cracked.
-
-
It may also expose your computer to security risks and performance issues as cracked software may contain viruses or malware that can damage your system or steal your data.
-
-
Therefore, we recommend that you use essl time track lite 6.5 legally by buying a license key from essl security's official website: http://www.etimetracklite.com/
-
-
This way, you can enjoy all the features and benefits of essl time track lite 6.5 without any worries or limitations.
-
-
We hope that this article has been helpful for you in learning more about essl time track lite 6.5 cracked.
-
-
If you have any questions or feedback about this article or about essl time track lite 6.5 cracked in general, feel free to leave a comment below.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Unlimited APK Download and Enjoy the Epic Shooter Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Unlimited APK Download and Enjoy the Epic Shooter Game.md
deleted file mode 100644
index baa96f435dbd0d5809f16cace2c3e6e202ef8c96..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Unlimited APK Download and Enjoy the Epic Shooter Game.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
Brawl Stars Unlimited Apk: How to Play Brawl Stars with Unlimited Features
-
If you are a fan of fast-paced multiplayer games with colorful characters and exciting modes, you should definitely check out Brawl Stars. Brawl Stars is a mobile game developed by Supercell, the makers of Clash of Clans and Clash Royale. In this game, you can team up with your friends or play solo in various game modes, such as Gem Grab, Showdown, Brawl Ball, Bounty, Heist, and more. You can also unlock and upgrade dozens of brawlers with different abilities, star powers, and gadgets. Collect unique skins to stand out and show off your style.
However, if you want to enjoy Brawl Stars to the fullest, you might need a lot of coins and gems to unlock new brawlers, skins, and other items. Coins and gems are the in-game currencies that can be earned by playing the game or purchased with real money. But what if you don't want to spend hours grinding or pay real money? Is there a way to get unlimited coins and gems in Brawl Stars?
-
The answer is yes! You can download and install brawl stars unlimited apk on your Android device and get access to unlimited features in Brawl Stars. In this article, we will show you how to do that step by step. We will also tell you what are the features of brawl stars unlimited apk and why you should try it out.
-
How to Download and Install Brawl Stars Unlimited Apk
-
Brawl stars unlimited apk is a modified version of the original Brawl Stars game that gives you unlimited coins and gems. It also unlocks all brawlers and skins for you. You can use this apk file to install Brawl Stars on your Android device without using the Google Play Store. Here are the steps you need to follow:
-
-
Step 1: Find a reliable source for the apk file
-
The first thing you need to do is find a trustworthy website that offers brawl stars unlimited apk for download. There are many websites that claim to provide this file, but some of them might be fake or malicious. You should always check the reviews and ratings of the website before downloading anything from it.
-
One of the websites that we recommend is Brawl Stars Mod. This website has a good reputation and provides safe, working apk files for various games, including Brawl Stars. You can visit it and search for brawl stars unlimited apk.
-
Step 2: Enable unknown sources on your device
-
Before you can install any apk file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store.
-
To enable unknown sources, go to your device settings > security > unknown sources > toggle on. You might see a warning message that installing apps from unknown sources might harm your device. Don't worry, this is just a precautionary measure. As long as you download from a reliable source, you should be fine.
-
Step 3: Download and install the apk file
-
Now that you have enabled unknown sources, you can proceed to download and install brawl stars unlimited apk on your device. Go back to the website where you found the apk file and tap on the download button. Wait for the download to finish.
-
Once the download is complete, go to your file manager and locate the downloaded file. Tap on it and follow the instructions to install the app. You might need to grant some permissions to the app during the installation process.
-
Step 4: Launch the game and enjoy unlimited features
-
Congratulations! You have successfully installed brawl stars unlimited apk on your device. Now you can launch the game and start playing with unlimited coins and gems. You can also unlock all brawlers and skins and access all game modes and maps. Have fun!
-
What are the Features of Brawl Stars Unlimited Apk
-
Brawl stars unlimited apk is not just a regular version of Brawl Stars. It has some amazing features that make it more enjoyable and rewarding. Here are some of the features that you can expect from brawl stars unlimited apk:
-
Unlimited Coins and Gems
-
Coins and gems are the main currencies in Brawl Stars. You need them to unlock new brawlers, skins, star powers, gadgets, and other items. You can earn them by playing the game or buying them with real money. However, with brawl stars unlimited apk, you don't have to worry about running out of coins and gems. You will get unlimited amounts of them as soon as you start the game. You can use them to buy anything you want in the game without any restrictions.
-
Unlock All Brawlers and Skins
-
Brawlers are the characters that you can play as in Brawl Stars. There are over 40 brawlers in the game, each with their own unique abilities, star powers, gadgets, and personalities. Skins are cosmetic items that change the appearance of your brawlers. Some skins are exclusive to certain events or seasons. To unlock new brawlers and skins, you need to open boxes or use coins and gems. However, with brawl stars unlimited apk, you don't have to wait or spend anything to unlock all brawlers and skins. You will have access to all of them from the beginning of the game. You can choose any brawler and skin that you like and customize your look.
-
Access to All Game Modes and Maps
-
Brawl Stars has various game modes that offer different challenges and objectives. Some of the game modes are Gem Grab, Showdown, Brawl Ball, Bounty, Heist, Siege, Hot Zone, Knockout, Duo Showdown, Solo Showdown, Big Game, Robo Rumble, Boss Fight, Power Play, and more. Each game mode has different maps that change the layout and strategy of the game. Some maps are only available for a limited time or for certain events or seasons. To access all game modes and maps, you need to play the game regularly and reach certain trophies or levels. However, with brawl stars unlimited apk, you don't have to worry about missing any game mode or map. You will have access to all of them from the start of the game. You can play any game mode and map that you want and enjoy the variety and fun.
-
No Ads or Root Required
-
One of the best things about brawl stars unlimited apk is that it does not require any ads or root to work. Ads are annoying pop-ups that interrupt your gameplay and try to make you watch videos or download other apps. Root is a process that gives you full control over your device's system settings and allows you to modify or delete system files. However, rooting your device can also void your warranty, expose your device to security risks, or cause compatibility issues with some apps. With brawl stars unlimited apk, you don't have to deal with any ads or root your device. You can simply download and install the apk file and enjoy the game without any hassle.
-
Conclusion
-
Brawl Stars is a fun and addictive multiplayer game that offers a lot of action and excitement. However, if you want to experience more features and benefits in Brawl Stars, you should try brawl stars unlimited apk. This is a modified version of the original game that gives you unlimited coins and gems, unlocks all brawlers and skins, accesses all game modes and maps, and does not require any ads or root.
-
If you want to download and install brawl stars unlimited apk on your Android device, you can follow the steps we have provided in this article. We have also shown you what are the features of brawl stars unlimited apk and why you should try it out.
-
We hope you found this article helpful and informative. If you have any questions or feedback about brawl stars unlimited apk, feel free to leave a comment below. We would love to hear from you.
-
FAQs
-
Here are some frequently asked questions about brawl stars unlimited apk:
-
Is brawl stars unlimited apk safe?
-
Brawl stars unlimited apk is safe as long as you download it from a reliable source, such as [Brawl Stars Mod]. However, you should always be careful when downloading and installing any apk file from the internet. You should scan the file with an antivirus software before opening it. You should also backup your data and device before installing the apk file. You should also be aware that using brawl stars unlimited apk might violate the terms and conditions of the original game and result in a ban or suspension of your account. Use it at your own risk.
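Beyond an antivirus scan, you can also compare the downloaded file's SHA-256 hash against the checksum the download site publishes, if it publishes one. A minimal sketch; the file path and expected hash would come from your own download:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string (read the APK in binary mode)."""
    return hashlib.sha256(data).hexdigest()

def file_matches(path: str, expected_hex: str) -> bool:
    """True if the file at `path` hashes to the published checksum."""
    with open(path, "rb") as f:
        return sha256_of(f.read()) == expected_hex.lower()

# Known test vector: SHA-256 of the ASCII bytes b"hello"
print(sha256_of(b"hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

If the hashes don't match, the file was corrupted or tampered with in transit, and you should not install it.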
-
Is brawl stars unlimited apk compatible with my device?
-
Brawl stars unlimited apk is compatible with most Android devices that run on Android 4.3 or higher. However, some devices might have compatibility issues due to different hardware or software specifications. If you encounter any problems while installing or playing brawl stars unlimited apk, you can try to update your device, clear your cache, or reinstall the apk file. You can also contact the website where you downloaded the apk file for support.
-
Can I play brawl stars unlimited apk with my friends?
-
Yes, you can play brawl stars unlimited apk with your friends who also have the same apk file installed on their devices. You can join or create a club and invite your friends to play together. You can also chat with your friends and send them friend requests. However, you cannot play brawl stars unlimited apk with players who have the original game installed on their devices. You will only be matched with players who have the same version of the game as you.
-
Can I update brawl stars unlimited apk?
-
Brawl stars unlimited apk is not updated automatically like the original game. You will need to manually download and install the latest version of the apk file whenever there is a new update available. You can check the website where you downloaded the apk file for updates or notifications. You can also follow their social media accounts or blogs for news and updates.
-
Can I use brawl stars unlimited apk offline?
-
No, you cannot use brawl stars unlimited apk offline. Brawl Stars is an online game that requires an internet connection to play. You will need to connect to a stable Wi-Fi or mobile data network to access the game servers and play with other players. If you lose your connection or go offline, you will not be able to play the game or access your account.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cricket Device Unlock - The Official and Safe Way to Unlock Your Phone.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cricket Device Unlock - The Official and Safe Way to Unlock Your Phone.md
deleted file mode 100644
index f8cc65c3b336728eff018a5a8eb553ba9739735f..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cricket Device Unlock - The Official and Safe Way to Unlock Your Phone.md
+++ /dev/null
@@ -1,232 +0,0 @@
-
-
How to Unlock a Device from Cricket Wireless
-
Cricket Wireless is a prepaid wireless service provider that operates in the United States. It offers affordable plans and phones that work on its nationwide 4G LTE network. However, if you have a device from Cricket Wireless and you want to use it with another carrier or travel abroad, you might need to unlock it first.
Unlocking your device from Cricket Wireless means that you can use it with any compatible SIM card from another carrier. This can give you more flexibility and save you money on roaming fees or switching plans. In this article, we will show you how to unlock your device from Cricket Wireless using different methods and steps.
-
Device Unlock Requirements
-
Before you can unlock your device from Cricket Wireless, you need to meet some requirements that are set by the carrier. These include:
-
-
Your device must be designed for use on and locked to the Cricket network.
-
Your device must be active for at least six months of paid service on that device.
-
Your phone number must not be reported lost or stolen or suspended for fraud.
-
Your account must be in good standing with no past due payments or fees.
-
-
If you meet these requirements, you can request an unlock code from Cricket Wireless. The unlock code is a unique number that allows you to unlock your device from the carrier's network.
-
Device Unlock Methods
-
There are different methods that you can use to unlock your device from Cricket Wireless. These include:
-
-
Using the My Account website: You can log in to your Cricket account online and request an unlock code from the Device Unlock page.
-
Using the myCricket app: You can download the myCricket app on your device and request an unlock code from the Settings menu.
-
Calling customer support: You can call 1-800-CRICKET (1-800-274-2538) and speak to a representative who can help you with the unlock process.
-
-
Depending on the method you choose, you might need to provide some information, such as your phone number, IMEI number, account PIN, etc. You will also receive an email confirmation with the unlock code and instructions.
-
-
Device Unlock Steps
-
Once you have received your unlock code from Cricket Wireless, you can follow these steps to unlock your device from the carrier's network:
-
-
Power off your device and remove the Cricket SIM card.
-
Insert a new SIM card from another carrier into your device.
-
Power on your device and enter the unlock code when prompted.
-
If the unlock code is accepted, you will see a message that says "Network Unlock Successful".
-
If the unlock code is rejected, you will see a message that says "Network Unlock Failed". In this case, you might need to contact Cricket Wireless or try another method.
-
-
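The FAQ at the end of this article notes that the prompt only tolerates a limited number of wrong codes (ten) before the device locks permanently. The gist of that accept/reject logic, as an illustrative sketch — the messages match the ones quoted above, but the flow itself is a simplification:

```python
def try_unlock(entered_codes, correct_code, max_attempts=10):
    """Simulate the carrier unlock prompt: success, failure, or permanent lock."""
    for attempt, code in enumerate(entered_codes, start=1):
        if code == correct_code:
            return "Network Unlock Successful"
        if attempt >= max_attempts:
            # Too many wrong entries: the device stays locked to the carrier.
            return "Permanently locked"
    return "Network Unlock Failed"

print(try_unlock(["1111", "2222", "12345678"], "12345678"))  # Network Unlock Successful
```

The practical takeaway: double-check the code before each entry, because the attempt budget does not reset.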
Here are some screenshots and tips for each method of unlocking your device from Cricket Wireless:
-
Using the My Account website
-
To use the My Account website to unlock your device from Cricket Wireless, log in to your Cricket account online, go to the Device Unlock page, and request an unlock code. Wait for an email confirmation with the unlock code and instructions.
-
Using the myCricket app
-
To use the myCricket app to unlock your device from Cricket Wireless, follow these steps:
-
Open the myCricket app on your device.
-
Tap on the menu icon on the top left corner and select "Settings".
-
Tap on "Unlock Device" under the Device Settings section.
-
Review the unlock requirements and agree to the terms and conditions.
-
Tap on "Submit" and wait for an email confirmation with the unlock code and instructions.
-
-
-
Calling customer support
-
To call customer support to unlock your device from Cricket Wireless, follow these steps:
-
-
Dial 1-800-CRICKET (1-800-274-2538) from any phone and follow the prompts.
-
Select the option for "Device Unlock" and enter your phone number when asked.
-
Provide your account PIN and IMEI number when asked. You can find your IMEI number by dialing *#06# on your device or by checking the label under the battery or on the box.
-
Review the unlock requirements and agree to the terms and conditions.
-
Wait for an email confirmation with the unlock code and instructions.
-
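The last digit of a 15-digit IMEI is a Luhn check digit, so you can sanity-check an IMEI you copied down before reading it out to support. A quick sketch (the sample number is a commonly used example IMEI, not a real device's):

```python
def luhn_valid(number: str) -> bool:
    """Check a digit string (e.g. a 15-digit IMEI) against the Luhn formula."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; doubled digits over 9 lose 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("490154203237518"))  # True
print(luhn_valid("490154203237519"))  # False (last digit mistyped)
```

A failed check usually means a typo, so re-dial *#06# and copy the number again.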
-
Device Unlock Troubleshooting
-
Sometimes, you might encounter some problems or issues when unlocking your device from Cricket Wireless. Here are some common ones and how to solve them:
-
Error message: "Invalid SIM card"
-
This error message means that your device does not recognize or accept the new SIM card that you inserted. This could be because:
-
-
The new SIM card is not compatible with your device. You need to check if your device supports the network frequency bands of the new carrier. You can use online tools like Will My Phone Work or contact the carrier to confirm compatibility.
Error message: "Network Unlock Failed"
-
This error message means that your device did not accept the unlock code that you entered. This could be because:
-
-
The unlock code is incorrect or expired. You need to double-check the unlock code that you received from Cricket Wireless and make sure you enter it correctly and within 24 hours of receiving it.
-
The device is not eligible for unlocking. You need to check if your device meets the unlock requirements that we mentioned earlier and contact Cricket Wireless if you have any questions.
-
The device is already unlocked. You need to check if your device is already unlocked by inserting a different SIM card and seeing if it works. If it does, then you don't need to enter the unlock code.
-
-
Problem: No service or signal after unlocking
-
This problem means that your device is not connecting to the new carrier's network after unlocking. This could be because:
-
-
The new SIM card is not activated or compatible. You need to contact the new carrier and make sure that the SIM card is activated and compatible with your device.
-
The new carrier's network is not available or supported. You need to check the network coverage and frequency bands of the new carrier and make sure that they match with your device.
-
The device's APN settings are not correct. You need to update the APN settings of your device according to the new carrier's instructions.
-
-
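Band compatibility comes down to whether your device and the new carrier share at least one frequency band, which is essentially what tools like Will My Phone Work check for you. A toy sketch of that check — the band numbers below are made-up placeholders, not real device or carrier data:

```python
def shared_bands(device_bands, carrier_bands):
    """Return the sorted list of bands both the device and the carrier support."""
    return sorted(set(device_bands) & set(carrier_bands))

# Placeholder data for illustration only.
device = {2, 4, 5, 12, 66}
carrier = {2, 12, 71}

common = shared_bands(device, carrier)
print(common)        # [2, 12]
print(bool(common))  # True -> at least some overlap, so service is possible
```

No overlap at all means no amount of unlocking or APN tweaking will get you a signal on that carrier.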
How to Use an Unlocked Device from Cricket Wireless
-
Now that you have unlocked your device from Cricket Wireless, you can enjoy the benefits of using it with any compatible SIM card from another carrier. This can allow you to travel abroad, switch carriers, or save money on your phone bills. In this section, we will show you how to use your unlocked device from Cricket Wireless with a new SIM card.
-
How to Insert a New SIM Card in an Unlocked Device from Cricket Wireless
-
To insert a new SIM card in an unlocked device from Cricket Wireless, follow these steps:
-
-
Power off your device and remove the back cover and battery (if applicable).
-
Locate the SIM card slot and gently slide out the old SIM card.
-
Insert the new SIM card into the slot, making sure that it fits securely and correctly.
-
Replace the battery and back cover and power on your device.
-
-
-
How to Activate a New SIM Card in an Unlocked Device from Cricket Wireless
-
To activate a new SIM card in an unlocked device from Cricket Wireless, follow these steps:
-
-
Power on your device and wait for it to recognize the new SIM card.
-
If prompted, enter the PIN code of the new SIM card (if applicable).
-
If prompted, select the network mode (GSM, CDMA, LTE, etc.) that matches the new carrier's network.
-
If prompted, select the network operator (AT&T, T-Mobile, Verizon, etc.) that matches the new carrier's name.
-
If prompted, restart your device to complete the activation process.
-
-
-
How to Check the Network Compatibility of an Unlocked Device from Cricket Wireless
-
To check the network compatibility of an unlocked device from Cricket Wireless, follow these steps:
-
-
Find the IMEI number of your device by dialing *#06# on your device or by checking the label under the battery or on the box.
Enter the IMEI number of your device and select the carrier, country, and network that you want to use.
-
Click on "Check Compatibility" and see the results. The tool will tell you if your device is compatible with the selected network and what features are supported.
-
-
-
How to Access the APN Settings of an Unlocked Device from Cricket Wireless
-
The APN settings are the configurations that allow your device to connect to the internet and send/receive multimedia messages (MMS) on a specific network. If you use an unlocked device from Cricket Wireless with a new SIM card, you might need to update the APN settings according to the new carrier's instructions.
-
To access the APN settings of an unlocked device from Cricket Wireless, follow these steps:
-
-
Go to the Settings menu on your device and tap on "Network & Internet" or "Connections".
-
Tap on "Mobile Network" or "Mobile Data" and then on "Access Point Names" or "APN".
-
You will see a list of APNs that are available on your device. You can either edit an existing APN or add a new one by tapping on the "+" icon.
-
Enter the APN settings that are provided by your new carrier. You can find them on their website or by contacting their customer support.
-
Save the changes and restart your device to apply the new APN settings.
-
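An APN entry is just a handful of named fields. The sketch below models one and flags the fields every carrier's instructions will include (name, APN, MCC, MNC); the sample values are placeholders, not a real carrier's settings:

```python
REQUIRED = ("name", "apn", "mcc", "mnc")

def missing_fields(entry: dict) -> list:
    """List the required APN fields that are absent or left blank."""
    return [f for f in REQUIRED if not entry.get(f)]

# Placeholder values -- use the settings your new carrier actually publishes.
apn_entry = {
    "name": "Example Carrier Internet",
    "apn": "internet.example",
    "mcc": "310",
    "mnc": "260",
    "mmsc": "",  # optional; only needed for picture messaging (MMS)
}
print(missing_fields(apn_entry))  # []
```

If mobile data still fails after saving, re-check these fields character by character against the carrier's published settings before trying anything else.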
-
-
How to Download and Install Unlock Device Cricket APK
-
If you are looking for an alternative way to unlock your device from Cricket Wireless, you might want to try Unlock Device Cricket APK. This is an app that claims to be able to unlock any device from Cricket Wireless without requiring an unlock code or meeting any requirements. In this section, we will show you what is Unlock Device Cricket APK, where to download it, how to install it, and how to use it.
-
What is Unlock Device Cricket APK?
-
Unlock Device Cricket APK is an app that claims to be able to unlock any device from Cricket Wireless in a matter of minutes. It works by bypassing the carrier's lock and allowing you to use any compatible SIM card from another carrier. It also claims to have some features and benefits, such as:
-
-
It is free and easy to use.
-
It supports all models and brands of devices from Cricket Wireless.
-
It does not require root access or any technical skills.
-
It does not affect the warranty or performance of your device.
-
It allows you to switch carriers, travel abroad, or save money on your phone bills.
-
-
Where to Download Unlock Device Cricket APK?
-
Unlock Device Cricket APK is not available on the official Google Play Store or Apple App Store. This is because it is not a verified or authorized app by Cricket Wireless or any other carrier. Therefore, you need to download it from other sources, such as:
-
-
The official website of Unlock Device Cricket APK: You can go to https://unlockdevicecricketapk.com/ and download the latest version of the app for Android or iOS devices.
-
The third-party platforms that host Unlock Device Cricket APK: You can also find Unlock Device Cricket APK on some platforms that offer free apps and games, such as APKPure, APKMirror, Aptoide, etc. However, you need to be careful when downloading apps from these sources, as they might contain malware or viruses that can harm your device.
-
-
How to Install Unlock Device Cricket APK?
-
To install Unlock Device Cricket APK on your device, follow these steps:
-
-
Download the Unlock Device Cricket APK file from one of the sources mentioned above.
-
If you are using an Android device, go to the Settings menu and enable the option "Unknown Sources" under Security or Applications. This will allow you to install apps from sources other than the Google Play Store.
-
If you are using an iOS device, go to the Settings menu and tap on "General". Then, tap on "Profiles & Device Management" and trust the profile that belongs to Unlock Device Cricket APK. This will allow you to install apps from sources other than the Apple App Store.
-
Locate the Unlock Device Cricket APK file on your device and tap on it to start the installation process.
-
Follow the on-screen instructions and grant the necessary permissions to the app.
-
Wait for the installation to finish and launch the app from your home screen or app drawer.
-
-
-
How to Use Unlock Device Cricket APK?
-
To use Unlock Device Cricket APK to unlock your device from Cricket Wireless, follow these steps:
-
-
Launch the app and agree to the terms and conditions.
-
Select the model and brand of your device from the list or enter it manually.
-
Enter the IMEI number of your device. You can find it by dialing *#06# on your device or by checking the label under the battery or on the box.
-
Tap on "Unlock Now" and wait for the app to generate an unlock code for your device.
-
Power off your device and remove the Cricket SIM card.
-
Insert a new SIM card from another carrier into your device.
-
Power on your device and enter the unlock code when prompted.
-
If the unlock code is accepted, you will see a message that says "Network Unlock Successful".
-
If the unlock code is rejected, you will see a message that says "Network Unlock Failed". In this case, you might need to contact Unlock Device Cricket APK support or try another method.
-
-
-
Conclusion
-
In this article, we have shown you how to unlock your device from Cricket Wireless using different methods and steps. We have also explained how to use your unlocked device from Cricket Wireless with a new SIM card. Finally, we have introduced you to Unlock Device Cricket APK, an app that claims to be able to unlock any device from Cricket Wireless without requiring an unlock code or meeting any requirements. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
Frequently Asked Questions
-
Here are some frequently asked questions and answers about unlocking a device from Cricket Wireless:
-
Q: How long does it take to receive an unlock code from Cricket Wireless?
-
A: It usually takes up to two business days to receive an unlock code from Cricket Wireless after submitting a request. However, it might take longer depending on the availability of the unlock code or other factors.
-
Q: How many times can I enter an unlock code on my device?
-
A: You can enter an unlock code up to 10 times on your device. If you enter an incorrect unlock code more than 10 times, your device will be permanently locked to Cricket Wireless and you will not be able to unlock it again.
-
Q: What if I forget my account PIN or IMEI number?
-
A: If you forget your account PIN or IMEI number, you can contact Cricket Wireless customer support at 1-800-CRICKET (1-800-274-2538) and they will help you with the unlock process. You might need to provide some information, such as your phone number, email address, etc.
-
Q: Is Unlock Device Cricket APK safe and legal?
-
A: Unlock Device Cricket APK is not a verified or authorized app by Cricket Wireless or any other carrier. Therefore, we cannot guarantee its safety or legality. Use it at your own risk and discretion. We are not responsible for any damage or loss that may result from using this app.
-
Q: Can I use my unlocked device from Cricket Wireless with any carrier?
-
A: You can use your unlocked device from Cricket Wireless with any compatible SIM card from another carrier. However, you need to check if your device supports the network frequency bands of the new carrier. You also need to update the APN settings of your device according to the new carrier's instructions. You can find them on their website or by contacting their customer support.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Super Bear Adventure MOD APK and Join the Fun with Your Friends.md b/spaces/1phancelerku/anime-remove-background/Download Super Bear Adventure MOD APK and Join the Fun with Your Friends.md
deleted file mode 100644
index 5184411ce958e9e7150fcf53dc200d0188ebc9e8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Super Bear Adventure MOD APK and Join the Fun with Your Friends.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
Download Game 3D Platformer Super Bear Adventure Mod APK
-
If you are looking for a fun and nostalgic platformer game that will remind you of the late 90s classics, then you should try Super Bear Adventure. This game is a 3D platformer game that lets you explore six open world levels, discover their secrets, talk to the kingdom's inhabitants, collect coins, unlock hats, fight enemies, and free your friends. And if you want to enhance your gaming experience, you can download the mod APK version of Super Bear Adventure and play it on your PC. In this article, we will tell you everything you need to know about Super Bear Adventure and how to download and install its mod APK on your PC.
-
What is Super Bear Adventure?
-
Super Bear Adventure is a 3D platformer game developed by Earthkwak Games. It is inspired by the late 90s games such as Super Mario 64, Banjo-Kazooie, and Spyro the Dragon. The game has a colorful and cartoonish graphics style that will appeal to both kids and adults. The game also has a catchy soundtrack that matches the mood of each level.
-
The story of the game is that a mysterious being has arrived in the peaceful realm of the bears and has locked your bear friends in cages and mind-controlled the other animals. It is your duty as a brave bear to free them all and restore peace to the world. You will have to explore six different levels, each with its own theme, challenges, secrets, and boss. You will also have to collect coins that you can use to buy hats that will give you different abilities. You can also talk to the other characters in the game and learn more about the lore and history of the kingdom.
-
Features of Super Bear Adventure
-
-
6 open world levels with different themes, challenges, secrets, and bosses
-
Over 50 hats to unlock and customize your bear
-
A variety of enemies and obstacles to overcome
-
A charming and humorous dialogue with the other characters
-
A simple and intuitive control scheme that works well on mobile devices
-
A nostalgic and retro graphics style that pays homage to the late 90s games
-
A catchy and upbeat soundtrack that matches the mood of each level
-
-
Why play Super Bear Adventure on PC?
-
While Super Bear Adventure is a great game to play on your mobile device, you might want to play it on your PC for several reasons. First of all, playing on a bigger screen will give you a better view of the game's beautiful graphics and details. Second, playing on a PC will allow you to use a keyboard and mouse or a controller for more precise and comfortable controls. Third, playing on a PC will ensure a smoother and faster performance without any lag or crashes. And finally, playing on a PC will let you enjoy the game's full potential with the mod APK version.
-
What is Mod APK?
-
Mod APK is a modified version of an original APK file that has been altered by third-party developers to add or remove some features from the original game or app. Mod APKs are usually created for various purposes such as unlocking premium features, removing ads, adding cheats, enhancing graphics, or improving performance.
-
Benefits of Mod APK
-
-
You can access premium features that are normally locked or require in-app purchases
-
You can remove annoying ads that interrupt your gameplay or consume your data
-
You can add cheats or hacks that will make the game easier or more fun
-
You can enhance the graphics or sound quality
-
You can improve the performance or compatibility of the game or app
-
-
Risks of Mod APK
-
-
You can expose your device to malware or viruses that can harm your data or privacy
-
You can violate the terms and conditions of the original game or app and get banned or suspended
-
You can lose your progress or data if the mod APK is not updated or compatible with the original version
-
You can miss out on the official updates or features that the original developers provide
-
-
How to download and install Super Bear Adventure Mod APK on PC?
-
If you want to download and install Super Bear Adventure Mod APK on your PC, you will need to use an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. There are many Android emulators available, but we recommend using BlueStacks, as it is one of the most popular and reliable ones. Here are the steps to download and install Super Bear Adventure Mod APK on your PC using BlueStacks:
-
Step 1: Download and install BlueStacks
-
-
Go to the official website of BlueStacks and download the latest version of the software for your PC.
-
Run the installer and follow the instructions to install BlueStacks on your PC.
-
Launch BlueStacks and sign in with your Google account or create a new one.
-
-
Step 2: Download Super Bear Adventure Mod APK from a trusted source
-
-
Search for a reputable website that offers Super Bear Adventure Mod APK for download. Make sure to check the reviews and ratings of the website and the mod APK before downloading.
-
Download the Super Bear Adventure Mod APK file to your PC.
-
Locate the downloaded file and right-click on it. Select "Open with" and choose "BlueStacks" as the app to open it.
-
-
Step 3: Install Super Bear Adventure Mod APK on BlueStacks
-
-
BlueStacks will automatically install the Super Bear Adventure Mod APK on its platform.
-
Wait for the installation to finish and then go to the "My Apps" tab on BlueStacks.
-
You will see the icon of Super Bear Adventure on the screen. Click on it to launch the game.
-
-
Step 4: Launch Super Bear Adventure and enjoy the game
-
-
You can now play Super Bear Adventure on your PC with all the mod features enabled.
-
You can use your keyboard and mouse or a controller to control the game.
-
You can also adjust the settings and preferences of the game according to your liking.
-
-
Conclusion
-
Super Bear Adventure is a 3D platformer game that will bring back memories of the late 90s classics. It is a fun and engaging game that will keep you entertained for hours. You can download the mod APK version of Super Bear Adventure and play it on your PC for a better gaming experience. All you need is an Android emulator like BlueStacks and a trusted source for downloading the mod APK file. Follow our guide above and you will be able to download and install Super Bear Adventure Mod APK on your PC in no time.
-
FAQs
-
-
What are the minimum system requirements for playing Super Bear Adventure on PC?
-
The minimum system requirements for playing Super Bear Adventure on PC are:
-
-
Operating System: Windows 7 or higher, Mac OS X 10.11 or higher
-
CPU: Intel or AMD processor with virtualization support
-
RAM: At least 4 GB
-
HDD: At least 5 GB of free disk space
-
Graphics: Intel/Nvidia/ATI, Onboard or Discrete controller with OpenGL 2.1 support
-
-
Is Super Bear Adventure Mod APK safe to use?
-
Super Bear Adventure Mod APK is generally safe to use, as long as you download it from a reliable source. However, you should always be careful when downloading any mod APK files, as they may contain malware or viruses that can harm your device or data. You should also scan the mod APK file with antivirus software before installing it.
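Beyond an antivirus scan, one lightweight check is to compare the downloaded file's SHA-256 hash against a checksum published by the download site, when one is available. A minimal Python sketch (the filename and expected hash below are placeholders, not real values):

```python
# Hedged sketch: verify a downloaded APK against a published checksum.
# The APK filename and the expected hash are placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks so large APKs don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "..."  # checksum published by the site you downloaded from
# assert sha256_of("super-bear-adventure-mod.apk") == expected
```

If the hashes differ, the file was corrupted or tampered with in transit and should not be installed.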
-
Can I play Super Bear Adventure online with other players?
-
No, Super Bear Adventure is not an online game. It is a single-player game that does not require an internet connection to play. You can play it offline anytime and anywhere you want.
-
How can I update Super Bear Adventure Mod APK on my PC?
-
To update Super Bear Adventure Mod APK on your PC, you will need to download the latest version of the mod APK file from the same source that you downloaded it from before. Then, you will need to uninstall the previous version of the mod APK from BlueStacks and install the new one. Alternatively, you can check if there is an update option within the game itself and follow the instructions to update it.
-
What are some similar games to Super Bear Adventure?
-
If you like Super Bear Adventure, you might also enjoy some other 3D platformer games that are available on PC. Some of them are:
-
-
Crash Bandicoot N. Sane Trilogy: A remastered collection of the first three Crash Bandicoot games that feature the same gameplay and style as Super Bear Adventure.
-
A Hat in Time: A cute and charming 3D platformer game that follows the adventures of a young girl who travels across various worlds using her magical hat.
-
Yooka-Laylee: A spiritual successor to Banjo-Kazooie that features two animal protagonists who explore colorful and whimsical worlds full of puzzles and secrets.
-
-
-
\ No newline at end of file
diff --git a/spaces/3B-Group/ConvRe-Leaderboard/README.md b/spaces/3B-Group/ConvRe-Leaderboard/README.md
deleted file mode 100644
index 819207ca118e220043a0abe116c3e81cb05f272a..0000000000000000000000000000000000000000
--- a/spaces/3B-Group/ConvRe-Leaderboard/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ConvRe Leaderboard
-emoji: 🦀
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.46.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Development Lifecycle e20a5470e52f49e9bbc4f255cf81db4b.md b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Development Lifecycle e20a5470e52f49e9bbc4f255cf81db4b.md
deleted file mode 100644
index b3b72fb84cce7f2e0e3dff3742de0559534f419f..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Development Lifecycle e20a5470e52f49e9bbc4f255cf81db4b.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Development Lifecycle
-
-Last edited time: March 31, 2023 1:49 PM
-Owner: Anonymous
-Tags: Guides and Processes
-
-
-
-# 1. Create a branch off of `master`
-
-Name the branch with your first name prepended:
-`leslie/cool-feature`
-
-# 2. Writing code
-
-You can link to other pages in your workspace in a couple ways:
-
-- Type `/link`, press `enter`, and type the name of the page you want. This creates a link like this one:
-
-[Engineering Guidelines](Engineering%20Guidelines%204208cbd4733d4f6f94982f3fb24f6379.md)
-
-- To create a link inline, type `@` followed by the title of the page, then press `enter`. The result looks like this: [Engineering Guidelines](Engineering%20Guidelines%204208cbd4733d4f6f94982f3fb24f6379.md)
-
-# 3. Create a pull request on Github
-
-Include the Notion task link in your PR description.
-
-# 4. Submit for review
-
-- Assign the task in Notion to the appropriate reviewer.
-- You can always tag a person on a Notion page by typing `@` followed by their name.
\ No newline at end of file
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/train/data_utils.py b/spaces/AI-Hobbyist/Hoyo-RVC/train/data_utils.py
deleted file mode 100644
index 71c0eff1815469a52399dc90a093a2f8a29223eb..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/train/data_utils.py
+++ /dev/null
@@ -1,512 +0,0 @@
-import os, traceback
-import numpy as np
-import torch
-import torch.utils.data
-
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-
-
-class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- pitch = audiopath_and_text[2]
- pitchf = audiopath_and_text[3]
- dv = audiopath_and_text[4]
-
- phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- # print(123,phone.shape,pitch.shape,spec.shape)
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- # amor
- len_wav = len_min * self.hop_length
-
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
-
- phone = phone[:len_min, :]
- pitch = pitch[:len_min]
- pitchf = pitchf[:len_min]
-
- return (spec, wav, phone, pitch, pitchf, dv)
-
- def get_labels(self, phone, pitch, pitchf):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- pitch = np.load(pitch)
- pitchf = np.load(pitchf)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- # print(234,phone.shape,pitch.shape)
- phone = phone[:n_num, :]
- pitch = pitch[:n_num]
- pitchf = pitchf[:n_num]
- phone = torch.FloatTensor(phone)
- pitch = torch.LongTensor(pitch)
- pitchf = torch.FloatTensor(pitchf)
- return phone, pitch, pitchf
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
-            except Exception:
- print(spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollateMultiNSFsid:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
-        """Collates a training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- ) # (spec, wav, phone, pitch)
- pitch_padded = torch.LongTensor(len(batch), max_phone_len)
- pitchf_padded = torch.FloatTensor(len(batch), max_phone_len)
- phone_padded.zero_()
- pitch_padded.zero_()
- pitchf_padded.zero_()
- # dv = torch.FloatTensor(len(batch), 256)#gin=256
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- pitch = row[3]
- pitch_padded[i, : pitch.size(0)] = pitch
- pitchf = row[4]
- pitchf_padded[i, : pitchf.size(0)] = pitchf
-
- # dv[i] = row[5]
- sid[i] = row[5]
-
- return (
- phone_padded,
- phone_lengths,
- pitch_padded,
- pitchf_padded,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- # dv
- sid,
- )
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- dv = audiopath_and_text[2]
-
- phone = self.get_labels(phone)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- len_wav = len_min * self.hop_length
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
- phone = phone[:len_min, :]
- return (spec, wav, phone, dv)
-
- def get_labels(self, phone):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- phone = phone[:n_num, :]
- phone = torch.FloatTensor(phone)
- return phone
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
-            except Exception:
- print(spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
-        """Collates a training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- )
- phone_padded.zero_()
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- sid[i] = row[3]
-
- return (
- phone_padded,
- phone_lengths,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- sid,
- )
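Both collate classes above use the same zero-padding pattern: allocate zeroed tensors at the batch's maximum length, then copy each item into its left-aligned slot while recording its true length. A minimal NumPy sketch of that pattern (the shapes here are invented for illustration, and NumPy stands in for torch):

```python
# Hedged sketch: right-zero-padding a batch of variable-length spectrograms,
# mirroring the spec_padded / spec_lengths logic in the collate classes above.
import numpy as np

def pad_batch(specs):
    # specs: list of [n_mels, T_i] arrays with varying T_i
    max_len = max(s.shape[1] for s in specs)
    padded = np.zeros((len(specs), specs[0].shape[0], max_len), dtype=np.float32)
    lengths = np.zeros(len(specs), dtype=np.int64)
    for i, s in enumerate(specs):
        padded[i, :, : s.shape[1]] = s  # copy item, leaving zeros on the right
        lengths[i] = s.shape[1]         # keep the true length for masking later
    return padded, lengths

specs = [np.ones((2, 3), np.float32), np.ones((2, 5), np.float32)]
padded, lengths = pad_batch(specs)
print(padded.shape, lengths.tolist())  # (2, 2, 5) [3, 5]
```

The recorded lengths let downstream code mask out the padded positions, which is why the collate functions return `*_lengths` tensors alongside the padded ones.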
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
-    Ex) boundaries = [b1, b2, b3] -> every batch lies in either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
-
- def __init__(
- self,
- dataset,
- batch_size,
- boundaries,
- num_replicas=None,
- rank=None,
- shuffle=True,
- ):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, -1, -1): #
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (
- total_batch_size - (len_bucket % total_batch_size)
- ) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = (
- ids_bucket
- + ids_bucket * (rem // len_bucket)
- + ids_bucket[: (rem % len_bucket)]
- )
-
- # subsample
- ids_bucket = ids_bucket[self.rank :: self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [
- bucket[idx]
- for idx in ids_bucket[
- j * self.batch_size : (j + 1) * self.batch_size
- ]
- ]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
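The bucketing in `DistributedBucketSampler` can be illustrated without torch: `_bisect` finds the bucket index i such that boundaries[i] < length <= boundaries[i+1], and samples outside all boundaries are discarded. A pure-Python sketch of that logic, with invented lengths and boundaries:

```python
# Hedged sketch: iterative version of the _bisect / _create_buckets logic
# above, using made-up sample lengths and boundaries for illustration.
def bisect_bucket(x, boundaries):
    # Returns i such that boundaries[i] < x <= boundaries[i + 1], else -1.
    lo, hi = 0, len(boundaries) - 1
    while hi > lo:
        mid = (hi + lo) // 2
        if boundaries[mid] < x <= boundaries[mid + 1]:
            return mid
        elif x <= boundaries[mid]:
            hi = mid
        else:
            lo = mid + 1
    return -1

boundaries = [100, 200, 300, 400]
lengths = [150, 250, 50, 350, 210]  # per-sample spectrogram lengths
buckets = [[] for _ in range(len(boundaries) - 1)]
for i, length in enumerate(lengths):
    idx = bisect_bucket(length, boundaries)
    if idx != -1:           # sample 2 (length 50 <= 100) is discarded
        buckets[idx].append(i)
print(buckets)  # [[0], [1, 4], [3]]
```

Batching within a bucket then guarantees that every batch contains samples of similar length, which keeps the amount of padding (and wasted compute) small.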
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/models/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/models/__init__.py
deleted file mode 100644
index 4803ba6b2a0afc8022e756ae5b3f4c7403c3c1bd..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/models/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .melgan import * # NOQA
-from .parallel_wavegan import * # NOQA
diff --git a/spaces/AIWaves/Software_Company/src/agents/template.py b/spaces/AIWaves/Software_Company/src/agents/template.py
deleted file mode 100644
index 194c9f2c3bad4be9589b72f520660971e2bc4e5a..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Software_Company/src/agents/template.py
+++ /dev/null
@@ -1,111 +0,0 @@
-## default { "temperature": 0.3, "model": "gpt-3.5-turbo-16k-0613","log_path": "logs/{your name}"}
-LLM = {
- "temperature": 0.0,
- "model": "gpt-3.5-turbo-16k-0613",
- "log_path": "logs/god"
-}
-
-
-Agents = {
- "Lilong" : {
- "style" : "professional",
- "roles" : {
- "company" : "coder",
- "state2" : "role2",
- },
- "name2" : {
- "style" : "professional",
- "roles" : {
- "company" : "coder",
- "state2" : "role2",
- },
- }
- }
-}
-
-# indispensable parameter: "controller_type"("order","random","rule")
-# default extract words: "end". You can choose not to fill in this parameter
-controller = {
- "controller_type": "order",
- "max_chat_nums" : 12,
- "judge_system_prompt": "",
- "judge_last_prompt": "",
- "judge_extract_words": "end",
- "call_system_prompt" : "",
- "call_last_prompt": "",
- "call_extract_words": ""
-}
-
-#
-Agent_state = {
- "role": {
- "LLM_type": "OpenAI",
- "LLM": LLM,
- "style": {
- "role": "Opening Advocate for the Affirmative",
- "style": "professional"
- },
- "task": {
- "task": ""
- },
- "rule": {
- "rule": ""
- }
- },
-}
-
-
-# indispensable parameter: "agent_states","controller"
-# "roles" determines the speaking order when the rule is order. If not set, it is the default order.
-# "begin_query" & "begin_role" determine the first speaker, which often sets the direction of the next speech. If not set, they default to the first agent.
-# "environment_prompt" : Responsible for setting the scene for the current environment
-State = {
- "controller": controller,
- "begin_role": "",
- "begin_query": "",
- "environment_prompt": "",
- "roles": ["role1","role2"],
- "LLM_type": "OpenAI",
- "LLM": LLM,
- "agent_state" : Agent_state,
-}
-
-
-
-States = {
- "end_state":{
- "agent_states":{}
- },
- "state1" : State
-
-}
-
-
-# default finish_state_name is "end_state"
-# "environment_type" : "competive" : different states do not share memory; "cooperative" : different states share memory
-SOP = {
- "config" : {
- "API_KEY" : "Your key",
- "PROXY" : "Your PROXY",
- "MAX_CHAT_HISTORY" : "5",
- "User_Names" : "[\"alexander\"]"
- },
- "environment_type" : "competive",
- "LLM_type": "OpenAI",
- "LLM" :LLM,
- "root": "state1",
- "finish_state_name" : "end_state",
- "relations": {
- "state1": {
- "0": "state1",
- "1": "state2"
- },
- "state2":{
- "0":"state2",
- "1":"end_state"
- }
- },
- "agents": Agents,
- "states": States,
-}
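The `relations` table above encodes a small state machine: at each state, the judge's extracted answer ("0" or "1") selects the next state, starting from `root` and ending at `finish_state_name`. A sketch of that traversal (`run_transitions` is an illustrative helper, not part of the library):

```python
# Hedged sketch: following the SOP "relations" graph defined above.
# run_transitions is a made-up helper that replays a sequence of judge outputs.
def run_transitions(sop, judge_outputs):
    state = sop["root"]
    visited = [state]
    for out in judge_outputs:
        if state == sop["finish_state_name"]:
            break  # already finished; ignore remaining outputs
        state = sop["relations"][state][out]
        visited.append(state)
    return visited

sop = {
    "root": "state1",
    "finish_state_name": "end_state",
    "relations": {
        "state1": {"0": "state1", "1": "state2"},
        "state2": {"0": "state2", "1": "end_state"},
    },
}
print(run_transitions(sop, ["0", "1", "1"]))
# ['state1', 'state1', 'state2', 'end_state']
```

So a "0" verdict keeps the conversation in the current state, while a "1" advances it, until `end_state` terminates the SOP.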
-
diff --git a/spaces/AIZero2Hero4Health/9-Seq2SeqQAGenerator-GR/README.md b/spaces/AIZero2Hero4Health/9-Seq2SeqQAGenerator-GR/README.md
deleted file mode 100644
index e9461a0e1df60ac37a09f7247944ffa0fe20b801..0000000000000000000000000000000000000000
--- a/spaces/AIZero2Hero4Health/9-Seq2SeqQAGenerator-GR/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 9 Seq2SeqQAGenerator GR
-emoji: 👁
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain/README.md b/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain/README.md
deleted file mode 100644
index ed1b94dcfe6b5a2a2ede437a8cf1ce44354aef58..0000000000000000000000000000000000000000
--- a/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGPTandLangchain
-emoji: 😻
-colorFrom: indigo
-colorTo: red
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIxPha/Real-CUGAN/app.py b/spaces/AIxPha/Real-CUGAN/app.py
deleted file mode 100644
index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000
--- a/spaces/AIxPha/Real-CUGAN/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from upcunet_v3 import RealWaifuUpScaler
-import gradio as gr
-import time
-import logging
-import os
-from PIL import ImageOps
-import numpy as np
-import math
-
-
-def greet(input_img, input_model_name, input_tile_mode):
- # if input_img.size[0] * input_img.size[1] > 256 * 256:
- # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1]))
- # x = int(input_img.size[0]/input_img.size[1]*y)
- # input_img = ImageOps.fit(input_img, (x, y))
- input_img = np.array(input_img)
- if input_model_name not in model_cache:
- t1 = time.time()
- upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu")
- t2 = time.time()
- logger.info(f'load model time, {t2 - t1}')
- model_cache[input_model_name] = upscaler
- else:
- upscaler = model_cache[input_model_name]
- logger.info(f'load model from cache')
-
- start = time.time()
- result = upscaler(input_img, tile_mode=input_tile_mode)
- end = time.time()
- logger.info(f'input_model_name, {input_model_name}')
- logger.info(f'input_tile_mode, {input_tile_mode}')
- logger.info(f'input shape, {input_img.shape}')
- logger.info(f'output shape, {result.shape}')
- logger.info(f'speed time, {end - start}')
- return result
-
-
-if __name__ == '__main__':
- logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s")
- logger = logging.getLogger()
-
- ModelPath = "weights_v3/"
- model_cache = {}
-
-    input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='Select model')
-    input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='Select tile_mode')
- input_img = gr.inputs.Image(label='image', type='pil')
-
- inputs = [input_img, input_model_name, input_tile_mode]
- outputs = "image"
- iface = gr.Interface(fn=greet,
- inputs=inputs,
- outputs=outputs,
- allow_screenshot=False,
- allow_flagging='never',
- examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]],
- article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN) '
-                                 'Thanks to Bilibili for open-sourcing this project. '
- 'The large image will lead to memory limit exceeded. So I crop and resize image. '
- 'If you want to experience the large image, please go to the link above.')
- iface.launch()
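The `greet` function above loads each upscaler model at most once and reuses it from `model_cache` on later requests. A minimal sketch of that lazy-caching pattern, with a dummy loader standing in for `RealWaifuUpScaler`:

```python
# Hedged sketch: the lazy model-cache pattern used by greet() above.
# fake_loader is a stand-in that records when a "real" load happens.
model_cache = {}
loads = []

def fake_loader(name):
    loads.append(name)          # pretend this is the expensive load step
    return f"model::{name}"

def get_model(name, loader):
    if name not in model_cache:     # first request: load and cache
        model_cache[name] = loader(name)
    return model_cache[name]        # later requests: reuse the cached object

m1 = get_model("up2x.pth", fake_loader)
m2 = get_model("up2x.pth", fake_loader)
print(m1 is m2, loads)  # True ['up2x.pth'] — the second call hit the cache
```

Caching like this matters in a Gradio app because the handler runs once per user request, and reloading weights on every request would dominate the response time.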
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/transforms.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/transforms.py
deleted file mode 100644
index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/transforms.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import numpy as np
-import cv2
-import math
-
-
-def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
-    """Resize the sample to ensure the given size. Keeps aspect ratio.
-
- Args:
- sample (dict): sample
- size (tuple): image size
-
- Returns:
- tuple: new size
- """
- shape = list(sample["disparity"].shape)
-
- if shape[0] >= size[0] and shape[1] >= size[1]:
- return sample
-
- scale = [0, 0]
- scale[0] = size[0] / shape[0]
- scale[1] = size[1] / shape[1]
-
- scale = max(scale)
-
- shape[0] = math.ceil(scale * shape[0])
- shape[1] = math.ceil(scale * shape[1])
-
- # resize
- sample["image"] = cv2.resize(
- sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method
- )
-
- sample["disparity"] = cv2.resize(
- sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST
- )
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- tuple(shape[::-1]),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return tuple(shape)
-
-
-class Resize(object):
- """Resize sample to given size (width, height).
- """
-
- def __init__(
- self,
- width,
- height,
- resize_target=True,
- keep_aspect_ratio=False,
- ensure_multiple_of=1,
- resize_method="lower_bound",
- image_interpolation_method=cv2.INTER_AREA,
- ):
- """Init.
-
- Args:
- width (int): desired output width
- height (int): desired output height
- resize_target (bool, optional):
- True: Resize the full sample (image, mask, target).
- False: Resize image only.
- Defaults to True.
- keep_aspect_ratio (bool, optional):
- True: Keep the aspect ratio of the input sample.
- Output sample might not have the given width and height, and
- resize behaviour depends on the parameter 'resize_method'.
- Defaults to False.
- ensure_multiple_of (int, optional):
- Output width and height is constrained to be multiple of this parameter.
- Defaults to 1.
- resize_method (str, optional):
- "lower_bound": Output will be at least as large as the given size.
- "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.)
-                "minimal": Scale as little as possible. (Output size might be smaller than given size.)
- Defaults to "lower_bound".
- """
- self.__width = width
- self.__height = height
-
- self.__resize_target = resize_target
- self.__keep_aspect_ratio = keep_aspect_ratio
- self.__multiple_of = ensure_multiple_of
- self.__resize_method = resize_method
- self.__image_interpolation_method = image_interpolation_method
-
- def constrain_to_multiple_of(self, x, min_val=0, max_val=None):
- y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if max_val is not None and y > max_val:
- y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if y < min_val:
- y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- return y
-
- def get_size(self, width, height):
- # determine new height and width
- scale_height = self.__height / height
- scale_width = self.__width / width
-
- if self.__keep_aspect_ratio:
- if self.__resize_method == "lower_bound":
- # scale such that output size is lower bound
- if scale_width > scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "upper_bound":
- # scale such that output size is upper bound
- if scale_width < scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "minimal":
-                # scale as little as possible
- if abs(1 - scale_width) < abs(1 - scale_height):
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- else:
- raise ValueError(
- f"resize_method {self.__resize_method} not implemented"
- )
-
- if self.__resize_method == "lower_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, min_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, min_val=self.__width
- )
- elif self.__resize_method == "upper_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, max_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, max_val=self.__width
- )
- elif self.__resize_method == "minimal":
- new_height = self.constrain_to_multiple_of(scale_height * height)
- new_width = self.constrain_to_multiple_of(scale_width * width)
- else:
- raise ValueError(f"resize_method {self.__resize_method} not implemented")
-
- return (new_width, new_height)
-
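The `"lower_bound"` branch of `get_size` reduces to a short calculation: take the larger of the two scale factors so both output dimensions end up at least as large as requested. A minimal sketch (the function name is hypothetical, and multiple-of rounding is omitted):

```python
def get_size_lower_bound(in_w, in_h, out_w, out_h):
    # Scale factors needed to reach the target size in each dimension.
    scale_w = out_w / in_w
    scale_h = out_h / in_h
    # "lower_bound": use the larger factor so neither output
    # dimension falls below the requested size.
    scale = max(scale_w, scale_h)
    return round(in_w * scale), round(in_h * scale)
```

For a 640x480 input and a 384x384 target, the height factor (0.8) wins over the width factor (0.6), giving a 512x384 output.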
- def __call__(self, sample):
- width, height = self.get_size(
- sample["image"].shape[1], sample["image"].shape[0]
- )
-
- # resize sample
- sample["image"] = cv2.resize(
- sample["image"],
- (width, height),
- interpolation=self.__image_interpolation_method,
- )
-
- if self.__resize_target:
- if "disparity" in sample:
- sample["disparity"] = cv2.resize(
- sample["disparity"],
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
-
- if "depth" in sample:
- sample["depth"] = cv2.resize(
- sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST
- )
-
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return sample
-
-
-class NormalizeImage(object):
- """Normlize image by given mean and std.
- """
-
- def __init__(self, mean, std):
- self.__mean = mean
- self.__std = std
-
- def __call__(self, sample):
- sample["image"] = (sample["image"] - self.__mean) / self.__std
-
- return sample
-
-
-class PrepareForNet(object):
- """Prepare sample for usage as network input.
- """
-
- def __init__(self):
- pass
-
- def __call__(self, sample):
- image = np.transpose(sample["image"], (2, 0, 1))
- sample["image"] = np.ascontiguousarray(image).astype(np.float32)
-
- if "mask" in sample:
- sample["mask"] = sample["mask"].astype(np.float32)
- sample["mask"] = np.ascontiguousarray(sample["mask"])
-
- if "disparity" in sample:
- disparity = sample["disparity"].astype(np.float32)
- sample["disparity"] = np.ascontiguousarray(disparity)
-
- if "depth" in sample:
- depth = sample["depth"].astype(np.float32)
- sample["depth"] = np.ascontiguousarray(depth)
-
- return sample
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/facebook/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/facebook/Factory.js
deleted file mode 100644
index 83564c35a3f25fc726d71742fd6388ed6d8cc7db..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/facebook/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Facebook from './Facebook.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('facebook', function (config) {
- var gameObject = new Facebook(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.Spinner.Facebook', Facebook);
-
-export default Facebook;
\ No newline at end of file
diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/utils/data_utils.py b/spaces/AlekseyKorshuk/instagram-filter-removal/utils/data_utils.py
deleted file mode 100644
index 6db12b39f62920bfdab26c9a467a357d9c26719f..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/instagram-filter-removal/utils/data_utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def linear_scaling(x):
- return (x * 255.) / 127.5 - 1.
-
-
-def linear_unscaling(x):
- return (x + 1.) * 127.5 / 255.
\ No newline at end of file
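The two helpers above are exact inverses: `linear_scaling` maps pixel values in [0, 1] to [-1, 1], and `linear_unscaling` maps them back. A quick round-trip check (functions reproduced standalone):

```python
def linear_scaling(x):
    # Map pixel values in [0, 1] to [-1, 1].
    return (x * 255.) / 127.5 - 1.

def linear_unscaling(x):
    # Map values in [-1, 1] back to [0, 1].
    return (x + 1.) * 127.5 / 255.
```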
diff --git a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/augmentation.py b/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/augmentation.py
deleted file mode 100644
index df77004a1b7093c0992c970ed0a337b073ddfe86..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/augmentation.py
+++ /dev/null
@@ -1,344 +0,0 @@
-"""
-Code from https://github.com/hassony2/torch_videovision
-"""
-
-import numbers
-
-import random
-import numpy as np
-import PIL
-
-from skimage.transform import resize, rotate
-import torchvision
-
-import warnings
-
-from skimage import img_as_ubyte, img_as_float
-
-
-def crop_clip(clip, min_h, min_w, h, w):
- if isinstance(clip[0], np.ndarray):
- cropped = [img[min_h:min_h + h, min_w:min_w + w, :] for img in clip]
-
- elif isinstance(clip[0], PIL.Image.Image):
- cropped = [
- img.crop((min_w, min_h, min_w + w, min_h + h)) for img in clip
- ]
- else:
-        raise TypeError('Expected numpy.ndarray or PIL.Image ' +
-                        'but got list of {0}'.format(type(clip[0])))
- return cropped
-
-
-def pad_clip(clip, h, w):
- im_h, im_w = clip[0].shape[:2]
- pad_h = (0, 0) if h < im_h else ((h - im_h) // 2, (h - im_h + 1) // 2)
- pad_w = (0, 0) if w < im_w else ((w - im_w) // 2, (w - im_w + 1) // 2)
-
- return np.pad(clip, ((0, 0), pad_h, pad_w, (0, 0)), mode='edge')
-
-
-def resize_clip(clip, size, interpolation='bilinear'):
- if isinstance(clip[0], np.ndarray):
- if isinstance(size, numbers.Number):
- im_h, im_w, im_c = clip[0].shape
- # Min spatial dim already matches minimal size
- if (im_w <= im_h and im_w == size) or (im_h <= im_w
- and im_h == size):
- return clip
- new_h, new_w = get_resize_sizes(im_h, im_w, size)
- size = (new_w, new_h)
- else:
- size = size[1], size[0]
-
- scaled = [
- resize(img, size, order=1 if interpolation == 'bilinear' else 0, preserve_range=True,
- mode='constant', anti_aliasing=True) for img in clip
- ]
- elif isinstance(clip[0], PIL.Image.Image):
- if isinstance(size, numbers.Number):
- im_w, im_h = clip[0].size
- # Min spatial dim already matches minimal size
- if (im_w <= im_h and im_w == size) or (im_h <= im_w
- and im_h == size):
- return clip
- new_h, new_w = get_resize_sizes(im_h, im_w, size)
- size = (new_w, new_h)
- else:
- size = size[1], size[0]
-        if interpolation == 'bilinear':
-            pil_inter = PIL.Image.BILINEAR
-        else:
-            pil_inter = PIL.Image.NEAREST
- scaled = [img.resize(size, pil_inter) for img in clip]
- else:
-        raise TypeError('Expected numpy.ndarray or PIL.Image ' +
-                        'but got list of {0}'.format(type(clip[0])))
- return scaled
-
-
-def get_resize_sizes(im_h, im_w, size):
- if im_w < im_h:
- ow = size
- oh = int(size * im_h / im_w)
- else:
- oh = size
- ow = int(size * im_w / im_h)
- return oh, ow
-
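`get_resize_sizes` scales the shorter side of the image to `size` while preserving the aspect ratio, and returns the result as `(height, width)`. Reproduced standalone to illustrate the short-side semantics:

```python
def get_resize_sizes(im_h, im_w, size):
    # Scale the shorter spatial side to `size`, keeping aspect ratio.
    if im_w < im_h:
        ow = size
        oh = int(size * im_h / im_w)
    else:
        oh = size
        ow = int(size * im_w / im_h)
    return oh, ow
```

For a 480x640 (h x w) landscape frame and `size=240`, the height is the shorter side, so the result is `(240, 320)`.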
-
-class RandomFlip(object):
- def __init__(self, time_flip=False, horizontal_flip=False):
- self.time_flip = time_flip
- self.horizontal_flip = horizontal_flip
-
- def __call__(self, clip):
- if random.random() < 0.5 and self.time_flip:
- return clip[::-1]
- if random.random() < 0.5 and self.horizontal_flip:
- return [np.fliplr(img) for img in clip]
-
- return clip
-
-
-class RandomResize(object):
- """Resizes a list of (H x W x C) numpy.ndarray to the final size
- The larger the original image is, the more times it takes to
- interpolate
- Args:
- interpolation (str): Can be one of 'nearest', 'bilinear'
- defaults to nearest
- size (tuple): (widht, height)
- """
-
- def __init__(self, ratio=(3. / 4., 4. / 3.), interpolation='nearest'):
- self.ratio = ratio
- self.interpolation = interpolation
-
- def __call__(self, clip):
- scaling_factor = random.uniform(self.ratio[0], self.ratio[1])
-
- if isinstance(clip[0], np.ndarray):
- im_h, im_w, im_c = clip[0].shape
- elif isinstance(clip[0], PIL.Image.Image):
- im_w, im_h = clip[0].size
-
- new_w = int(im_w * scaling_factor)
- new_h = int(im_h * scaling_factor)
- new_size = (new_w, new_h)
- resized = resize_clip(
- clip, new_size, interpolation=self.interpolation)
-
- return resized
-
-
-class RandomCrop(object):
- """Extract random crop at the same location for a list of videos
- Args:
- size (sequence or int): Desired output size for the
- crop in format (h, w)
- """
-
- def __init__(self, size):
- if isinstance(size, numbers.Number):
- size = (size, size)
-
- self.size = size
-
- def __call__(self, clip):
- """
- Args:
-            clip (list of PIL.Image or numpy.ndarray): List of images to be
-                cropped, in format (h, w, c) for numpy.ndarray
-        Returns:
-            list of PIL.Image or numpy.ndarray: Cropped list of images
- """
- h, w = self.size
- if isinstance(clip[0], np.ndarray):
- im_h, im_w, im_c = clip[0].shape
- elif isinstance(clip[0], PIL.Image.Image):
- im_w, im_h = clip[0].size
- else:
-            raise TypeError('Expected numpy.ndarray or PIL.Image ' +
-                            'but got list of {0}'.format(type(clip[0])))
-
- clip = pad_clip(clip, h, w)
- im_h, im_w = clip.shape[1:3]
-        x1 = 0 if w == im_w else random.randint(0, im_w - w)
-        y1 = 0 if h == im_h else random.randint(0, im_h - h)
- cropped = crop_clip(clip, y1, x1, h, w)
-
- return cropped
-
-
-class RandomRotation(object):
- """Rotate entire clip randomly by a random angle within
- given bounds
- Args:
-        degrees (sequence or int): Range of degrees to select from.
-            If degrees is a number instead of a sequence like (min, max),
-            the range of degrees will be (-degrees, +degrees).
- """
-
- def __init__(self, degrees):
- if isinstance(degrees, numbers.Number):
- if degrees < 0:
-                raise ValueError('If degrees is a single number, '
-                                 'it must be positive')
- degrees = (-degrees, degrees)
- else:
- if len(degrees) != 2:
-                raise ValueError('If degrees is a sequence, '
- 'it must be of len 2.')
-
- self.degrees = degrees
-
- def __call__(self, clip):
- """
- Args:
-            clip (list of PIL.Image or numpy.ndarray): List of images to be
-                rotated, in format (h, w, c) for numpy.ndarray
-        Returns:
-            list of PIL.Image or numpy.ndarray: Rotated list of images
- """
- angle = random.uniform(self.degrees[0], self.degrees[1])
- if isinstance(clip[0], np.ndarray):
- rotated = [rotate(image=img, angle=angle, preserve_range=True) for img in clip]
- elif isinstance(clip[0], PIL.Image.Image):
- rotated = [img.rotate(angle) for img in clip]
- else:
-            raise TypeError('Expected numpy.ndarray or PIL.Image ' +
-                            'but got list of {0}'.format(type(clip[0])))
-
- return rotated
-
-
-class ColorJitter(object):
- """Randomly change the brightness, contrast and saturation and hue of the clip
- Args:
- brightness (float): How much to jitter brightness. brightness_factor
- is chosen uniformly from [max(0, 1 - brightness), 1 + brightness].
- contrast (float): How much to jitter contrast. contrast_factor
- is chosen uniformly from [max(0, 1 - contrast), 1 + contrast].
- saturation (float): How much to jitter saturation. saturation_factor
- is chosen uniformly from [max(0, 1 - saturation), 1 + saturation].
- hue(float): How much to jitter hue. hue_factor is chosen uniformly from
- [-hue, hue]. Should be >=0 and <= 0.5.
- """
-
- def __init__(self, brightness=0, contrast=0, saturation=0, hue=0):
- self.brightness = brightness
- self.contrast = contrast
- self.saturation = saturation
- self.hue = hue
-
- def get_params(self, brightness, contrast, saturation, hue):
- if brightness > 0:
- brightness_factor = random.uniform(
- max(0, 1 - brightness), 1 + brightness)
- else:
- brightness_factor = None
-
- if contrast > 0:
- contrast_factor = random.uniform(
- max(0, 1 - contrast), 1 + contrast)
- else:
- contrast_factor = None
-
- if saturation > 0:
- saturation_factor = random.uniform(
- max(0, 1 - saturation), 1 + saturation)
- else:
- saturation_factor = None
-
- if hue > 0:
- hue_factor = random.uniform(-hue, hue)
- else:
- hue_factor = None
- return brightness_factor, contrast_factor, saturation_factor, hue_factor
-
- def __call__(self, clip):
- """
- Args:
-            clip (list): list of PIL.Image or numpy.ndarray
-        Returns:
-            list: list of transformed images
- """
- if isinstance(clip[0], np.ndarray):
- brightness, contrast, saturation, hue = self.get_params(
- self.brightness, self.contrast, self.saturation, self.hue)
-
- # Create img transform function sequence
- img_transforms = []
- if brightness is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_brightness(img, brightness))
- if saturation is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_saturation(img, saturation))
- if hue is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_hue(img, hue))
- if contrast is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_contrast(img, contrast))
- random.shuffle(img_transforms)
- img_transforms = [img_as_ubyte, torchvision.transforms.ToPILImage()] + img_transforms + [np.array,
- img_as_float]
-
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- jittered_clip = []
- for img in clip:
- jittered_img = img
- for func in img_transforms:
- jittered_img = func(jittered_img)
- jittered_clip.append(jittered_img.astype('float32'))
- elif isinstance(clip[0], PIL.Image.Image):
- brightness, contrast, saturation, hue = self.get_params(
- self.brightness, self.contrast, self.saturation, self.hue)
-
- # Create img transform function sequence
- img_transforms = []
- if brightness is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_brightness(img, brightness))
- if saturation is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_saturation(img, saturation))
- if hue is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_hue(img, hue))
- if contrast is not None:
- img_transforms.append(lambda img: torchvision.transforms.functional.adjust_contrast(img, contrast))
- random.shuffle(img_transforms)
-
-            # Apply the shuffled transforms to every image in the clip
-            jittered_clip = []
-            for img in clip:
-                jittered_img = img
-                for func in img_transforms:
-                    jittered_img = func(jittered_img)
-                jittered_clip.append(jittered_img)
-
- else:
-            raise TypeError('Expected numpy.ndarray or PIL.Image ' +
-                            'but got list of {0}'.format(type(clip[0])))
- return jittered_clip
-
-
-class AllAugmentationTransform:
- def __init__(self, resize_param=None, rotation_param=None, flip_param=None, crop_param=None, jitter_param=None):
- self.transforms = []
-
- if flip_param is not None:
- self.transforms.append(RandomFlip(**flip_param))
-
- if rotation_param is not None:
- self.transforms.append(RandomRotation(**rotation_param))
-
- if resize_param is not None:
- self.transforms.append(RandomResize(**resize_param))
-
- if crop_param is not None:
- self.transforms.append(RandomCrop(**crop_param))
-
- if jitter_param is not None:
- self.transforms.append(ColorJitter(**jitter_param))
-
- def __call__(self, clip):
- for t in self.transforms:
- clip = t(clip)
- return clip
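`AllAugmentationTransform` is a plain compose pattern: each transform is a callable that takes a clip and returns a clip, applied in sequence. A minimal sketch with toy transforms (all names hypothetical):

```python
class Compose:
    # Minimal stand-in for AllAugmentationTransform's chaining:
    # each transform is a callable taking and returning a clip.
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, clip):
        for t in self.transforms:
            clip = t(clip)
        return clip

def time_flip(clip):
    # Reverse the clip in time (mirrors RandomFlip's time_flip branch).
    return clip[::-1]

def subsample(clip):
    # Keep every other frame.
    return clip[::2]

pipeline = Compose([time_flip, subsample])
```

Applied to a six-frame clip `[0, 1, 2, 3, 4, 5]`, the pipeline first reverses it and then keeps frames at even positions, yielding `[5, 3, 1]`.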
diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/losses/base_loss.py b/spaces/AlexWang/lama/saicinpainting/evaluation/losses/base_loss.py
deleted file mode 100644
index 391191ce2ed8665f1f15bd3877dc22bb85b147d6..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/evaluation/losses/base_loss.py
+++ /dev/null
@@ -1,528 +0,0 @@
-import logging
-from abc import abstractmethod, ABC
-
-import numpy as np
-import sklearn
-import sklearn.svm
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from joblib import Parallel, delayed
-from scipy import linalg
-
-from models.ade20k import SegmentationModule, NUM_CLASS, segm_options
-from .fid.inception import InceptionV3
-from .lpips import PerceptualLoss
-from .ssim import SSIM
-
-LOGGER = logging.getLogger(__name__)
-
-
-def get_groupings(groups):
- """
- :param groups: group numbers for respective elements
- :return: dict of kind {group_idx: indices of the corresponding group elements}
- """
- label_groups, count_groups = np.unique(groups, return_counts=True)
-
- indices = np.argsort(groups)
-
- grouping = dict()
- cur_start = 0
- for label, count in zip(label_groups, count_groups):
- cur_end = cur_start + count
- cur_indices = indices[cur_start:cur_end]
- grouping[label] = cur_indices
- cur_start = cur_end
- return grouping
-
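The grouping logic above can be expressed without NumPy: walk the labels once and collect the index of each element under its group label. A pure-Python sketch equivalent in output to the function above:

```python
from collections import defaultdict

def get_groupings(groups):
    # Map each group label to the list of indices of its elements,
    # mirroring the NumPy argsort/bincount version above.
    grouping = defaultdict(list)
    for idx, label in enumerate(groups):
        grouping[label].append(idx)
    return dict(grouping)
```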
-
-class EvaluatorScore(nn.Module):
- @abstractmethod
- def forward(self, pred_batch, target_batch, mask):
- pass
-
- @abstractmethod
- def get_value(self, groups=None, states=None):
- pass
-
- @abstractmethod
- def reset(self):
- pass
-
-
-class PairwiseScore(EvaluatorScore, ABC):
- def __init__(self):
- super().__init__()
- self.individual_values = None
-
- def get_value(self, groups=None, states=None):
- """
- :param groups:
- :return:
- total_results: dict of kind {'mean': score mean, 'std': score std}
- group_results: None, if groups is None;
- else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
- """
- individual_values = torch.stack(states, dim=0).reshape(-1).cpu().numpy() if states is not None \
- else self.individual_values
-
- total_results = {
- 'mean': individual_values.mean(),
- 'std': individual_values.std()
- }
-
- if groups is None:
- return total_results, None
-
- group_results = dict()
- grouping = get_groupings(groups)
- for label, index in grouping.items():
- group_scores = individual_values[index]
- group_results[label] = {
- 'mean': group_scores.mean(),
- 'std': group_scores.std()
- }
- return total_results, group_results
-
- def reset(self):
- self.individual_values = []
-
-
-class SSIMScore(PairwiseScore):
- def __init__(self, window_size=11):
- super().__init__()
- self.score = SSIM(window_size=window_size, size_average=False).eval()
- self.reset()
-
- def forward(self, pred_batch, target_batch, mask=None):
- batch_values = self.score(pred_batch, target_batch)
- self.individual_values = np.hstack([
- self.individual_values, batch_values.detach().cpu().numpy()
- ])
- return batch_values
-
-
-class LPIPSScore(PairwiseScore):
- def __init__(self, model='net-lin', net='vgg', model_path=None, use_gpu=True):
- super().__init__()
- self.score = PerceptualLoss(model=model, net=net, model_path=model_path,
- use_gpu=use_gpu, spatial=False).eval()
- self.reset()
-
- def forward(self, pred_batch, target_batch, mask=None):
- batch_values = self.score(pred_batch, target_batch).flatten()
- self.individual_values = np.hstack([
- self.individual_values, batch_values.detach().cpu().numpy()
- ])
- return batch_values
-
-
-def fid_calculate_activation_statistics(act):
- mu = np.mean(act, axis=0)
- sigma = np.cov(act, rowvar=False)
- return mu, sigma
-
-
-def calculate_frechet_distance(activations_pred, activations_target, eps=1e-6):
- mu1, sigma1 = fid_calculate_activation_statistics(activations_pred)
- mu2, sigma2 = fid_calculate_activation_statistics(activations_target)
-
- diff = mu1 - mu2
-
- # Product might be almost singular
- covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
- if not np.isfinite(covmean).all():
- msg = ('fid calculation produces singular product; '
- 'adding %s to diagonal of cov estimates') % eps
- LOGGER.warning(msg)
- offset = np.eye(sigma1.shape[0]) * eps
- covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
-
- # Numerical error might give slight imaginary component
- if np.iscomplexobj(covmean):
- # if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
- if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-2):
- m = np.max(np.abs(covmean.imag))
- raise ValueError('Imaginary component {}'.format(m))
- covmean = covmean.real
-
- tr_covmean = np.trace(covmean)
-
- return (diff.dot(diff) + np.trace(sigma1) +
- np.trace(sigma2) - 2 * tr_covmean)
-
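When both covariances are diagonal, the matrix square root factorizes and the Fréchet distance above reduces to a closed form: ||mu1 - mu2||^2 + sum(v1 + v2 - 2*sqrt(v1*v2)) over the variances. A hypothetical `frechet_distance_diag` illustrating that special case:

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    # Frechet distance between Gaussians with diagonal covariances:
    # ||mu1 - mu2||^2 + tr(S1) + tr(S2) - 2 * tr(sqrt(S1 @ S2)),
    # where for diagonal S1, S2 the trace term is sum(sqrt(v1 * v2)).
    diff = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    tr = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
             for v1, v2 in zip(var1, var2))
    return diff + tr
```

Identical distributions give a distance of zero, and shifting one mean by 1 with unit variances gives exactly 1.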
-
-class FIDScore(EvaluatorScore):
- def __init__(self, dims=2048, eps=1e-6):
- LOGGER.info("FIDscore init called")
- super().__init__()
- if getattr(FIDScore, '_MODEL', None) is None:
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
- FIDScore._MODEL = InceptionV3([block_idx]).eval()
- self.model = FIDScore._MODEL
- self.eps = eps
- self.reset()
- LOGGER.info("FIDscore init done")
-
- def forward(self, pred_batch, target_batch, mask=None):
- activations_pred = self._get_activations(pred_batch)
- activations_target = self._get_activations(target_batch)
-
- self.activations_pred.append(activations_pred.detach().cpu())
- self.activations_target.append(activations_target.detach().cpu())
-
- return activations_pred, activations_target
-
- def get_value(self, groups=None, states=None):
- LOGGER.info("FIDscore get_value called")
- activations_pred, activations_target = zip(*states) if states is not None \
- else (self.activations_pred, self.activations_target)
- activations_pred = torch.cat(activations_pred).cpu().numpy()
- activations_target = torch.cat(activations_target).cpu().numpy()
-
- total_distance = calculate_frechet_distance(activations_pred, activations_target, eps=self.eps)
- total_results = dict(mean=total_distance)
-
- if groups is None:
- group_results = None
- else:
- group_results = dict()
- grouping = get_groupings(groups)
- for label, index in grouping.items():
- if len(index) > 1:
- group_distance = calculate_frechet_distance(activations_pred[index], activations_target[index],
- eps=self.eps)
- group_results[label] = dict(mean=group_distance)
-
- else:
- group_results[label] = dict(mean=float('nan'))
-
- self.reset()
-
- LOGGER.info("FIDscore get_value done")
-
- return total_results, group_results
-
- def reset(self):
- self.activations_pred = []
- self.activations_target = []
-
- def _get_activations(self, batch):
- activations = self.model(batch)[0]
- if activations.shape[2] != 1 or activations.shape[3] != 1:
- assert False, \
- 'We should not have got here, because Inception always scales inputs to 299x299'
- # activations = F.adaptive_avg_pool2d(activations, output_size=(1, 1))
- activations = activations.squeeze(-1).squeeze(-1)
- return activations
-
-
-class SegmentationAwareScore(EvaluatorScore):
- def __init__(self, weights_path):
- super().__init__()
- self.segm_network = SegmentationModule(weights_path=weights_path, use_default_normalization=True).eval()
- self.target_class_freq_by_image_total = []
- self.target_class_freq_by_image_mask = []
- self.pred_class_freq_by_image_mask = []
-
- def forward(self, pred_batch, target_batch, mask):
- pred_segm_flat = self.segm_network.predict(pred_batch)[0].view(pred_batch.shape[0], -1).long().detach().cpu().numpy()
- target_segm_flat = self.segm_network.predict(target_batch)[0].view(pred_batch.shape[0], -1).long().detach().cpu().numpy()
- mask_flat = (mask.view(mask.shape[0], -1) > 0.5).detach().cpu().numpy()
-
- batch_target_class_freq_total = []
- batch_target_class_freq_mask = []
- batch_pred_class_freq_mask = []
-
- for cur_pred_segm, cur_target_segm, cur_mask in zip(pred_segm_flat, target_segm_flat, mask_flat):
- cur_target_class_freq_total = np.bincount(cur_target_segm, minlength=NUM_CLASS)[None, ...]
- cur_target_class_freq_mask = np.bincount(cur_target_segm[cur_mask], minlength=NUM_CLASS)[None, ...]
- cur_pred_class_freq_mask = np.bincount(cur_pred_segm[cur_mask], minlength=NUM_CLASS)[None, ...]
-
- self.target_class_freq_by_image_total.append(cur_target_class_freq_total)
- self.target_class_freq_by_image_mask.append(cur_target_class_freq_mask)
- self.pred_class_freq_by_image_mask.append(cur_pred_class_freq_mask)
-
- batch_target_class_freq_total.append(cur_target_class_freq_total)
- batch_target_class_freq_mask.append(cur_target_class_freq_mask)
- batch_pred_class_freq_mask.append(cur_pred_class_freq_mask)
-
- batch_target_class_freq_total = np.concatenate(batch_target_class_freq_total, axis=0)
- batch_target_class_freq_mask = np.concatenate(batch_target_class_freq_mask, axis=0)
- batch_pred_class_freq_mask = np.concatenate(batch_pred_class_freq_mask, axis=0)
- return batch_target_class_freq_total, batch_target_class_freq_mask, batch_pred_class_freq_mask
-
- def reset(self):
- super().reset()
- self.target_class_freq_by_image_total = []
- self.target_class_freq_by_image_mask = []
- self.pred_class_freq_by_image_mask = []
-
-
-def distribute_values_to_classes(target_class_freq_by_image_mask, values, idx2name):
- assert target_class_freq_by_image_mask.ndim == 2 and target_class_freq_by_image_mask.shape[0] == values.shape[0]
- total_class_freq = target_class_freq_by_image_mask.sum(0)
- distr_values = (target_class_freq_by_image_mask * values[..., None]).sum(0)
- result = distr_values / (total_class_freq + 1e-3)
- return {idx2name[i]: val for i, val in enumerate(result) if total_class_freq[i] > 0}
-
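`distribute_values_to_classes` computes a frequency-weighted mean score per segmentation class: each image's score is weighted by how many pixels of that class its mask covers. A pure-Python sketch using nested lists instead of arrays (the `1e-3` smoothing term matches the function above):

```python
def distribute_values_to_classes(class_freq, values, idx2name):
    # class_freq: per-image class pixel counts, shape (n_images, n_classes)
    # values:     per-image scores, length n_images
    # Returns the frequency-weighted mean score per class name,
    # skipping classes that never occur.
    n_classes = len(class_freq[0])
    total = [sum(row[c] for row in class_freq) for c in range(n_classes)]
    weighted = [sum(row[c] * v for row, v in zip(class_freq, values))
                for c in range(n_classes)]
    return {idx2name[c]: weighted[c] / (total[c] + 1e-3)
            for c in range(n_classes) if total[c] > 0}
```

With two images scoring 1.0 and 3.0, a class present only in the second image gets a weighted mean near 3.0, while a class split evenly across both lands near their pixel-weighted average.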
-
-def get_segmentation_idx2name():
- return {i - 1: name for i, name in segm_options['classes'].set_index('Idx', drop=True)['Name'].to_dict().items()}
-
-
-class SegmentationAwarePairwiseScore(SegmentationAwareScore):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.individual_values = []
- self.segm_idx2name = get_segmentation_idx2name()
-
- def forward(self, pred_batch, target_batch, mask):
- cur_class_stats = super().forward(pred_batch, target_batch, mask)
- score_values = self.calc_score(pred_batch, target_batch, mask)
- self.individual_values.append(score_values)
- return cur_class_stats + (score_values,)
-
- @abstractmethod
- def calc_score(self, pred_batch, target_batch, mask):
- raise NotImplementedError()
-
- def get_value(self, groups=None, states=None):
- """
- :param groups:
- :return:
- total_results: dict of kind {'mean': score mean, 'std': score std}
- group_results: None, if groups is None;
- else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
- """
- if states is not None:
- (target_class_freq_by_image_total,
- target_class_freq_by_image_mask,
- pred_class_freq_by_image_mask,
- individual_values) = states
- else:
- target_class_freq_by_image_total = self.target_class_freq_by_image_total
- target_class_freq_by_image_mask = self.target_class_freq_by_image_mask
- pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask
- individual_values = self.individual_values
-
- target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0)
- target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0)
- pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0)
- individual_values = np.concatenate(individual_values, axis=0)
-
- total_results = {
- 'mean': individual_values.mean(),
- 'std': individual_values.std(),
- **distribute_values_to_classes(target_class_freq_by_image_mask, individual_values, self.segm_idx2name)
- }
-
- if groups is None:
- return total_results, None
-
- group_results = dict()
- grouping = get_groupings(groups)
- for label, index in grouping.items():
- group_class_freq = target_class_freq_by_image_mask[index]
- group_scores = individual_values[index]
- group_results[label] = {
- 'mean': group_scores.mean(),
- 'std': group_scores.std(),
- ** distribute_values_to_classes(group_class_freq, group_scores, self.segm_idx2name)
- }
- return total_results, group_results
-
- def reset(self):
- super().reset()
- self.individual_values = []
-
-
-class SegmentationClassStats(SegmentationAwarePairwiseScore):
- def calc_score(self, pred_batch, target_batch, mask):
- return 0
-
- def get_value(self, groups=None, states=None):
- """
- :param groups:
- :return:
-            total_results: dict of per-class frequency statistics
-                ('total_freq/...', 'mask_freq/...', 'mask_freq_diff/...')
-            group_results: None, if groups is None;
-                else dict {group_idx: per-class frequency statistics for the group}
- """
- if states is not None:
- (target_class_freq_by_image_total,
- target_class_freq_by_image_mask,
- pred_class_freq_by_image_mask,
- _) = states
- else:
- target_class_freq_by_image_total = self.target_class_freq_by_image_total
- target_class_freq_by_image_mask = self.target_class_freq_by_image_mask
- pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask
-
- target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0)
- target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0)
- pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0)
-
- target_class_freq_by_image_total_marginal = target_class_freq_by_image_total.sum(0).astype('float32')
- target_class_freq_by_image_total_marginal /= target_class_freq_by_image_total_marginal.sum()
-
- target_class_freq_by_image_mask_marginal = target_class_freq_by_image_mask.sum(0).astype('float32')
- target_class_freq_by_image_mask_marginal /= target_class_freq_by_image_mask_marginal.sum()
-
- pred_class_freq_diff = (pred_class_freq_by_image_mask - target_class_freq_by_image_mask).sum(0) / (target_class_freq_by_image_mask.sum(0) + 1e-3)
-
- total_results = dict()
- total_results.update({f'total_freq/{self.segm_idx2name[i]}': v
- for i, v in enumerate(target_class_freq_by_image_total_marginal)
- if v > 0})
- total_results.update({f'mask_freq/{self.segm_idx2name[i]}': v
- for i, v in enumerate(target_class_freq_by_image_mask_marginal)
- if v > 0})
- total_results.update({f'mask_freq_diff/{self.segm_idx2name[i]}': v
- for i, v in enumerate(pred_class_freq_diff)
- if target_class_freq_by_image_total_marginal[i] > 0})
-
- if groups is None:
- return total_results, None
-
- group_results = dict()
- grouping = get_groupings(groups)
- for label, index in grouping.items():
- group_target_class_freq_by_image_total = target_class_freq_by_image_total[index]
- group_target_class_freq_by_image_mask = target_class_freq_by_image_mask[index]
- group_pred_class_freq_by_image_mask = pred_class_freq_by_image_mask[index]
-
- group_target_class_freq_by_image_total_marginal = group_target_class_freq_by_image_total.sum(0).astype('float32')
- group_target_class_freq_by_image_total_marginal /= group_target_class_freq_by_image_total_marginal.sum()
-
- group_target_class_freq_by_image_mask_marginal = group_target_class_freq_by_image_mask.sum(0).astype('float32')
- group_target_class_freq_by_image_mask_marginal /= group_target_class_freq_by_image_mask_marginal.sum()
-
- group_pred_class_freq_diff = (group_pred_class_freq_by_image_mask - group_target_class_freq_by_image_mask).sum(0) / (
- group_target_class_freq_by_image_mask.sum(0) + 1e-3)
-
- cur_group_results = dict()
- cur_group_results.update({f'total_freq/{self.segm_idx2name[i]}': v
- for i, v in enumerate(group_target_class_freq_by_image_total_marginal)
- if v > 0})
- cur_group_results.update({f'mask_freq/{self.segm_idx2name[i]}': v
- for i, v in enumerate(group_target_class_freq_by_image_mask_marginal)
- if v > 0})
- cur_group_results.update({f'mask_freq_diff/{self.segm_idx2name[i]}': v
- for i, v in enumerate(group_pred_class_freq_diff)
- if group_target_class_freq_by_image_total_marginal[i] > 0})
-
- group_results[label] = cur_group_results
- return total_results, group_results
-
-
-class SegmentationAwareSSIM(SegmentationAwarePairwiseScore):
- def __init__(self, *args, window_size=11, **kwargs):
- super().__init__(*args, **kwargs)
- self.score_impl = SSIM(window_size=window_size, size_average=False).eval()
-
- def calc_score(self, pred_batch, target_batch, mask):
- return self.score_impl(pred_batch, target_batch).detach().cpu().numpy()
-
-
-class SegmentationAwareLPIPS(SegmentationAwarePairwiseScore):
- def __init__(self, *args, model='net-lin', net='vgg', model_path=None, use_gpu=True, **kwargs):
- super().__init__(*args, **kwargs)
- self.score_impl = PerceptualLoss(model=model, net=net, model_path=model_path,
- use_gpu=use_gpu, spatial=False).eval()
-
- def calc_score(self, pred_batch, target_batch, mask):
- return self.score_impl(pred_batch, target_batch).flatten().detach().cpu().numpy()
-
-
-def calculade_fid_no_img(img_i, activations_pred, activations_target, eps=1e-6):
- activations_pred = activations_pred.copy()
- activations_pred[img_i] = activations_target[img_i]
- return calculate_frechet_distance(activations_pred, activations_target, eps=eps)
-
-
-class SegmentationAwareFID(SegmentationAwarePairwiseScore):
- def __init__(self, *args, dims=2048, eps=1e-6, n_jobs=-1, **kwargs):
- super().__init__(*args, **kwargs)
- if getattr(FIDScore, '_MODEL', None) is None:
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
- FIDScore._MODEL = InceptionV3([block_idx]).eval()
- self.model = FIDScore._MODEL
- self.eps = eps
- self.n_jobs = n_jobs
-
- def calc_score(self, pred_batch, target_batch, mask):
- activations_pred = self._get_activations(pred_batch)
- activations_target = self._get_activations(target_batch)
- return activations_pred, activations_target
-
- def get_value(self, groups=None, states=None):
- """
-        :param groups: optional array of group labels, one per sample
-        :param states: optional externally accumulated states; if None, this object's accumulators are used
- :return:
- total_results: dict of kind {'mean': score mean, 'std': score std}
- group_results: None, if groups is None;
- else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
- """
- if states is not None:
- (target_class_freq_by_image_total,
- target_class_freq_by_image_mask,
- pred_class_freq_by_image_mask,
- activation_pairs) = states
- else:
- target_class_freq_by_image_total = self.target_class_freq_by_image_total
- target_class_freq_by_image_mask = self.target_class_freq_by_image_mask
- pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask
- activation_pairs = self.individual_values
-
- target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0)
- target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0)
- pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0)
- activations_pred, activations_target = zip(*activation_pairs)
- activations_pred = np.concatenate(activations_pred, axis=0)
- activations_target = np.concatenate(activations_target, axis=0)
-
- total_results = {
- 'mean': calculate_frechet_distance(activations_pred, activations_target, eps=self.eps),
- 'std': 0,
- **self.distribute_fid_to_classes(target_class_freq_by_image_mask, activations_pred, activations_target)
- }
-
- if groups is None:
- return total_results, None
-
- group_results = dict()
- grouping = get_groupings(groups)
- for label, index in grouping.items():
- if len(index) > 1:
- group_activations_pred = activations_pred[index]
- group_activations_target = activations_target[index]
- group_class_freq = target_class_freq_by_image_mask[index]
- group_results[label] = {
- 'mean': calculate_frechet_distance(group_activations_pred, group_activations_target, eps=self.eps),
- 'std': 0,
- **self.distribute_fid_to_classes(group_class_freq,
- group_activations_pred,
- group_activations_target)
- }
- else:
- group_results[label] = dict(mean=float('nan'), std=0)
- return total_results, group_results
-
- def distribute_fid_to_classes(self, class_freq, activations_pred, activations_target):
- real_fid = calculate_frechet_distance(activations_pred, activations_target, eps=self.eps)
-
- fid_no_images = Parallel(n_jobs=self.n_jobs)(
- delayed(calculade_fid_no_img)(img_i, activations_pred, activations_target, eps=self.eps)
- for img_i in range(activations_pred.shape[0])
- )
- errors = real_fid - fid_no_images
- return distribute_values_to_classes(class_freq, errors, self.segm_idx2name)
-
- def _get_activations(self, batch):
- activations = self.model(batch)[0]
- if activations.shape[2] != 1 or activations.shape[3] != 1:
- activations = F.adaptive_avg_pool2d(activations, output_size=(1, 1))
- activations = activations.squeeze(-1).squeeze(-1).detach().cpu().numpy()
- return activations
diff --git a/spaces/AlexWang/lama/saicinpainting/training/data/aug.py b/spaces/AlexWang/lama/saicinpainting/training/data/aug.py
deleted file mode 100644
index b1246250924e79511b58cd3d7ab79de8012f8949..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/training/data/aug.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from albumentations import DualIAATransform, to_tuple
-import imgaug.augmenters as iaa
-
-class IAAAffine2(DualIAATransform):
-    """Place a regular grid of points on the input and randomly move the neighbourhood of these points around
- via affine transformations.
-
-    Note: This class introduces interpolation artifacts in the mask if it has values other than {0, 1}
-
- Args:
- p (float): probability of applying the transform. Default: 0.5.
-
- Targets:
- image, mask
- """
-
- def __init__(
- self,
- scale=(0.7, 1.3),
- translate_percent=None,
- translate_px=None,
- rotate=0.0,
- shear=(-0.1, 0.1),
- order=1,
- cval=0,
- mode="reflect",
- always_apply=False,
- p=0.5,
- ):
- super(IAAAffine2, self).__init__(always_apply, p)
- self.scale = dict(x=scale, y=scale)
- self.translate_percent = to_tuple(translate_percent, 0)
- self.translate_px = to_tuple(translate_px, 0)
- self.rotate = to_tuple(rotate)
- self.shear = dict(x=shear, y=shear)
- self.order = order
- self.cval = cval
- self.mode = mode
-
- @property
- def processor(self):
- return iaa.Affine(
- self.scale,
- self.translate_percent,
- self.translate_px,
- self.rotate,
- self.shear,
- self.order,
- self.cval,
- self.mode,
- )
-
- def get_transform_init_args_names(self):
- return ("scale", "translate_percent", "translate_px", "rotate", "shear", "order", "cval", "mode")
-
-
-class IAAPerspective2(DualIAATransform):
- """Perform a random four point perspective transform of the input.
-
-    Note: This class introduces interpolation artifacts in the mask if it has values other than {0, 1}
-
- Args:
-        scale ((float, float)): standard deviation of the normal distributions. These are used to sample
- the random distances of the subimage's corners from the full image's corners. Default: (0.05, 0.1).
- p (float): probability of applying the transform. Default: 0.5.
-
- Targets:
- image, mask
- """
-
- def __init__(self, scale=(0.05, 0.1), keep_size=True, always_apply=False, p=0.5,
- order=1, cval=0, mode="replicate"):
- super(IAAPerspective2, self).__init__(always_apply, p)
- self.scale = to_tuple(scale, 1.0)
- self.keep_size = keep_size
- self.cval = cval
- self.mode = mode
-
- @property
- def processor(self):
- return iaa.PerspectiveTransform(self.scale, keep_size=self.keep_size, mode=self.mode, cval=self.cval)
-
- def get_transform_init_args_names(self):
- return ("scale", "keep_size")
diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/quantifier.py b/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/quantifier.py
deleted file mode 100644
index d86b4c363e37359e9f7fa94276e238c05c2404ff..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/quantifier.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import re
-
-from .num import num2str
-
-# Temperature expressions; the temperature affects how the minus sign is read
-# e.g. -3°C is read as 零下三度 (three degrees below zero)
-RE_TEMPERATURE = re.compile(r'(-?)(\d+(\.\d+)?)(°C|℃|度|摄氏度)')
-
-
-def replace_temperature(match) -> str:
- """
- Args:
- match (re.Match)
- Returns:
- str
- """
- sign = match.group(1)
- temperature = match.group(2)
-    unit = match.group(4)  # group(3) is the optional decimal part of the number; the unit is group(4)
- sign: str = "零下" if sign else ""
- temperature: str = num2str(temperature)
- unit: str = "摄氏度" if unit == "摄氏度" else "度"
- result = f"{sign}{temperature}{unit}"
- return result
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/modelzoo.md b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/modelzoo.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_controlnet.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_controlnet.py
deleted file mode 100644
index bec2424ece4dc91fbafd530d525e36d1fb84c4ff..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_controlnet.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# NOTE: This file is deprecated and will be removed in a future version.
-# It only exists so that `from diffusers.pipelines import DiffusionPipeline` temporarily keeps working
-
-from ...utils import deprecate
-from ..controlnet.pipeline_flax_controlnet import FlaxStableDiffusionControlNetPipeline # noqa: F401
-
-
-deprecate(
- "stable diffusion controlnet",
- "0.22.0",
- "Importing `FlaxStableDiffusionControlNetPipeline` from diffusers.pipelines.stable_diffusion.flax_pipeline_stable_diffusion_controlnet is deprecated. Please import `from diffusers import FlaxStableDiffusionControlNetPipeline` instead.",
- standard_warn=False,
- stacklevel=3,
-)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/pipeline_params.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/pipeline_params.py
deleted file mode 100644
index 7c5ffa2ca24b450a86bfb32438904cecfb1c5895..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/pipeline_params.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# These are canonical sets of parameters for different types of pipelines.
-# They are set on subclasses of `PipelineTesterMixin` as `params` and
-# `batch_params`.
-#
-# If a pipeline's set of arguments has minor changes from one of the common sets
-# of arguments, do not make modifications to the existing common sets of arguments.
-# E.g. a text-to-image pipeline with non-configurable height and width arguments
-# should set its attribute as `params = TEXT_TO_IMAGE_PARAMS - {'height', 'width'}`.
-
-TEXT_TO_IMAGE_PARAMS = frozenset(
- [
- "prompt",
- "height",
- "width",
- "guidance_scale",
- "negative_prompt",
- "prompt_embeds",
- "negative_prompt_embeds",
- "cross_attention_kwargs",
- ]
-)
-
-TEXT_TO_IMAGE_BATCH_PARAMS = frozenset(["prompt", "negative_prompt"])
-
-TEXT_TO_IMAGE_IMAGE_PARAMS = frozenset([])
-
-IMAGE_TO_IMAGE_IMAGE_PARAMS = frozenset(["image"])
-
-IMAGE_VARIATION_PARAMS = frozenset(
- [
- "image",
- "height",
- "width",
- "guidance_scale",
- ]
-)
-
-IMAGE_VARIATION_BATCH_PARAMS = frozenset(["image"])
-
-TEXT_GUIDED_IMAGE_VARIATION_PARAMS = frozenset(
- [
- "prompt",
- "image",
- "height",
- "width",
- "guidance_scale",
- "negative_prompt",
- "prompt_embeds",
- "negative_prompt_embeds",
- ]
-)
-
-TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS = frozenset(["prompt", "image", "negative_prompt"])
-
-TEXT_GUIDED_IMAGE_INPAINTING_PARAMS = frozenset(
- [
- # Text guided image variation with an image mask
- "prompt",
- "image",
- "mask_image",
- "height",
- "width",
- "guidance_scale",
- "negative_prompt",
- "prompt_embeds",
- "negative_prompt_embeds",
- ]
-)
-
-TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["prompt", "image", "mask_image", "negative_prompt"])
-
-IMAGE_INPAINTING_PARAMS = frozenset(
- [
- # image variation with an image mask
- "image",
- "mask_image",
- "height",
- "width",
- "guidance_scale",
- ]
-)
-
-IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["image", "mask_image"])
-
-IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS = frozenset(
- [
- "example_image",
- "image",
- "mask_image",
- "height",
- "width",
- "guidance_scale",
- ]
-)
-
-IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["example_image", "image", "mask_image"])
-
-CLASS_CONDITIONED_IMAGE_GENERATION_PARAMS = frozenset(["class_labels"])
-
-CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS = frozenset(["class_labels"])
-
-UNCONDITIONAL_IMAGE_GENERATION_PARAMS = frozenset(["batch_size"])
-
-UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS = frozenset([])
-
-UNCONDITIONAL_AUDIO_GENERATION_PARAMS = frozenset(["batch_size"])
-
-UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS = frozenset([])
-
-TEXT_TO_AUDIO_PARAMS = frozenset(
- [
- "prompt",
- "audio_length_in_s",
- "guidance_scale",
- "negative_prompt",
- "prompt_embeds",
- "negative_prompt_embeds",
- "cross_attention_kwargs",
- ]
-)
-
-TEXT_TO_AUDIO_BATCH_PARAMS = frozenset(["prompt", "negative_prompt"])
-TOKENS_TO_AUDIO_GENERATION_PARAMS = frozenset(["input_tokens"])
-
-TOKENS_TO_AUDIO_GENERATION_BATCH_PARAMS = frozenset(["input_tokens"])
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_cycle_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_cycle_diffusion.py
deleted file mode 100644
index 9a54c21c0a2173968cd2134a8126e1f63f84e3e4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_cycle_diffusion.py
+++ /dev/null
@@ -1,275 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import random
-import unittest
-
-import numpy as np
-import torch
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import AutoencoderKL, CycleDiffusionPipeline, DDIMScheduler, UNet2DConditionModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
-
-from ..pipeline_params import (
- IMAGE_TO_IMAGE_IMAGE_PARAMS,
- TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS,
- TEXT_GUIDED_IMAGE_VARIATION_PARAMS,
-)
-from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-class CycleDiffusionPipelineFastTests(PipelineLatentTesterMixin, PipelineTesterMixin, unittest.TestCase):
- pipeline_class = CycleDiffusionPipeline
- params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {
- "negative_prompt",
- "height",
- "width",
- "negative_prompt_embeds",
- }
- required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"}
- batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS.union({"source_prompt"})
- image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS
- image_latents_params = IMAGE_TO_IMAGE_IMAGE_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- num_train_timesteps=1000,
- clip_sample=False,
- set_alpha_to_one=False,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
- image = image / 2 + 0.5
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "prompt": "An astronaut riding an elephant",
- "source_prompt": "An astronaut riding a horse",
- "image": image,
- "generator": generator,
- "num_inference_steps": 2,
- "eta": 0.1,
- "strength": 0.8,
- "guidance_scale": 3,
- "source_guidance_scale": 1,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_cycle(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- pipe = CycleDiffusionPipeline(**components)
- pipe = pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = pipe(**inputs)
- images = output.images
-
- image_slice = images[0, -3:, -3:, -1]
-
- assert images.shape == (1, 32, 32, 3)
- expected_slice = np.array([0.4459, 0.4943, 0.4544, 0.6643, 0.5474, 0.4327, 0.5701, 0.5959, 0.5179])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
- def test_stable_diffusion_cycle_fp16(self):
- components = self.get_dummy_components()
- for name, module in components.items():
- if hasattr(module, "half"):
- components[name] = module.half()
- pipe = CycleDiffusionPipeline(**components)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(torch_device)
- output = pipe(**inputs)
- images = output.images
-
- image_slice = images[0, -3:, -3:, -1]
-
- assert images.shape == (1, 32, 32, 3)
- expected_slice = np.array([0.3506, 0.4543, 0.446, 0.4575, 0.5195, 0.4155, 0.5273, 0.518, 0.4116])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- @skip_mps
- def test_save_load_local(self):
- return super().test_save_load_local()
-
- @unittest.skip("non-deterministic pipeline")
- def test_inference_batch_single_identical(self):
- return super().test_inference_batch_single_identical()
-
- @skip_mps
- def test_dict_tuple_outputs_equivalent(self):
- return super().test_dict_tuple_outputs_equivalent()
-
- @skip_mps
- def test_save_load_optional_components(self):
- return super().test_save_load_optional_components()
-
- @skip_mps
- def test_attention_slicing_forward_pass(self):
- return super().test_attention_slicing_forward_pass()
-
-
-@slow
-@require_torch_gpu
-class CycleDiffusionPipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_cycle_diffusion_pipeline_fp16(self):
- init_image = load_image(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- "/cycle-diffusion/black_colored_car.png"
- )
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/cycle-diffusion/blue_colored_car_fp16.npy"
- )
- init_image = init_image.resize((512, 512))
-
- model_id = "CompVis/stable-diffusion-v1-4"
- scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
- pipe = CycleDiffusionPipeline.from_pretrained(
- model_id, scheduler=scheduler, safety_checker=None, torch_dtype=torch.float16, revision="fp16"
- )
-
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- source_prompt = "A black colored car"
- prompt = "A blue colored car"
-
- generator = torch.manual_seed(0)
- output = pipe(
- prompt=prompt,
- source_prompt=source_prompt,
- image=init_image,
- num_inference_steps=100,
- eta=0.1,
- strength=0.85,
- guidance_scale=3,
- source_guidance_scale=1,
- generator=generator,
- output_type="np",
- )
- image = output.images
-
- # the values aren't exactly equal, but the images look the same visually
- assert np.abs(image - expected_image).max() < 5e-1
-
- def test_cycle_diffusion_pipeline(self):
- init_image = load_image(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- "/cycle-diffusion/black_colored_car.png"
- )
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/cycle-diffusion/blue_colored_car.npy"
- )
- init_image = init_image.resize((512, 512))
-
- model_id = "CompVis/stable-diffusion-v1-4"
- scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
- pipe = CycleDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, safety_checker=None)
-
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- source_prompt = "A black colored car"
- prompt = "A blue colored car"
-
- generator = torch.manual_seed(0)
- output = pipe(
- prompt=prompt,
- source_prompt=source_prompt,
- image=init_image,
- num_inference_steps=100,
- eta=0.1,
- strength=0.85,
- guidance_scale=3,
- source_guidance_scale=1,
- generator=generator,
- output_type="np",
- )
- image = output.images
-
- assert np.abs(image - expected_image).max() < 2e-2
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index ef7b369dd9e12b2282a30da14f99dd4547c53a7b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/ann_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/loaders.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/loaders.py
deleted file mode 100644
index ab10e0a4dee24deb0d3dc54918be825308316f9e..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/loaders.py
+++ /dev/null
@@ -1,493 +0,0 @@
-import functools
-from collections import OrderedDict
-
-import gradio as gr
-
-from modules import shared
-
-loaders_and_params = OrderedDict({
- 'Transformers': [
- 'cpu_memory',
- 'gpu_memory',
- 'trust_remote_code',
- 'load_in_8bit',
- 'bf16',
- 'cpu',
- 'disk',
- 'auto_devices',
- 'load_in_4bit',
- 'use_double_quant',
- 'quant_type',
- 'compute_dtype',
- 'trust_remote_code',
- 'use_fast',
- 'alpha_value',
- 'rope_freq_base',
- 'compress_pos_emb',
- 'disable_exllama',
- 'transformers_info'
- ],
- 'ExLlama_HF': [
- 'gpu_split',
- 'max_seq_len',
- 'alpha_value',
- 'rope_freq_base',
- 'compress_pos_emb',
- 'cfg_cache',
- 'use_fast',
- 'exllama_HF_info',
- ],
- 'ExLlamav2_HF': [
- 'gpu_split',
- 'max_seq_len',
- 'cfg_cache',
- 'alpha_value',
- 'compress_pos_emb',
- 'use_fast',
- ],
- 'ExLlama': [
- 'gpu_split',
- 'max_seq_len',
- 'alpha_value',
- 'rope_freq_base',
- 'compress_pos_emb',
- 'exllama_info',
- ],
- 'ExLlamav2': [
- 'gpu_split',
- 'max_seq_len',
- 'alpha_value',
- 'compress_pos_emb',
- ],
- 'AutoGPTQ': [
- 'triton',
- 'no_inject_fused_attention',
- 'no_inject_fused_mlp',
- 'no_use_cuda_fp16',
- 'wbits',
- 'groupsize',
- 'desc_act',
- 'disable_exllama',
- 'gpu_memory',
- 'cpu_memory',
- 'cpu',
- 'disk',
- 'auto_devices',
- 'trust_remote_code',
- 'use_fast',
- 'autogptq_info',
- ],
- 'GPTQ-for-LLaMa': [
- 'wbits',
- 'groupsize',
- 'model_type',
- 'pre_layer',
- 'use_fast',
- 'gptq_for_llama_info',
- ],
- 'llama.cpp': [
- 'n_ctx',
- 'n_gpu_layers',
- 'tensor_split',
- 'n_batch',
- 'threads',
- 'threads_batch',
- 'no_mmap',
- 'mlock',
- 'mul_mat_q',
- 'llama_cpp_seed',
- 'alpha_value',
- 'rope_freq_base',
- 'compress_pos_emb',
- 'cpu',
- 'numa',
- ],
- 'llamacpp_HF': [
- 'n_ctx',
- 'n_gpu_layers',
- 'tensor_split',
- 'n_batch',
- 'threads',
- 'threads_batch',
- 'no_mmap',
- 'mlock',
- 'mul_mat_q',
- 'alpha_value',
- 'rope_freq_base',
- 'compress_pos_emb',
- 'cpu',
- 'numa',
- 'cfg_cache',
- 'use_fast',
- 'llamacpp_HF_info',
- ],
- 'ctransformers': [
- 'n_ctx',
- 'n_gpu_layers',
- 'n_batch',
- 'threads',
- 'model_type',
- 'no_mmap',
- 'mlock'
- ],
- 'AutoAWQ': [
- 'cpu_memory',
- 'gpu_memory',
- 'auto_devices',
- 'max_seq_len',
- 'n_batch',
- 'no_inject_fused_attention',
- 'trust_remote_code',
- 'use_fast',
- ]
-})
-
-loaders_samplers = {
- 'Transformers': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'penalty_alpha',
- 'num_beams',
- 'length_penalty',
- 'early_stopping',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
- 'ExLlama_HF': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
- 'ExLlama': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'seed',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'add_bos_token',
- 'custom_token_bans',
- 'auto_max_new_tokens',
- },
- 'ExLlamav2': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'seed',
- 'ban_eos_token',
- 'add_bos_token',
- 'custom_token_bans',
- 'auto_max_new_tokens',
- },
- 'ExLlamav2_HF': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
- 'AutoGPTQ': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'penalty_alpha',
- 'num_beams',
- 'length_penalty',
- 'early_stopping',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
- 'GPTQ-for-LLaMa': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'penalty_alpha',
- 'num_beams',
- 'length_penalty',
- 'early_stopping',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
- 'llama.cpp': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'tfs',
- 'repetition_penalty',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'ban_eos_token',
- 'custom_token_bans',
- },
- 'llamacpp_HF': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
- 'ctransformers': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'repetition_penalty',
- 'repetition_penalty_range',
- },
- 'AutoAWQ': {
- 'temperature',
- 'top_p',
- 'top_k',
- 'typical_p',
- 'epsilon_cutoff',
- 'eta_cutoff',
- 'tfs',
- 'top_a',
- 'repetition_penalty',
- 'repetition_penalty_range',
- 'encoder_repetition_penalty',
- 'no_repeat_ngram_size',
- 'min_length',
- 'seed',
- 'do_sample',
- 'penalty_alpha',
- 'num_beams',
- 'length_penalty',
- 'early_stopping',
- 'mirostat_mode',
- 'mirostat_tau',
- 'mirostat_eta',
- 'grammar_file_row',
- 'grammar_string',
- 'guidance_scale',
- 'negative_prompt',
- 'ban_eos_token',
- 'custom_token_bans',
- 'add_bos_token',
- 'skip_special_tokens',
- 'auto_max_new_tokens',
- },
-}
-
-loaders_model_types = {
- 'GPTQ-for-LLaMa': [
- "None",
- "llama",
- "opt",
- "gptj"
- ],
- 'ctransformers': [
- "None",
- "gpt2",
- "gptj",
- "gptneox",
- "llama",
- "mpt",
- "dollyv2",
- "replit",
- "starcoder",
- "gptbigcode",
- "falcon"
- ],
-}
-
-
-@functools.cache
-def list_all_samplers():
- all_samplers = set()
- for k in loaders_samplers:
- for sampler in loaders_samplers[k]:
- all_samplers.add(sampler)
-
- return sorted(all_samplers)
-
-
-def blacklist_samplers(loader):
- all_samplers = list_all_samplers()
- if loader == 'All':
- return [gr.update(visible=True) for sampler in all_samplers]
- else:
- return [gr.update(visible=True) if sampler in loaders_samplers[loader] else gr.update(visible=False) for sampler in all_samplers]
-
-
-def get_model_types(loader):
- if loader in loaders_model_types:
- return loaders_model_types[loader]
-
- return ["None"]
-
-
-def get_gpu_memory_keys():
- return [k for k in shared.gradio if k.startswith('gpu_memory')]
-
-
-@functools.cache
-def get_all_params():
- all_params = set()
- for k in loaders_and_params:
- for el in loaders_and_params[k]:
- all_params.add(el)
-
- if 'gpu_memory' in all_params:
- all_params.remove('gpu_memory')
- for k in get_gpu_memory_keys():
- all_params.add(k)
-
- return sorted(all_params)
-
-
-def make_loader_params_visible(loader):
- params = []
- all_params = get_all_params()
- if loader in loaders_and_params:
- params = loaders_and_params[loader]
-
- if 'gpu_memory' in params:
- params.remove('gpu_memory')
- params += get_gpu_memory_keys()
-
- return [gr.update(visible=True) if k in params else gr.update(visible=False) for k in all_params]
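The `blacklist_samplers` and `make_loader_params_visible` helpers above both follow the same pattern: build the sorted union of everything any loader supports, then emit one visibility flag per entry, in that fixed order. A dependency-free sketch of that pattern, with `gr.update(visible=...)` replaced by plain booleans (the loader names and sampler sets below are illustrative, not the full tables):

```python
# Illustrative subset of the loader -> supported-samplers mapping.
LOADERS_SAMPLERS = {
    'ctransformers': {'temperature', 'top_p', 'top_k'},
    'AutoAWQ': {'temperature', 'top_p', 'top_k', 'seed'},
}

def list_all_samplers():
    # Union of every sampler any loader supports, sorted for a stable UI order.
    return sorted(set().union(*LOADERS_SAMPLERS.values()))

def sampler_visibility(loader):
    # One visibility flag per entry of list_all_samplers(), in the same order.
    all_samplers = list_all_samplers()
    if loader == 'All':
        return [True] * len(all_samplers)
    supported = LOADERS_SAMPLERS.get(loader, set())
    return [s in supported for s in all_samplers]
```

Because the flag list is always in `list_all_samplers()` order, the UI can zip it against a fixed list of components regardless of which loader is selected.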
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/bsrgan.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/bsrgan.py
deleted file mode 100644
index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/bsrgan.py
+++ /dev/null
@@ -1,730 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-# --------------------------------------------
-# Super-Resolution
-# --------------------------------------------
-#
-# Kai Zhang (cskaizhang@gmail.com)
-# https://github.com/cszn
-# From 2019/03--2021/08
-# --------------------------------------------
-"""
-
-import numpy as np
-import cv2
-import torch
-
-from functools import partial
-import random
-from scipy import ndimage
-import scipy
-import scipy.stats as ss
-from scipy.interpolate import interp2d
-from scipy.linalg import orth
-import albumentations
-
-import ldm.modules.image_degradation.utils_image as util
-
-
-def modcrop_np(img, sf):
- '''
- Args:
- img: numpy image, WxH or WxHxC
- sf: scale factor
- Return:
- cropped image
- '''
- w, h = img.shape[:2]
- im = np.copy(img)
- return im[:w - w % sf, :h - h % sf, ...]
-
-
-"""
-# --------------------------------------------
-# anisotropic Gaussian kernels
-# --------------------------------------------
-"""
-
-
-def analytic_kernel(k):
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
- k_size = k.shape[0]
- # Calculate the big kernels size
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
- # Loop over the small kernel to fill the big one
- for r in range(k_size):
- for c in range(k_size):
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
-    # Crop the edges of the big kernel to ignore very small values and reduce the run time of SR
- crop = k_size // 2
- cropped_big_k = big_k[crop:-crop, crop:-crop]
- # Normalize to 1
- return cropped_big_k / cropped_big_k.sum()
-
-
-def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
- """ generate an anisotropic Gaussian kernel
- Args:
- ksize : e.g., 15, kernel size
- theta : [0, pi], rotation angle range
- l1 : [0.1,50], scaling of eigenvalues
- l2 : [0.1,l1], scaling of eigenvalues
- If l1 = l2, will get an isotropic Gaussian kernel.
- Returns:
- k : kernel
- """
-
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
- D = np.array([[l1, 0], [0, l2]])
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
-
- return k
-
-
-def gm_blur_kernel(mean, cov, size=15):
- center = size / 2.0 + 0.5
- k = np.zeros([size, size])
- for y in range(size):
- for x in range(size):
- cy = y - center + 1
- cx = x - center + 1
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
-
- k = k / np.sum(k)
- return k
-
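`gm_blur_kernel` evaluates a 2-D Gaussian density on a grid centered in the kernel and normalizes it to sum to 1. The same idea can be sketched without scipy for the simpler axis-aligned (diagonal covariance) case; this is a simplified stand-in, not the function above:

```python
import math

def gaussian_kernel(size=5, sigma1=1.0, sigma2=1.0):
    # Evaluate exp(-0.5 * ((cx/sigma1)^2 + (cy/sigma2)^2)) on a size x size
    # grid, using the same centering arithmetic as gm_blur_kernel, then
    # normalize so the kernel sums to 1.
    center = size / 2.0 + 0.5
    k = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            cy, cx = y - center + 1, x - center + 1
            k[y][x] = math.exp(-0.5 * ((cx / sigma1) ** 2 + (cy / sigma2) ** 2))
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]
```

With `sigma1 == sigma2` this gives an isotropic kernel, matching the `l1 = l2` remark in the `anisotropic_Gaussian` docstring.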
-
-def shift_pixel(x, sf, upper_left=True):
- """shift pixel for super-resolution with different scale factors
- Args:
- x: WxHxC or WxH
- sf: scale factor
- upper_left: shift direction
- """
- h, w = x.shape[:2]
- shift = (sf - 1) * 0.5
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
- if upper_left:
- x1 = xv + shift
- y1 = yv + shift
- else:
- x1 = xv - shift
- y1 = yv - shift
-
- x1 = np.clip(x1, 0, w - 1)
- y1 = np.clip(y1, 0, h - 1)
-
- if x.ndim == 2:
- x = interp2d(xv, yv, x)(x1, y1)
- if x.ndim == 3:
- for i in range(x.shape[-1]):
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
-
- return x
-
-
-def blur(x, k):
- '''
- x: image, NxcxHxW
- k: kernel, Nx1xhxw
- '''
- n, c = x.shape[:2]
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
- k = k.repeat(1, c, 1, 1)
- k = k.view(-1, 1, k.shape[2], k.shape[3])
- x = x.view(1, -1, x.shape[2], x.shape[3])
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
- x = x.view(n, c, x.shape[2], x.shape[3])
-
- return x
-
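`blur` implements replicate-padded, group-wise 2-D convolution with torch. The padding arithmetic is easier to see in a 1-D analogue that assumes nothing beyond the standard library:

```python
def blur_1d(signal, kernel):
    # Replicate-pad the edges so the output has the same length as the input,
    # mirroring torch.nn.functional.pad(..., mode='replicate') above.
    p1 = (len(kernel) - 1) // 2
    p2 = len(kernel) - 1 - p1
    padded = [signal[0]] * p1 + list(signal) + [signal[-1]] * p2
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(signal))]
```

The `groups=n * c` trick in `blur` does exactly this independently for every image channel, each with its own kernel.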
-
-def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
- """"
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
- # Kai Zhang
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
- # max_var = 2.5 * sf
- """
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
- theta = np.random.rand() * np.pi # random theta
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
-
- # Set COV matrix using Lambdas and Theta
- LAMBDA = np.diag([lambda_1, lambda_2])
- Q = np.array([[np.cos(theta), -np.sin(theta)],
- [np.sin(theta), np.cos(theta)]])
- SIGMA = Q @ LAMBDA @ Q.T
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
-
- # Set expectation position (shifting kernel for aligned image)
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
- MU = MU[None, None, :, None]
-
- # Create meshgrid for Gaussian
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
- Z = np.stack([X, Y], 2)[:, :, :, None]
-
-    # Calculate Gaussian for every pixel of the kernel
- ZZ = Z - MU
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
-
- # shift the kernel so it will be centered
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
-
- # Normalize the kernel and return
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
- kernel = raw_kernel / np.sum(raw_kernel)
- return kernel
-
-
-def fspecial_gaussian(hsize, sigma):
- hsize = [hsize, hsize]
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
- std = sigma
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
- arg = -(x * x + y * y) / (2 * std * std)
- h = np.exp(arg)
-    h[h < np.finfo(float).eps * h.max()] = 0  # scipy.finfo was removed from scipy; use numpy's
- sumh = h.sum()
- if sumh != 0:
- h = h / sumh
- return h
-
-
-def fspecial_laplacian(alpha):
- alpha = max([0, min([alpha, 1])])
- h1 = alpha / (alpha + 1)
- h2 = (1 - alpha) / (alpha + 1)
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
- h = np.array(h)
- return h
-
-
-def fspecial(filter_type, *args, **kwargs):
- '''
- python code from:
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
- '''
- if filter_type == 'gaussian':
- return fspecial_gaussian(*args, **kwargs)
- if filter_type == 'laplacian':
- return fspecial_laplacian(*args, **kwargs)
-
-
-"""
-# --------------------------------------------
-# degradation models
-# --------------------------------------------
-"""
-
-
-def bicubic_degradation(x, sf=3):
- '''
- Args:
- x: HxWxC image, [0, 1]
- sf: down-scale factor
- Return:
- bicubicly downsampled LR image
- '''
- x = util.imresize_np(x, scale=1 / sf)
- return x
-
-
-def srmd_degradation(x, k, sf=3):
- ''' blur + bicubic downsampling
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2018learning,
- title={Learning a single convolutional super-resolution network for multiple degradations},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={3262--3271},
- year={2018}
- }
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
- x = bicubic_degradation(x, sf=sf)
- return x
-
-
-def dpsr_degradation(x, k, sf=3):
- ''' bicubic downsampling + blur
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2019deep,
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={1671--1681},
- year={2019}
- }
- '''
- x = bicubic_degradation(x, sf=sf)
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- return x
-
-
-def classical_degradation(x, k, sf=3):
- ''' blur + downsampling
- Args:
- x: HxWxC image, [0, 1]/[0, 255]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
- st = 0
- return x[st::sf, st::sf, ...]
-
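`classical_degradation` downsamples by plain striding, `x[st::sf, st::sf, ...]`. The same striding on nested lists, for illustration:

```python
def stride_downsample(rows, sf=2, st=0):
    # Keep every sf-th row and every sf-th column, starting at offset st,
    # mirroring x[st::sf, st::sf, ...] above.
    return [row[st::sf] for row in rows[st::sf]]
```

Note `st = 0` keeps the top-left sample of each sf x sf block, which is why the blur kernel is sometimes shifted first (see `shift_pixel`) to keep the result aligned.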
-
-def add_sharpening(img, weight=0.5, radius=50, threshold=10):
- """USM sharpening. borrowed from real-ESRGAN
- Input image: I; Blurry image: B.
- 1. K = I + weight * (I - B)
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
- 3. Blur mask:
- 4. Out = Mask * K + (1 - Mask) * I
- Args:
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
-        weight (float): Sharp weight. Default: 0.5.
-        radius (float): Kernel size of Gaussian blur. Default: 50.
-        threshold (int): Residual threshold (in 0-255 units) for the sharpening mask. Default: 10.
- """
- if radius % 2 == 0:
- radius += 1
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
- residual = img - blur
- mask = np.abs(residual) * 255 > threshold
- mask = mask.astype('float32')
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
-
- K = img + weight * residual
- K = np.clip(K, 0, 1)
- return soft_mask * K + (1 - soft_mask) * img
-
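The per-pixel arithmetic of the USM steps listed in the `add_sharpening` docstring (K = I + weight * (I - B), blended by a thresholded mask) can be sketched for a single value. This simplification skips the mask blurring, so the mask here is hard 0/1:

```python
def usm_pixel(i, b, weight=0.5, threshold=10):
    # i: input pixel in [0, 1]; b: its Gaussian-blurred counterpart.
    residual = i - b
    mask = 1.0 if abs(residual) * 255 > threshold else 0.0   # step 2
    k = min(max(i + weight * residual, 0.0), 1.0)            # step 1, clipped
    return mask * k + (1.0 - mask) * i                       # step 4
```

Blurring the mask (step 3 in the real function) turns this hard switch into a soft blend, avoiding halo edges where the mask changes value.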
-
-def add_blur(img, sf=4):
- wd2 = 4.0 + sf
- wd = 2.0 + 0.2 * sf
- if random.random() < 0.5:
- l1 = wd2 * random.random()
- l2 = wd2 * random.random()
- k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
- else:
- k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random())
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
-
- return img
-
-
-def add_resize(img, sf=4):
- rnum = np.random.rand()
- if rnum > 0.8: # up
- sf1 = random.uniform(1, 2)
- elif rnum < 0.7: # down
- sf1 = random.uniform(0.5 / sf, 1)
- else:
- sf1 = 1.0
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- return img
-
-
-# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
-# noise_level = random.randint(noise_level1, noise_level2)
-# rnum = np.random.rand()
-# if rnum > 0.6: # add color Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
-# elif rnum < 0.4: # add grayscale Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
-# else: # add noise
-# L = noise_level2 / 255.
-# D = np.diag(np.random.rand(3))
-# U = orth(np.random.rand(3, 3))
-# conv = np.dot(np.dot(np.transpose(U), D), U)
-# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
-# img = np.clip(img, 0.0, 1.0)
-# return img
-
-def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- rnum = np.random.rand()
- if rnum > 0.6: # add color Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4: # add grayscale Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else: # add noise
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
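`add_Gaussian_noise` draws zero-mean noise with standard deviation `noise_level / 255` and clips the result back into [0, 1]. A standard-library sketch of that scaling on a flat list of pixel values:

```python
import random

def add_gaussian_noise(values, noise_level=25, rng=None):
    # Zero-mean Gaussian noise with std noise_level/255, clipped to [0, 1].
    rng = rng if rng is not None else random.Random(0)
    sigma = noise_level / 255.0
    return [min(max(v + rng.gauss(0.0, sigma), 0.0), 1.0) for v in values]
```

The 255 divisor exists because `noise_level` is specified in 8-bit units while the image is float in [0, 1].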
-
-def add_speckle_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- img = np.clip(img, 0.0, 1.0)
- rnum = random.random()
- if rnum > 0.6:
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4:
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else:
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_Poisson_noise(img):
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
- if random.random() < 0.5:
- img = np.random.poisson(img * vals).astype(np.float32) / vals
- else:
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
- img += noise_gray[:, :, np.newaxis]
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_JPEG_noise(img):
- quality_factor = random.randint(30, 95)
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
- img = cv2.imdecode(encimg, 1)
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
- return img
-
-
-def random_crop(lq, hq, sf=4, lq_patchsize=64):
- h, w = lq.shape[:2]
- rnd_h = random.randint(0, h - lq_patchsize)
- rnd_w = random.randint(0, w - lq_patchsize)
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
-
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
- return lq, hq
-
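`random_crop` cuts an `lq_patchsize` patch from the low-quality image and the matching region, `sf` times larger, from the high-quality image. With fixed offsets and nested lists the alignment looks like this:

```python
def aligned_crop(lq, hq, sf, patch, top, left):
    # LQ patch at (top, left); HQ patch at the sf-scaled position and size,
    # so the two patches cover the same image content.
    lq_patch = [row[left:left + patch] for row in lq[top:top + patch]]
    hq_patch = [row[left * sf:(left + patch) * sf]
                for row in hq[top * sf:(top + patch) * sf]]
    return lq_patch, hq_patch
```

Scaling both the offset and the size by `sf` is what keeps each LQ pixel paired with its sf x sf block of HQ pixels.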
-
-def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
-    img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchsizexsf)
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = img.shape[:2]
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- hq = img.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- img = util.imresize_np(img, 1 / 2, True)
- img = np.clip(img, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- img = add_blur(img, sf=sf)
-
- elif i == 1:
- img = add_blur(img, sf=sf)
-
- elif i == 2:
- a, b = img.shape[1], img.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
- img = img[0::sf, 0::sf, ...] # nearest downsampling
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- img = add_JPEG_noise(img)
-
- elif i == 6:
- # add processed camera sensor noise
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
-
- return img, hq
-
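Both BSRGAN pipelines shuffle their seven degradation stages, then swap stages 2 and 3 back if the shuffle put `downsample3` before `downsample2` (downsample3 must run last of the two because it restores the target size). The constraint in isolation:

```python
import random

def degradation_order(rng):
    # Random order over 7 stages, with stage 2 forced before stage 3,
    # mirroring the index swap in degradation_bsrgan above.
    order = rng.sample(range(7), 7)
    i2, i3 = order.index(2), order.index(3)
    if i2 > i3:
        order[i2], order[i3] = order[i3], order[i2]
    return order
```

Every other pair of stages may land in either order, which is the source of the pipeline's randomness.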
-
-# todo no isp_model?
-def degradation_bsrgan_variant(image, sf=4, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
-    example: a dict {"image": image} holding the degraded low-quality image (uint8, HxWxC)
- """
- image = util.uint2single(image)
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = image.shape[:2]
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = image.shape[:2]
-
- hq = image.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- image = util.imresize_np(image, 1 / 2, True)
- image = np.clip(image, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- image = add_blur(image, sf=sf)
-
- elif i == 1:
- image = add_blur(image, sf=sf)
-
- elif i == 2:
- a, b = image.shape[1], image.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
- image = image[0::sf, 0::sf, ...] # nearest downsampling
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- image = add_JPEG_noise(image)
-
- # elif i == 6:
- # # add processed camera sensor noise
- # if random.random() < isp_prob and isp_model is not None:
- # with torch.no_grad():
- # img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- image = add_JPEG_noise(image)
- image = util.single2uint(image)
- example = {"image":image}
- return example
-
-
-# TODO: in case there is a pickle error, replace a += x with a = a + x in add_speckle_noise etc.
-def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):
- """
- This is an extended degradation model by combining
- the degradation models of BSRGAN and Real-ESRGAN
- ----------
-    img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchsizexsf)
-    sf: scale factor
-    shuffle_prob: probability of shuffling the degradation order
-    use_sharp: whether to sharpen the image first
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
-
- h1, w1 = img.shape[:2]
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- if use_sharp:
- img = add_sharpening(img)
- hq = img.copy()
-
- if random.random() < shuffle_prob:
- shuffle_order = random.sample(range(13), 13)
- else:
- shuffle_order = list(range(13))
- # local shuffle for noise, JPEG is always the last one
- shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))
- shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))
-
- poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1
-
- for i in shuffle_order:
- if i == 0:
- img = add_blur(img, sf=sf)
- elif i == 1:
- img = add_resize(img, sf=sf)
- elif i == 2:
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
- elif i == 3:
- if random.random() < poisson_prob:
- img = add_Poisson_noise(img)
- elif i == 4:
- if random.random() < speckle_prob:
- img = add_speckle_noise(img)
- elif i == 5:
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
- elif i == 6:
- img = add_JPEG_noise(img)
- elif i == 7:
- img = add_blur(img, sf=sf)
- elif i == 8:
- img = add_resize(img, sf=sf)
- elif i == 9:
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
- elif i == 10:
- if random.random() < poisson_prob:
- img = add_Poisson_noise(img)
- elif i == 11:
- if random.random() < speckle_prob:
- img = add_speckle_noise(img)
- elif i == 12:
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
- else:
- print('check the shuffle!')
-
- # resize to desired size
- img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])),
- interpolation=random.choice([1, 2, 3]))
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf, lq_patchsize)
-
- return img, hq
-
-
-if __name__ == '__main__':
- print("hey")
- img = util.imread_uint('utils/test.png', 3)
- print(img)
- img = util.uint2single(img)
- print(img)
- img = img[:448, :448]
- h = img.shape[0] // 4
- print("resizing to", h)
- sf = 4
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
- for i in range(20):
- print(i)
-        img_lq = deg_fn(img)["image"]  # degradation_bsrgan_variant returns a dict, not an array
- print(img_lq)
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"]
- print(img_lq.shape)
- print("bicubic", img_lq_bicubic.shape)
-        print(img.shape)  # img is the HQ source; img_hq was never defined here
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
-        img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img)], axis=1)  # img, not undefined img_hq
- util.imsave(img_concat, str(i) + '.png')
-
-
diff --git a/spaces/AnonymousSub/Ayurveda4U/README.md b/spaces/AnonymousSub/Ayurveda4U/README.md
deleted file mode 100644
index e33c6e173b21861fe80fbdcfcf5c3302dc61db01..0000000000000000000000000000000000000000
--- a/spaces/AnonymousSub/Ayurveda4U/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ayurveda4U
-emoji: 🏆
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.44.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/markers.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/markers.py
deleted file mode 100644
index 18769b09a8a34f1e7d63cc61e62cd128ff5f9484..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/markers.py
+++ /dev/null
@@ -1,304 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-import operator
-import os
-import platform
-import sys
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-from pkg_resources.extern.pyparsing import ( # noqa: N817
- Forward,
- Group,
- Literal as L,
- ParseException,
- ParseResults,
- QuotedString,
- ZeroOrMore,
- stringEnd,
- stringStart,
-)
-
-from .specifiers import InvalidSpecifier, Specifier
-
-__all__ = [
- "InvalidMarker",
- "UndefinedComparison",
- "UndefinedEnvironmentName",
- "Marker",
- "default_environment",
-]
-
-Operator = Callable[[str, str], bool]
-
-
-class InvalidMarker(ValueError):
- """
- An invalid marker was found, users should refer to PEP 508.
- """
-
-
-class UndefinedComparison(ValueError):
- """
- An invalid operation was attempted on a value that doesn't support it.
- """
-
-
-class UndefinedEnvironmentName(ValueError):
- """
- A name was attempted to be used that does not exist inside of the
- environment.
- """
-
-
-class Node:
- def __init__(self, value: Any) -> None:
- self.value = value
-
- def __str__(self) -> str:
- return str(self.value)
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__}('{self}')>"
-
- def serialize(self) -> str:
- raise NotImplementedError
-
-
-class Variable(Node):
- def serialize(self) -> str:
- return str(self)
-
-
-class Value(Node):
- def serialize(self) -> str:
- return f'"{self}"'
-
-
-class Op(Node):
- def serialize(self) -> str:
- return str(self)
-
-
-VARIABLE = (
- L("implementation_version")
- | L("platform_python_implementation")
- | L("implementation_name")
- | L("python_full_version")
- | L("platform_release")
- | L("platform_version")
- | L("platform_machine")
- | L("platform_system")
- | L("python_version")
- | L("sys_platform")
- | L("os_name")
- | L("os.name") # PEP-345
- | L("sys.platform") # PEP-345
- | L("platform.version") # PEP-345
- | L("platform.machine") # PEP-345
- | L("platform.python_implementation") # PEP-345
- | L("python_implementation") # undocumented setuptools legacy
- | L("extra") # PEP-508
-)
-ALIASES = {
- "os.name": "os_name",
- "sys.platform": "sys_platform",
- "platform.version": "platform_version",
- "platform.machine": "platform_machine",
- "platform.python_implementation": "platform_python_implementation",
- "python_implementation": "platform_python_implementation",
-}
-VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0])))
-
-VERSION_CMP = (
- L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<")
-)
-
-MARKER_OP = VERSION_CMP | L("not in") | L("in")
-MARKER_OP.setParseAction(lambda s, l, t: Op(t[0]))
-
-MARKER_VALUE = QuotedString("'") | QuotedString('"')
-MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0]))
-
-BOOLOP = L("and") | L("or")
-
-MARKER_VAR = VARIABLE | MARKER_VALUE
-
-MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR)
-MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0]))
-
-LPAREN = L("(").suppress()
-RPAREN = L(")").suppress()
-
-MARKER_EXPR = Forward()
-MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN)
-MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR)
-
-MARKER = stringStart + MARKER_EXPR + stringEnd
-
-
-def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]:
- if isinstance(results, ParseResults):
- return [_coerce_parse_result(i) for i in results]
- else:
- return results
-
-
-def _format_marker(
- marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True
-) -> str:
-
- assert isinstance(marker, (list, tuple, str))
-
- # Sometimes we have a structure like [[...]] which is a single item list
-    # where the single item is itself its own list. In that case we want to skip
- # the rest of this function so that we don't get extraneous () on the
- # outside.
- if (
- isinstance(marker, list)
- and len(marker) == 1
- and isinstance(marker[0], (list, tuple))
- ):
- return _format_marker(marker[0])
-
- if isinstance(marker, list):
- inner = (_format_marker(m, first=False) for m in marker)
- if first:
- return " ".join(inner)
- else:
- return "(" + " ".join(inner) + ")"
- elif isinstance(marker, tuple):
- return " ".join([m.serialize() for m in marker])
- else:
- return marker
-
-
-_operators: Dict[str, Operator] = {
- "in": lambda lhs, rhs: lhs in rhs,
- "not in": lambda lhs, rhs: lhs not in rhs,
- "<": operator.lt,
- "<=": operator.le,
- "==": operator.eq,
- "!=": operator.ne,
- ">=": operator.ge,
- ">": operator.gt,
-}
-
-
-def _eval_op(lhs: str, op: Op, rhs: str) -> bool:
- try:
- spec = Specifier("".join([op.serialize(), rhs]))
- except InvalidSpecifier:
- pass
- else:
- return spec.contains(lhs)
-
- oper: Optional[Operator] = _operators.get(op.serialize())
- if oper is None:
- raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")
-
- return oper(lhs, rhs)
-
-
-class Undefined:
- pass
-
-
-_undefined = Undefined()
-
-
-def _get_env(environment: Dict[str, str], name: str) -> str:
- value: Union[str, Undefined] = environment.get(name, _undefined)
-
- if isinstance(value, Undefined):
- raise UndefinedEnvironmentName(
- f"{name!r} does not exist in evaluation environment."
- )
-
- return value
-
-
-def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool:
- groups: List[List[bool]] = [[]]
-
- for marker in markers:
- assert isinstance(marker, (list, tuple, str))
-
- if isinstance(marker, list):
- groups[-1].append(_evaluate_markers(marker, environment))
- elif isinstance(marker, tuple):
- lhs, op, rhs = marker
-
- if isinstance(lhs, Variable):
- lhs_value = _get_env(environment, lhs.value)
- rhs_value = rhs.value
- else:
- lhs_value = lhs.value
- rhs_value = _get_env(environment, rhs.value)
-
- groups[-1].append(_eval_op(lhs_value, op, rhs_value))
- else:
- assert marker in ["and", "or"]
- if marker == "or":
- groups.append([])
-
- return any(all(item) for item in groups)
-
-
-def format_full_version(info: "sys._version_info") -> str:
- version = "{0.major}.{0.minor}.{0.micro}".format(info)
- kind = info.releaselevel
- if kind != "final":
- version += kind[0] + str(info.serial)
- return version
-
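For example, using a namedtuple as a stand-in for `sys.implementation.version` (so the function above can be exercised without touching the running interpreter), a non-final release level contributes its first letter plus the serial:

```python
from collections import namedtuple

# Hypothetical stand-in with the same fields as sys.implementation.version.
VersionInfo = namedtuple('VersionInfo', 'major minor micro releaselevel serial')

def format_full_version(info):
    # Same logic as the function above: "major.minor.micro", plus e.g. "b2"
    # for releaselevel='beta', serial=2. 'final' adds nothing.
    version = "{0.major}.{0.minor}.{0.micro}".format(info)
    if info.releaselevel != "final":
        version += info.releaselevel[0] + str(info.serial)
    return version
```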
-
-def default_environment() -> Dict[str, str]:
- iver = format_full_version(sys.implementation.version)
- implementation_name = sys.implementation.name
- return {
- "implementation_name": implementation_name,
- "implementation_version": iver,
- "os_name": os.name,
- "platform_machine": platform.machine(),
- "platform_release": platform.release(),
- "platform_system": platform.system(),
- "platform_version": platform.version(),
- "python_full_version": platform.python_version(),
- "platform_python_implementation": platform.python_implementation(),
- "python_version": ".".join(platform.python_version_tuple()[:2]),
- "sys_platform": sys.platform,
- }
-
-
-class Marker:
- def __init__(self, marker: str) -> None:
- try:
- self._markers = _coerce_parse_result(MARKER.parseString(marker))
- except ParseException as e:
- raise InvalidMarker(
- f"Invalid marker: {marker!r}, parse error at "
- f"{marker[e.loc : e.loc + 8]!r}"
- )
-
- def __str__(self) -> str:
- return _format_marker(self._markers)
-
- def __repr__(self) -> str:
- return f"<Marker('{self}')>"
-
- def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool:
- """Evaluate a marker.
-
- Return the boolean from evaluating the given marker against the
- environment. environment is an optional argument to override all or
- part of the determined environment.
-
- The environment is determined from the current Python process.
- """
- current_environment = default_environment()
- if environment is not None:
- current_environment.update(environment)
-
- return _evaluate_markers(self._markers, current_environment)
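The `_evaluate_markers` helper above applies an `any(all(...))` grouping rule: `and` keeps extending the current group while `or` opens a new one, so `or` binds looser than `and`. A minimal stand-alone sketch of that rule on pre-evaluated booleans (the `evaluate_bools` helper is hypothetical, not part of `packaging`):

```python
def evaluate_bools(tokens):
    # "or" starts a new group; "and" keeps filling the current one.
    groups = [[]]
    for tok in tokens:
        if tok == "or":
            groups.append([])
        elif tok == "and":
            continue
        else:
            groups[-1].append(tok)
    # Any group that is entirely true makes the whole marker true.
    return any(all(g) for g in groups)

# a and b or c  ->  (a and b) or c
print(evaluate_bools([False, "and", True, "or", True]))   # True
print(evaluate_bools([True, "and", False, "or", False]))  # False
```

Nested sub-markers (parenthesized groups) are handled in the real code by the recursive call on `list` elements; the grouping rule itself is unchanged.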
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/core.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/core.py
deleted file mode 100644
index de13978f02aa85ac70aa49a0d39178cbba913199..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/core.py
+++ /dev/null
@@ -1,291 +0,0 @@
-"""distutils.core
-
-The only module that needs to be imported to use the Distutils; provides
-the 'setup' function (which is to be called from the setup script). Also
-indirectly provides the Distribution and Command classes, although they are
-really defined in distutils.dist and distutils.cmd.
-"""
-
-import os
-import sys
-import tokenize
-
-from distutils.debug import DEBUG
-from distutils.errors import (
- DistutilsSetupError,
- DistutilsError,
- CCompilerError,
- DistutilsArgError,
-)
-
-# Mainly import these so setup scripts can "from distutils.core import" them.
-from distutils.dist import Distribution
-from distutils.cmd import Command
-from distutils.config import PyPIRCCommand
-from distutils.extension import Extension
-
-
-__all__ = ['Distribution', 'Command', 'PyPIRCCommand', 'Extension', 'setup']
-
-# This is a barebones help message displayed when the user
-# runs the setup script with no arguments at all. More useful help
-# is generated with various --help options: global help, list commands,
-# and per-command help.
-USAGE = """\
-usage: %(script)s [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
- or: %(script)s --help [cmd1 cmd2 ...]
- or: %(script)s --help-commands
- or: %(script)s cmd --help
-"""
-
-
-def gen_usage(script_name):
- script = os.path.basename(script_name)
- return USAGE % locals()
-
-
-# Some mild magic to control the behaviour of 'setup()' from 'run_setup()'.
-_setup_stop_after = None
-_setup_distribution = None
-
-# Legal keyword arguments for the setup() function
-setup_keywords = (
- 'distclass',
- 'script_name',
- 'script_args',
- 'options',
- 'name',
- 'version',
- 'author',
- 'author_email',
- 'maintainer',
- 'maintainer_email',
- 'url',
- 'license',
- 'description',
- 'long_description',
- 'keywords',
- 'platforms',
- 'classifiers',
- 'download_url',
- 'requires',
- 'provides',
- 'obsoletes',
-)
-
-# Legal keyword arguments for the Extension constructor
-extension_keywords = (
- 'name',
- 'sources',
- 'include_dirs',
- 'define_macros',
- 'undef_macros',
- 'library_dirs',
- 'libraries',
- 'runtime_library_dirs',
- 'extra_objects',
- 'extra_compile_args',
- 'extra_link_args',
- 'swig_opts',
- 'export_symbols',
- 'depends',
- 'language',
-)
-
-
-def setup(**attrs): # noqa: C901
- """The gateway to the Distutils: do everything your setup script needs
- to do, in a highly flexible and user-driven way. Briefly: create a
- Distribution instance; find and parse config files; parse the command
- line; run each Distutils command found there, customized by the options
- supplied to 'setup()' (as keyword arguments), in config files, and on
- the command line.
-
- The Distribution instance might be an instance of a class supplied via
- the 'distclass' keyword argument to 'setup'; if no such class is
- supplied, then the Distribution class (in dist.py) is instantiated.
- All other arguments to 'setup' (except for 'cmdclass') are used to set
- attributes of the Distribution instance.
-
- The 'cmdclass' argument, if supplied, is a dictionary mapping command
- names to command classes. Each command encountered on the command line
- will be turned into a command class, which is in turn instantiated; any
- class found in 'cmdclass' is used in place of the default, which is
- (for command 'foo_bar') class 'foo_bar' in module
- 'distutils.command.foo_bar'. The command class must provide a
- 'user_options' attribute which is a list of option specifiers for
- 'distutils.fancy_getopt'. Any command-line options between the current
- and the next command are used to set attributes of the current command
- object.
-
- When the entire command-line has been successfully parsed, calls the
- 'run()' method on each command object in turn. This method will be
- driven entirely by the Distribution object (which each command object
- has a reference to, thanks to its constructor), and the
- command-specific options that became attributes of each command
- object.
- """
-
- global _setup_stop_after, _setup_distribution
-
- # Determine the distribution class -- either caller-supplied or
- # our Distribution (see below).
- klass = attrs.get('distclass')
- if klass:
- del attrs['distclass']
- else:
- klass = Distribution
-
- if 'script_name' not in attrs:
- attrs['script_name'] = os.path.basename(sys.argv[0])
- if 'script_args' not in attrs:
- attrs['script_args'] = sys.argv[1:]
-
- # Create the Distribution instance, using the remaining arguments
- # (ie. everything except distclass) to initialize it
- try:
- _setup_distribution = dist = klass(attrs)
- except DistutilsSetupError as msg:
- if 'name' not in attrs:
- raise SystemExit("error in setup command: %s" % msg)
- else:
- raise SystemExit("error in {} setup command: {}".format(attrs['name'], msg))
-
- if _setup_stop_after == "init":
- return dist
-
- # Find and parse the config file(s): they will override options from
- # the setup script, but be overridden by the command line.
- dist.parse_config_files()
-
- if DEBUG:
- print("options (after parsing config files):")
- dist.dump_option_dicts()
-
- if _setup_stop_after == "config":
- return dist
-
- # Parse the command line and override config files; any
- # command-line errors are the end user's fault, so turn them into
- # SystemExit to suppress tracebacks.
- try:
- ok = dist.parse_command_line()
- except DistutilsArgError as msg:
- raise SystemExit(gen_usage(dist.script_name) + "\nerror: %s" % msg)
-
- if DEBUG:
- print("options (after parsing command line):")
- dist.dump_option_dicts()
-
- if _setup_stop_after == "commandline":
- return dist
-
- # And finally, run all the commands found on the command line.
- if ok:
- return run_commands(dist)
-
- return dist
-
-
-# setup ()
-
-
-def run_commands(dist):
- """Given a Distribution object run all the commands,
- raising ``SystemExit`` errors in the case of failure.
-
- This function assumes that either ``sys.argv`` or ``dist.script_args``
- is already set accordingly.
- """
- try:
- dist.run_commands()
- except KeyboardInterrupt:
- raise SystemExit("interrupted")
- except OSError as exc:
- if DEBUG:
- sys.stderr.write("error: {}\n".format(exc))
- raise
- else:
- raise SystemExit("error: {}".format(exc))
-
- except (DistutilsError, CCompilerError) as msg:
- if DEBUG:
- raise
- else:
- raise SystemExit("error: " + str(msg))
-
- return dist
-
-
-def run_setup(script_name, script_args=None, stop_after="run"):
- """Run a setup script in a somewhat controlled environment, and
- return the Distribution instance that drives things. This is useful
- if you need to find out the distribution meta-data (passed as
- keyword args from 'script' to 'setup()'), or the contents of the
- config files or command-line.
-
- 'script_name' is a file that will be read and run with 'exec()';
- 'sys.argv[0]' will be replaced with 'script' for the duration of the
- call. 'script_args' is a list of strings; if supplied,
- 'sys.argv[1:]' will be replaced by 'script_args' for the duration of
- the call.
-
- 'stop_after' tells 'setup()' when to stop processing; possible
- values:
- init
- stop after the Distribution instance has been created and
- populated with the keyword arguments to 'setup()'
- config
- stop after config files have been parsed (and their data
- stored in the Distribution instance)
- commandline
- stop after the command-line ('sys.argv[1:]' or 'script_args')
- have been parsed (and the data stored in the Distribution)
- run [default]
- stop after all commands have been run (the same as if 'setup()'
- had been called in the usual way)
-
- Returns the Distribution instance, which provides all information
- used to drive the Distutils.
- """
- if stop_after not in ('init', 'config', 'commandline', 'run'):
- raise ValueError("invalid value for 'stop_after': {!r}".format(stop_after))
-
- global _setup_stop_after, _setup_distribution
- _setup_stop_after = stop_after
-
- save_argv = sys.argv.copy()
- g = {'__file__': script_name, '__name__': '__main__'}
- try:
- try:
- sys.argv[0] = script_name
- if script_args is not None:
- sys.argv[1:] = script_args
- # tokenize.open supports automatic encoding detection
- with tokenize.open(script_name) as f:
- code = f.read().replace(r'\r\n', r'\n')
- exec(code, g)
- finally:
- sys.argv = save_argv
- _setup_stop_after = None
- except SystemExit:
- # Hmm, should we do something if exiting with a non-zero code
- # (ie. error)?
- pass
-
- if _setup_distribution is None:
- raise RuntimeError(
- (
- "'distutils.core.setup()' was never called -- "
- "perhaps '%s' is not a Distutils setup script?"
- )
- % script_name
- )
-
- # I wonder if the setup script's namespace -- g and l -- would be of
- # any interest to callers?
- # print "_setup_distribution:", _setup_distribution
- return _setup_distribution
-
-
-# run_setup ()
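As the docstring above notes, `run_setup` with `stop_after="init"` returns the populated `Distribution` without parsing config files or running any commands, which makes it useful for reading a script's metadata. A small sketch (the `demo` name and version are made-up values, and `distutils` is deprecated in favor of `setuptools` on recent Pythons):

```python
import os
import tempfile

from distutils.core import run_setup

# A throwaway setup script whose metadata we want to inspect.
script = 'from distutils.core import setup\nsetup(name="demo", version="1.0")\n'

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "setup.py")
    with open(path, "w") as f:
        f.write(script)
    # stop_after="init" hands back the Distribution right after
    # setup() builds it, before any commands run.
    dist = run_setup(path, stop_after="init")

print(dist.get_name(), dist.get_version())
```

Passing `stop_after="config"` or `"commandline"` stops at the later stages described in the docstring instead.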
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/__init__.py
deleted file mode 100644
index 9a2dbd35bb24f0d4a979bc8f304142376d87e7ec..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .build import build_lr_scheduler, build_optimizer, get_default_optimizer_params
-from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR, LRMultiplier, WarmupParamScheduler
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/Aymene/FakeNewsDetector/README.md b/spaces/Aymene/FakeNewsDetector/README.md
deleted file mode 100644
index 7b8b0283bf44d1931646532e189fd10997365359..0000000000000000000000000000000000000000
--- a/spaces/Aymene/FakeNewsDetector/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: FakeNewsDetector
-emoji: 🔥
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 2.9.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/compatibility_tags.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/compatibility_tags.py
deleted file mode 100644
index b6ed9a78e552806cb23d8ac48ada6d41db5b4de5..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/compatibility_tags.py
+++ /dev/null
@@ -1,165 +0,0 @@
-"""Generate and work with PEP 425 Compatibility Tags.
-"""
-
-import re
-from typing import List, Optional, Tuple
-
-from pip._vendor.packaging.tags import (
- PythonVersion,
- Tag,
- compatible_tags,
- cpython_tags,
- generic_tags,
- interpreter_name,
- interpreter_version,
- mac_platforms,
-)
-
-_osx_arch_pat = re.compile(r"(.+)_(\d+)_(\d+)_(.+)")
-
-
-def version_info_to_nodot(version_info: Tuple[int, ...]) -> str:
- # Only use up to the first two numbers.
- return "".join(map(str, version_info[:2]))
-
-
-def _mac_platforms(arch: str) -> List[str]:
- match = _osx_arch_pat.match(arch)
- if match:
- name, major, minor, actual_arch = match.groups()
- mac_version = (int(major), int(minor))
- arches = [
- # Since we have always only checked that the platform starts
- # with "macosx", for backwards-compatibility we extract the
- # actual prefix provided by the user in case they provided
- # something like "macosxcustom_". It may be good to remove
- # this as undocumented or deprecate it in the future.
- "{}_{}".format(name, arch[len("macosx_") :])
- for arch in mac_platforms(mac_version, actual_arch)
- ]
- else:
- # arch pattern didn't match (?!)
- arches = [arch]
- return arches
-
-
-def _custom_manylinux_platforms(arch: str) -> List[str]:
- arches = [arch]
- arch_prefix, arch_sep, arch_suffix = arch.partition("_")
- if arch_prefix == "manylinux2014":
- # manylinux1/manylinux2010 wheels run on most manylinux2014 systems
- # with the exception of wheels depending on ncurses. PEP 599 states
- # manylinux1/manylinux2010 wheels should be considered
- # manylinux2014 wheels:
- # https://www.python.org/dev/peps/pep-0599/#backwards-compatibility-with-manylinux2010-wheels
- if arch_suffix in {"i686", "x86_64"}:
- arches.append("manylinux2010" + arch_sep + arch_suffix)
- arches.append("manylinux1" + arch_sep + arch_suffix)
- elif arch_prefix == "manylinux2010":
- # manylinux1 wheels run on most manylinux2010 systems with the
- # exception of wheels depending on ncurses. PEP 571 states
- # manylinux1 wheels should be considered manylinux2010 wheels:
- # https://www.python.org/dev/peps/pep-0571/#backwards-compatibility-with-manylinux1-wheels
- arches.append("manylinux1" + arch_sep + arch_suffix)
- return arches
-
-
-def _get_custom_platforms(arch: str) -> List[str]:
- arch_prefix, arch_sep, arch_suffix = arch.partition("_")
- if arch.startswith("macosx"):
- arches = _mac_platforms(arch)
- elif arch_prefix in ["manylinux2014", "manylinux2010"]:
- arches = _custom_manylinux_platforms(arch)
- else:
- arches = [arch]
- return arches
-
-
-def _expand_allowed_platforms(platforms: Optional[List[str]]) -> Optional[List[str]]:
- if not platforms:
- return None
-
- seen = set()
- result = []
-
- for p in platforms:
- if p in seen:
- continue
- additions = [c for c in _get_custom_platforms(p) if c not in seen]
- seen.update(additions)
- result.extend(additions)
-
- return result
-
-
-def _get_python_version(version: str) -> PythonVersion:
- if len(version) > 1:
- return int(version[0]), int(version[1:])
- else:
- return (int(version[0]),)
-
-
-def _get_custom_interpreter(
- implementation: Optional[str] = None, version: Optional[str] = None
-) -> str:
- if implementation is None:
- implementation = interpreter_name()
- if version is None:
- version = interpreter_version()
- return f"{implementation}{version}"
-
-
-def get_supported(
- version: Optional[str] = None,
- platforms: Optional[List[str]] = None,
- impl: Optional[str] = None,
- abis: Optional[List[str]] = None,
-) -> List[Tag]:
- """Return a list of supported tags for the version specified in
- `version`.
-
- :param version: a string version, of the form "33" or "32",
- or None. The version will be assumed to support our ABI.
- :param platforms: specify a list of platforms you want valid
- tags for, or None. If None, use the local system platform.
- :param impl: specify the exact implementation you want valid
- tags for, or None. If None, use the local interpreter impl.
- :param abis: specify a list of abis you want valid
- tags for, or None. If None, use the local interpreter abi.
- """
- supported: List[Tag] = []
-
- python_version: Optional[PythonVersion] = None
- if version is not None:
- python_version = _get_python_version(version)
-
- interpreter = _get_custom_interpreter(impl, version)
-
- platforms = _expand_allowed_platforms(platforms)
-
- is_cpython = (impl or interpreter_name()) == "cp"
- if is_cpython:
- supported.extend(
- cpython_tags(
- python_version=python_version,
- abis=abis,
- platforms=platforms,
- )
- )
- else:
- supported.extend(
- generic_tags(
- interpreter=interpreter,
- abis=abis,
- platforms=platforms,
- )
- )
- supported.extend(
- compatible_tags(
- python_version=python_version,
- interpreter=interpreter,
- platforms=platforms,
- )
- )
-
- return supported
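The compact version strings handled by `_get_python_version` above pack the major and minor digits together with no separator: only the first character is the major version, and everything after it is the minor. A self-contained sketch of the same parsing rule (`parse_compact_version` is a hypothetical stand-in for the pip-internal helper):

```python
def parse_compact_version(version: str):
    # First character is the major version; any remaining characters
    # form the minor, so "310" means (3, 10), not (31, 0).
    if len(version) > 1:
        return int(version[0]), int(version[1:])
    return (int(version[0]),)

print(parse_compact_version("38"))   # (3, 8)
print(parse_compact_version("310"))  # (3, 10)
print(parse_compact_version("3"))    # (3,)
```

This is why the docstring describes versions "of the form '33' or '32'": a single digit selects only a major version, while longer strings pin the minor as well.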
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexers/python.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexers/python.py
deleted file mode 100644
index 3341a3826858e8623fade6da45a83f031b735ab8..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexers/python.py
+++ /dev/null
@@ -1,1204 +0,0 @@
-"""
- pygments.lexers.python
- ~~~~~~~~~~~~~~~~~~~~~~
-
- Lexers for Python and related languages.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import re
-import keyword
-
-from pip._vendor.pygments.lexer import Lexer, RegexLexer, include, bygroups, using, \
- default, words, combined, do_insertions, this, line_re
-from pip._vendor.pygments.util import get_bool_opt, shebang_matches
-from pip._vendor.pygments.token import Text, Comment, Operator, Keyword, Name, String, \
- Number, Punctuation, Generic, Other, Error, Whitespace
-from pip._vendor.pygments import unistring as uni
-
-__all__ = ['PythonLexer', 'PythonConsoleLexer', 'PythonTracebackLexer',
- 'Python2Lexer', 'Python2TracebackLexer',
- 'CythonLexer', 'DgLexer', 'NumPyLexer']
-
-
-class PythonLexer(RegexLexer):
- """
- For Python source code (version 3.x).
-
- .. versionadded:: 0.10
-
- .. versionchanged:: 2.5
- This is now the default ``PythonLexer``. It is still available as the
- alias ``Python3Lexer``.
- """
-
- name = 'Python'
- url = 'http://www.python.org'
- aliases = ['python', 'py', 'sage', 'python3', 'py3']
- filenames = [
- '*.py',
- '*.pyw',
- # Type stubs
- '*.pyi',
- # Jython
- '*.jy',
- # Sage
- '*.sage',
- # SCons
- '*.sc',
- 'SConstruct',
- 'SConscript',
- # Skylark/Starlark (used by Bazel, Buck, and Pants)
- '*.bzl',
- 'BUCK',
- 'BUILD',
- 'BUILD.bazel',
- 'WORKSPACE',
- # Twisted Application infrastructure
- '*.tac',
- ]
- mimetypes = ['text/x-python', 'application/x-python',
- 'text/x-python3', 'application/x-python3']
-
- uni_name = "[%s][%s]*" % (uni.xid_start, uni.xid_continue)
-
- def innerstring_rules(ttype):
- return [
- # the old style '%s' % (...) string formatting (still valid in Py3)
- (r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[E-GXc-giorsaux%]', String.Interpol),
- # the new style '{}'.format(...) string formatting
- (r'\{'
- r'((\w+)((\.\w+)|(\[[^\]]+\]))*)?' # field name
- r'(\![sra])?' # conversion
- r'(\:(.?[<>=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?'
- r'\}', String.Interpol),
-
- # backslashes, quotes and formatting signs must be parsed one at a time
- (r'[^\\\'"%{\n]+', ttype),
- (r'[\'"\\]', ttype),
- # unhandled string formatting sign
- (r'%|(\{{1,2})', ttype)
- # newlines are an error (use "nl" state)
- ]
-
- def fstring_rules(ttype):
- return [
- # Assuming that a '}' is the closing brace after format specifier.
- # Sadly, this means that we won't detect syntax error. But it's
- # more important to parse correct syntax correctly, than to
- # highlight invalid syntax.
- (r'\}', String.Interpol),
- (r'\{', String.Interpol, 'expr-inside-fstring'),
- # backslashes, quotes and formatting signs must be parsed one at a time
- (r'[^\\\'"{}\n]+', ttype),
- (r'[\'"\\]', ttype),
- # newlines are an error (use "nl" state)
- ]
-
- tokens = {
- 'root': [
- (r'\n', Whitespace),
- (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")',
- bygroups(Whitespace, String.Affix, String.Doc)),
- (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')",
- bygroups(Whitespace, String.Affix, String.Doc)),
- (r'\A#!.+$', Comment.Hashbang),
- (r'#.*$', Comment.Single),
- (r'\\\n', Text),
- (r'\\', Text),
- include('keywords'),
- include('soft-keywords'),
- (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'),
- (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'),
- (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'fromimport'),
- (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'import'),
- include('expr'),
- ],
- 'expr': [
- # raw f-strings
- ('(?i)(rf|fr)(""")',
- bygroups(String.Affix, String.Double),
- combined('rfstringescape', 'tdqf')),
- ("(?i)(rf|fr)(''')",
- bygroups(String.Affix, String.Single),
- combined('rfstringescape', 'tsqf')),
- ('(?i)(rf|fr)(")',
- bygroups(String.Affix, String.Double),
- combined('rfstringescape', 'dqf')),
- ("(?i)(rf|fr)(')",
- bygroups(String.Affix, String.Single),
- combined('rfstringescape', 'sqf')),
- # non-raw f-strings
- ('([fF])(""")', bygroups(String.Affix, String.Double),
- combined('fstringescape', 'tdqf')),
- ("([fF])(''')", bygroups(String.Affix, String.Single),
- combined('fstringescape', 'tsqf')),
- ('([fF])(")', bygroups(String.Affix, String.Double),
- combined('fstringescape', 'dqf')),
- ("([fF])(')", bygroups(String.Affix, String.Single),
- combined('fstringescape', 'sqf')),
- # raw bytes and strings
- ('(?i)(rb|br|r)(""")',
- bygroups(String.Affix, String.Double), 'tdqs'),
- ("(?i)(rb|br|r)(''')",
- bygroups(String.Affix, String.Single), 'tsqs'),
- ('(?i)(rb|br|r)(")',
- bygroups(String.Affix, String.Double), 'dqs'),
- ("(?i)(rb|br|r)(')",
- bygroups(String.Affix, String.Single), 'sqs'),
- # non-raw strings
- ('([uU]?)(""")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'tdqs')),
- ("([uU]?)(''')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'tsqs')),
- ('([uU]?)(")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'dqs')),
- ("([uU]?)(')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'sqs')),
- # non-raw bytes
- ('([bB])(""")', bygroups(String.Affix, String.Double),
- combined('bytesescape', 'tdqs')),
- ("([bB])(''')", bygroups(String.Affix, String.Single),
- combined('bytesescape', 'tsqs')),
- ('([bB])(")', bygroups(String.Affix, String.Double),
- combined('bytesescape', 'dqs')),
- ("([bB])(')", bygroups(String.Affix, String.Single),
- combined('bytesescape', 'sqs')),
-
- (r'[^\S\n]+', Text),
- include('numbers'),
- (r'!=|==|<<|>>|:=|[-~+/*%=<>&^|.]', Operator),
- (r'[]{}:(),;[]', Punctuation),
- (r'(in|is|and|or|not)\b', Operator.Word),
- include('expr-keywords'),
- include('builtins'),
- include('magicfuncs'),
- include('magicvars'),
- include('name'),
- ],
- 'expr-inside-fstring': [
- (r'[{([]', Punctuation, 'expr-inside-fstring-inner'),
- # without format specifier
- (r'(=\s*)?' # debug (https://bugs.python.org/issue36817)
- r'(\![sraf])?' # conversion
- r'\}', String.Interpol, '#pop'),
- # with format specifier
- # we'll catch the remaining '}' in the outer scope
- (r'(=\s*)?' # debug (https://bugs.python.org/issue36817)
- r'(\![sraf])?' # conversion
- r':', String.Interpol, '#pop'),
- (r'\s+', Whitespace), # allow new lines
- include('expr'),
- ],
- 'expr-inside-fstring-inner': [
- (r'[{([]', Punctuation, 'expr-inside-fstring-inner'),
- (r'[])}]', Punctuation, '#pop'),
- (r'\s+', Whitespace), # allow new lines
- include('expr'),
- ],
- 'expr-keywords': [
- # Based on https://docs.python.org/3/reference/expressions.html
- (words((
- 'async for', 'await', 'else', 'for', 'if', 'lambda',
- 'yield', 'yield from'), suffix=r'\b'),
- Keyword),
- (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant),
- ],
- 'keywords': [
- (words((
- 'assert', 'async', 'await', 'break', 'continue', 'del', 'elif',
- 'else', 'except', 'finally', 'for', 'global', 'if', 'lambda',
- 'pass', 'raise', 'nonlocal', 'return', 'try', 'while', 'yield',
- 'yield from', 'as', 'with'), suffix=r'\b'),
- Keyword),
- (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant),
- ],
- 'soft-keywords': [
- # `match`, `case` and `_` soft keywords
- (r'(^[ \t]*)' # at beginning of line + possible indentation
- r'(match|case)\b' # a possible keyword
- r'(?![ \t]*(?:' # not followed by...
- r'[:,;=^&|@~)\]}]|(?:' + # characters and keywords that mean this isn't
- r'|'.join(keyword.kwlist) + r')\b))', # pattern matching
- bygroups(Text, Keyword), 'soft-keywords-inner'),
- ],
- 'soft-keywords-inner': [
- # optional `_` keyword
- (r'(\s+)([^\n_]*)(_\b)', bygroups(Whitespace, using(this), Keyword)),
- default('#pop')
- ],
- 'builtins': [
- (words((
- '__import__', 'abs', 'all', 'any', 'bin', 'bool', 'bytearray',
- 'breakpoint', 'bytes', 'chr', 'classmethod', 'compile', 'complex',
- 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'filter',
- 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr',
- 'hash', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass',
- 'iter', 'len', 'list', 'locals', 'map', 'max', 'memoryview',
- 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print',
- 'property', 'range', 'repr', 'reversed', 'round', 'set', 'setattr',
- 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple',
- 'type', 'vars', 'zip'), prefix=r'(?<!\.)', suffix=r'\b'),
- Name.Builtin),
- include('keywords'),
- (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'),
- (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'),
- (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'fromimport'),
- (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'import'),
- include('builtins'),
- include('magicfuncs'),
- include('magicvars'),
- include('backtick'),
- ('([rR]|[uUbB][rR]|[rR][uUbB])(""")',
- bygroups(String.Affix, String.Double), 'tdqs'),
- ("([rR]|[uUbB][rR]|[rR][uUbB])(''')",
- bygroups(String.Affix, String.Single), 'tsqs'),
- ('([rR]|[uUbB][rR]|[rR][uUbB])(")',
- bygroups(String.Affix, String.Double), 'dqs'),
- ("([rR]|[uUbB][rR]|[rR][uUbB])(')",
- bygroups(String.Affix, String.Single), 'sqs'),
- ('([uUbB]?)(""")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'tdqs')),
- ("([uUbB]?)(''')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'tsqs')),
- ('([uUbB]?)(")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'dqs')),
- ("([uUbB]?)(')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'sqs')),
- include('name'),
- include('numbers'),
- ],
- 'keywords': [
- (words((
- 'assert', 'break', 'continue', 'del', 'elif', 'else', 'except',
- 'exec', 'finally', 'for', 'global', 'if', 'lambda', 'pass',
- 'print', 'raise', 'return', 'try', 'while', 'yield',
- 'yield from', 'as', 'with'), suffix=r'\b'),
- Keyword),
- ],
- 'builtins': [
- (words((
- '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin',
- 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod',
- 'cmp', 'coerce', 'compile', 'complex', 'delattr', 'dict', 'dir', 'divmod',
- 'enumerate', 'eval', 'execfile', 'exit', 'file', 'filter', 'float',
- 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'hex', 'id',
- 'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len',
- 'list', 'locals', 'long', 'map', 'max', 'min', 'next', 'object',
- 'oct', 'open', 'ord', 'pow', 'property', 'range', 'raw_input', 'reduce',
- 'reload', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice',
- 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type',
- 'unichr', 'unicode', 'vars', 'xrange', 'zip'),
- prefix=r'(?>> a = 'foo'
- >>> print a
- foo
- >>> 1 / 0
- Traceback (most recent call last):
- File "<stdin>", line 1, in <module>
- ZeroDivisionError: integer division or modulo by zero
-
- Additional options:
-
- `python3`
- Use Python 3 lexer for code. Default is ``True``.
-
- .. versionadded:: 1.0
- .. versionchanged:: 2.5
- Now defaults to ``True``.
- """
- name = 'Python console session'
- aliases = ['pycon']
- mimetypes = ['text/x-python-doctest']
-
- def __init__(self, **options):
- self.python3 = get_bool_opt(options, 'python3', True)
- Lexer.__init__(self, **options)
-
- def get_tokens_unprocessed(self, text):
- if self.python3:
- pylexer = PythonLexer(**self.options)
- tblexer = PythonTracebackLexer(**self.options)
- else:
- pylexer = Python2Lexer(**self.options)
- tblexer = Python2TracebackLexer(**self.options)
-
- curcode = ''
- insertions = []
- curtb = ''
- tbindex = 0
- tb = 0
- for match in line_re.finditer(text):
- line = match.group()
- if line.startswith('>>> ') or line.startswith('... '):
- tb = 0
- insertions.append((len(curcode),
- [(0, Generic.Prompt, line[:4])]))
- curcode += line[4:]
- elif line.rstrip() == '...' and not tb:
- # only a new >>> prompt can end an exception block
- # otherwise an ellipsis in place of the traceback frames
- # will be mishandled
- insertions.append((len(curcode),
- [(0, Generic.Prompt, '...')]))
- curcode += line[3:]
- else:
- if curcode:
- yield from do_insertions(
- insertions, pylexer.get_tokens_unprocessed(curcode))
- curcode = ''
- insertions = []
- if (line.startswith('Traceback (most recent call last):') or
- re.match(' File "[^"]+", line \\d+\\n$', line)):
- tb = 1
- curtb = line
- tbindex = match.start()
- elif line == 'KeyboardInterrupt\n':
- yield match.start(), Name.Class, line
- elif tb:
- curtb += line
- if not (line.startswith(' ') or line.strip() == '...'):
- tb = 0
- for i, t, v in tblexer.get_tokens_unprocessed(curtb):
- yield tbindex+i, t, v
- curtb = ''
- else:
- yield match.start(), Generic.Output, line
- if curcode:
- yield from do_insertions(insertions,
- pylexer.get_tokens_unprocessed(curcode))
- if curtb:
- for i, t, v in tblexer.get_tokens_unprocessed(curtb):
- yield tbindex+i, t, v
-
-
-class PythonTracebackLexer(RegexLexer):
- """
- For Python 3.x tracebacks, with support for chained exceptions.
-
- .. versionadded:: 1.0
-
- .. versionchanged:: 2.5
- This is now the default ``PythonTracebackLexer``. It is still available
- as the alias ``Python3TracebackLexer``.
- """
-
- name = 'Python Traceback'
- aliases = ['pytb', 'py3tb']
- filenames = ['*.pytb', '*.py3tb']
- mimetypes = ['text/x-python-traceback', 'text/x-python3-traceback']
-
- tokens = {
- 'root': [
- (r'\n', Whitespace),
- (r'^Traceback \(most recent call last\):\n', Generic.Traceback, 'intb'),
- (r'^During handling of the above exception, another '
- r'exception occurred:\n\n', Generic.Traceback),
- (r'^The above exception was the direct cause of the '
- r'following exception:\n\n', Generic.Traceback),
- (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'),
- (r'^.*\n', Other),
- ],
- 'intb': [
- (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Text, Name, Whitespace)),
- (r'^( File )("[^"]+")(, line )(\d+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Whitespace)),
- (r'^( )(.+)(\n)',
- bygroups(Whitespace, using(PythonLexer), Whitespace), 'markers'),
- (r'^([ \t]*)(\.\.\.)(\n)',
- bygroups(Whitespace, Comment, Whitespace)), # for doctests...
- (r'^([^:]+)(: )(.+)(\n)',
- bygroups(Generic.Error, Text, Name, Whitespace), '#pop'),
- (r'^([a-zA-Z_][\w.]*)(:?\n)',
- bygroups(Generic.Error, Whitespace), '#pop')
- ],
- 'markers': [
- # Either `PEP 657 <https://peps.python.org/pep-0657/>`
- # error locations in Python 3.11+, or single-caret markers
- # for syntax errors before that.
- (r'^( {4,})([~^]+)(\n)',
- bygroups(Whitespace, Punctuation.Marker, Whitespace),
- '#pop'),
- default('#pop'),
- ],
- }
-
-
-Python3TracebackLexer = PythonTracebackLexer
-
-
-class Python2TracebackLexer(RegexLexer):
- """
- For Python tracebacks.
-
- .. versionadded:: 0.7
-
- .. versionchanged:: 2.5
- This class has been renamed from ``PythonTracebackLexer``.
- ``PythonTracebackLexer`` now refers to the Python 3 variant.
- """
-
- name = 'Python 2.x Traceback'
- aliases = ['py2tb']
- filenames = ['*.py2tb']
- mimetypes = ['text/x-python2-traceback']
-
- tokens = {
- 'root': [
- # Cover both (most recent call last) and (innermost last)
- # The optional ^C allows us to catch keyboard interrupt signals.
- (r'^(\^C)?(Traceback.*\n)',
- bygroups(Text, Generic.Traceback), 'intb'),
- # SyntaxError starts with this.
- (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'),
- (r'^.*\n', Other),
- ],
- 'intb': [
- (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Text, Name, Whitespace)),
- (r'^( File )("[^"]+")(, line )(\d+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Whitespace)),
- (r'^( )(.+)(\n)',
- bygroups(Text, using(Python2Lexer), Whitespace), 'marker'),
- (r'^([ \t]*)(\.\.\.)(\n)',
- bygroups(Text, Comment, Whitespace)), # for doctests...
- (r'^([^:]+)(: )(.+)(\n)',
- bygroups(Generic.Error, Text, Name, Whitespace), '#pop'),
- (r'^([a-zA-Z_]\w*)(:?\n)',
- bygroups(Generic.Error, Whitespace), '#pop')
- ],
- 'marker': [
- # For syntax errors.
- (r'( {4,})(\^)', bygroups(Text, Punctuation.Marker), '#pop'),
- default('#pop'),
- ],
- }
-
-
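
The `intb` states above recognize traceback frame lines with a single grouped regex. As a standalone sketch (plain `re`, outside Pygments), the same frame pattern splits a line into path, line number, and function name:

```python
import re

# The same frame-line shape the lexer's 'intb' state matches, reused standalone.
# Groups: "  File ", quoted path, ", line ", number, ", in ", function name.
FRAME = re.compile(r'^(  File )("[^"]+")(, line )(\d+)(, in )(.+)$')

line = '  File "app.py", line 12, in main'
match = FRAME.match(line)
assert match is not None
path, lineno, func = match.group(2), match.group(4), match.group(6)
print(path, lineno, func)  # "app.py" 12 main
```

Because each piece is its own group, the lexer can hand every group a different token type via `bygroups`.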
-class CythonLexer(RegexLexer):
- """
- For Pyrex and Cython source code.
-
- .. versionadded:: 1.1
- """
-
- name = 'Cython'
- url = 'http://cython.org'
- aliases = ['cython', 'pyx', 'pyrex']
- filenames = ['*.pyx', '*.pxd', '*.pxi']
- mimetypes = ['text/x-cython', 'application/x-cython']
-
- tokens = {
- 'root': [
- (r'\n', Whitespace),
- (r'^(\s*)("""(?:.|\n)*?""")', bygroups(Whitespace, String.Doc)),
- (r"^(\s*)('''(?:.|\n)*?''')", bygroups(Whitespace, String.Doc)),
- (r'[^\S\n]+', Text),
- (r'#.*$', Comment),
- (r'[]{}:(),;[]', Punctuation),
- (r'\\\n', Whitespace),
- (r'\\', Text),
- (r'(in|is|and|or|not)\b', Operator.Word),
- (r'(<)([a-zA-Z0-9.?]+)(>)',
- bygroups(Punctuation, Keyword.Type, Punctuation)),
- (r'!=|==|<<|>>|[-~+/*%=<>&^|.?]', Operator),
- (r'(from)(\d+)(<=)(\s+)(<)(\d+)(:)',
- bygroups(Keyword, Number.Integer, Operator, Name, Operator,
- Name, Punctuation)),
- include('keywords'),
- (r'(def|property)(\s+)', bygroups(Keyword, Text), 'funcname'),
- (r'(cp?def)(\s+)', bygroups(Keyword, Text), 'cdef'),
- # (should actually start a block with only cdefs)
- (r'(cdef)(:)', bygroups(Keyword, Punctuation)),
- (r'(class|struct)(\s+)', bygroups(Keyword, Text), 'classname'),
- (r'(from)(\s+)', bygroups(Keyword, Text), 'fromimport'),
- (r'(c?import)(\s+)', bygroups(Keyword, Text), 'import'),
- include('builtins'),
- include('backtick'),
- ('(?:[rR]|[uU][rR]|[rR][uU])"""', String, 'tdqs'),
- ("(?:[rR]|[uU][rR]|[rR][uU])'''", String, 'tsqs'),
- ('(?:[rR]|[uU][rR]|[rR][uU])"', String, 'dqs'),
- ("(?:[rR]|[uU][rR]|[rR][uU])'", String, 'sqs'),
- ('[uU]?"""', String, combined('stringescape', 'tdqs')),
- ("[uU]?'''", String, combined('stringescape', 'tsqs')),
- ('[uU]?"', String, combined('stringescape', 'dqs')),
- ("[uU]?'", String, combined('stringescape', 'sqs')),
- include('name'),
- include('numbers'),
- ],
- 'keywords': [
- (words((
- 'assert', 'async', 'await', 'break', 'by', 'continue', 'ctypedef', 'del', 'elif',
- 'else', 'except', 'except?', 'exec', 'finally', 'for', 'fused', 'gil',
- 'global', 'if', 'include', 'lambda', 'nogil', 'pass', 'print',
- 'raise', 'return', 'try', 'while', 'yield', 'as', 'with'), suffix=r'\b'),
- Keyword),
- (r'(DEF|IF|ELIF|ELSE)\b', Comment.Preproc),
- ],
- 'builtins': [
- (words((
- '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', 'bint',
- 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr',
- 'classmethod', 'cmp', 'coerce', 'compile', 'complex', 'delattr',
- 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'execfile', 'exit',
- 'file', 'filter', 'float', 'frozenset', 'getattr', 'globals',
- 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'intern', 'isinstance',
- 'issubclass', 'iter', 'len', 'list', 'locals', 'long', 'map', 'max',
- 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'property', 'Py_ssize_t',
- 'range', 'raw_input', 'reduce', 'reload', 'repr', 'reversed',
- 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod',
- 'str', 'sum', 'super', 'tuple', 'type', 'unichr', 'unicode', 'unsigned',
- 'vars', 'xrange', 'zip'), prefix=r'(? str:
- return f"ColorSystem.{self.name}"
-
- def __str__(self) -> str:
- return repr(self)
-
-
-class ColorType(IntEnum):
- """Type of color stored in Color class."""
-
- DEFAULT = 0
- STANDARD = 1
- EIGHT_BIT = 2
- TRUECOLOR = 3
- WINDOWS = 4
-
- def __repr__(self) -> str:
- return f"ColorType.{self.name}"
-
-
-ANSI_COLOR_NAMES = {
- "black": 0,
- "red": 1,
- "green": 2,
- "yellow": 3,
- "blue": 4,
- "magenta": 5,
- "cyan": 6,
- "white": 7,
- "bright_black": 8,
- "bright_red": 9,
- "bright_green": 10,
- "bright_yellow": 11,
- "bright_blue": 12,
- "bright_magenta": 13,
- "bright_cyan": 14,
- "bright_white": 15,
- "grey0": 16,
- "gray0": 16,
- "navy_blue": 17,
- "dark_blue": 18,
- "blue3": 20,
- "blue1": 21,
- "dark_green": 22,
- "deep_sky_blue4": 25,
- "dodger_blue3": 26,
- "dodger_blue2": 27,
- "green4": 28,
- "spring_green4": 29,
- "turquoise4": 30,
- "deep_sky_blue3": 32,
- "dodger_blue1": 33,
- "green3": 40,
- "spring_green3": 41,
- "dark_cyan": 36,
- "light_sea_green": 37,
- "deep_sky_blue2": 38,
- "deep_sky_blue1": 39,
- "spring_green2": 47,
- "cyan3": 43,
- "dark_turquoise": 44,
- "turquoise2": 45,
- "green1": 46,
- "spring_green1": 48,
- "medium_spring_green": 49,
- "cyan2": 50,
- "cyan1": 51,
- "dark_red": 88,
- "deep_pink4": 125,
- "purple4": 55,
- "purple3": 56,
- "blue_violet": 57,
- "orange4": 94,
- "grey37": 59,
- "gray37": 59,
- "medium_purple4": 60,
- "slate_blue3": 62,
- "royal_blue1": 63,
- "chartreuse4": 64,
- "dark_sea_green4": 71,
- "pale_turquoise4": 66,
- "steel_blue": 67,
- "steel_blue3": 68,
- "cornflower_blue": 69,
- "chartreuse3": 76,
- "cadet_blue": 73,
- "sky_blue3": 74,
- "steel_blue1": 81,
- "pale_green3": 114,
- "sea_green3": 78,
- "aquamarine3": 79,
- "medium_turquoise": 80,
- "chartreuse2": 112,
- "sea_green2": 83,
- "sea_green1": 85,
- "aquamarine1": 122,
- "dark_slate_gray2": 87,
- "dark_magenta": 91,
- "dark_violet": 128,
- "purple": 129,
- "light_pink4": 95,
- "plum4": 96,
- "medium_purple3": 98,
- "slate_blue1": 99,
- "yellow4": 106,
- "wheat4": 101,
- "grey53": 102,
- "gray53": 102,
- "light_slate_grey": 103,
- "light_slate_gray": 103,
- "medium_purple": 104,
- "light_slate_blue": 105,
- "dark_olive_green3": 149,
- "dark_sea_green": 108,
- "light_sky_blue3": 110,
- "sky_blue2": 111,
- "dark_sea_green3": 150,
- "dark_slate_gray3": 116,
- "sky_blue1": 117,
- "chartreuse1": 118,
- "light_green": 120,
- "pale_green1": 156,
- "dark_slate_gray1": 123,
- "red3": 160,
- "medium_violet_red": 126,
- "magenta3": 164,
- "dark_orange3": 166,
- "indian_red": 167,
- "hot_pink3": 168,
- "medium_orchid3": 133,
- "medium_orchid": 134,
- "medium_purple2": 140,
- "dark_goldenrod": 136,
- "light_salmon3": 173,
- "rosy_brown": 138,
- "grey63": 139,
- "gray63": 139,
- "medium_purple1": 141,
- "gold3": 178,
- "dark_khaki": 143,
- "navajo_white3": 144,
- "grey69": 145,
- "gray69": 145,
- "light_steel_blue3": 146,
- "light_steel_blue": 147,
- "yellow3": 184,
- "dark_sea_green2": 157,
- "light_cyan3": 152,
- "light_sky_blue1": 153,
- "green_yellow": 154,
- "dark_olive_green2": 155,
- "dark_sea_green1": 193,
- "pale_turquoise1": 159,
- "deep_pink3": 162,
- "magenta2": 200,
- "hot_pink2": 169,
- "orchid": 170,
- "medium_orchid1": 207,
- "orange3": 172,
- "light_pink3": 174,
- "pink3": 175,
- "plum3": 176,
- "violet": 177,
- "light_goldenrod3": 179,
- "tan": 180,
- "misty_rose3": 181,
- "thistle3": 182,
- "plum2": 183,
- "khaki3": 185,
- "light_goldenrod2": 222,
- "light_yellow3": 187,
- "grey84": 188,
- "gray84": 188,
- "light_steel_blue1": 189,
- "yellow2": 190,
- "dark_olive_green1": 192,
- "honeydew2": 194,
- "light_cyan1": 195,
- "red1": 196,
- "deep_pink2": 197,
- "deep_pink1": 199,
- "magenta1": 201,
- "orange_red1": 202,
- "indian_red1": 204,
- "hot_pink": 206,
- "dark_orange": 208,
- "salmon1": 209,
- "light_coral": 210,
- "pale_violet_red1": 211,
- "orchid2": 212,
- "orchid1": 213,
- "orange1": 214,
- "sandy_brown": 215,
- "light_salmon1": 216,
- "light_pink1": 217,
- "pink1": 218,
- "plum1": 219,
- "gold1": 220,
- "navajo_white1": 223,
- "misty_rose1": 224,
- "thistle1": 225,
- "yellow1": 226,
- "light_goldenrod1": 227,
- "khaki1": 228,
- "wheat1": 229,
- "cornsilk1": 230,
- "grey100": 231,
- "gray100": 231,
- "grey3": 232,
- "gray3": 232,
- "grey7": 233,
- "gray7": 233,
- "grey11": 234,
- "gray11": 234,
- "grey15": 235,
- "gray15": 235,
- "grey19": 236,
- "gray19": 236,
- "grey23": 237,
- "gray23": 237,
- "grey27": 238,
- "gray27": 238,
- "grey30": 239,
- "gray30": 239,
- "grey35": 240,
- "gray35": 240,
- "grey39": 241,
- "gray39": 241,
- "grey42": 242,
- "gray42": 242,
- "grey46": 243,
- "gray46": 243,
- "grey50": 244,
- "gray50": 244,
- "grey54": 245,
- "gray54": 245,
- "grey58": 246,
- "gray58": 246,
- "grey62": 247,
- "gray62": 247,
- "grey66": 248,
- "gray66": 248,
- "grey70": 249,
- "gray70": 249,
- "grey74": 250,
- "gray74": 250,
- "grey78": 251,
- "gray78": 251,
- "grey82": 252,
- "gray82": 252,
- "grey85": 253,
- "gray85": 253,
- "grey89": 254,
- "gray89": 254,
- "grey93": 255,
- "gray93": 255,
-}
-
-
-class ColorParseError(Exception):
- """The color could not be parsed."""
-
-
-RE_COLOR = re.compile(
- r"""^
-\#([0-9a-f]{6})$|
-color\(([0-9]{1,3})\)$|
-rgb\(([\d\s,]+)\)$
-""",
- re.VERBOSE,
-)
-
-
-@rich_repr
-class Color(NamedTuple):
- """Terminal color definition."""
-
- name: str
- """The name of the color (typically the input to Color.parse)."""
- type: ColorType
- """The type of the color."""
- number: Optional[int] = None
- """The color number, if a standard color, or None."""
- triplet: Optional[ColorTriplet] = None
- """A triplet of color components, if an RGB color."""
-
- def __rich__(self) -> "Text":
- """Displays the actual color if Rich printed."""
- from .style import Style
- from .text import Text
-
- return Text.assemble(
- f"",
- )
-
- def __rich_repr__(self) -> Result:
- yield self.name
- yield self.type
- yield "number", self.number, None
- yield "triplet", self.triplet, None
-
- @property
- def system(self) -> ColorSystem:
- """Get the native color system for this color."""
- if self.type == ColorType.DEFAULT:
- return ColorSystem.STANDARD
- return ColorSystem(int(self.type))
-
- @property
- def is_system_defined(self) -> bool:
- """Check if the color is ultimately defined by the system."""
- return self.system not in (ColorSystem.EIGHT_BIT, ColorSystem.TRUECOLOR)
-
- @property
- def is_default(self) -> bool:
- """Check if the color is a default color."""
- return self.type == ColorType.DEFAULT
-
- def get_truecolor(
- self, theme: Optional["TerminalTheme"] = None, foreground: bool = True
- ) -> ColorTriplet:
- """Get an equivalent color triplet for this color.
-
- Args:
- theme (TerminalTheme, optional): Optional terminal theme, or None to use default. Defaults to None.
- foreground (bool, optional): True for a foreground color, or False for background. Defaults to True.
-
- Returns:
- ColorTriplet: A color triplet containing RGB components.
- """
-
- if theme is None:
- theme = DEFAULT_TERMINAL_THEME
- if self.type == ColorType.TRUECOLOR:
- assert self.triplet is not None
- return self.triplet
- elif self.type == ColorType.EIGHT_BIT:
- assert self.number is not None
- return EIGHT_BIT_PALETTE[self.number]
- elif self.type == ColorType.STANDARD:
- assert self.number is not None
- return theme.ansi_colors[self.number]
- elif self.type == ColorType.WINDOWS:
- assert self.number is not None
- return WINDOWS_PALETTE[self.number]
- else: # self.type == ColorType.DEFAULT:
- assert self.number is None
- return theme.foreground_color if foreground else theme.background_color
-
- @classmethod
- def from_ansi(cls, number: int) -> "Color":
- """Create a Color number from it's 8-bit ansi number.
-
- Args:
- number (int): A number between 0-255 inclusive.
-
- Returns:
- Color: A new Color instance.
- """
- return cls(
- name=f"color({number})",
- type=(ColorType.STANDARD if number < 16 else ColorType.EIGHT_BIT),
- number=number,
- )
-
- @classmethod
- def from_triplet(cls, triplet: "ColorTriplet") -> "Color":
- """Create a truecolor RGB color from a triplet of values.
-
- Args:
- triplet (ColorTriplet): A color triplet containing red, green and blue components.
-
- Returns:
- Color: A new color object.
- """
- return cls(name=triplet.hex, type=ColorType.TRUECOLOR, triplet=triplet)
-
- @classmethod
- def from_rgb(cls, red: float, green: float, blue: float) -> "Color":
- """Create a truecolor from three color components in the range(0->255).
-
- Args:
- red (float): Red component in range 0-255.
- green (float): Green component in range 0-255.
- blue (float): Blue component in range 0-255.
-
- Returns:
- Color: A new color object.
- """
- return cls.from_triplet(ColorTriplet(int(red), int(green), int(blue)))
-
- @classmethod
- def default(cls) -> "Color":
- """Get a Color instance representing the default color.
-
- Returns:
- Color: Default color.
- """
- return cls(name="default", type=ColorType.DEFAULT)
-
- @classmethod
- @lru_cache(maxsize=1024)
- def parse(cls, color: str) -> "Color":
- """Parse a color definition."""
- original_color = color
- color = color.lower().strip()
-
- if color == "default":
- return cls(color, type=ColorType.DEFAULT)
-
- color_number = ANSI_COLOR_NAMES.get(color)
- if color_number is not None:
- return cls(
- color,
- type=(ColorType.STANDARD if color_number < 16 else ColorType.EIGHT_BIT),
- number=color_number,
- )
-
- color_match = RE_COLOR.match(color)
- if color_match is None:
- raise ColorParseError(f"{original_color!r} is not a valid color")
-
- color_24, color_8, color_rgb = color_match.groups()
- if color_24:
- triplet = ColorTriplet(
- int(color_24[0:2], 16), int(color_24[2:4], 16), int(color_24[4:6], 16)
- )
- return cls(color, ColorType.TRUECOLOR, triplet=triplet)
-
- elif color_8:
- number = int(color_8)
- if number > 255:
- raise ColorParseError(f"color number must be <= 255 in {color!r}")
- return cls(
- color,
- type=(ColorType.STANDARD if number < 16 else ColorType.EIGHT_BIT),
- number=number,
- )
-
- else: # color_rgb:
- components = color_rgb.split(",")
- if len(components) != 3:
- raise ColorParseError(
- f"expected three components in {original_color!r}"
- )
- red, green, blue = components
- triplet = ColorTriplet(int(red), int(green), int(blue))
- if not all(component <= 255 for component in triplet):
- raise ColorParseError(
- f"color components must be <= 255 in {original_color!r}"
- )
- return cls(color, ColorType.TRUECOLOR, triplet=triplet)
-
- @lru_cache(maxsize=1024)
- def get_ansi_codes(self, foreground: bool = True) -> Tuple[str, ...]:
- """Get the ANSI escape codes for this color."""
- _type = self.type
- if _type == ColorType.DEFAULT:
- return ("39" if foreground else "49",)
-
- elif _type == ColorType.WINDOWS:
- number = self.number
- assert number is not None
- fore, back = (30, 40) if number < 8 else (82, 92)
- return (str(fore + number if foreground else back + number),)
-
- elif _type == ColorType.STANDARD:
- number = self.number
- assert number is not None
- fore, back = (30, 40) if number < 8 else (82, 92)
- return (str(fore + number if foreground else back + number),)
-
- elif _type == ColorType.EIGHT_BIT:
- assert self.number is not None
- return ("38" if foreground else "48", "5", str(self.number))
-
- else: # self.standard == ColorStandard.TRUECOLOR:
- assert self.triplet is not None
- red, green, blue = self.triplet
- return ("38" if foreground else "48", "2", str(red), str(green), str(blue))
-
- @lru_cache(maxsize=1024)
- def downgrade(self, system: ColorSystem) -> "Color":
- """Downgrade a color system to a system with fewer colors."""
-
- if self.type in (ColorType.DEFAULT, system):
- return self
- # Convert to 8-bit color from truecolor color
- if system == ColorSystem.EIGHT_BIT and self.system == ColorSystem.TRUECOLOR:
- assert self.triplet is not None
- _h, l, s = rgb_to_hls(*self.triplet.normalized)
- # If saturation is under 15% assume it is grayscale
- if s < 0.15:
- gray = round(l * 25.0)
- if gray == 0:
- color_number = 16
- elif gray == 25:
- color_number = 231
- else:
- color_number = 231 + gray
- return Color(self.name, ColorType.EIGHT_BIT, number=color_number)
-
- red, green, blue = self.triplet
- six_red = red / 95 if red < 95 else 1 + (red - 95) / 40
- six_green = green / 95 if green < 95 else 1 + (green - 95) / 40
- six_blue = blue / 95 if blue < 95 else 1 + (blue - 95) / 40
-
- color_number = (
- 16 + 36 * round(six_red) + 6 * round(six_green) + round(six_blue)
- )
- return Color(self.name, ColorType.EIGHT_BIT, number=color_number)
-
- # Convert to standard from truecolor or 8-bit
- elif system == ColorSystem.STANDARD:
- if self.system == ColorSystem.TRUECOLOR:
- assert self.triplet is not None
- triplet = self.triplet
- else: # self.system == ColorSystem.EIGHT_BIT
- assert self.number is not None
- triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number])
-
- color_number = STANDARD_PALETTE.match(triplet)
- return Color(self.name, ColorType.STANDARD, number=color_number)
-
- elif system == ColorSystem.WINDOWS:
- if self.system == ColorSystem.TRUECOLOR:
- assert self.triplet is not None
- triplet = self.triplet
- else: # self.system == ColorSystem.EIGHT_BIT
- assert self.number is not None
- if self.number < 16:
- return Color(self.name, ColorType.WINDOWS, number=self.number)
- triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number])
-
- color_number = WINDOWS_PALETTE.match(triplet)
- return Color(self.name, ColorType.WINDOWS, number=color_number)
-
- return self
-
-
-def parse_rgb_hex(hex_color: str) -> ColorTriplet:
- """Parse six hex characters in to RGB triplet."""
- assert len(hex_color) == 6, "must be 6 characters"
- color = ColorTriplet(
- int(hex_color[0:2], 16), int(hex_color[2:4], 16), int(hex_color[4:6], 16)
- )
- return color
-
-
-def blend_rgb(
- color1: ColorTriplet, color2: ColorTriplet, cross_fade: float = 0.5
-) -> ColorTriplet:
- """Blend one RGB color in to another."""
- r1, g1, b1 = color1
- r2, g2, b2 = color2
- new_color = ColorTriplet(
- int(r1 + (r2 - r1) * cross_fade),
- int(g1 + (g2 - g1) * cross_fade),
- int(b1 + (b2 - b1) * cross_fade),
- )
- return new_color
-
-
-if __name__ == "__main__": # pragma: no cover
-
- from .console import Console
- from .table import Table
- from .text import Text
-
- console = Console()
-
- table = Table(show_footer=False, show_edge=True)
- table.add_column("Color", width=10, overflow="ellipsis")
- table.add_column("Number", justify="right", style="yellow")
- table.add_column("Name", style="green")
- table.add_column("Hex", style="blue")
- table.add_column("RGB", style="magenta")
-
- colors = sorted((v, k) for k, v in ANSI_COLOR_NAMES.items())
- for color_number, name in colors:
- if "grey" in name:
- continue
- color_cell = Text(" " * 10, style=f"on {name}")
- if color_number < 16:
- table.add_row(color_cell, f"{color_number}", Text(f'"{name}"'))
- else:
- color = EIGHT_BIT_PALETTE[color_number] # type: ignore[has-type]
- table.add_row(
- color_cell, str(color_number), Text(f'"{name}"'), color.hex, color.rgb
- )
-
- console.print(table)
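
The truecolor-to-8-bit branch of `Color.downgrade` above combines two mappings: near-gray colors snap onto the 24-step grayscale ramp (indices 232-255), and everything else snaps onto the 6x6x6 color cube (indices 16-231). A minimal pure-Python sketch of that arithmetic, with the function name being mine rather than Rich's API:

```python
from colorsys import rgb_to_hls

def downgrade_to_eight_bit(red: int, green: int, blue: int) -> int:
    """Map an RGB triplet (0-255 per channel) to a 256-color palette index,
    mirroring the grayscale-ramp / 6x6x6-cube logic in Color.downgrade."""
    _h, lightness, saturation = rgb_to_hls(red / 255, green / 255, blue / 255)
    if saturation < 0.15:  # low saturation: treat as grayscale
        gray = round(lightness * 25.0)
        if gray == 0:
            return 16            # pure black lives in the color cube
        if gray == 25:
            return 231           # pure white likewise
        return 231 + gray        # ramp indices 232..255

    # Otherwise snap each channel onto the 6-level cube axis. The cube levels
    # are 0, 95, 135, 175, 215, 255, hence the 95/40 breakpoints.
    def six(value: int) -> int:
        return round(value / 95 if value < 95 else 1 + (value - 95) / 40)

    return 16 + 36 * six(red) + 6 * six(green) + six(blue)

print(downgrade_to_eight_bit(0, 0, 0))        # 16
print(downgrade_to_eight_bit(255, 255, 255))  # 231
print(downgrade_to_eight_bit(255, 0, 0))      # 196, "red1" in the table above
```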
diff --git a/spaces/CVPR/LIVE/pybind11/tools/clang/__init__.py b/spaces/CVPR/LIVE/pybind11/tools/clang/__init__.py
deleted file mode 100644
index 88f30812383f8ebdcf095566500b1ecc78c92710..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tools/clang/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-#===- __init__.py - Clang Python Bindings --------------------*- python -*--===#
-#
-# The LLVM Compiler Infrastructure
-#
-# This file is distributed under the University of Illinois Open Source
-# License. See LICENSE.TXT for details.
-#
-#===------------------------------------------------------------------------===#
-
-r"""
-Clang Library Bindings
-======================
-
-This package provides access to the Clang compiler and libraries.
-
-The available modules are:
-
- cindex
-
- Bindings for the Clang indexing library.
-"""
-
-__all__ = ['cindex']
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/execution_policy.h
deleted file mode 100644
index 18f68bfdc6c544ecf0ab9ad8562632ec73c8c95c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/execution_policy.h
+++ /dev/null
@@ -1,156 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-/*! \file thrust/system/tbb/execution_policy.h
- * \brief Execution policies for Thrust's TBB system.
- */
-
-#include
-
-// get the execution policies definitions first
-#include
-
-// get the definition of par
-#include
-
-// now get all the algorithm definitions
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-
-// define these entities here for the purpose of Doxygenating them
-// they are actually defined elsewhere
-#if 0
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-
-
-/*! \addtogroup execution_policies
- * \{
- */
-
-
-/*! \p thrust::tbb::execution_policy is the base class for all Thrust parallel execution
- * policies which are derived from Thrust's TBB backend system.
- */
-template<typename DerivedPolicy>
-struct execution_policy : thrust::execution_policy<DerivedPolicy>
-{};
-
-
-/*! \p tbb::tag is a type representing Thrust's TBB backend system in C++'s type system.
- * Iterators "tagged" with a type which is convertible to \p tbb::tag assert that they may be
- * "dispatched" to algorithm implementations in the \p tbb system.
- */
-struct tag : thrust::system::tbb::execution_policy<tag> { unspecified };
-
-
-/*! \p thrust::tbb::par is the parallel execution policy associated with Thrust's TBB
- * backend system.
- *
- * Instead of relying on implicit algorithm dispatch through iterator system tags, users may
- * directly target Thrust's TBB backend system by providing \p thrust::tbb::par as an algorithm
- * parameter.
- *
- * Explicit dispatch can be useful in avoiding the introduction of data copies into containers such
- * as \p thrust::tbb::vector.
- *
- * The type of \p thrust::tbb::par is implementation-defined.
- *
- * The following code snippet demonstrates how to use \p thrust::tbb::par to explicitly dispatch an
- * invocation of \p thrust::for_each to the TBB backend system:
- *
- * \code
- * #include
- * #include
- * #include
- *
- * struct printf_functor
- * {
- * __host__ __device__
- * void operator()(int x)
- * {
- * printf("%d\n", x);
- * }
- * };
- * ...
- * int vec[3];
- * vec[0] = 0; vec[1] = 1; vec[2] = 2;
- *
- * thrust::for_each(thrust::tbb::par, vec, vec + 3, printf_functor());
- *
- * // 0 1 2 is printed to standard output in some unspecified order
- * \endcode
- */
-static const unspecified par;
-
-
-/*! \}
- */
-
-
-} // end tbb
-} // end system
-} // end thrust
-#endif
-
-
diff --git a/spaces/CVPR/drawings-to-human/frontend/src/app.d.ts b/spaces/CVPR/drawings-to-human/frontend/src/app.d.ts
deleted file mode 100644
index 9ca010a6dc91bcfac91c85f03a2dcddef3a379d9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/drawings-to-human/frontend/src/app.d.ts
+++ /dev/null
@@ -1,16 +0,0 @@
-/// <reference types="@sveltejs/kit" />
-
-// See https://kit.svelte.dev/docs/types#app
-// for information about these interfaces
-// and what to do when importing types
-declare namespace App {
- interface Locals {
- userid: string;
- }
-
- // interface Platform {}
-
- // interface Session {}
-
- // interface Stuff {}
-}
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/__init__.py
deleted file mode 100644
index 3f4e4df7645c67b7a013295207b98fe70b2e574c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .build import PROPOSAL_GENERATOR_REGISTRY, build_proposal_generator
-from .rpn import RPN_HEAD_REGISTRY, build_rpn_head, RPN, StandardRPNHead
-
-__all__ = list(globals().keys())
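
The imports above expose `PROPOSAL_GENERATOR_REGISTRY` and `RPN_HEAD_REGISTRY`, which rely on a decorator-registration idiom: classes register themselves by name so configs can look them up later. A minimal sketch of that idiom (an illustration only, not detectron2's actual `Registry` implementation):

```python
class Registry:
    """Minimal name -> class registry, sketching the decorator idiom
    behind PROPOSAL_GENERATOR_REGISTRY (hypothetical reimplementation)."""

    def __init__(self, name: str) -> None:
        self._name = name
        self._obj_map: dict = {}

    def register(self):
        def deco(cls):
            if cls.__name__ in self._obj_map:
                raise KeyError(f"{cls.__name__} already registered in {self._name}")
            self._obj_map[cls.__name__] = cls
            return cls
        return deco

    def get(self, name: str):
        return self._obj_map[name]

PROPOSAL_GENERATORS = Registry("PROPOSAL_GENERATOR")

@PROPOSAL_GENERATORS.register()
class RPN:
    pass

# A builder can now construct the generator named in a config string.
print(PROPOSAL_GENERATORS.get("RPN") is RPN)  # True
```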
diff --git a/spaces/CjangCjengh/Sanskrit-TTS/modules.py b/spaces/CjangCjengh/Sanskrit-TTS/modules.py
deleted file mode 100644
index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000
--- a/spaces/CjangCjengh/Sanskrit-TTS/modules.py
+++ /dev/null
@@ -1,387 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
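
Both `DDSConv` and `WN` below compute `padding = (kernel_size * dilation - dilation) // 2`, which is exactly the "same" padding that keeps the sequence length unchanged for odd kernel sizes under any dilation. A quick pure-Python check using the standard stride-1 Conv1d output-length formula:

```python
def conv_out_len(n: int, kernel_size: int, dilation: int, padding: int) -> int:
    # Standard stride-1 Conv1d output length: n + 2p - dilation * (k - 1)
    return n + 2 * padding - dilation * (kernel_size - 1)

n = 100
for kernel_size in (3, 5):
    for i in range(4):                  # dilation grows per layer, as in WN
        dilation = 2 ** i
        padding = (kernel_size * dilation - dilation) // 2
        assert conv_out_len(n, kernel_size, dilation, padding) == n
print("same-length padding verified")
```

Since `dilation * (kernel_size - 1)` is even for odd kernels, the padding splits evenly and the output length matches the input exactly.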
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, c*(num_bins*3-1), t] -> [b, c, t, num_bins*3-1]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
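
The flow layers deleted above (Log, Flip, ElementwiseAffine, ResidualCouplingLayer, ConvFlow) all share one contract: the forward pass returns `(y, logdet)` and `reverse=True` applies the exact inverse. A minimal pure-Python sketch of the elementwise affine case (illustrative only; function names are ours, no torch involved):

```python
import math

def affine_forward(x, m, logs):
    """y = m + exp(logs) * x; logdet is the sum of logs over all elements,
    mirroring ElementwiseAffine.forward (mask omitted for brevity)."""
    y = [mi + math.exp(li) * xi for xi, mi, li in zip(x, m, logs)]
    return y, sum(logs)

def affine_inverse(y, m, logs):
    """Exact inverse: x = (y - m) * exp(-logs), as in the reverse branch."""
    return [(yi - mi) * math.exp(-li) for yi, mi, li in zip(y, m, logs)]

x = [0.5, -1.0, 2.0]
m = [0.1, 0.2, 0.3]
logs = [0.0, 0.5, -0.5]
y, logdet = affine_forward(x, m, logs)
x_rec = affine_inverse(y, m, logs)
assert all(abs(a - b) < 1e-9 for a, b in zip(x, x_rec))  # round-trips exactly
```

The coupling layers follow the same pattern, just with `m` and `logs` predicted from the untouched half `x0` instead of being free parameters.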
diff --git a/spaces/CofAI/chat/client/js/change-language.js b/spaces/CofAI/chat/client/js/change-language.js
deleted file mode 100644
index ce87f6f60c7a9acca5e1902612930ef677f3fb65..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/js/change-language.js
+++ /dev/null
@@ -1,47 +0,0 @@
-document.addEventListener('DOMContentLoaded', fetchLanguages);
-
-async function fetchLanguages() {
- try {
- const [languagesResponse, currentLanguageResponse] = await Promise.all([
- fetch(`${url_prefix}/get-languages`),
- fetch(`${url_prefix}/get-locale`)
- ]);
-
- const languages = await languagesResponse.json();
- const currentLanguage = await currentLanguageResponse.text();
-
- const languageSelect = document.getElementById('language');
- languages.forEach(lang => {
- const option = document.createElement('option');
- option.value = lang;
- option.textContent = lang;
- languageSelect.appendChild(option);
- });
-
- const savedLanguage = localStorage.getItem("language") || currentLanguage;
- setLanguageOnPageLoad(savedLanguage);
- } catch (error) {
-    console.error("Failed to fetch languages or current language:", error);
- }
-}
-
-function setLanguageOnPageLoad(language) {
- document.getElementById("language").value = language;
-}
-
-function changeLanguage(lang) {
- fetch(`${url_prefix}/change-language`, {
- method: "POST",
- headers: {
- "Content-Type": "application/json",
- },
- body: JSON.stringify({ language: lang }),
- }).then((response) => {
- if (response.ok) {
- localStorage.setItem("language", lang);
- location.reload();
- } else {
- console.error("Failed to change language");
- }
- });
-}
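
The deleted change-language.js assumes three server routes: GET `/get-languages` returning a JSON list, GET `/get-locale` returning plain text, and POST `/change-language` accepting `{"language": ...}`. A minimal stdlib sketch of that contract (handler class and in-memory state are hypothetical stand-ins, not the Space's actual backend):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-ins for whatever state the real server keeps.
LANGUAGES = ["en", "ru", "zh"]
STATE = {"locale": "en"}

class LocaleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith("/get-languages"):
            body, ctype = json.dumps(LANGUAGES).encode(), "application/json"
        elif self.path.endswith("/get-locale"):
            body, ctype = STATE["locale"].encode(), "text/plain"
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        if self.path.endswith("/change-language"):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            STATE["locale"] = payload["language"]  # persist the choice
            self.send_response(200)
            self.end_headers()
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep request logging quiet
```

The client then only needs `localStorage` as a faster local cache of the same value, falling back to `/get-locale` on first visit.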
diff --git a/spaces/Cran-May/ygVI/README.md b/spaces/Cran-May/ygVI/README.md
deleted file mode 100644
index e200bb56372153cca590ddbda9d15e390efe523d..0000000000000000000000000000000000000000
--- a/spaces/Cran-May/ygVI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 玉刚六号改-Chat
-emoji: 🌍
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Cvandi/remake/scripts/extract_subimages.py b/spaces/Cvandi/remake/scripts/extract_subimages.py
deleted file mode 100644
index 9b969ae0d4adff403f2ad362b9afaaaee58e2cef..0000000000000000000000000000000000000000
--- a/spaces/Cvandi/remake/scripts/extract_subimages.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import argparse
-import cv2
-import numpy as np
-import os
-import sys
-from basicsr.utils import scandir
-from multiprocessing import Pool
-from os import path as osp
-from tqdm import tqdm
-
-
-def main(args):
- """A multi-thread tool to crop large images to sub-images for faster IO.
-
- opt (dict): Configuration dict. It contains:
- n_thread (int): Thread number.
-        compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller file size
-            but a longer compression time. Use 0 for faster CPU decompression. Default: 3 (the cv2 default).
- input_folder (str): Path to the input folder.
- save_folder (str): Path to save folder.
- crop_size (int): Crop size.
- step (int): Step for overlapped sliding window.
- thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
-
- Usage:
- For each folder, run this script.
- Typically, there are GT folder and LQ folder to be processed for DIV2K dataset.
- After process, each sub_folder should have the same number of subimages.
- Remember to modify opt configurations according to your settings.
- """
-
- opt = {}
- opt['n_thread'] = args.n_thread
- opt['compression_level'] = args.compression_level
- opt['input_folder'] = args.input
- opt['save_folder'] = args.output
- opt['crop_size'] = args.crop_size
- opt['step'] = args.step
- opt['thresh_size'] = args.thresh_size
- extract_subimages(opt)
-
-
-def extract_subimages(opt):
- """Crop images to subimages.
-
- Args:
- opt (dict): Configuration dict. It contains:
- input_folder (str): Path to the input folder.
- save_folder (str): Path to save folder.
- n_thread (int): Thread number.
- """
- input_folder = opt['input_folder']
- save_folder = opt['save_folder']
- if not osp.exists(save_folder):
- os.makedirs(save_folder)
- print(f'mkdir {save_folder} ...')
- else:
- print(f'Folder {save_folder} already exists. Exit.')
- sys.exit(1)
-
- # scan all images
- img_list = list(scandir(input_folder, full_path=True))
-
- pbar = tqdm(total=len(img_list), unit='image', desc='Extract')
- pool = Pool(opt['n_thread'])
- for path in img_list:
- pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1))
- pool.close()
- pool.join()
- pbar.close()
- print('All processes done.')
-
-
-def worker(path, opt):
- """Worker for each process.
-
- Args:
- path (str): Image path.
- opt (dict): Configuration dict. It contains:
- crop_size (int): Crop size.
- step (int): Step for overlapped sliding window.
- thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
- save_folder (str): Path to save folder.
- compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION.
-
- Returns:
- process_info (str): Process information displayed in progress bar.
- """
- crop_size = opt['crop_size']
- step = opt['step']
- thresh_size = opt['thresh_size']
- img_name, extension = osp.splitext(osp.basename(path))
-
- # remove the x2, x3, x4 and x8 in the filename for DIV2K
- img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '')
-
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
-
- h, w = img.shape[0:2]
- h_space = np.arange(0, h - crop_size + 1, step)
- if h - (h_space[-1] + crop_size) > thresh_size:
- h_space = np.append(h_space, h - crop_size)
- w_space = np.arange(0, w - crop_size + 1, step)
- if w - (w_space[-1] + crop_size) > thresh_size:
- w_space = np.append(w_space, w - crop_size)
-
- index = 0
- for x in h_space:
- for y in w_space:
- index += 1
- cropped_img = img[x:x + crop_size, y:y + crop_size, ...]
- cropped_img = np.ascontiguousarray(cropped_img)
- cv2.imwrite(
- osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img,
- [cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']])
- process_info = f'Processing {img_name} ...'
- return process_info
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder')
- parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder')
- parser.add_argument('--crop_size', type=int, default=480, help='Crop size')
- parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window')
- parser.add_argument(
- '--thresh_size',
- type=int,
- default=0,
- help='Threshold size. Patches whose size is lower than thresh_size will be dropped.')
- parser.add_argument('--n_thread', type=int, default=20, help='Thread number.')
- parser.add_argument('--compression_level', type=int, default=3, help='Compression level')
- args = parser.parse_args()
-
- main(args)
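
The `h_space`/`w_space` computation in `worker` above defines the crop grid: start offsets every `step` pixels, plus one extra edge-aligned crop whenever the leftover strip exceeds `thresh_size`. A one-axis sketch of that logic (pure Python, mirroring the numpy version):

```python
def crop_starts(length, crop_size, step, thresh_size):
    """Start offsets of a sliding window along one axis, matching the
    h_space/w_space logic in extract_subimages: regular steps, then a
    final crop flush to the edge if the remainder exceeds thresh_size."""
    starts = list(range(0, length - crop_size + 1, step))
    if length - (starts[-1] + crop_size) > thresh_size:
        starts.append(length - crop_size)
    return starts

# 1000-pixel axis, 480-pixel crops, stride 240: regular starts at 0, 240,
# 480; the 40-pixel leftover (> 0) triggers a tail crop at 520.
print(crop_starts(1000, 480, 240, 0))  # → [0, 240, 480, 520]
```

The tail crop overlaps its neighbour, which is intentional: it guarantees every pixel of the source image lands in at least one sub-image.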
diff --git a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/share_btn.py b/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/share_btn.py
deleted file mode 100644
index 5d47343861f574daceb418d06679837b2c267c5b..0000000000000000000000000000000000000000
--- a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/share_btn.py
+++ /dev/null
@@ -1,109 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputVideoFile(videoEl){
- const res = await fetch(videoEl.src);
- const blob = await res.blob();
- const videoId = Date.now() % 200;
-    const fileName = `sd-perception-${videoId}.mp4`;
- return new File([blob], fileName, { type: 'video/mp4' });
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
-      const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
-      const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- async function getOutputMusicFile(audioEL){
- const res = await fetch(audioEL.src);
- const blob = await res.blob();
- const audioId = Date.now() % 200;
-    const fileName = `spectro-music-${audioId}.wav`;
- const musicBlob = new File([blob], fileName, { type: 'audio/wav' });
- console.log(musicBlob);
- return musicBlob;
- }
-
- async function audioToBase64(audioFile) {
- return new Promise((resolve, reject) => {
- let reader = new FileReader();
- reader.readAsDataURL(audioFile);
- reader.onload = () => resolve(reader.result);
- reader.onerror = error => reject(error);
-
- });
- }
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputPromptEl = gradioEl.querySelector('#input-prompt input').value;
- const outputVideoEl = gradioEl.querySelector('#output-video video');
- const outputImgEl = gradioEl.querySelector('#output-img img');
- const outputMusic = gradioEl.querySelector('#output-music audio');
- const outputMusic_src = gradioEl.querySelector('#output-music audio').src;
-
- let titleTxt = inputPromptEl;
- //if(titleTxt.length > 100){
- // titleTxt = titleTxt.slice(0, 100) + ' ...';
- //}
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputVideoEl){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const outputVideo = await getInputVideoFile(outputVideoEl);
- const urlOutputVideo = await uploadFile(outputVideo);
- const outputImg = await getInputImgFile(outputImgEl);
- const urlOutputImg = await uploadFile(outputImg);
- const musicFile = await getOutputMusicFile(outputMusic);
- const dataOutputMusic = await uploadFile(musicFile);
-
- const descriptionMd = `#### Here is my AI generated art & music:
-
-
-
-`;
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
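
The deleted share-button script ends by opening a new-discussion URL built with `URLSearchParams`. The Python counterpart uses `urllib.parse.urlencode` (the helper name here is ours, for illustration):

```python
from urllib.parse import urlencode

def discussion_url(space_id, title, description):
    """Build the 'new discussion' URL that share_js opens, with the
    title and markdown description passed as query parameters."""
    params = urlencode({"title": title, "description": description})
    return f"https://huggingface.co/spaces/{space_id}/discussions/new?{params}"

print(discussion_url("DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION",
                     "a cyberpunk city",
                     "#### Here is my AI generated art & music:"))
```

Like `URLSearchParams.toString()`, `urlencode` percent-escapes the markdown so the description survives the round trip through the query string.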
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/TgaImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/TgaImagePlugin.py
deleted file mode 100644
index 67dfc3d3c8e5726c5885b1c62cdcb2553854c4dc..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/TgaImagePlugin.py
+++ /dev/null
@@ -1,255 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# TGA file handling
-#
-# History:
-# 95-09-01 fl created (reads 24-bit files only)
-# 97-01-04 fl support more TGA versions, including compressed images
-# 98-07-04 fl fixed orientation and alpha layer bugs
-# 98-09-11 fl fixed orientation for runlength decoder
-#
-# Copyright (c) Secret Labs AB 1997-98.
-# Copyright (c) Fredrik Lundh 1995-97.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import warnings
-
-from . import Image, ImageFile, ImagePalette
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-#
-# --------------------------------------------------------------------
-# Read TGA file
-
-
-MODES = {
- # map imagetype/depth to rawmode
- (1, 8): "P",
- (3, 1): "1",
- (3, 8): "L",
- (3, 16): "LA",
- (2, 16): "BGR;5",
- (2, 24): "BGR",
- (2, 32): "BGRA",
-}
-
-
-##
-# Image plugin for Targa files.
-
-
-class TgaImageFile(ImageFile.ImageFile):
- format = "TGA"
- format_description = "Targa"
-
- def _open(self):
- # process header
- s = self.fp.read(18)
-
- id_len = s[0]
-
- colormaptype = s[1]
- imagetype = s[2]
-
- depth = s[16]
-
- flags = s[17]
-
- self._size = i16(s, 12), i16(s, 14)
-
- # validate header fields
- if (
- colormaptype not in (0, 1)
- or self.size[0] <= 0
- or self.size[1] <= 0
- or depth not in (1, 8, 16, 24, 32)
- ):
- msg = "not a TGA file"
- raise SyntaxError(msg)
-
- # image mode
- if imagetype in (3, 11):
- self.mode = "L"
- if depth == 1:
- self.mode = "1" # ???
- elif depth == 16:
- self.mode = "LA"
- elif imagetype in (1, 9):
- self.mode = "P"
- elif imagetype in (2, 10):
- self.mode = "RGB"
- if depth == 32:
- self.mode = "RGBA"
- else:
- msg = "unknown TGA mode"
- raise SyntaxError(msg)
-
- # orientation
- orientation = flags & 0x30
- self._flip_horizontally = orientation in [0x10, 0x30]
- if orientation in [0x20, 0x30]:
- orientation = 1
- elif orientation in [0, 0x10]:
- orientation = -1
- else:
- msg = "unknown TGA orientation"
- raise SyntaxError(msg)
-
- self.info["orientation"] = orientation
-
- if imagetype & 8:
- self.info["compression"] = "tga_rle"
-
- if id_len:
- self.info["id_section"] = self.fp.read(id_len)
-
- if colormaptype:
- # read palette
- start, size, mapdepth = i16(s, 3), i16(s, 5), s[7]
- if mapdepth == 16:
- self.palette = ImagePalette.raw(
- "BGR;15", b"\0" * 2 * start + self.fp.read(2 * size)
- )
- elif mapdepth == 24:
- self.palette = ImagePalette.raw(
- "BGR", b"\0" * 3 * start + self.fp.read(3 * size)
- )
- elif mapdepth == 32:
- self.palette = ImagePalette.raw(
- "BGRA", b"\0" * 4 * start + self.fp.read(4 * size)
- )
-
- # setup tile descriptor
- try:
- rawmode = MODES[(imagetype & 7, depth)]
- if imagetype & 8:
- # compressed
- self.tile = [
- (
- "tga_rle",
- (0, 0) + self.size,
- self.fp.tell(),
- (rawmode, orientation, depth),
- )
- ]
- else:
- self.tile = [
- (
- "raw",
- (0, 0) + self.size,
- self.fp.tell(),
- (rawmode, 0, orientation),
- )
- ]
- except KeyError:
- pass # cannot decode
-
- def load_end(self):
- if self._flip_horizontally:
- self.im = self.im.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
-
-
-#
-# --------------------------------------------------------------------
-# Write TGA file
-
-
-SAVE = {
- "1": ("1", 1, 0, 3),
- "L": ("L", 8, 0, 3),
- "LA": ("LA", 16, 0, 3),
- "P": ("P", 8, 1, 1),
- "RGB": ("BGR", 24, 0, 2),
- "RGBA": ("BGRA", 32, 0, 2),
-}
-
-
-def _save(im, fp, filename):
- try:
- rawmode, bits, colormaptype, imagetype = SAVE[im.mode]
- except KeyError as e:
- msg = f"cannot write mode {im.mode} as TGA"
- raise OSError(msg) from e
-
- if "rle" in im.encoderinfo:
- rle = im.encoderinfo["rle"]
- else:
- compression = im.encoderinfo.get("compression", im.info.get("compression"))
- rle = compression == "tga_rle"
- if rle:
- imagetype += 8
-
- id_section = im.encoderinfo.get("id_section", im.info.get("id_section", ""))
- id_len = len(id_section)
- if id_len > 255:
- id_len = 255
- id_section = id_section[:255]
- warnings.warn("id_section has been trimmed to 255 characters")
-
- if colormaptype:
- palette = im.im.getpalette("RGB", "BGR")
- colormaplength, colormapentry = len(palette) // 3, 24
- else:
- colormaplength, colormapentry = 0, 0
-
- if im.mode in ("LA", "RGBA"):
- flags = 8
- else:
- flags = 0
-
- orientation = im.encoderinfo.get("orientation", im.info.get("orientation", -1))
- if orientation > 0:
- flags = flags | 0x20
-
- fp.write(
- o8(id_len)
- + o8(colormaptype)
- + o8(imagetype)
- + o16(0) # colormapfirst
- + o16(colormaplength)
- + o8(colormapentry)
- + o16(0)
- + o16(0)
- + o16(im.size[0])
- + o16(im.size[1])
- + o8(bits)
- + o8(flags)
- )
-
- if id_section:
- fp.write(id_section)
-
- if colormaptype:
- fp.write(palette)
-
- if rle:
- ImageFile._save(
- im, fp, [("tga_rle", (0, 0) + im.size, 0, (rawmode, orientation))]
- )
- else:
- ImageFile._save(
- im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, orientation))]
- )
-
- # write targa version 2 footer
- fp.write(b"\000" * 8 + b"TRUEVISION-XFILE." + b"\000")
-
-
-#
-# --------------------------------------------------------------------
-# Registry
-
-
-Image.register_open(TgaImageFile.format, TgaImageFile)
-Image.register_save(TgaImageFile.format, _save)
-
-Image.register_extensions(TgaImageFile.format, [".tga", ".icb", ".vda", ".vst"])
-
-Image.register_mime(TgaImageFile.format, "image/x-tga")
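
`TgaImageFile._open` above reads the 18-byte header field by field (`id_len = s[0]`, colormap info at offsets 3–7, size at 12/14, `depth = s[16]`, `flags = s[17]`). The same little-endian layout can be unpacked in one `struct` call; a sketch (not part of Pillow's API):

```python
import struct

def parse_tga_header(s):
    """Unpack the 18-byte TGA header: id length, colormap type, image
    type, colormap first/length/depth, x/y origin, width, height,
    pixel depth, flags — all little-endian."""
    (id_len, colormaptype, imagetype,
     cmap_first, cmap_len, cmap_depth,
     x_origin, y_origin, width, height,
     depth, flags) = struct.unpack("<BBBHHBHHHHBB", s[:18])
    return {"id_len": id_len, "colormaptype": colormaptype,
            "imagetype": imagetype, "size": (width, height),
            "depth": depth, "flags": flags}

# A minimal uncompressed 24-bit RGB header (imagetype 2), 4x2 pixels:
hdr = struct.pack("<BBBHHBHHHHBB", 0, 0, 2, 0, 0, 0, 0, 0, 4, 2, 24, 0)
print(parse_tga_header(hdr))
```

The field offsets line up with the plugin's byte indexing: width/height at offsets 12 and 14 match `i16(s, 12), i16(s, 14)`, and the colormap triple at 3/5/7 matches `i16(s, 3), i16(s, 5), s[7]`.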
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-f0e43e7d.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-f0e43e7d.css
deleted file mode 100644
index fb320f5e9afc1570c36e34f44865052ff83acf86..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-f0e43e7d.css
+++ /dev/null
@@ -1 +0,0 @@
-.base-image.svelte-m3v3vb.svelte-m3v3vb{display:block;width:100%;height:auto}.container.svelte-m3v3vb.svelte-m3v3vb{display:flex;position:relative;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full)}.image-container.svelte-m3v3vb.svelte-m3v3vb{position:relative;top:0;left:0;flex-grow:1;width:100%;overflow:hidden}.fit-height.svelte-m3v3vb.svelte-m3v3vb{position:absolute;top:0;left:0;width:100%;height:100%;object-fit:contain}.mask.svelte-m3v3vb.svelte-m3v3vb{opacity:.85;transition:all .2s ease-in-out}.image-container.svelte-m3v3vb:hover .mask.svelte-m3v3vb{opacity:.3}.mask.active.svelte-m3v3vb.svelte-m3v3vb{opacity:1}.mask.inactive.svelte-m3v3vb.svelte-m3v3vb{opacity:0}.legend.svelte-m3v3vb.svelte-m3v3vb{display:flex;flex-direction:row;flex-wrap:wrap;align-content:center;justify-content:center;align-items:center;gap:var(--spacing-sm);padding:var(--spacing-sm)}.legend-item.svelte-m3v3vb.svelte-m3v3vb{display:flex;flex-direction:row;align-items:center;cursor:pointer;border-radius:var(--radius-sm);padding:var(--spacing-sm)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-64e31f50.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-64e31f50.js
deleted file mode 100644
index 0d9fe3993bfa6b1219eddf0b69474f5f4158413c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-64e31f50.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as ne,e as se,s as ae,k as S,o as z,z as I,v as A,x as C,B as ie,E as oe,ae as ue,O as L,N as E,K as d,p as b,q as fe,r as re,u as _e,y as me,A as v,G as j,m as ce,T as D,U as M,M as G,n as V,V as y,P as he,L as F,Q as q,R as ge,a1 as de}from"./index-3370be2a.js";import{B as be}from"./Button-89624748.js";import{B as ve}from"./BlockLabel-56db415e.js";import{E as ke}from"./Empty-585389a4.js";import{I as p}from"./Image-93033d87.js";import{n as H}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";function J(t,e,n){const l=t.slice();return l[27]=e[n][0],l[12]=e[n][1],l[29]=n,l}function W(t,e,n){const l=t.slice();return l[30]=e[n][0],l[12]=e[n][1],l[29]=n,l}function we(t){let e,n,l,s,i,a,_=j(t[13]?t[13][1]:[]),m=[];for(let u=0;u<_.length;u+=1)m[u]=X(W(t,_,u));let c=t[4]&&t[13]&&Y(t);return{c(){e=E("div"),n=E("img"),s=L();for(let u=0;u{r[w]=null}),me(),_=r[a],_?_.p(f,g):(_=r[a]=h[a](f),_.c()),I(_,1),_.m(i,null))},i(f){m||(I(e.$$.fragment,f),I(l.$$.fragment,f),I(_),m=!0)},o(f){A(e.$$.fragment,f),A(l.$$.fragment,f),A(_),m=!1},d(f){f&&(v(n),v(s),v(i)),C(e,f),C(l,f),r[a].d()}}}function Me(t){let e,n;return e=new be({props:{visible:t[2],elem_id:t[0],elem_classes:t[1],padding:!1,height:t[5],width:t[6],allow_overflow:!1,container:t[8],scale:t[9],min_width:t[10],$$slots:{default:[Be]},$$scope:{ctx:t}}}),{c(){S(e.$$.fragment)},m(l,s){z(e,l,s),n=!0},p(l,s){const i={};s[0]&4&&(i.visible=l[2]),s[0]&1&&(i.elem_id=l[0]),s[0]&2&&(i.elem_classes=l[1]),s[0]&32&&(i.height=l[5]),s[0]&64&&(i.width=l[6]),s[0]&256&&(i.container=l[8]),s[0]&512&&(i.scale=l[9]),s[0]&1024&&(i.min_width=l[10]),s[0]&30904|s[1]&2&&(i.$$scope={dirty:s,ctx:l}),e.$set(i)},i(l){n||(I(e.$$.fragment,l),n=!0)},o(l){A(e.$$.fragment,l),n=!1},d(l){C(e,l)}}}function Ee(t,e,n){let{elem_id:l=""}=e,{elem_classes:s=[]}=e,{visible:i=!0}=e,{value:a}=e,_,m,{label:c="Annotated 
Image"}=e,{show_label:u=!0}=e,{show_legend:h=!0}=e,{height:r}=e,{width:k}=e,{color_map:f}=e,{container:g=!0}=e,{scale:N=null}=e,{min_width:B=void 0}=e,{root:w}=e,{root_url:T}=e,K=null,{loading_status:U}=e;const O=ie();function P(o){n(14,K=o)}function Q(){n(14,K=null)}const x=o=>P(o),$=o=>P(o),ee=()=>Q(),le=()=>Q(),te=(o,R)=>O("select",{index:o,value:R});return t.$$set=o=>{"elem_id"in o&&n(0,l=o.elem_id),"elem_classes"in o&&n(1,s=o.elem_classes),"visible"in o&&n(2,i=o.visible),"value"in o&&n(18,a=o.value),"label"in o&&n(12,c=o.label),"show_label"in o&&n(3,u=o.show_label),"show_legend"in o&&n(4,h=o.show_legend),"height"in o&&n(5,r=o.height),"width"in o&&n(6,k=o.width),"color_map"in o&&n(7,f=o.color_map),"container"in o&&n(8,g=o.container),"scale"in o&&n(9,N=o.scale),"min_width"in o&&n(10,B=o.min_width),"root"in o&&n(19,w=o.root),"root_url"in o&&n(20,T=o.root_url),"loading_status"in o&&n(11,U=o.loading_status)},t.$$.update=()=>{t.$$.dirty[0]&3932160&&(a!==_&&(n(21,_=a),O("change")),a?n(13,m=[H(a[0],w,T),a[1].map(([o,R])=>[H(o,w,T),R])]):n(13,m=null))},[l,s,i,u,h,r,k,f,g,N,B,U,c,m,K,O,P,Q,a,w,T,_,x,$,ee,le,te]}class qe extends ne{constructor(e){super(),se(this,e,Ee,Me,ae,{elem_id:0,elem_classes:1,visible:2,value:18,label:12,show_label:3,show_legend:4,height:5,width:6,color_map:7,container:8,scale:9,min_width:10,root:19,root_url:20,loading_status:11},null,[-1,-1])}}const je=qe,De=["static"],Ge=t=>({type:{payload:"[string, Array<[string, string]>]"},description:{payload:"path to base image, followed by a list of tuples [mask image path, label]"}});export{je as Component,Ge as document,De as modes};
-//# sourceMappingURL=index-64e31f50.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/hub_mixin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/hub_mixin.py
deleted file mode 100644
index 2fc0613b0630bc86c467781263a30e893bde9882..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/hub_mixin.py
+++ /dev/null
@@ -1,370 +0,0 @@
-import json
-import os
-from pathlib import Path
-from typing import Dict, List, Optional, Type, TypeVar, Union
-
-import requests
-
-from .constants import CONFIG_NAME, PYTORCH_WEIGHTS_NAME
-from .file_download import hf_hub_download, is_torch_available
-from .hf_api import HfApi
-from .utils import SoftTemporaryDirectory, logging, validate_hf_hub_args
-
-
-if is_torch_available():
- import torch # type: ignore
-
-logger = logging.get_logger(__name__)
-
-# Generic variable that is either ModelHubMixin or a subclass thereof
-T = TypeVar("T", bound="ModelHubMixin")
-
-
-class ModelHubMixin:
- """
- A generic mixin to integrate ANY machine learning framework with the Hub.
-
- To integrate your framework, your model class must inherit from this class. Custom logic for saving/loading models
- have to be overwritten in [`_from_pretrained`] and [`_save_pretrained`]. [`PyTorchModelHubMixin`] is a good example
- of mixin integration with the Hub. Check out our [integration guide](../guides/integrations) for more instructions.
- """
-
- def save_pretrained(
- self,
- save_directory: Union[str, Path],
- *,
- config: Optional[dict] = None,
- repo_id: Optional[str] = None,
- push_to_hub: bool = False,
- **kwargs,
- ) -> Optional[str]:
- """
- Save weights in local directory.
-
- Args:
- save_directory (`str` or `Path`):
- Path to directory in which the model weights and configuration will be saved.
- config (`dict`, *optional*):
- Model configuration specified as a key/value dictionary.
- push_to_hub (`bool`, *optional*, defaults to `False`):
- Whether or not to push your model to the Huggingface Hub after saving it.
- repo_id (`str`, *optional*):
- ID of your repository on the Hub. Used only if `push_to_hub=True`. Will default to the folder name if
- not provided.
- kwargs:
-                Additional keyword arguments passed along to the [`~ModelHubMixin.push_to_hub`] method.
- """
- save_directory = Path(save_directory)
- save_directory.mkdir(parents=True, exist_ok=True)
-
- # saving model weights/files
- self._save_pretrained(save_directory)
-
- # saving config
- if isinstance(config, dict):
- (save_directory / CONFIG_NAME).write_text(json.dumps(config))
-
- if push_to_hub:
- kwargs = kwargs.copy() # soft-copy to avoid mutating input
- if config is not None: # kwarg for `push_to_hub`
- kwargs["config"] = config
- if repo_id is None:
- repo_id = save_directory.name # Defaults to `save_directory` name
- return self.push_to_hub(repo_id=repo_id, **kwargs)
- return None
-
- def _save_pretrained(self, save_directory: Path) -> None:
- """
- Overwrite this method in subclass to define how to save your model.
- Check out our [integration guide](../guides/integrations) for instructions.
-
- Args:
- save_directory (`str` or `Path`):
- Path to directory in which the model weights and configuration will be saved.
- """
- raise NotImplementedError
-
- @classmethod
- @validate_hf_hub_args
- def from_pretrained(
- cls: Type[T],
- pretrained_model_name_or_path: Union[str, Path],
- *,
- force_download: bool = False,
- resume_download: bool = False,
- proxies: Optional[Dict] = None,
- token: Optional[Union[str, bool]] = None,
- cache_dir: Optional[Union[str, Path]] = None,
- local_files_only: bool = False,
- revision: Optional[str] = None,
- **model_kwargs,
- ) -> T:
- """
- Download a model from the Huggingface Hub and instantiate it.
-
- Args:
- pretrained_model_name_or_path (`str`, `Path`):
- - Either the `model_id` (string) of a model hosted on the Hub, e.g. `bigscience/bloom`.
- - Or a path to a `directory` containing model weights saved using
- [`~transformers.PreTrainedModel.save_pretrained`], e.g., `../path/to/my_model_directory/`.
- revision (`str`, *optional*):
- Revision of the model on the Hub. Can be a branch name, a git tag or any commit id.
- Defaults to the latest commit on `main` branch.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
- the existing cache.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`. The proxies are used on every request.
- token (`str` or `bool`, *optional*):
- The token to use as HTTP bearer authorization for remote files. By default, it will use the token
- cached when running `huggingface-cli login`.
- cache_dir (`str`, `Path`, *optional*):
- Path to the folder where cached files are stored.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- model_kwargs (`Dict`, *optional*):
- Additional kwargs to pass to the model during initialization.
- """
- model_id = pretrained_model_name_or_path
- config_file: Optional[str] = None
- if os.path.isdir(model_id):
- if CONFIG_NAME in os.listdir(model_id):
- config_file = os.path.join(model_id, CONFIG_NAME)
- else:
- logger.warning(f"{CONFIG_NAME} not found in {Path(model_id).resolve()}")
- elif isinstance(model_id, str):
- try:
- config_file = hf_hub_download(
- repo_id=str(model_id),
- filename=CONFIG_NAME,
- revision=revision,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- token=token,
- local_files_only=local_files_only,
- )
- except requests.exceptions.RequestException:
- logger.warning(f"{CONFIG_NAME} not found in HuggingFace Hub.")
-
- if config_file is not None:
- with open(config_file, "r", encoding="utf-8") as f:
- config = json.load(f)
- model_kwargs.update({"config": config})
-
- return cls._from_pretrained(
- model_id=str(model_id),
- revision=revision,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- token=token,
- **model_kwargs,
- )
-
- @classmethod
- def _from_pretrained(
- cls: Type[T],
- *,
- model_id: str,
- revision: Optional[str],
- cache_dir: Optional[Union[str, Path]],
- force_download: bool,
- proxies: Optional[Dict],
- resume_download: bool,
- local_files_only: bool,
- token: Optional[Union[str, bool]],
- **model_kwargs,
- ) -> T:
- """Overwrite this method in subclass to define how to load your model from pretrained.
-
- Use [`hf_hub_download`] or [`snapshot_download`] to download files from the Hub before loading them. Most
- args taken as input can be directly passed to those 2 methods. If needed, you can add more arguments to this
- method using "model_kwargs". For example [`PyTorchModelHubMixin._from_pretrained`] takes as input a `map_location`
- parameter to set on which device the model should be loaded.
-
- Check out our [integration guide](../guides/integrations) for more instructions.
-
- Args:
- model_id (`str`):
- ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`).
- revision (`str`, *optional*):
- Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the
- latest commit on `main` branch.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
- the existing cache.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint (e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`).
- token (`str` or `bool`, *optional*):
- The token to use as HTTP bearer authorization for remote files. By default, it will use the token
- cached when running `huggingface-cli login`.
- cache_dir (`str`, `Path`, *optional*):
- Path to the folder where cached files are stored.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- model_kwargs:
- Additional keyword arguments passed along to the [`~ModelHubMixin._from_pretrained`] method.
- """
- raise NotImplementedError
-
- @validate_hf_hub_args
- def push_to_hub(
- self,
- repo_id: str,
- *,
- config: Optional[dict] = None,
- commit_message: str = "Push model using huggingface_hub.",
- private: bool = False,
- api_endpoint: Optional[str] = None,
- token: Optional[str] = None,
- branch: Optional[str] = None,
- create_pr: Optional[bool] = None,
- allow_patterns: Optional[Union[List[str], str]] = None,
- ignore_patterns: Optional[Union[List[str], str]] = None,
- delete_patterns: Optional[Union[List[str], str]] = None,
- ) -> str:
- """
- Upload model checkpoint to the Hub.
-
- Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
- `delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
- details.
-
-
- Args:
- repo_id (`str`):
- ID of the repository to push to (example: `"username/my-model"`).
- config (`dict`, *optional*):
- Configuration object to be saved alongside the model weights.
- commit_message (`str`, *optional*):
- Message to commit while pushing.
- private (`bool`, *optional*, defaults to `False`):
- Whether the repository created should be private.
- api_endpoint (`str`, *optional*):
- The API endpoint to use when pushing the model to the hub.
- token (`str`, *optional*):
- The token to use as HTTP bearer authorization for remote files. By default, it will use the token
- cached when running `huggingface-cli login`.
- branch (`str`, *optional*):
- The git branch on which to push the model. This defaults to `"main"`.
-            create_pr (`bool`, *optional*):
- Whether or not to create a Pull Request from `branch` with that commit. Defaults to `False`.
- allow_patterns (`List[str]` or `str`, *optional*):
- If provided, only files matching at least one pattern are pushed.
- ignore_patterns (`List[str]` or `str`, *optional*):
- If provided, files matching any of the patterns are not pushed.
- delete_patterns (`List[str]` or `str`, *optional*):
- If provided, remote files matching any of the patterns will be deleted from the repo.
-
- Returns:
- The url of the commit of your model in the given repository.
- """
- api = HfApi(endpoint=api_endpoint, token=token)
- repo_id = api.create_repo(repo_id=repo_id, private=private, exist_ok=True).repo_id
-
- # Push the files to the repo in a single commit
- with SoftTemporaryDirectory() as tmp:
- saved_path = Path(tmp) / repo_id
- self.save_pretrained(saved_path, config=config)
- return api.upload_folder(
- repo_id=repo_id,
- repo_type="model",
- folder_path=saved_path,
- commit_message=commit_message,
- revision=branch,
- create_pr=create_pr,
- allow_patterns=allow_patterns,
- ignore_patterns=ignore_patterns,
- delete_patterns=delete_patterns,
- )
-
-
-class PyTorchModelHubMixin(ModelHubMixin):
- """
- Implementation of [`ModelHubMixin`] to provide model Hub upload/download capabilities to PyTorch models. The model
- is set in evaluation mode by default using `model.eval()` (dropout modules are deactivated). To train the model,
- you should first set it back in training mode with `model.train()`.
-
- Example:
-
- ```python
- >>> import torch
- >>> import torch.nn as nn
- >>> from huggingface_hub import PyTorchModelHubMixin
-
-
- >>> class MyModel(nn.Module, PyTorchModelHubMixin):
- ... def __init__(self):
- ... super().__init__()
- ... self.param = nn.Parameter(torch.rand(3, 4))
- ... self.linear = nn.Linear(4, 5)
-
- ... def forward(self, x):
- ... return self.linear(x + self.param)
- >>> model = MyModel()
-
- # Save model weights to local directory
- >>> model.save_pretrained("my-awesome-model")
-
- # Push model weights to the Hub
- >>> model.push_to_hub("my-awesome-model")
-
- # Download and initialize weights from the Hub
- >>> model = MyModel.from_pretrained("username/my-awesome-model")
- ```
- """
-
- def _save_pretrained(self, save_directory: Path) -> None:
-        """Save weights from a PyTorch model to a local directory."""
- model_to_save = self.module if hasattr(self, "module") else self # type: ignore
- torch.save(model_to_save.state_dict(), save_directory / PYTORCH_WEIGHTS_NAME)
-
- @classmethod
- def _from_pretrained(
- cls,
- *,
- model_id: str,
- revision: Optional[str],
- cache_dir: Optional[Union[str, Path]],
- force_download: bool,
- proxies: Optional[Dict],
- resume_download: bool,
- local_files_only: bool,
- token: Union[str, bool, None],
- map_location: str = "cpu",
- strict: bool = False,
- **model_kwargs,
- ):
-        """Load PyTorch pretrained weights and return the loaded model."""
- if os.path.isdir(model_id):
- print("Loading weights from local directory")
- model_file = os.path.join(model_id, PYTORCH_WEIGHTS_NAME)
- else:
- model_file = hf_hub_download(
- repo_id=model_id,
- filename=PYTORCH_WEIGHTS_NAME,
- revision=revision,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- token=token,
- local_files_only=local_files_only,
- )
- model = cls(**model_kwargs)
-
- state_dict = torch.load(model_file, map_location=torch.device(map_location))
- model.load_state_dict(state_dict, strict=strict) # type: ignore
- model.eval() # type: ignore
-
- return model
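The deleted mixin above follows a template-method split: `ModelHubMixin` owns the Hub plumbing (`from_pretrained`, `push_to_hub`) while subclasses override only `_save_pretrained` / `_from_pretrained`. A dependency-free sketch of that split, using a hypothetical `LocalHubMixin` and JSON "weights" in place of `torch.save` (the names are illustrative, not part of `huggingface_hub`):

```python
import json
from pathlib import Path


class LocalHubMixin:
    """Minimal stand-in for ModelHubMixin: subclasses define (de)serialization."""

    def save_pretrained(self, save_directory):
        path = Path(save_directory)
        path.mkdir(parents=True, exist_ok=True)
        self._save_pretrained(path)  # delegate serialization to the subclass

    @classmethod
    def from_pretrained(cls, model_id, **model_kwargs):
        # A real implementation would also handle Hub repo IDs, not just paths.
        return cls._from_pretrained(model_id=str(model_id), **model_kwargs)

    def _save_pretrained(self, save_directory: Path) -> None:
        raise NotImplementedError

    @classmethod
    def _from_pretrained(cls, *, model_id: str, **model_kwargs):
        raise NotImplementedError


class DictModel(LocalHubMixin):
    """Toy 'model' whose weights are a plain dict, stored as JSON."""

    def __init__(self, weights=None):
        self.weights = weights or {}

    def _save_pretrained(self, save_directory: Path) -> None:
        (save_directory / "weights.json").write_text(json.dumps(self.weights))

    @classmethod
    def _from_pretrained(cls, *, model_id: str, **model_kwargs):
        weights = json.loads((Path(model_id) / "weights.json").read_text())
        return cls(weights=weights, **model_kwargs)
```

A real subclass would call `hf_hub_download` inside `_from_pretrained` when `model_id` is not a local directory, as `PyTorchModelHubMixin` does above.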
diff --git a/spaces/Danielzero/GPT3.5/modules/overwrites.py b/spaces/Danielzero/GPT3.5/modules/overwrites.py
deleted file mode 100644
index 035a4a52722d66ee28af1c05231ad1cea3339ef5..0000000000000000000000000000000000000000
--- a/spaces/Danielzero/GPT3.5/modules/overwrites.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from __future__ import annotations
-import logging
-
-from llama_index import Prompt
-from typing import List, Tuple
-import mdtex2html
-from gradio_client import utils as client_utils
-
-from modules.presets import *
-from modules.llama_func import *
-
-
-def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]:
- logging.debug("Compacting text chunks...🚀🚀🚀")
- combined_str = [c.strip() for c in text_chunks if c.strip()]
- combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)]
- combined_str = "\n\n".join(combined_str)
- # resplit based on self.max_chunk_overlap
- text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1)
- return text_splitter.split_text(combined_str)
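`compact_text_chunks` above does three things: drop empty chunks, prefix each survivor with a 1-based `[n]` citation marker, and join with blank lines before resplitting via llama_index's prompt-aware text splitter. The numbering/join step can be isolated as a plain helper (the resplit is omitted here because it depends on that splitter):

```python
def number_and_join(text_chunks):
    # Drop empty/whitespace-only chunks, tag each with a 1-based [n] marker,
    # then join with blank lines so chunk boundaries stay visible.
    chunks = [c.strip() for c in text_chunks if c.strip()]
    return "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
```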
-
-
-def postprocess(
- self,
- y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple],
- ) -> List[List[str | Dict | None]]:
- """
- Parameters:
- y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed.
- Returns:
- List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed.
- """
- if y is None:
- return []
- processed_messages = []
- for message_pair in y:
- assert isinstance(
- message_pair, (tuple, list)
- ), f"Expected a list of lists or list of tuples. Received: {message_pair}"
- assert (
- len(message_pair) == 2
- ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}"
-
- processed_messages.append(
- [
- self._postprocess_chat_messages(message_pair[0], "user"),
- self._postprocess_chat_messages(message_pair[1], "bot"),
- ]
- )
- return processed_messages
-
-def postprocess_chat_messages(
- self, chat_message: str | Tuple | List | None, message_type: str
- ) -> str | Dict | None:
- if chat_message is None:
- return None
- elif isinstance(chat_message, (tuple, list)):
- filepath = chat_message[0]
- mime_type = client_utils.get_mimetype(filepath)
- filepath = self.make_temp_copy_if_needed(filepath)
- return {
- "name": filepath,
- "mime_type": mime_type,
- "alt_text": chat_message[1] if len(chat_message) > 1 else None,
- "data": None, # These last two fields are filled in by the frontend
- "is_file": True,
- }
- elif isinstance(chat_message, str):
- if message_type == "bot":
- if not detect_converted_mark(chat_message):
- chat_message = convert_mdtext(chat_message)
- elif message_type == "user":
- if not detect_converted_mark(chat_message):
- chat_message = convert_asis(chat_message)
- return chat_message
- else:
- raise ValueError(f"Invalid message for Chatbot component: {chat_message}")
-
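`postprocess_chat_messages` above dispatches on the message's type: `None` is passed through (not displayed), a tuple/list is treated as a media-file reference, and a string is converted to HTML (Markdown rendering for bot replies, as-is conversion for user input). The dispatch skeleton, minus the Gradio-specific helpers:

```python
def route_chat_message(chat_message):
    """Classify a chat message the way the deleted postprocess hook does."""
    if chat_message is None:
        return "empty"   # message is skipped entirely
    if isinstance(chat_message, (tuple, list)):
        return "file"    # first element is a filepath/URL to media
    if isinstance(chat_message, str):
        return "text"    # converted to HTML downstream
    raise ValueError(f"Invalid message for Chatbot component: {chat_message}")
```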
-with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2:
- customJS = f.read()
- kelpyCodos = f2.read()
-
-def reload_javascript():
- print("Reloading javascript...")
- js = f'<script>{customJS}</script><script>{kelpyCodos}</script>'
- def template_response(*args, **kwargs):
- res = GradioTemplateResponseOriginal(*args, **kwargs)
- res.body = res.body.replace(b'