-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download VMware 7 ISO and Experience the Power of VMware Workstation 16 Pro.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download VMware 7 ISO and Experience the Power of VMware Workstation 16 Pro.md
deleted file mode 100644
index 200f192ad9c0186cc66fd926e2fc44aaa3da94c5..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download VMware 7 ISO and Experience the Power of VMware Workstation 16 Pro.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-How to Download VMware 7 ISO for Free
-If you are looking for a way to run multiple operating systems on your PC, you might want to download VMware 7 ISO for free. VMware 7 is a virtualization software that allows you to create and manage virtual machines on your computer. You can use it to test new software, run legacy applications, or experiment with different operating systems without affecting your main system.
-Download link: https://byltly.com/2uKyPl
-In this article, we will show you how to download VMware 7 ISO for free and install it on your PC. We will also give you some tips on how to optimize your virtual machines for better performance and security.
-What is VMware 7 ISO?
-VMware 7 ISO is an image file that contains the installation files for VMware 7. You can use it to create a bootable USB drive or a DVD that you can use to install VMware 7 on your PC. Alternatively, you can also mount the ISO file as a virtual drive and run the installation from there.
-Despite the name, what this article calls "VMware 7" is VMware Workstation Pro (the download steps below fetch VMware Workstation 16 Pro), a popular virtualization program for Windows and Linux users. Recent versions bring many features and improvements, such as:
-
-Support for Windows 11 and Linux 5.13 kernels
-Improved graphics performance and compatibility with DirectX 11 and OpenGL 4.1
-New sandbox mode that allows you to run untrusted applications in an isolated environment
-Enhanced security with TPM 2.0 emulation and virtual Trusted Platform Module devices
-Better integration with VMware vSphere and cloud services
-
-How to Download VMware 7 ISO for Free?
-To download VMware 7 ISO for free, you need to have a valid VMware account. If you don't have one, you can create one for free on the VMware website. Once you have an account, follow these steps:
-
-Go to the VMware Workstation Pro Evaluation page and click on the "Download Now" button.
-Log in with your VMware account credentials and accept the terms and conditions.
-Select the "VMware Workstation 16 Pro for Windows" option and click on the "Download" button.
-Save the ISO file to your preferred location on your PC.
-
-Congratulations! You have successfully downloaded VMware 7 ISO for free. You can now use it to install VMware 7 on your PC or create a bootable USB drive or DVD.
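-Before writing the downloaded file to a USB drive or DVD, it is good practice to verify its integrity. The sketch below is a minimal, hypothetical example of computing a SHA-256 checksum in Python; it demonstrates on a throwaway file so it runs anywhere. For a real download, point `sha256_of` at the ISO and compare the result against the checksum published on the vendor's download page (no real checksum value is assumed here).

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1024 * 1024):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demonstrate on a throwaway file so the example is self-contained.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"demo data")
    demo_path = f.name

print(sha256_of(demo_path))
os.remove(demo_path)
```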
-
-How to Install VMware 7 from ISO?
-To install VMware 7 from ISO, you need to have a PC that meets the minimum system requirements for VMware 7. These are:
-
-A 64-bit processor with at least two cores
-At least 4 GB of RAM (8 GB or more recommended)
-At least 2 GB of free disk space (20 GB or more recommended)
-A compatible host operating system (Windows 10 or later, or Linux)
-
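-Some of the requirements above can be sanity-checked with the Python standard library before you start. This is only a rough sketch: RAM is omitted because checking it portably needs a third-party package such as psutil, and the 64-bit test is a heuristic, not a guarantee.

```python
import os
import platform
import shutil

MIN_CORES = 2         # mirrors the list above
MIN_FREE_DISK_GB = 2  # mirrors the list above

cores = os.cpu_count() or 1
free_gb = shutil.disk_usage(os.path.expanduser("~")).free / 1024**3
# Rough heuristic: "x86_64", "AMD64" and "arm64" all end in "64".
is_64bit = platform.machine().endswith("64")

print(f"64-bit machine: {is_64bit}")
print(f"CPU cores: {cores} (need >= {MIN_CORES})")
print(f"Free disk: {free_gb:.1f} GB (need >= {MIN_FREE_DISK_GB} GB)")
```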
-If your PC meets these requirements, you can proceed with the installation by following these steps:
-
-Insert the bootable USB drive or DVD that contains the VMware 7 ISO file into your PC.
-Restart your PC and boot from the USB drive or DVD.
-Follow the on-screen instructions to install VMware 7 on your PC.
-Enter the license key that you received from VMware when prompted.
-Complete the installation and restart your PC.
-
-Congratulations! You have successfully installed VMware 7 on your PC. You can now start creating and managing virtual machines on your PC.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Downloadbotautohuntperfectworld.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Downloadbotautohuntperfectworld.md
deleted file mode 100644
index 96f1b8c8cf27f1ec27688b64bf99bcae0b163943..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Downloadbotautohuntperfectworld.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-What is downloadbotautohuntperfectworld and why you need it
-If you are a fan of Perfect World, a popular MMORPG game, you might have heard of downloadbotautohuntperfectworld. This is a tool that allows you to automate your hunting and questing activities in the game, without having to manually control your character. In this article, we will explain what downloadbotautohuntperfectworld is, how it works, what are its benefits, how to download and use it, where to find it, and what are the risks and precautions of using it.
- How downloadbotautohuntperfectworld works
-Downloadbotautohuntperfectworld is a program that uses Selenium Webdriver, a framework for automating web browser actions, to interact with the Perfect World game client. It can perform various tasks such as moving, attacking, looting, healing, buffing, using skills, completing quests, and more. It can also detect enemies, NPCs, items, and other objects in the game environment. It can run in the background while you do other things on your computer, or you can watch it play on your screen.
-Download link: https://byltly.com/2uKwbp
- The benefits of using downloadbotautohuntperfectworld
-Using downloadbotautohuntperfectworld can bring you many advantages in Perfect World. Here are some of them:
- Save time and energy
-Perfect World is a game that requires a lot of time and effort to progress and level up your character. You have to grind for hours to complete quests, kill monsters, collect items, craft equipment, and more. This can be tedious and boring, especially if you have other things to do in real life. With downloadbotautohuntperfectworld, you can let the bot do all the work for you while you relax or focus on other tasks. You can also set the bot to run overnight or when you are away from your computer, so you can wake up or come back to a higher level character with more resources and rewards.
- Enhance your gaming experience
-Using the bot does not have to take control away from you. You can customize its behavior in detail: which monsters to hunt, which quests to complete, which skills to use or avoid, which items to keep or discard, and more. You can also switch between manual and automatic mode anytime you want.
- Avoid bans and detection
-One of the main concerns of using bots in online games is getting banned or detected by the game developers or administrators. This can result in losing your account, your progress, your items, and your reputation. However, with downloadbotautohuntperfectworld, you don't have to worry about this. Downloadbotautohuntperfectworld is designed to be undetectable by Perfect World's anti-cheat system. It does not inject any code into the game client or modify any game files. It also mimics human-like behavior and movements to avoid suspicion. It also has features such as auto-restart, auto-login, auto-repair, auto-sell, and more to prevent any errors or glitches that might expose the bot.
- How to download and use downloadbotautohuntperfectworld
-If you are interested in trying downloadbotautohuntperfectworld, here are the steps you need to follow:
- Requirements and compatibility
-Before you download downloadbotautohuntperfectworld, you need to make sure that your computer meets the minimum requirements for running the bot. You need to have:
-
-A Windows operating system (Windows 7 or higher)
-A Perfect World game client (any version)
-A Selenium Webdriver (Chrome or Firefox)
-A Python interpreter (version 3.6 or higher)
-A stable internet connection
-
-You also need to make sure that your Perfect World account is not banned or suspended, and that you have enough space on your hard drive for the bot files.
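-Since the bot asks for Python 3.6+ and the Selenium package, a quick preflight check like the following can catch missing pieces early. The package name `selenium` is the standard PyPI name; everything else here is a generic sketch, not part of the bot's actual code.

```python
import importlib.util
import sys

# The requirements list above asks for Python 3.6 or higher.
if sys.version_info < (3, 6):
    raise SystemExit("Python 3.6+ required, found " + sys.version.split()[0])
print("Python version OK:", sys.version.split()[0])

# Check whether the selenium package is importable, without importing it.
if importlib.util.find_spec("selenium") is None:
    print("selenium is NOT installed -- try: pip install selenium")
else:
    print("selenium is installed")
```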
- Installation and configuration
-Once you have the requirements ready, you can proceed to download downloadbotautohuntperfectworld from one of the sources listed below. You will receive a zip file containing the bot files and a readme file with instructions. You need to extract the zip file to a folder of your choice and open the readme file for further guidance. You will need to edit some configuration files to set up your bot's preferences and settings. For example, you will need to enter your Perfect World account information, your character name, your hunting location, your quest list, your skill list, your item list, and more. You can also adjust other options such as the bot's speed, delay, mode, and more.
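-The readme defines the exact configuration format, which can vary between releases. Purely for illustration, a configuration of the kind described above might be a small INI file read with Python's `configparser`; every section and key name below is hypothetical, not taken from the bot itself.

```python
import configparser
import io

# Hypothetical example config -- the real bot's layout and key names
# are defined by its readme, not by this sketch.
EXAMPLE = """
[account]
username = my_pw_account
character = MyHunter

[hunting]
location = wolf_plains
mode = auto
delay_ms = 250
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(EXAMPLE))

print(config["account"]["character"])        # MyHunter
print(config.getint("hunting", "delay_ms"))  # 250
```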
- Tips and tricks
-After you have installed and configured downloadbotautohuntperfectworld, you can start using it by running the main script file. The bot will launch the Perfect World game client and log in to your account automatically. It will then start performing the tasks you have assigned it according to your settings. You can monitor the bot's progress on your screen or on a separate window that shows the bot's logs and messages. You can also pause, resume, or stop the bot anytime you want by pressing certain keys on your keyboard. Here are some tips and tricks for using downloadbotautohuntperfectworld effectively:
-
-Make sure that your computer is not overloaded with other programs or processes that might interfere with the bot's performance.
-Make sure that your internet connection is stable and fast enough to avoid lagging or disconnecting issues.
-Make sure that your game client is updated to the latest version and compatible with the bot's version.
-Make sure that you have enough inventory space and currency for looting and selling items.
-Make sure that you have enough potions and consumables for healing and buffing yourself.
-Make sure that you have enough skill points and cultivation points for upgrading your skills and meridians.
-Make sure that you are not in a crowded or contested area where other players might attack or report you.
-Make sure that you are not violating any game rules or terms of service by using the bot.
-
- Where to find downloadbotautohuntperfectworld
-If you want to download downloadbotautohuntperfectworld, you might be wondering where to look. There are many websites and forums that claim to offer it, but not all of them are trustworthy or safe. Some of them might contain malware, viruses, or outdated versions of the bot that might harm your computer or your account. To avoid these risks, you should only download the bot from reputable and verified sources that have positive reviews and feedback from other users. Here are some of the best sources for downloadbotautohuntperfectworld:
- The best sources for downloadbotautohuntperfectworld
-These are some of the websites and forums that offer downloadbotautohuntperfectworld with high quality and security:
- elitepvpers.com
-This is one of the most popular and trusted forums for all kinds of MMORPG hacks, cheats, and bots. You can find downloadbotautohuntperfectworld in the Perfect World section of the forum, along with other tools and guides for the game. You can also interact with other users and get support and feedback from them. You need to register an account to access the forum and download the bot.
- vegetarentusiast.no
-This is a website that offers downloadbotautohuntperfectworld as a PDF file that you can download for free. The PDF file contains a link to a torrent file that contains the bot files and instructions. You need to have a torrent client to download the bot files. The website also provides some information and tips about downloadbotautohuntperfectworld and Perfect World.
- trello.com
-This is a website that offers downloadbotautohuntperfectworld as a Trello board that you can access for free. The Trello board contains a link to a Google Drive folder that contains the bot files and instructions. You need to have a Google account to access the folder and download the bot files. The Trello board also provides some updates and news about downloadbotautohuntperfectworld and Perfect World.
- The risks and precautions of using downloadbotautohuntperfectworld
-While using downloadbotautohuntperfectworld can be beneficial and enjoyable, it also comes with some risks and challenges that you should be aware of and prepared for. Here are some of them:
- Potential malware and viruses
-As mentioned earlier, not all sources for downloadbotautohuntperfectworld are safe and reliable. Some of them might contain malicious software that can infect your computer or steal your personal information. To avoid this, you should always scan the bot files with an antivirus program before installing or running them. You should also avoid clicking on any suspicious links or pop-ups that might appear while using the bot.
- Legal and ethical issues
-Using bots in online games is generally considered cheating and unfair by the game developers and administrators, as well as by other players who play by the rules. This can result in legal actions or penalties against you or your account, such as bans, suspensions, fines, or lawsuits. To avoid this, you should always follow the game's terms of service and code of conduct while using the bot. You should also respect other players' rights and feelings while playing the game.
- How to protect yourself and your account
-If you decide to use the bot, you should take some precautions and measures to protect yourself and your account. Here are some of them:
-
-Use a VPN or proxy service to hide your IP address and location while using the bot.
-Use a separate or disposable email address and password for your Perfect World account and your bot account.
-Use a different character name and appearance for your bot character than your main character.
-Use the bot only for a limited amount of time per day or per week, and vary your schedule and activities.
-Do not use the bot in public or crowded areas where other players might notice or report you.
-Do not brag or boast about using the bot or your achievements in the game.
-Do not abuse or harass other players or interfere with their gameplay while using the bot.
-
- Conclusion
-In conclusion, downloadbotautohuntperfectworld is a tool that can help you automate your hunting and questing activities in Perfect World, a popular MMORPG game. It can save you time and energy, enhance your gaming experience, and avoid bans and detection. However, it also comes with some risks and challenges that you should be aware of and prepared for. You should only download downloadbotautohuntperfectworld from reputable and verified sources, and use it responsibly and ethically. You should also protect yourself and your account by following some precautions and measures. By doing so, you can enjoy downloadbotautohuntperfectworld and Perfect World without any problems or regrets.
- FAQs
-Here are some frequently asked questions about downloadbotautohuntperfectworld:
- Q: Is downloadbotautohuntperfectworld free or paid?
-A: Downloadbotautohuntperfectworld is free to download and use from the sources listed above. However, some sources might require you to register an account or complete a survey before accessing the download link. You might also need to pay for some additional services or features such as VPN or proxy, antivirus, or premium membership.
- Q: Is downloadbotautohuntperfectworld safe or risky?
-A: Downloadbotautohuntperfectworld is safe to use if you download it from reputable and verified sources, scan it with an antivirus program, and follow the instructions carefully. However, it is also risky to use if you download it from untrustworthy or unknown sources, run it without checking it, or ignore the instructions. It might contain malware or viruses that can harm your computer or your account. It might also get you banned or detected by the game's anti-cheat system.
- Q: Is downloadbotautohuntperfectworld legal or illegal?
-A: Downloadbotautohuntperfectworld is legal to use as long as you do not violate any laws or regulations in your country or region while using it. However, it is illegal to use if you break any laws or regulations in your country or region while using it. For example, if you use it to hack, scam, or steal from other players or the game developers. You might face legal actions or penalties such as fines, lawsuits, or imprisonment.
- Q: Is downloadbotautohuntperfectworld ethical or unethical?
-A: Downloadbotautohuntperfectworld is ethical to use if you respect other players' rights and feelings while using it. However, it is unethical to use if you disrespect other players' rights and feelings while using it. For example, if you use it to cheat, exploit, or harass other players or interfere with their gameplay. You might lose your reputation or credibility among other players or the game community.
- Q: Is downloadbotautohuntperfectworld worth it or not?
-A: Downloadbotautohuntperfectworld is worth it if you want to save time and energy and progress faster in the game. It is not worth it if you prefer to play manually and authentically, and want to avoid any risks or challenges while playing Perfect World. It depends on your personal preferences and goals in the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ETHNAUDIO Breath Of Anatolia (KONTAKT) Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ETHNAUDIO Breath Of Anatolia (KONTAKT) Download.md
deleted file mode 100644
index 2b9bec3a7bd0fe798b3309e52a95dbcba19fec41..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ETHNAUDIO Breath Of Anatolia (KONTAKT) Download.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-ETHNAUDIO Breath of Anatolia (KONTAKT) Download
- If you are looking for a library of authentic and diverse ethnic winds for your music production, you might want to check out ETHNAUDIO Breath of Anatolia (KONTAKT). This product is a collection of 15+ ethnic winds from Turkey and neighboring regions, such as doudouk, kaval, kawala, mey, ney, tulum, Turkish clarinet and zurna. You can use these instruments to create rich and realistic melodies, harmonies and soundscapes for your musical projects. In this article, we will review ETHNAUDIO Breath of Anatolia (KONTAKT) and tell you everything you need to know about it.
-Download link: https://byltly.com/2uKwEI
- What is ETHNAUDIO Breath of Anatolia?
- ETHNAUDIO Breath of Anatolia (KONTAKT) is a library of different ethnic winds that you can use with Kontakt or Kontakt Player, which are Native Instruments products. Kontakt is a powerful sampler that allows you to load and play various sounds and instruments with high quality and flexibility. Kontakt Player is a free version of Kontakt that you can download from Native Instruments website.
- ETHNAUDIO Breath of Anatolia (KONTAKT) includes over 1 GB of data, featuring 15+ ethnic winds from Anatolia and surrounding regions. Each instrument has its own unique sound, character and expression. You can control various parameters such as volume, pan, reverb, delay, microtonal tuning, effects and more. You can also switch between different articulations such as legato, staccato, vibrato and zone.
- ETHNAUDIO Breath of Anatolia (KONTAKT) is compatible with both Mac and Windows versions. It requires Kontakt 5.6.5 or higher for a smooth and fast performance. It has a simple installation with a modern intuitive user interface that allows you to easily access and edit your sounds.
- What are the benefits of using ETHNAUDIO Breath of Anatolia?
- There are many reasons why you might want to use ETHNAUDIO Breath of Anatolia (KONTAKT) for your music production. Here are some of them:
-
-You can add a touch of exoticism and diversity to your music with these ethnic winds. They can create a sense of atmosphere, emotion and culture for your listeners.
-You can explore different musical genres and styles with these instruments. They can fit well with various types of music such as world music, ethnic fusion, ambient, cinematic, folk, pop, rock and more.
-You can learn more about the musical traditions and history of Anatolia and neighboring regions with these instruments. They have a rich and ancient heritage that reflects their origins, influences and evolution.
-You can improve your musical skills and creativity with these instruments. They can challenge you to play with different scales, modes, rhythms and techniques that are different from the Western music system.
-You can have fun and enjoy playing these instruments with Kontakt or Kontakt Player. They have a realistic and expressive sound that responds to your playing style and dynamics.
-
- How to install and use ETHNAUDIO Breath of Anatolia?
- If you want to install and use ETHNAUDIO Breath of Anatolia (KONTAKT), you need to follow these steps:
-
-Download ETHNAUDIO Breath of Anatolia (KONTAKT) from the official website or an authorized dealer. You will receive a zip file containing the library files.
-Extract the zip file to a folder on your computer. You can use any software that can handle zip files such as WinZip or 7-Zip.
-Open Kontakt or Kontakt Player on your computer. You can download Kontakt Player for free from Native Instruments website if you don't have it already.
-Add ETHNAUDIO Breath of Anatolia (KONTAKT) library to your Kontakt libraries tab by clicking on the "Add Library" button. You will need to locate the folder where you extracted the library files.
-Activate ETHNAUDIO Breath of Anatolia (KONTAKT) library by entering your serial number that you received when you purchased the product. You will need to connect to the internet for this step.
-Load any instrument from ETHNAUDIO Breath of Anatolia (KONTAKT) library by double-clicking on its name in the libraries tab. You will see its interface on the main window.
-Play any instrument from ETHNAUDIO Breath of Anatolia (KONTAKT) library using your MIDI keyboard or controller. You can adjust various settings such as volume, pan, reverb, delay, microtonal tuning, effects and more using the knobs and buttons on the interface.
-
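-Step 2 (extracting the zip) needs no third-party tools: Python's standard `zipfile` module handles it. The sketch below builds a throwaway archive so it is self-contained; for the real library you would point `extractall` at the zip you actually downloaded (the file and folder names here are made up).

```python
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "breath_of_anatolia.zip")  # hypothetical name
target = os.path.join(workdir, "BreathOfAnatolia")         # hypothetical name

# Build a tiny stand-in archive so the example runs anywhere.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("readme.txt", "See the readme for installation steps.")

# Extract -- for the real product, 'archive' is the zip you downloaded.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

print(os.listdir(target))  # ['readme.txt']
```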
- What are the technical specifications of ETHNAUDIO Breath of Anatolia?
- Before you buy or download ETHNAUDIO Breath of Anatolia (KONTAKT), you need to make sure that your computer meets the minimum system requirements for running it smoothly. Here is a table that shows the technical specifications of ETHNAUDIO Breath of Anatolia (KONTAKT):
-
-| Parameter | Value |
-| --- | --- |
-| Operating System | Windows XP/Vista/7/8/8.1/10 or Mac OS X 10.9 or higher |
-| Memory (RAM) | 1 GB or more |
-| Hard Disk Space | 1.5 GB or more |
-| Processor | Intel Dual Core processor or higher |
-| Kontakt Version | Kontakt 5.6.5 or higher (full or Player) |
-| Library Size | 1.1 GB uncompressed |
-| Instruments | 15+ ethnic winds from Turkey and neighboring regions |
-| Samples | 24 bit / 44.1 kHz stereo WAV format |
-| User Interface | Modern intuitive design with easy access to parameters |
-| NKS Compatibility | Yes (version 2.0) |
-| Price | $159 regular price / $139 discounted price |
-| Contact Details | Email: info@ethnaudio.com / Website: https://ethnaudio.com / Facebook: https://www.facebook.com/ethnaudio / Twitter: https://twitter.com/ethnaudio / YouTube: https://www.youtube.com/channel/UCYwXZqQnJmQx9t0mQZVwLjg |
-
- What are some examples of music created with ETHNAUDIO Breath of Anatolia?
-If you want to hear what ETHNAUDIO Breath of Anatolia (KONTAKT) sounds like, you can check out some examples of music created with this library. Here are some links to YouTube videos that showcase the product's ethnic winds library:
-
- You can also visit the product's website and listen to more demos and testimonials from other users and customers.
- Where to buy ETHNAUDIO Breath of Anatolia?
- If you are interested in buying ETHNAUDIO Breath of Anatolia (KONTAKT), you have several options to choose from. You can buy it directly from the product's website, or from an authorized dealer or reseller. You can also get it for free if you are lucky enough to win a giveaway or a contest. However, you should be careful not to download illegal copies of the product from untrusted sources, as they may contain viruses, malware or spyware that can harm your computer and compromise your personal information. In this section, we will tell you more about these options and how to get the best deal for your money.
- How much does ETHNAUDIO Breath of Anatolia cost?
- The regular price of ETHNAUDIO Breath of Anatolia (KONTAKT) is $159. However, you can get it for a discounted price of $139 if you buy it before the end of the month. This is a limited time offer that you don't want to miss. You can save $20 and get access to a library of 15+ ethnic winds that will enhance your music production.
- To buy ETHNAUDIO Breath of Anatolia (KONTAKT) at the discounted price, you need to visit the product's website and add it to your cart. You can pay with PayPal or credit card. You will receive an email with your serial number and a download link for the product. You can also access your account and download the product anytime from the website.
- How to get ETHNAUDIO Breath of Anatolia for free?
- If you want to get ETHNAUDIO Breath of Anatolia (KONTAKT) for free, you have two options. One is to participate in a giveaway or a contest that the product's developers or partners may organize from time to time. You can follow their social media pages and newsletters to stay updated on these opportunities. You may need to complete some tasks or answer some questions to enter the giveaway or contest. If you are lucky enough, you may win a free copy of the product.
- The other option is to download an illegal copy of the product from a torrent site or a file sharing service. However, we strongly advise you not to do this, as it is illegal, unethical and risky. You may face legal consequences for violating the product's license agreement and intellectual property rights. You may also expose your computer and personal information to viruses, malware or spyware that may be hidden in the illegal copy. You may also miss out on updates, support and features that the official product offers.
- Therefore, we recommend you to buy ETHNAUDIO Breath of Anatolia (KONTAKT) from the official website or an authorized dealer or reseller. This way, you will support the product's developers and their hard work, and enjoy a high-quality and safe product that will enhance your music production.
- How to contact ETHNAUDIO for support and feedback?
- If you have any questions, issues or feedback about ETHNAUDIO Breath of Anatolia (KONTAKT), you can contact ETHNAUDIO for support and feedback. They are always happy to hear from their customers and users, and they will try their best to help you and improve their products.
- You can contact ETHNAUDIO by email, website or social media. Here are their contact details:
-
-Email: info@ethnaudio.com
-Website: https://ethnaudio.com
-Facebook: https://www.facebook.com/ethnaudio
-Twitter: https://twitter.com/ethnaudio
-YouTube: https://www.youtube.com/channel/UCYwXZqQnJmQx9t0mQZVwLjg
-
- You can also visit their website and check out their FAQ section for more information about their products and services.
- Conclusion
- In conclusion, ETHNAUDIO Breath of Anatolia (KONTAKT) is a library of 15+ ethnic winds from Turkey and neighboring regions that you can use with Kontakt or Kontakt Player. It is a great product for music producers and composers who want to add a touch of exoticism and diversity to their music with these authentic and realistic instruments. It has many features and benefits such as microtonal tuning, effects, NKS compatibility and more. It is easy to install and use, and it has a reasonable price and customer support.
- If you want to buy ETHNAUDIO Breath of Anatolia (KONTAKT), you can get it from the official website or an authorized dealer or reseller at a discounted price of $139 before the end of the month. You can also try your luck in a giveaway or contest that may be organized by the product's developers or partners. However, you should avoid downloading illegal copies of the product from untrusted sources, as they may harm your computer and personal information.
- If you have any questions, issues or feedback about ETHNAUDIO Breath of Anatolia (KONTAKT), you can contact ETHNAUDIO by email, website or social media. They will be happy to help you and improve their products.
- We hope this article has been helpful and informative for you. If you are interested in ETHNAUDIO Breath of Anatolia (KONTAKT), don't hesitate to get it now and enjoy creating amazing music with these ethnic winds.
- FAQs
- Here are some frequently asked questions and answers about ETHNAUDIO Breath of Anatolia (KONTAKT):
-
-What is Kontakt? Kontakt is a powerful sampler that allows you to load and play various sounds and instruments with high quality and flexibility. It is developed by Native Instruments, a leading company in music technology.
-Do I need Kontakt to use ETHNAUDIO Breath of Anatolia? You need Kontakt 5.6.5 or higher (full version or Player) to use ETHNAUDIO Breath of Anatolia (KONTAKT). If you don't have it already, you can download Kontakt Player for free from the Native Instruments website.
-What are ethnic winds? Ethnic winds are musical instruments that produce sound by blowing air into them. They are usually made of wood, metal or clay, and they have different shapes, sizes and sounds depending on their origin and culture.
-What are microtonal tuning and NKS compatibility? Microtonal tuning is a feature that allows you to tune each note or key of an instrument according to different scales or modes that are different from the Western music system. NKS compatibility is a feature that allows you to integrate your instrument with Native Instruments hardware such as keyboards or controllers.
-How can I learn more about ETHNAUDIO Breath of Anatolia? You can learn more about ETHNAUDIO Breath of Anatolia (KONTAKT) by visiting their website, watching their videos, reading their testimonials, contacting them for support and feedback, or trying their demos.
-
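-As a general illustration of how microtonal tuning works (this is a sketch of the underlying math, not ETHNAUDIO's actual implementation), pitch offsets are commonly measured in cents, where 100 cents equal one equal-tempered semitone and 1200 cents equal an octave:

```python
def cents_to_freq(ref_hz: float, cents: float) -> float:
    """Frequency of a note offset from a reference pitch by `cents`.

    100 cents = one equal-tempered semitone; microtonal systems such as
    Turkish makam theory use intervals that fall between these steps.
    """
    return ref_hz * 2 ** (cents / 1200)


# A quarter-tone (50 cents) above A4 (440 Hz) -- an interval common in
# makam-based music that standard Western 12-tone tuning cannot represent.
print(round(cents_to_freq(440.0, 50), 2))   # ~452.89
```

-In practice, a Kontakt library exposes such tunings through the instrument's own controls, so you rarely compute frequencies yourself; the formula just shows why microtonal scales need finer steps than the 12-tone system provides.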
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fax Voip T38 Keygen [PORTABLE] Idm.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fax Voip T38 Keygen [PORTABLE] Idm.md
deleted file mode 100644
index ad4914050ee525a748799edf9f73b8a314d2fb2f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fax Voip T38 Keygen [PORTABLE] Idm.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-How to Use Fax Voip T38 Keygen Idm to Send and Receive Faxes Over the Internet
-Faxing is a traditional way of transmitting documents and forms, but it can be costly and inconvenient to use a fax machine and a phone line. Fortunately, there is a way to send and receive faxes over the internet using a protocol called T.38. T.38 is a standard that defines how fax data can be converted into an image file and sent over a VoIP (Voice over Internet Protocol) network. This process is also known as Fax over IP (FoIP) or virtual fax.
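-As a general illustration of that protocol switch (a simplified sketch based on the ITU-T T.38 standard, not taken from Fax Voip's documentation; the port number and parameter values here are illustrative), a T.38-capable endpoint re-negotiates the call from an audio stream to a fax image stream via SDP, with an offer for the fax leg along these lines:

```
m=image 49170 udptl t38
a=T38FaxVersion:0
a=T38MaxBitRate:14400
a=T38FaxRateManagement:transferredTCF
a=T38FaxMaxDatagram:316
a=T38FaxUdpEC:t38UDPRedundancy
```

-The `m=image ... udptl t38` line replaces the usual `m=audio` line, which is how the gateway isolates fax data from the jitter and packet loss that would corrupt a fax sent as in-band audio.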
-To use T.38 faxing, you need a software or a device that supports this protocol. One of the options you can try is Fax Voip T38 Keygen Idm, which is a virtual fax and voice modem for SIP, H.323, and ISDN CAPI 2.0 networks. Fax Voip T38 Keygen Idm allows you to use Microsoft Fax or any other standard fax-voice software to send or receive faxes and audio messages via VoIP. It also provides incoming and outgoing fax routing options, such as e-mail, store in a folder, print, or custom routing. You can also use the Mail to Fax function to send faxes directly from your e-mail application.
-Fax Voip T38 Keygen Idm DOWNLOAD >>> https://byltly.com/2uKzFJ
-To use Fax Voip T38 Keygen Idm, you need to download and install it on your computer. You also need to register it with a license key that you can obtain from the official website or from other sources. After that, you need to configure the settings according to your network and fax service provider. You can find detailed instructions on how to do that in the user manual or on the website. Once you have set up everything, you can start sending and receiving faxes over the internet using your fax-voice software or your e-mail client.
-T.38 faxing has many benefits over traditional faxing. It saves you money on phone bills and paper costs. It also saves you time and hassle by eliminating the need for a physical fax machine and a phone line. It also ensures high-quality and reliable transmission of your faxes, as it isolates them from the delays, jitter, and packet loss that may occur in VoIP networks. T.38 faxing also supports color faxes and multiple SIP registrations.
-If you are looking for a way to modernize your faxing needs, you should consider using T.38 faxing with Fax Voip T38 Keygen Idm. It is a flexible and convenient solution that works with any standard fax-voice software and any SIP/H.323/ISDN CAPI 2.0 network. It also offers many features and options that make your faxing experience more efficient and enjoyable.
-
-How to Use Fax Voip T38 Keygen Idm to Send and Receive Faxes Over the Internet (Continued)
-In this article, we have explained what T.38 faxing is and how it works. We have also introduced Fax Voip T38 Keygen Idm, which is a software that enables you to use T.38 faxing with any standard fax-voice software and any SIP/H.323/ISDN CAPI 2.0 network. Now, we will show you some examples of how to use Fax Voip T38 Keygen Idm to send and receive faxes over the internet.
-Example 1: Sending a fax using Microsoft Fax
-
-If you want to send a fax using Microsoft Fax, you need to have Fax Voip T38 Keygen Idm installed and configured on your computer. You also need to have Microsoft Fax installed and set up as your default fax printer. Then, you can follow these steps:
-
-Open the document that you want to fax in any application that supports printing.
-Select File > Print and choose Microsoft Fax as your printer.
-Click Print and enter the recipient's fax number in the To field.
-Click Send to start sending the fax.
-
-You can monitor the status of your fax in the Fax Console or in the Fax Voip T38 Keygen Idm Monitor window. You can also view the details of your fax in the Outbox folder of Microsoft Fax.
-Example 2: Receiving a fax using Microsoft Fax
-If you want to receive a fax using Microsoft Fax, you need to have Fax Voip T38 Keygen Idm installed and configured on your computer. You also need to have Microsoft Fax installed and set up as your default fax printer. Then, you can follow these steps:
-
-Make sure that your computer is turned on and connected to the internet.
-Wait for an incoming fax call from your SIP/H.323/ISDN CAPI 2.0 network.
-Fax Voip T38 Keygen Idm will answer the call and receive the fax data.
-Fax Voip T38 Keygen Idm will route the fax according to your settings. For example, it can send it to your e-mail, store it in a folder, print it, or use custom routing.
-
-You can monitor the status of your fax in the Fax Console or in the Fax Voip T38 Keygen Idm Monitor window. You can also view the details of your fax in the Inbox folder of Microsoft Fax.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop CS6 Extended 13.0.1.1 Crack Download VERIFIED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop CS6 Extended 13.0.1.1 Crack Download VERIFIED.md
deleted file mode 100644
index 78f4c5aa13c78b5bd29ff8673b87b7ac2ce214ff..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop CS6 Extended 13.0.1.1 Crack Download VERIFIED.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Adobe Photoshop CS6 Extended 13.0.1.1 crack download Download ✔ https://imgfil.com/2uxXsg
-
-August 29, 2019 - Adobe Photoshop CS6 Extended software provides even more options for working with images, as well as the Adobe Mercury graphics engine for incredible performance. Adobe Photoshop CS6 Extended offers:
-Improved drawing and color-correction capabilities.
-Creation of realistic photos.
-Creation of expressive artistic portraits.
-Creation of high-quality photos for web publishing.
-Seamless fonts in Adobe Photoshop for vector editing.
-Improved transparency and animation capabilities.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Capella Wave Kit 2 0 Keygen.rar __EXCLUSIVE__.md b/spaces/1gistliPinn/ChatGPT4/Examples/Capella Wave Kit 2 0 Keygen.rar __EXCLUSIVE__.md
deleted file mode 100644
index 6b407012402e4dc8aa18b05dced2b5a97adf12c3..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Capella Wave Kit 2 0 Keygen.rar __EXCLUSIVE__.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-2000 Capella Wave Kit 2 0 Keygen.zip, Capella Professional 2010-7.0.01 Refracto.exe, Capella Professional 2010-7.0.01 [NO DELAY. 20 hours ago. Step 1: Choose the text on which you want to add the Capsula wave effect. -LINPLUG-DEEPSTATUS-V3-E2-0-INCL-_best_-keygen-R2R-DEEPSTATUS-V2-3-0-DESTRO-FINAL-EFX-_LINPLUG-RPNG-v3-1-0-INCL-_best_-keygen-R2R-RPNG-v3-1-0-EFX-_LINPLUG-RPNG.
-Capella Wave Kit 2 0 Keygen.rar Download File ✔ https://imgfil.com/2uxZiq
-2016 Capella Wave Kit 2 0 Keygen.zip, Capella Professional 2010-7.0.01 Refracto.exe, Capella Professional 2010-7.0.01 [NO DELAY. 20 hours ago. Step 1: Choose the text on which you want to add the Capsula wave effect. 5000 Capella Wave Kit 2 0 Keygen.rar. Capella Wave Kit 2 0 Keygen.zip, Capella Professional 2010-7.0.01 Refracto.exe, Capella Professional 2010-7.0.01 [NO DELAY.
-2000 Capella Wave Kit 2 0 Keygen.rar. Capella Wave Kit 2 0 Keygen.zip, Capella Professional 2010-7.0.01 Refracto.exe, Capella Professional 2010-7.0.01 [NO DELAY. 2000 Capella Wave Kit 2 0 Keygen.rar
-5000 Capella Wave Kit 2 0 Keygen.rar. Capella Wave Kit 2 0 Keygen.zip, Capella Professional 2010-7.0.01 Refracto.exe, Capella Professional 2010-7.0.01 [NO DELAY. 2000 Capella Wave Kit 2 0 Keygen.rar
-
-the city of quito, the nation's capital, may not be the first place you would think of to go to heal from a breakup, but after spending a few days in the city, it's easy to see why this is the place for all things healing. capella wave kit 2 0 keygen [upd].rar.
-capella wave kit 2 0 keygen [upd].rar. autodesk autodesk autodesk autodesk autodesk autodesk autodesk. step 2. type hello world! and press enter.. type 0% and enter 0% to create a shape around your text. capella wave kit 2 0 keygen [upd].
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Carpassopeldownload.md b/spaces/1gistliPinn/ChatGPT4/Examples/Carpassopeldownload.md
deleted file mode 100644
index 7a542429844f09169e4eddf94773f0cae4823e7a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Carpassopeldownload.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-CarPassOpelDownload V2.7.0.2 Carpass Opel Download: Carpass OpelDownload free Full Version now. While replying, please remember to include the following details:. CarPassOpelDownload (carpassopel download):.
-Can you let me know what i did wrong. CarPassOpelDownload v4.0.4.9 carpass opel download cracked. January 24, 2018 at 7:23 am. Carpass 2.9.1 Carpass Opel Download 2015. carpass opel corsa download, carpass download for 2015, carpass corsa, carpass opel. . CarPassOpelDownload 2016 carpass opel download cracked Carpass Opel Download 2016.
-carpassopeldownload Download File >>>>> https://imgfil.com/2uxY2u
-Read more details about carpass-data. CarPassOpelDownload 17.7.1 Crack Download. car pass opel corsa crack like 3-4 times in a single use. car pass. Car Pass Opel Download. car pass opel, car pass opel generator, car pass opel corsa,. Car Pass Opel Download Carpass OpelDownload. carpassopel download
-CarPassOpelDownload. I will find a way to hack it for you, Car Pass Opel Download Lotty from A to Z Kwik. car pass opel download, car pass opel deutsch, car pass opel corsa. It is possible to download this script.
-carpassopeldownload. car pass opel generator, car pass opel corsa. Car Pass Opel Download. car pass opel, car pass opel generator, car pass opel corsa, corsa, opal card pass, corsa, gt2, pista opel, iv alfa romeo alfa romeo add.
-Car Pass Opel Download Traffic Generator. car pass opel download - carpassopeldownload - carpassopel generator - carpassopel - carpassopel download. Only the most popular X-Plane stores show above. The most popular places are displayed first. Show all. car pass opel download. car pass opel generator. Car Pass Opel Download. Car Pass Opel Download Traffic Generator. X-Plane Store. Window X-Plane Download.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga 1.254.2.1 Mod APK Unlimited Lives Boosters and Moves.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga 1.254.2.1 Mod APK Unlimited Lives Boosters and Moves.md
deleted file mode 100644
index 1ebe3fc5e2fc35a9c2c6f98aef1fc396e0f107c0..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga 1.254.2.1 Mod APK Unlimited Lives Boosters and Moves.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-Candy Crush Saga New Version Mod Apk: Everything You Need to Know
-If you are a fan of match-three puzzle games, you have probably heard of Candy Crush Saga, one of the most popular and addictive games of its genre. But did you know that there is a way to enjoy the game even more, with unlimited lives, boosters, and other perks? That's right, we are talking about Candy Crush Saga Mod Apk, a modified version of the original game that gives you access to all the features and levels without spending a dime. In this article, we will tell you everything you need to know about Candy Crush Saga Mod Apk, including what it is, how to download and install it, and how to play it like a pro.
-candy crush saga new version mod apk Download Zip ✵ https://urlin.us/2uSWpg
- What is Candy Crush Saga?
-Candy Crush Saga is a free-to-play tile-matching video game released by King in 2012. It is available for various platforms, such as Facebook, iOS, Android, Windows Phone, and Windows 10. It is a variation of their browser game Candy Crush.
- The basics of the game
-The game's premise is simple. You have a level full of candies. Match three or more candies of the same color to clear them from the board and score points. You can also create special candies by matching four or more candies in different ways. These special candies have different effects when matched, such as clearing a row, column, or other section of the board. You have a limited number of moves or time to complete each level's objective, which can vary from reaching a certain score to clearing all the jelly blocks.
- The different level types and objectives
-There are five types of levels in Candy Crush Saga, each with its own color and objective:
-
-Moves Levels are colored orange. You just have to reach the target score in a limited number of moves.
-Jelly Levels are colored blue. You have to clear all the jelly blocks in a limited number of moves.
-Ingredient Levels are colored green. You have to get the cherries or hazelnuts to certain spaces on the board within a limited number of moves.
-Time Levels are colored purple. You have to reach the target score in a limited amount of time.
-Color Order Levels are colored pink. You have to collect a specific number of candies in a certain order within a limited number of moves.
-
- What is Candy Crush Saga Mod Apk?
-Candy Crush Saga Mod Apk is a modified version of the original game that has been altered to include additional features, such as unlimited lives and boosters that can help you progress through levels faster. The mod apk also removes all advertisements from the game so you won't be interrupted while playing.
- The benefits of using the mod apk
-There are many benefits of using the mod apk over the original game, such as:
-
-You can play any level you want without waiting for lives to refill or asking your friends for help.
-You can use any booster you want without spending real money or gold bars.
-You can skip hard levels or retry failed levels without losing lives or boosters.
-You can enjoy the game without annoying ads or pop-ups.
-
- The features of the mod apk
-The mod apk has many features that make it different from the original game, such as:
-
-All levels and episodes are unlocked.
-All boosters are unlocked and unlimited.
-
- How to Download and Install Candy Crush Saga Mod Apk?
-If you are interested in trying out the mod apk version of Candy Crush Saga, you will need to follow some simple steps to download and install it on your device. Here is how you can do it:
- The steps to download and install the mod apk
-
-First, you will need to uninstall the original Candy Crush Saga game from your device, if you have it installed. This is because the mod apk will replace the original game and you cannot have both versions at the same time.
-Next, you will need to find a reliable source to download the mod apk file. You can use the link provided by or search for other websites that offer the mod apk. Make sure you download the latest version of the mod apk, which is 1.254.2.1 as of this writing.
-Once you have downloaded the mod apk file, you will need to enable the installation of apps from unknown sources on your device. To do this, go to your device's settings, then security, and then toggle on the option that allows installing apps from unknown sources.
-Now, you can go to your device's file manager and locate the mod apk file that you downloaded. Tap on it and follow the instructions to install it on your device.
-After the installation is complete, you can launch the game and enjoy all the features of the mod apk.
-
- The precautions to take before installing the mod apk
-While installing and using the mod apk can be fun and easy, there are some risks and drawbacks that you should be aware of before doing so. Here are some precautions that you should take before installing the mod apk:
-
-Make sure you back up your progress and data from the original game before uninstalling it. You can do this by connecting your game to Facebook or using other cloud services. This way, you can restore your progress if you decide to switch back to the original game or if something goes wrong with the mod apk.
-Be careful about where you download the mod apk file from. Some websites may offer fake or malicious files that can harm your device or steal your personal information. Only download from trusted sources and scan the file with an antivirus before installing it.
-Be aware that using the mod apk may violate the terms of service of Candy Crush Saga and King. This means that you may face consequences such as losing your account, getting banned, or facing legal action. Use the mod apk at your own risk and discretion.
-
- Tips and Tricks to Play Candy Crush Saga Like a Pro
-Now that you have installed the mod apk version of Candy Crush Saga, you may be wondering how to make the most of it and play like a pro. Here are some tips and tricks that can help you improve your skills and score higher in the game:
- The best and worst combos to use
-As we mentioned earlier, combining two special candies can create a powerful effect that can clear a lot of candies from the board. However, not all combos are equally useful or effective. Here are some of the best and worst combos to use in Candy Crush Saga:
-
-| Best Combos | Worst Combos |
-| --- | --- |
-| Striped + Wrapped: This combo clears three rows and three columns of candies, creating a huge explosion that can help you complete any level objective. | Wrapped + Wrapped: This combo only destroys eight candies around each wrapped candy twice, which is not very impressive compared to other combos. |
-| Striped + Color Bomb: This combo turns all candies of the same color as the striped candy into striped candies, and then activates them all at once. This can clear almost half of the board in one move. | Color Bomb + Wrapped: This combo turns all candies of the same color as the wrapped candy into wrapped candies, and then activates them all at once. This can create a lot of explosions, but they are not very effective at clearing jellies or ingredients. |
-| Color Bomb + Color Bomb: This combo clears all candies from the board, giving you a huge score boost and making any level objective easier to achieve. | Striped + Striped: This combo clears one row and one column of candies, which is not very impressive compared to other combos. It can be useful in some situations, but it is better to save your striped candies for other combos. |
-
- The strategies to clear levels faster and score higher
-Apart from using special candies and combos, there are some strategies that can help you clear levels faster and score higher in Candy Crush Saga. Here are some of them:
-
-Focus on the level objective. Don't waste your moves or time on clearing candies that are not related to the level objective. For example, if you need to clear jelly blocks, focus on matching candies on or near them, rather than on the other side of the board.
-Plan your moves ahead. Try to think of the consequences of each move before you make it. Look for opportunities to create special candies or combos, or to clear obstacles or blockers. Also, try to avoid moves that can ruin your chances of creating special candies or combos in the future.
-Use boosters wisely. Boosters can be very helpful in some levels, but they are not unlimited. You can either buy them with real money or gold bars, or earn them by completing quests or events. Therefore, you should use them sparingly and only when you really need them. For example, you can use a lollipop hammer to clear a stubborn candy that is preventing you from completing the level, or a free switch to swap two candies that can create a powerful combo.
-Learn from your mistakes. If you fail a level, don't give up or get frustrated. Instead, try to analyze what went wrong and what you can do better next time. You can also watch videos of other players who have completed the level and learn from their strategies and moves.
-
- Conclusion
-Candy Crush Saga is a fun and addictive game that can keep you entertained for hours. However, if you want to enjoy the game even more, you can try the mod apk version that gives you unlimited lives, boosters, and other perks. In this article, we have explained what Candy Crush Saga Mod Apk is, how to download and install it, and how to play it like a pro. We hope you found this article helpful and informative. Now go ahead and crush some candies!
- FAQs
-Here are some frequently asked questions about Candy Crush Saga Mod Apk:
-
-Is Candy Crush Saga Mod Apk safe to use?
-Yes, Candy Crush Saga Mod Apk is safe to use as long as you download it from a trusted source and scan it with an antivirus before installing it. However, you should be aware that using the mod apk may violate the terms of service of Candy Crush Saga and King, and you may face consequences such as losing your account, getting banned, or facing legal action.
-Can I play Candy Crush Saga Mod Apk online with other players?
-No, Candy Crush Saga Mod Apk is not compatible with the online features of the original game. You cannot connect your game to Facebook or play with your friends or other players online. You can only play offline with the mod apk.
-Can I update Candy Crush Saga Mod Apk to the latest version?
-Yes, you can update Candy Crush Saga Mod Apk to the latest version by downloading and installing the new mod apk file from the same source that you downloaded it from before. However, you should back up your progress and data before updating, as you may lose them during the process.
-Can I switch back to the original game after using the mod apk?
-Yes, you can switch back to the original game after using the mod apk by uninstalling the mod apk and reinstalling the original game from the official app store. However, you should back up your progress and data from the mod apk before uninstalling it, as you may lose them during the process.
-Can I use both the original game and the mod apk on the same device?
-No, you cannot use both the original game and the mod apk on the same device at the same time. You have to choose one version and uninstall the other one.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Championship Manager 01 02 Android REPACK.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Championship Manager 01 02 Android REPACK.md
deleted file mode 100644
index 2db3b9f14cc60972e143b1ecc075edd6e34f9b73..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Championship Manager 01 02 Android REPACK.md
+++ /dev/null
@@ -1,186 +0,0 @@
-
-How to Play Championship Manager 01/02 on Android
-Championship Manager 01/02 is a legendary football (or soccer) management game that retains a considerable fan base. It might not include a fancy 3D match engine, but for many fans this is the greatest game of its series that most of the more recent Football Manager titles have not eclipsed. This is a footie management game that gets the balance between realistic simulation and accessibility just right.
-If you're one of those fans who still enjoy playing CM 01/02, or if you're curious about what makes this game so special, you might be wondering how to play it on your Android device. After all, playing on a mobile device has many advantages, such as portability, convenience, and battery life. Luckily, there are ways to play CM 01/02 on Android, and in this article, we'll show you how.
-download championship manager 01 02 android DOWNLOAD 🌟 https://urlin.us/2uT2k8
-How to download and install CM 01/02 on Android
-The first thing you need to do is download and install an Android emulator on your PC or Mac. An emulator is software that mimics the Android operating system on your computer, allowing you to run Android apps and games. There are many emulators available, but we recommend BlueStacks, which is one of the most popular and easy-to-use options.
-Once you have BlueStacks installed, you need to download the CM 01/02 ISO file from the Champman 01/02 website. This is a free download that contains the original game files. You'll also need software called Daemon Tools Lite, which can mount the ISO file as a virtual CD drive on your computer.
-Here are the steps to follow:
-
-Open BlueStacks and sign in with your Google account.
-Go to the Champman 01/02 website and click on Downloads.
-Click on Championship Manager 01/02 - Official Download - Attempt #2.
-Click on Download Now and save the ZIP file on your computer.
-Extract the ZIP file using WinRAR or any other extraction tool.
-You should see a file called CM0102.iso. This is the ISO file that contains the game.
-Go to the Daemon Tools Lite website and download and install the software.
-Open Daemon Tools Lite and click on Add Image.
-Browse to the location where you saved the CM0102.iso file and select it.
-The ISO file should appear as a virtual CD drive in Daemon Tools Lite.
-Right-click on it and select Mount.
-A window should pop up with the CM 01/02 installation wizard. Follow the instructions to install the game on your computer.
-You should see a shortcut for CM 01/02 on your desktop. Right-click on it and select Properties.
-Go to Compatibility tab and check Run this program in compatibility mode for Windows XP (Service Pack 3).
-Click OK to save the changes.
-Double-click on the shortcut to launch the game on your computer.
-
-Congratulations, you have successfully installed CM 01/02 on your PC or Mac. Now, you need to transfer the game files to your Android device. Here's how:
-
-Connect your Android device to your computer using a USB cable.
-Open File Explorer on your computer and navigate to the folder where you installed CM 01/02. It should be something like C:\Program Files (x86)\Eidos Interactive\Championship Manager 01-02.
-Select all the files and folders in that folder and copy them.
-Open File Explorer on your Android device and create a new folder called CM0102 in the internal storage or SD card.
-Paste the files and folders you copied into the CM0102 folder.
-Disconnect your Android device from your computer.
-
-Now, you need to download and install an app called ExaGear Strategies on your Android device. This is an emulator that can run PC games on Android, including CM 01/02. You can get it from the Google Play Store for free.
-
-Open Google Play Store on your Android device and search for ExaGear Strategies.
-Tap on Install and wait for the app to download and install.
-Open ExaGear Strategies and tap on the + button at the bottom right corner.
-Select Championship Manager 01/02 from the list of games. If you don't see it, tap on Browse and locate the CM0102 folder you created earlier.
-Select the CM0102.exe file and tap on OK.
-The game should appear on the main screen of ExaGear Strategies. Tap on it to launch it.
-
-You're done! You can now play CM 01/02 on your Android device. Enjoy managing your favorite team and leading them to glory!
-
-How to update the game with the latest data and patches
-One of the amazing things about CM 01/02 is that it still receives updates from its dedicated community of fans. You can find the latest data updates, patches, mods, and more on the Champman 01/02 website and forum. These updates can fix bugs, improve performance, add new features, and most importantly, update the player database with the latest transfers, ratings, and attributes.
-To update the game with the latest data and patches, you need to download them from the website or forum and replace the existing files in your CM0102 folder. Here are some of the most popular updates you can get:
-
-The October 2020 Data Update: This is the most recent data update, covering all the transfers and changes from the summer 2020 window. It also includes some tweaks and fixes to make the game more realistic and balanced.
-The Tapani Patch: This patch adds many new features and options to the game, such as new leagues, new staff roles, new player attributes, new tactics, new skins, new sounds, and more. It also fixes some bugs and improves performance. There are different versions of this patch, but we recommend version 2.22, which is compatible with ExaGear Strategies.
-The Saturn Patch: This patch adds further features and fixes, such as improved AI, match engine, regens, finances, editor, and compatibility. It also works well with ExaGear Strategies.
-
-To install these updates, you need to follow these steps:
-
-Download the update files from the links above or from any other source you trust.
-Extract the files using WinRAR or any other extraction tool.
-Connect your Android device to your computer using a USB cable.
-Open File Explorer on your computer and navigate to the CM0102 folder on your Android device.
-Select all the files in that folder and copy them to a backup folder on your computer in case something goes wrong.
-Delete all the files in the CM0102 folder except for cm0102.exe and cm0102.gdi.
-Copy all the files from the update folder you extracted earlier into the CM0102 folder on your Android device.
-Disconnect your Android device from your computer.
-Open ExaGear Strategies and launch CM 01/02.
-You should see a message saying that the game has been updated. Click OK to continue.
-
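-The backup-and-replace routine above (back everything up, keep only cm0102.exe and cm0102.gdi, then drop in the update files) can be sketched as a small Python helper. This is an illustrative sketch, not an official updater, and the directory paths are placeholders:
-
-```python
-import shutil
-from pathlib import Path
-
-KEEP = {"cm0102.exe", "cm0102.gdi"}  # the two files the steps above say to preserve
-
-def apply_update(game_dir: Path, update_dir: Path, backup_dir: Path) -> None:
-    """Back up the game folder, strip it down to KEEP, then copy in the update."""
-    # 1. Full backup first, in case something goes wrong.
-    shutil.copytree(game_dir, backup_dir)
-    # 2. Delete everything except the files to keep.
-    for item in game_dir.iterdir():
-        if item.name.lower() in KEEP:
-            continue
-        if item.is_dir():
-            shutil.rmtree(item)
-        else:
-            item.unlink()
-    # 3. Copy the update files over.
-    for item in update_dir.iterdir():
-        target = game_dir / item.name
-        if item.is_dir():
-            shutil.copytree(item, target)
-        else:
-            shutil.copy2(item, target)
-```
-
-Doing the backup before the delete step matters: if an update turns out to be incompatible, you can restore the backup folder and try a different patch version.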
-That's it! You have successfully updated the game with the latest data and patches. You can now enjoy the game with more realism and variety.
-How to choose the best formation and tactics
-One of the most important aspects of CM 01/02 is choosing the right formation and tactics for your team. There are many factors to consider, such as your players' attributes, roles, preferences, morale, fitness, and form, as well as your opponents' strengths and weaknesses, the weather, the pitch condition, and the match situation.
-There is no definitive answer to what is the best formation and tactics for every team and every match, but there are some general guidelines and tips that can help you make better decisions. Here are some of them:
-
-Know your players: Study your players' attributes, roles, and preferences, and try to find the best position and duty for each of them. For example, if you have a fast and skillful winger, you might want to play him as an attacking midfielder on the flank and give him a forward run instruction. If you have a strong and tall striker, you might want to play him as a target man and give him a hold up ball instruction.
-Know your opponents: Scout your opponents before each match and try to identify their strengths and weaknesses. For example, if they have a weak defense, you might want to play more aggressively and exploit their flanks. If they have a strong attack, you might want to play more defensively and mark their key players.
-Know your style: Decide what kind of football you want to play and choose a formation and tactics that suit your style. For example, if you want to play a possession-based game, you might want to choose a formation with more midfielders and less strikers, such as 4-5-1 or 4-3-3. If you want to play a counter-attacking game, you might want to choose a formation with more strikers and less midfielders, such as 4-4-2 or 3-5-2.
-Know your options: Experiment with different formations and tactics and see how they affect your team's performance. You can use the pre-match screen or the in-game screen to change your formation and tactics at any time. You can also use the tactic wizard or the preset tactics to get some suggestions based on your team's attributes.
-
-To give you some examples of some of the most effective formations and tactical options in CM 01/02, here are some tables that compare them:
-
-Formation Advantages Disadvantages
-4-4-2 A balanced formation that provides width, depth, and support in both attack and defense. A vulnerable formation that can be outnumbered in midfield or exposed on the flanks.
-4-3-3 A flexible formation that can create numerical superiority in attack or midfield depending on the movement of the wingers. A demanding formation that requires high stamina, work rate, and teamwork from the wingers.
-4-5-1 A defensive formation that can dominate midfield and frustrate opponents with its compactness. A boring formation that can lack creativity and firepower in attack.
-3-5-2 An attacking formation that can overload opponents with its wing-backs and strikers. A risky formation that can leave gaps in defense or midfield if the wing-backs or central midfielders are caught out of position.
-5-3-2 A solid formation that can provide security in defense and support in attack with its sweeper and wing-backs. A conservative formation that can be predictable and passive in attack.
-
-
-Tactic Advantages Disadvantages
-Attacking A positive tactic that can create more chances and score more goals. A risky tactic that can leave your defense exposed and concede more goals.
-Defensive A cautious tactic that can protect your lead and prevent your opponents from scoring. A negative tactic that can invite pressure and reduce your chances of scoring.
-Long Ball A direct tactic that can bypass midfield and exploit the pace and strength of your strikers. A crude tactic that can waste possession and rely on luck and individual quality.
-Short Passing A patient tactic that can control the game and create openings with clever movement and passing. A slow tactic that can be easily disrupted by aggressive pressing and tackling.
-Counter Attack A smart tactic that can exploit the space behind your opponents' defense and catch them off guard. A reactive tactic that can depend on your opponents' mistakes and require fast transitions.
-
- Of course, these are just some examples of formations and tactics, and you can always customize them to suit your preferences and needs. The key is to find the right balance between attack and defense, creativity and discipline, and risk and reward. Remember, you are the manager, and you have the final say on how your team plays.
-How to find and sign the best players
-No matter how good your formation and tactics are, you still need quality players to execute them. Finding and signing the best players is one of the most challenging and rewarding aspects of CM 01/02. There are thousands of players in the game, but not all of them are worth your time and money. You need to scout them, negotiate with them, and convince them to join your team.
-There are many factors to consider when looking for players, such as their attributes, potential, age, nationality, personality, wage, value, contract, availability, etc. You also need to consider your team's needs, budget, reputation, vision, etc. It's not easy to find the perfect player for every position, but there are some tips that can help you:
-
-Use filters: The game has a powerful search engine that allows you to filter players by various criteria. You can use this to narrow down your search and find players that match your requirements. For example, if you need a young striker with high finishing and pace attributes, you can set the filters accordingly and see who comes up.
-Use scouts: The game has a scouting system that allows you to assign scouts to different regions or countries. You can use this to discover new players that might not be in your database. For example, if you want to find some hidden gems from South America or Africa, you can send your scouts there and see what they report back.
-Use forums: The game has a vibrant community of fans who share their knowledge and experience on various forums. You can use this to get some recommendations or feedback on players that you might be interested in. For example, if you want to know if a certain player is worth signing or not, you can ask other players who have used him or faced him.
-
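-The filter idea from the first tip is easy to picture as code. The sketch below runs the example query (a young striker with high finishing and pace) against a tiny hand-made player list; the players and the familiar 1-20 attribute scale are illustrative, not data taken from the game:
-
-```python
-# Illustrative player records on CM 01/02's 1-20 attribute scale.
-players = [
-    {"name": "Striker A", "position": "ST", "age": 19, "finishing": 17, "pace": 18},
-    {"name": "Striker B", "position": "ST", "age": 29, "finishing": 19, "pace": 12},
-    {"name": "Winger C", "position": "AM", "age": 20, "finishing": 11, "pace": 19},
-]
-
-def search(players, position, max_age, **min_attrs):
-    """Filter players by position, an age cap, and minimum attribute values."""
-    return [
-        p for p in players
-        if p["position"] == position
-        and p["age"] <= max_age
-        and all(p[attr] >= value for attr, value in min_attrs.items())
-    ]
-
-# Young striker with high finishing and pace:
-hits = search(players, position="ST", max_age=23, finishing=15, pace=15)
-print([p["name"] for p in hits])  # -> ['Striker A']
-```
-
-The in-game search screen works the same way: each filter you set discards everyone who fails it, so stacking a few tight criteria quickly narrows thousands of players down to a shortlist.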
-To give you some examples of some of the best players in CM 01/02, here are some tables that list them by position:
-
-Goalkeepers Nationality Club
-Gianluigi Buffon Italy Juventus
-Iker Casillas Spain Real Madrid
-Fabien Barthez France Manchester United
-Oliver Kahn Germany Bayern Munich
-Petr Cech Czech Republic Rennes
-
-
-Defenders Nationality Club
-Lilian Thuram France Juventus
-Alessandro Nesta Italy Lazio
-Roberto Carlos Brazil Real Madrid
-Paolo Maldini Italy AC Milan
-Rio Ferdinand England Leeds United
-
-
-Midfielders Nationality Club
-Zinedine Zidane France Real Madrid
-Luis Figo Portugal Real Madrid
-David Beckham England Manchester United
-Pavel Nedved Czech Republic Juventus
-Ronaldinho Brazil Paris Saint-Germain
-Claude Makelele France Real Madrid
-Andrea Pirlo Italy AC Milan
-Steven Gerrard England Liverpool
-Michael Ballack Germany Bayer Leverkusen
-
-
-Strikers Nationality Club
-Ronaldo Brazil Real Madrid
-Thierry Henry France Arsenal
-Ruud van Nistelrooy Netherlands Manchester United
-Raul Gonzalez Spain Real Madrid
-Andriy Shevchenko Ukraine AC Milan
-
- These are just some of the best players in CM 01/02, but there are many more that you can discover and sign. You can also use the editor to create your own players or edit the existing ones. However, be careful not to ruin the balance and fun of the game by making unrealistic changes.
-How to rotate your squad and manage fitness
-Another important aspect of CM 01/02 is managing your squad and keeping your players fit and happy. You can have the best players in the world, but if they are tired, injured, or unhappy, they won't perform well on the pitch. You need to rotate your squad and give your players enough rest and recovery time, as well as motivate them and keep them satisfied.
-There are many factors that affect your players' fitness and morale, such as their age, injury history, personality, form, playing time, contract, etc. You need to monitor these factors and make adjustments accordingly. Here are some tips that can help you:
-
-Use the fitness report: The game has a fitness report that shows you the condition and stamina of each player in your squad. You can access it by clicking on Squad > Fitness Report. You can use this to see which players need a rest or a boost.
-Use the squad status: The game has a squad status that shows you the happiness and morale of each player in your squad. You can access it by clicking on Squad > Squad Status. You can use this to see which players are unhappy or unsettled.
-Use the rotation policy: The game has a rotation policy that allows you to set how often you want to rotate your players. You can access it by clicking on Tactics > Rotation Policy. You can use this to automate your rotation and save time.
-Use the team talks: The game has a team talk feature that allows you to talk to your players before, during, and after each match. You can access it by clicking on Team Talk. You can use this to motivate your players and influence their performance.
-Use the player interaction: The game has a player interaction feature that allows you to talk to your players individually. You can access it by clicking on Player > Interaction. You can use this to praise, criticize, warn, or advise your players and affect their morale and attitude.
-
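-As a toy illustration of the rotation idea (rest the tired players, start the fittest), here is a sketch that picks one starter per position by condition. The squad data and the rest threshold are made up for the example; in the game itself you manage this through the screens listed above:
-
-```python
-# Illustrative squad data: condition as a percentage, as in the fitness report.
-squad = [
-    {"name": "GK One", "position": "GK", "condition": 95},
-    {"name": "GK Two", "position": "GK", "condition": 99},
-    {"name": "ST One", "position": "ST", "condition": 70},
-    {"name": "ST Two", "position": "ST", "condition": 88},
-]
-
-REST_BELOW = 80  # rest anyone under this condition (arbitrary cutoff for the sketch)
-
-def pick_starters(squad):
-    """For each position, start the fittest available player; rest the tired ones."""
-    starters = {}
-    for player in squad:
-        if player["condition"] < REST_BELOW:
-            continue  # too tired -- needs a rest
-        pos = player["position"]
-        if pos not in starters or player["condition"] > starters[pos]["condition"]:
-            starters[pos] = player
-    return {pos: p["name"] for pos, p in starters.items()}
-
-print(pick_starters(squad))  # -> {'GK': 'GK Two', 'ST': 'ST Two'}
-```
-
-The rotation policy screen automates essentially this trade-off for you, but checking the fitness report by hand before big matches is still worth the minute it takes.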
-By using these features, you can rotate your squad and manage fitness effectively. Remember, a happy and fit squad is a winning squad.
-How to use cheats and editors
-Finally, we come to the controversial topic of cheats and editors. Some players might argue that using cheats and editors is unethical and spoils the fun of the game. Others might argue that using cheats and editors is harmless and enhances the fun of the game. Ultimately, it's up to you to decide whether you want to use them or not.
-Cheats and editors are tools that allow you to modify the game in various ways, such as changing player attributes, adding money, editing competitions, etc. There are many cheats and editors available for CM 01/02, but we recommend using them with caution and moderation. Here are some of the most popular cheats and editors:
-
-CM Scout: A tool that lets you scout any player in the game without sending scouts or paying fees. You can see their attributes, potential, value, contract, etc.
-CM Explorer: A tool that lets you edit any player or club in the game. You can change their attributes, finances, reputation, staff, etc.
-CM Cheat: A tool that lets you cheat in various ways, such as adding money, healing injuries, or improving morale.
-
-To use these cheats and editors, you need to follow these steps:
-
-Download the cheat or editor files from the links above or from any other source you trust.
-Extract the files using WinRAR or any other extraction tool.
-Run the cheat or editor program on your computer.
-Select CM 01/02 as the target game.
-Make the changes you want using the cheat or editor interface.
-Save the changes and exit the program.
-Launch CM 01/02 on your Android device using ExaGear Strategies.
-You should see the changes reflected in the game.
-
-Please note that using cheats and editors might cause errors or crashes in the game, so use them at your own risk. They can also ruin the challenge and fun of the game, so use them sparingly and wisely.
-Conclusion
-CM 01/02 is a timeless game that still offers hours of enjoyment and satisfaction to football fans. If you want to play it on your Android device, you can follow the steps in this article to download and install it, as well as update it with the latest data and patches. You can also learn some tips on how to choose the best formation and tactics, how to find and sign the best players, how to rotate your squad and manage fitness, and how to use cheats and editors. We hope this article has been helpful and informative, and we wish you good luck and have fun with CM 01/02!
-FAQs
-Here are some of the most frequently asked questions and answers about CM 01/02:
-
-Is CM 01/02 free? Yes, CM 01/02 is free to download and play. You can get it from the Champman 01/02 website or from any other source you trust.
-Is CM 01/02 compatible with Windows 10? Yes, CM 01/02 is compatible with Windows 10. You just need to run it in compatibility mode for Windows XP (Service Pack 3) and as an administrator.
-Is CM 01/02 multiplayer? Yes, CM 01/02 is multiplayer. You can play it online or offline with up to 16 human players. You just need to create or join a network game and follow the instructions.
-What are some of the best mods for CM 01/02? There are many mods for CM 01/02 that add new features, graphics, sounds, leagues, etc. Some of the best mods are CM Legends, CM Club Update, CM Retro, CM World, and CM Fantasy.
-Where can I get more help or support for CM 01/02? You can get more help or support for CM 01/02 from the Champman 01/02 website and forum, where you can find guides, tutorials, tips, tricks, FAQs, downloads, etc. You can also join the CM 01/02 Facebook group or the CM 01/02 Discord server, where you can chat with other players and fans.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Most Realistic Football Management Simulation with Real Football Manager 2009 Java Game.md b/spaces/1phancelerku/anime-remove-background/Experience the Most Realistic Football Management Simulation with Real Football Manager 2009 Java Game.md
deleted file mode 100644
index e7463097918fffdda23adc374492fc99c0a6505d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the Most Realistic Football Management Simulation with Real Football Manager 2009 Java Game.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-Download Java Real Football Manager 2009: A Guide for Football Fans
-If you are a football fan and you love to play games on your mobile phone, then you might be interested in downloading Java Real Football Manager 2009. This is a game that lets you experience the thrill of managing your own football club, from choosing your players to competing in matches. In this article, we will show you how to download and play this game, as well as some tips and tricks to help you succeed.
-Introduction
-What is Java Real Football Manager 2009?
-Java Real Football Manager 2009 is a mobile game developed by Gameloft, one of the leading publishers of mobile games. It is part of the Real Football series, which also includes games like Real Football 2010 and Real Football 2011. Java Real Football Manager 2009 is a game that focuses on the managerial aspect of football, rather than the gameplay. You can choose from one of eight different leagues and more than 200 teams, and all transfers are updated for the 2008/2009 season. You can also create your own custom team and league, if you prefer.
-Why should you download it?
-There are many reasons why you should download Java Real Football Manager 2009, if you are a fan of football and mobile games. Here are some of them:
-
-It is fun and addictive. You will enjoy the challenge of managing your own football club, from signing players to winning trophies.
-It is realistic and immersive. You will feel like a real football manager, as you deal with various aspects of running a club, such as finances, morale, injuries, media, etc.
-It is easy and convenient. You can play it on any device that supports Java games, such as Nokia, Samsung, Motorola, Sony Ericsson, etc. You can also play it offline, without needing an internet connection.
-
-How to download Java Real Football Manager 2009
-Step 1: Find a reliable source
-The first step to download Java Real Football Manager 2009 is to find a reliable source that offers the game for free or for a reasonable price. There are many websites that claim to offer the game, but some of them might be scams or contain viruses. Therefore, you should be careful and do some research before downloading anything. One of the websites that we recommend is dedomil.net, which has a large collection of Java games for various devices and resolutions.
-Step 2: Choose your device and resolution
-The next step is to choose your device and resolution from the list of available options on the website. This will ensure that you download the right version of the game that is compatible with your device. For example, if you have a Nokia N95 phone with a resolution of 240x320 pixels, then you should select "Real Football: Manager Edition 2009 (CZ/N95) (240x320)". If you are not sure about your device or resolution, you can check it online or in your phone settings.
-Step 3: Download and install the game
-The final step is to download and install the game on your device. You can do this by clicking on the download link on the website, or by scanning the QR code with your phone's camera. The game will be downloaded as a .jar file, which is the format for Java games. To install the game, transfer the .jar file to your device using a USB cable, Bluetooth, or a memory card. Then locate the file on your device and open it. The game will be installed automatically and you can start playing it.
-How to play Java Real Football Manager 2009
-Step 1: Choose your club and league
-When you start the game, you will be asked to choose your club and league from the available options. You can select one of the eight leagues, such as England, Spain, Italy, France, Germany, etc., or you can create your own custom league with your favorite teams. You can also choose your club from more than 200 teams, or you can create your own custom team with your own name, logo, and colors. You can also edit the players' names, skills, and appearances.
-Step 2: Manage your team and transfers
-Once you have chosen your club and league, you will enter the main menu of the game, where you can access various options to manage your team and transfers. You can view your squad, tactics, fixtures, standings, statistics, etc. You can also buy and sell players in the transfer market, where you can bid for players or accept offers from other clubs. You can also scout for new talents or loan players from other teams. You have to balance your budget and keep an eye on your players' contracts and salaries.
-Step 3: Compete in matches and tournaments
-The most exciting part of the game is competing in matches and tournaments against other teams. You can play in various competitions, such as league matches, cup matches, friendly matches, etc. You can also participate in international tournaments, such as the World Cup or the European Championship. Before each match, you can set your lineup, formation, strategy, etc. During the match, you can watch the action unfold on the screen, or you can skip to the result. You can also make substitutions or change tactics during the match. After each match, you can view the highlights, statistics, ratings, etc.
-Tips and tricks for Java Real Football Manager 2009
-Tip 1: Use the simplified interface and improved AI
-One of the features of Java Real Football Manager 2009 is that it has a simplified interface and improved AI compared to previous versions of the game. This means that you can navigate through the menus faster and easier, and that the game will run smoother and more realistic on your device. The AI of the game will also adapt to your style of play and offer you more challenge and variety.
-Tip 2: Keep an eye on your finances and morale
-Another important aspect of managing a football club is keeping an eye on your finances and morale. You have to make sure that you have enough money to pay for your players' salaries, transfers, scouts, etc., as well as for maintaining your stadium and facilities. You also have to make sure that your players are happy and motivated, as this will affect their performance on the pitch. You can improve your finances and morale by winning matches and trophies, selling tickets and merchandise, signing sponsors, etc.
-Tip 3: Experiment with different tactics and formations
-A final tip for playing Java Real Football Manager 2009 is to experiment with different tactics and formations for your team. You can choose from various options, such as 4-4-2, 4-3-3, 3-5-2, etc., and you can also adjust the roles and positions of your players. You can also change your strategy during the match, such as attacking, defending, or counter-attacking. Try to find the combination that best suits your team and your opponents.
-Conclusion
-Java Real Football Manager 2009 is a great game for football fans who want to manage their own club and compete in various matches and tournaments. It is fun, realistic, easy, and convenient to play on any device that supports Java games. It also has a simplified interface and improved AI that make the game more enjoyable and challenging. If you want to download and play this game, you can follow the steps and tips that we have provided in this article. We hope that you will have a great time playing Java Real Football Manager 2009.
-FAQs
-Here are some of the frequently asked questions about Java Real Football Manager 2009:
-
-Q: How much does the game cost?
-A: The game is free to download from some websites, such as dedomil.net, but it might have some ads or limitations. You can also buy the game from other websites or app stores, such as Gameloft.com, for a reasonable price.
-Q: What are the minimum requirements for the game?
-A: The game requires a device that supports Java games, such as Nokia, Samsung, Motorola, Sony Ericsson, etc. The game also requires a resolution of at least 128x128 pixels, but it might vary depending on the device and version of the game.
-Q: How can I update the game?
-A: The game does not have an official update feature, but you can download the latest version of the game from the website or app store where you got it. You can also check for updates on Gameloft's website or social media pages.
-Q: How can I contact Gameloft for support or feedback?
-A: You can contact Gameloft for support or feedback by visiting their website or social media pages, or by sending an email to support@gameloft.com. You can also check their FAQ section or forum for more information.
-Q: How can I share my progress or achievements with other players?
-A: You can share your progress or achievements with other players by using the online mode of the game, which allows you to connect with other players via Bluetooth or Wi-Fi. You can also share your screenshots or videos of the game on social media platforms, such as Facebook, Twitter, Instagram, etc.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Free Download Color Rings Puzzle - Android Game APK.md b/spaces/1phancelerku/anime-remove-background/Free Download Color Rings Puzzle - Android Game APK.md
deleted file mode 100644
index ba7cdfeb4c149e4a76f7d2ec0308fe35ce33acfa..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Free Download Color Rings Puzzle - Android Game APK.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-Color Rings Puzzle APK Download: A Fun and Relaxing Game for Android
- If you are looking for a simple yet addictive game that can keep you entertained and relaxed, you might want to try Color Rings Puzzle. This is a free game that you can download and play on your Android device. In this article, we will tell you what Color Rings Puzzle is, how to play it, why you should download it, how to download and install the APK file, and some tips and tricks for playing it.
- What is Color Rings Puzzle?
- Color Rings Puzzle is a casual puzzle game that challenges your brain and your eyes. The goal of the game is to arrange colorful rings on a board in such a way that they form rows, columns, or diagonals of the same color. You can move the rings from one slot to another, but you cannot overlap them. The game ends when there is no more space on the board for new rings.
- How to play Color Rings Puzzle
- The game is very easy to play. You just need to tap on the screen to select a ring from the bottom row, and then tap on an empty slot on the board to place it. You can also drag and drop the rings if you prefer. You will get points for every line of three or more rings of the same color that you create. The more rings you clear at once, the higher your score will be. You will also get bonus points for clearing multiple lines at once.
- Why you should download Color Rings Puzzle
- There are many reasons why you should download Color Rings Puzzle on your Android device. Here are some of them:
-
-The game is fun and relaxing. You can play it anytime and anywhere, without any time limit or pressure. You can also pause and resume the game whenever you want.
-The game is suitable for all ages and skill levels. You can choose from different modes and difficulty levels, depending on your preference and mood. You can also switch between different themes and backgrounds, such as classic, neon, wood, or marble.
-The game is good for your brain and your eyes. It can help you improve your concentration, memory, logic, and color perception. It can also help you reduce stress and boredom.
-The game is free and safe to download. You don't need to pay anything or register anything to play the game. You also don't need to worry about any viruses or malware infecting your device.
-
- How to download Color Rings Puzzle APK
- If you want to download Color Rings Puzzle on your Android device, you have two options: you can either download it from the Google Play Store or from an APK file. In this section, we will explain what an APK file is and how to download and install it.
- What is an APK file and why you need it
- An APK file is a file format that contains all the data and code needed to run an Android app. It stands for Android Package Kit. You can think of it as a zip file that contains everything that an app needs to work properly.
- You might need an APK file if you want to download an app that is not available in your region or in the Google Play Store. You might also need it if you want to update an app manually or install an older version of an app.
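Because an APK is just a ZIP archive with a particular layout, you can peek inside one with ordinary zip tools. The sketch below builds a tiny stand-in archive in memory (the entry names mirror a real APK's layout; no actual app is involved) and lists its contents:

```python
import io
import zipfile

# An APK is a ZIP archive. Build a tiny stand-in in memory --
# the entry names mirror a real APK's layout; no real app is involved.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", "<manifest/>")  # app metadata
    apk.writestr("classes.dex", b"")                    # compiled app code
    apk.writestr("resources.arsc", b"")                 # compiled resources

# Listing the entries is how tools "look inside" an APK:
with zipfile.ZipFile(buf) as apk:
    for name in apk.namelist():
        print(name)
```

This is why a renamed `.zip` extension opens an APK in any archive viewer: the container format is identical, and only the expected entries differ.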
- How to install Color Rings Puzzle APK on your Android device
- To install Color Rings Puzzle APK on your Android device, you need to follow these steps:
-
-Download the APK file from a trusted source, such as [APKCombo]. Make sure you download the latest version of the game, which is 3.0.9 as of June 2023.
-Before you install the APK file, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might also need to grant permission to your browser or file manager to install apps.
-Locate the downloaded APK file on your device, either in your Downloads folder or in the folder where you saved it. Tap on the file and follow the instructions on the screen to install it. You might need to allow some permissions for the app to work properly.
-Once the installation is complete, you can launch the game from your app drawer or home screen and enjoy playing it.
-
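Before tapping the downloaded file, it can be worth checking that it is at least a well-formed ZIP archive, since a truncated or corrupted download will fail to install. A minimal sketch, using throwaway files in place of a real download (the file names here are hypothetical):

```python
import os
import tempfile
import zipfile

def looks_like_apk(path):
    """Cheap sanity check: an APK must at least be a valid ZIP archive."""
    return zipfile.is_zipfile(path)

# Throwaway files standing in for a downloaded APK (hypothetical stand-ins):
good = tempfile.NamedTemporaryFile(suffix=".apk", delete=False)
good.close()
with zipfile.ZipFile(good.name, "w") as z:
    z.writestr("AndroidManifest.xml", "<manifest/>")

bad = tempfile.NamedTemporaryFile(suffix=".apk", delete=False)
bad.write(b"truncated download")
bad.close()

good_ok = looks_like_apk(good.name)  # well-formed ZIP
bad_ok = looks_like_apk(bad.name)    # not a ZIP at all
print(good_ok, bad_ok)

os.unlink(good.name)
os.unlink(bad.name)
```

This check cannot prove the APK is the genuine app, only that the container is intact; trusting the download source still matters.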
- How to update Color Rings Puzzle APK
- To update Color Rings Puzzle APK, you need to repeat the same steps as above, but with a newer version of the APK file. You can check for updates on the [APKCombo] website or on the app itself. You don't need to uninstall the previous version of the app before installing the new one.
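Deciding whether the version on a download page is actually newer than the one you have installed comes down to comparing dotted version strings, and a plain string comparison gets this wrong once a component reaches two digits. A small sketch; the version numbers are only examples:

```python
def version_tuple(v):
    """Turn '3.0.9' into (3, 0, 9) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

installed, available = "3.0.9", "3.0.10"

# Lexical comparison is wrong: '1' < '9', so '3.0.10' sorts before '3.0.9'.
print(available > installed)                                # False (wrong)
print(version_tuple(available) > version_tuple(installed))  # True (correct)
```

This is why update checkers parse versions into numeric components instead of comparing the raw strings.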
- Tips and tricks for playing Color Rings Puzzle
- Now that you have downloaded and installed Color Rings Puzzle APK on your Android device, you might want to know some tips and tricks for playing it better. Here are some of them:
- Use the undo button wisely
- The game has an undo button that allows you to undo your last move if you make a mistake or change your mind. However, you can only use it once per game, so use it wisely. Don't waste it on a minor error or a move that doesn't affect your score much. Save it for a situation where you really need it, such as when you are about to lose the game or when you can clear a lot of rings with one move.
- Plan ahead and avoid filling up the board
- The game gets harder as you progress, as more rings of different colors appear on the bottom row. You need to plan ahead and think carefully before placing each ring on the board. Try to create as many lines as possible with each move, and avoid placing rings that don't match any existing lines. Also, avoid filling up the board with rings that have no space to move or clear. Leave some empty slots for future moves and opportunities.
- Try different modes and themes
- The game has different modes and themes that you can try for more variety and fun. You can choose from Classic, Time Attack, Move Limit, Bomb, Hexa, and Star modes, each with its own rules and challenges. You can also switch between different themes and backgrounds, such as Classic, Neon, Wood, or Marble, each with its own colors and sounds. Experiment with different combinations and find your favorite one.
- Conclusion
- Color Rings Puzzle is a fun and relaxing game that you can download and play on your Android device. It is easy to play but hard to master, and it can keep you entertained and relaxed for hours. You can download it from the Google Play Store or from an APK file, depending on your preference and availability. You can also follow some tips and tricks for playing it better, such as using the undo button wisely, planning ahead, and trying different modes and themes. If you are looking for a simple yet addictive puzzle game that challenges your brain and your eyes, you should give Color Rings Puzzle a try.
- FAQs
- Here are some frequently asked questions about Color Rings Puzzle:
-
-Is Color Rings Puzzle free?
-Yes, Color Rings Puzzle is free to download and play. However, it contains ads that you can remove by purchasing the ad-free version of the game.
-Is Color Rings Puzzle offline?
-Yes, Color Rings Puzzle is an offline game that does not require an internet connection to play. However, you might need an internet connection to download updates or access some features of the game.
-Is Color Rings Puzzle safe?
-Yes, Color Rings Puzzle is safe to download and play. It does not contain any viruses or malware that can harm your device. However, make sure you download it from a trusted source, such as [APKCombo] or the Google Play Store.
-How do I get more points in Color Rings Puzzle?
-You can get more points in Color Rings Puzzle by clearing more rings at once, clearing multiple lines at once, clearing special rings such as bombs or stars, and playing on higher difficulty levels or modes.
-How do I reset Color Rings Puzzle?
-You can reset Color Rings Puzzle by clearing the app data and cache on your device. To do this, go to Settings > Apps > Color Rings Puzzle > Storage > Clear Data and Clear Cache. This will erase your progress and settings, and restore the game to its default state.
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/lib/bots/bing/sr.ts b/spaces/2023Liu2023/bingo/src/lib/bots/bing/sr.ts
deleted file mode 100644
index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/lib/bots/bing/sr.ts
+++ /dev/null
@@ -1,106 +0,0 @@
-// @ts-ignore
-const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? (
- // @ts-ignore
- window.SpeechRecognition ||
- window.webkitSpeechRecognition ||
- // @ts-ignore
- window.mozSpeechRecognition ||
- // @ts-ignore
- window.msSpeechRecognition ||
- // @ts-ignore
- window.oSpeechRecognition
-) as typeof webkitSpeechRecognition : undefined
-
-type subscriber = (msg: string, command?: string) => void
-
-export class SR {
- recognition?: SpeechRecognition
- onchange?: subscriber
- transcript: boolean = false
- listening: boolean = false
- private commandsRe?: RegExp
- constructor(commands: string[]) {
- this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined
- if (!this.recognition) {
- return
- }
- this.configuration('zh-CN')
- if (commands.length) {
- this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`)
- }
- this.recognition.onresult = this.speechRecognition
- this.recognition.onerror = (err) => {
- console.log('err', err.error)
- this.stop()
- }
- this.recognition.onend = () => {
- if (this.recognition && this.listening) {
- this.recognition.start()
- }
- }
- }
-
- speechRecognition = (event: SpeechRecognitionEvent) => {
- if (!this.listening) return
-    for (let i = event.resultIndex; i < event.results.length; i++) {
-      const result = event.results[i]
-      if (result.isFinal) {
-        const alt = result[0]
- const text = alt.transcript.trim()
- if (this.commandsRe && this.commandsRe.test(text)) {
- return this.onchange?.('', RegExp.$1)
- }
- if (!this.transcript) return
- this.onchange?.(text)
- }
- }
- }
-
- private configuration = async (lang: string = 'zh-CN') => {
- return new Promise((resolve) => {
- if (this.recognition) {
- this.recognition.continuous = true
- this.recognition.lang = lang
- this.recognition.onstart = resolve
- }
- })
- }
-
- start = async () => {
- if (this.recognition && !this.listening) {
- await this.recognition.start()
- this.transcript = true
- this.listening = true
- }
- }
-
- stop = () => {
- if (this.recognition) {
- this.recognition.stop()
- this.transcript = false
- this.listening = false
- }
- }
-
-
- pause = () => {
- if (this.recognition) {
- this.transcript = false
- }
- }
-
- resume = () => {
- if (this.recognition) {
- this.transcript = true
- }
- }
-
- abort = () => {
- if (this.recognition && this.transcript) {
- this.recognition.abort()
- this.transcript = false
- this.listening = false
- }
- }
-}
-
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/prior.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/prior.py
deleted file mode 100644
index 7f13806dd1f6607507b0c7e5ad463b3fb0026be8..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/prior.py
+++ /dev/null
@@ -1,230 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is
-# holder of all proprietary rights on this computer program.
-# You can only use this computer program if you have closed
-# a license agreement with MPG or you get the right to use the computer
-# program from someone who is authorized to grant you that right.
-# Any use of the computer program without a valid license is prohibited and
-# liable to prosecution.
-#
-# Copyright©2019 Max-Planck-Gesellschaft zur Förderung
-# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute
-# for Intelligent Systems. All rights reserved.
-#
-# Contact: ps-license@tuebingen.mpg.de
-
-from __future__ import absolute_import
-from __future__ import print_function
-from __future__ import division
-
-import sys
-import os
-
-import time
-import pickle
-
-import numpy as np
-
-import torch
-import torch.nn as nn
-
-DEFAULT_DTYPE = torch.float32
-
-
-def create_prior(prior_type, **kwargs):
- if prior_type == 'gmm':
- prior = MaxMixturePrior(**kwargs)
- elif prior_type == 'l2':
- return L2Prior(**kwargs)
- elif prior_type == 'angle':
- return SMPLifyAnglePrior(**kwargs)
- elif prior_type == 'none' or prior_type is None:
- # Don't use any pose prior
- def no_prior(*args, **kwargs):
- return 0.0
- prior = no_prior
- else:
- raise ValueError('Prior {}'.format(prior_type) + ' is not implemented')
- return prior
-
-
-class SMPLifyAnglePrior(nn.Module):
- def __init__(self, dtype=torch.float32, **kwargs):
- super(SMPLifyAnglePrior, self).__init__()
-
-        # Indices for the rotation angle of
- # 55: left elbow, 90deg bend at -np.pi/2
- # 58: right elbow, 90deg bend at np.pi/2
- # 12: left knee, 90deg bend at np.pi/2
- # 15: right knee, 90deg bend at np.pi/2
- angle_prior_idxs = np.array([55, 58, 12, 15], dtype=np.int64)
- angle_prior_idxs = torch.tensor(angle_prior_idxs, dtype=torch.long)
- self.register_buffer('angle_prior_idxs', angle_prior_idxs)
-
- angle_prior_signs = np.array([1, -1, -1, -1],
- dtype=np.float32 if dtype == torch.float32
- else np.float64)
- angle_prior_signs = torch.tensor(angle_prior_signs,
- dtype=dtype)
- self.register_buffer('angle_prior_signs', angle_prior_signs)
-
- def forward(self, pose, with_global_pose=False):
- ''' Returns the angle prior loss for the given pose
-
- Args:
- pose: (Bx[23 + 1] * 3) torch tensor with the axis-angle
- representation of the rotations of the joints of the SMPL model.
- Kwargs:
- with_global_pose: Whether the pose vector also contains the global
- orientation of the SMPL model. If not then the indices must be
- corrected.
- Returns:
-            A size (B) tensor containing the angle prior loss for each element
- in the batch.
- '''
- angle_prior_idxs = self.angle_prior_idxs - (not with_global_pose) * 3
- return torch.exp(pose[:, angle_prior_idxs] *
- self.angle_prior_signs).pow(2)
-
-
-class L2Prior(nn.Module):
- def __init__(self, dtype=DEFAULT_DTYPE, reduction='sum', **kwargs):
- super(L2Prior, self).__init__()
-
- def forward(self, module_input, *args):
- return torch.sum(module_input.pow(2))
-
-
-class MaxMixturePrior(nn.Module):
-
- def __init__(self, prior_folder='prior',
- num_gaussians=6, dtype=DEFAULT_DTYPE, epsilon=1e-16,
- use_merged=True,
- **kwargs):
- super(MaxMixturePrior, self).__init__()
-
- if dtype == DEFAULT_DTYPE:
- np_dtype = np.float32
- elif dtype == torch.float64:
- np_dtype = np.float64
- else:
- print('Unknown float type {}, exiting!'.format(dtype))
- sys.exit(-1)
-
- self.num_gaussians = num_gaussians
- self.epsilon = epsilon
- self.use_merged = use_merged
- gmm_fn = 'gmm_{:02d}.pkl'.format(num_gaussians)
-
- full_gmm_fn = os.path.join(prior_folder, gmm_fn)
- if not os.path.exists(full_gmm_fn):
- print('The path to the mixture prior "{}"'.format(full_gmm_fn) +
- ' does not exist, exiting!')
- sys.exit(-1)
-
- with open(full_gmm_fn, 'rb') as f:
- gmm = pickle.load(f, encoding='latin1')
-
- if type(gmm) == dict:
- means = gmm['means'].astype(np_dtype)
- covs = gmm['covars'].astype(np_dtype)
- weights = gmm['weights'].astype(np_dtype)
- elif 'sklearn.mixture.gmm.GMM' in str(type(gmm)):
- means = gmm.means_.astype(np_dtype)
- covs = gmm.covars_.astype(np_dtype)
- weights = gmm.weights_.astype(np_dtype)
- else:
- print('Unknown type for the prior: {}, exiting!'.format(type(gmm)))
- sys.exit(-1)
-
- self.register_buffer('means', torch.tensor(means, dtype=dtype))
-
- self.register_buffer('covs', torch.tensor(covs, dtype=dtype))
-
- precisions = [np.linalg.inv(cov) for cov in covs]
- precisions = np.stack(precisions).astype(np_dtype)
-
- self.register_buffer('precisions',
- torch.tensor(precisions, dtype=dtype))
-
- # The constant term:
- sqrdets = np.array([(np.sqrt(np.linalg.det(c)))
- for c in gmm['covars']])
- const = (2 * np.pi)**(69 / 2.)
-
- nll_weights = np.asarray(gmm['weights'] / (const *
- (sqrdets / sqrdets.min())))
- nll_weights = torch.tensor(nll_weights, dtype=dtype).unsqueeze(dim=0)
- self.register_buffer('nll_weights', nll_weights)
-
- weights = torch.tensor(gmm['weights'], dtype=dtype).unsqueeze(dim=0)
- self.register_buffer('weights', weights)
-
- self.register_buffer('pi_term',
- torch.log(torch.tensor(2 * np.pi, dtype=dtype)))
-
- cov_dets = [np.log(np.linalg.det(cov.astype(np_dtype)) + epsilon)
- for cov in covs]
- self.register_buffer('cov_dets',
- torch.tensor(cov_dets, dtype=dtype))
-
- # The dimensionality of the random variable
- self.random_var_dim = self.means.shape[1]
-
- def get_mean(self):
- ''' Returns the mean of the mixture '''
- mean_pose = torch.matmul(self.weights, self.means)
- return mean_pose
-
- def merged_log_likelihood(self, pose, betas):
- diff_from_mean = pose.unsqueeze(dim=1) - self.means
-
- prec_diff_prod = torch.einsum('mij,bmj->bmi',
- [self.precisions, diff_from_mean])
- diff_prec_quadratic = (prec_diff_prod * diff_from_mean).sum(dim=-1)
-
- curr_loglikelihood = 0.5 * diff_prec_quadratic - \
- torch.log(self.nll_weights)
- # curr_loglikelihood = 0.5 * (self.cov_dets.unsqueeze(dim=0) +
- # self.random_var_dim * self.pi_term +
- # diff_prec_quadratic
- # ) - torch.log(self.weights)
-
- min_likelihood, _ = torch.min(curr_loglikelihood, dim=1)
- return min_likelihood
-
- def log_likelihood(self, pose, betas, *args, **kwargs):
- ''' Create graph operation for negative log-likelihood calculation
- '''
- likelihoods = []
-
- for idx in range(self.num_gaussians):
- mean = self.means[idx]
- prec = self.precisions[idx]
- cov = self.covs[idx]
- diff_from_mean = pose - mean
-
- curr_loglikelihood = torch.einsum('bj,ji->bi',
- [diff_from_mean, prec])
- curr_loglikelihood = torch.einsum('bi,bi->b',
- [curr_loglikelihood,
- diff_from_mean])
- cov_term = torch.log(torch.det(cov) + self.epsilon)
- curr_loglikelihood += 0.5 * (cov_term +
- self.random_var_dim *
- self.pi_term)
- likelihoods.append(curr_loglikelihood)
-
- log_likelihoods = torch.stack(likelihoods, dim=1)
- min_idx = torch.argmin(log_likelihoods, dim=1)
- weight_component = self.nll_weights[:, min_idx]
- weight_component = -torch.log(weight_component)
-
- return weight_component + log_likelihoods[:, min_idx]
-
- def forward(self, pose, betas):
- if self.use_merged:
- return self.merged_log_likelihood(pose, betas)
- else:
- return self.log_likelihood(pose, betas)
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/normalizing_flow/res_flow.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/normalizing_flow/res_flow.py
deleted file mode 100644
index d0d13285704543ec28fe37d82346011240bdcaf8..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/normalizing_flow/res_flow.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch import nn
-from modules.commons.conv import ConditionalConvBlocks
-from modules.commons.wavenet import WN
-
-
-class FlipLayer(nn.Module):
- def forward(self, x, nonpadding, cond=None, reverse=False):
- x = torch.flip(x, [1])
- return x
-
-
-class CouplingLayer(nn.Module):
- def __init__(self, c_in, hidden_size, kernel_size, n_layers, p_dropout=0, c_in_g=0, nn_type='wn'):
- super().__init__()
- self.channels = c_in
- self.hidden_size = hidden_size
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.c_half = c_in // 2
-
- self.pre = nn.Conv1d(self.c_half, hidden_size, 1)
- if nn_type == 'wn':
- self.enc = WN(hidden_size, kernel_size, 1, n_layers, p_dropout=p_dropout,
- c_cond=c_in_g)
- elif nn_type == 'conv':
- self.enc = ConditionalConvBlocks(
- hidden_size, c_in_g, hidden_size, None, kernel_size,
- layers_in_block=1, is_BTC=False, num_layers=n_layers)
- self.post = nn.Conv1d(hidden_size, self.c_half, 1)
-
- def forward(self, x, nonpadding, cond=None, reverse=False):
- x0, x1 = x[:, :self.c_half], x[:, self.c_half:]
- x_ = self.pre(x0) * nonpadding
- x_ = self.enc(x_, nonpadding=nonpadding, cond=cond)
- m = self.post(x_)
- x1 = m + x1 if not reverse else x1 - m
- x = torch.cat([x0, x1], 1)
- return x * nonpadding
-
-
-class ResFlow(nn.Module):
- def __init__(self,
- c_in,
- hidden_size,
- kernel_size,
- n_flow_layers,
- n_flow_steps=4,
- c_cond=0,
- nn_type='wn'):
- super().__init__()
- self.flows = nn.ModuleList()
- for i in range(n_flow_steps):
- self.flows.append(
- CouplingLayer(c_in, hidden_size, kernel_size, n_flow_layers, c_in_g=c_cond, nn_type=nn_type))
- self.flows.append(FlipLayer())
-
- def forward(self, x, nonpadding, cond=None, reverse=False):
- for flow in (self.flows if not reverse else reversed(self.flows)):
- x = flow(x, nonpadding, cond=cond, reverse=reverse)
- return x
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/setup.py b/spaces/AbandonedMuse/UnlimitedMusicGen/setup.py
deleted file mode 100644
index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/setup.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""
- Copyright (c) Meta Platforms, Inc. and affiliates.
- All rights reserved.
-
- This source code is licensed under the license found in the
- LICENSE file in the root directory of this source tree.
-
-"""
-
-from pathlib import Path
-
-from setuptools import setup, find_packages
-
-
-NAME = 'audiocraft'
-DESCRIPTION = 'Audio research library for PyTorch'
-
-URL = 'https://github.com/fairinternal/audiocraft'
-AUTHOR = 'FAIR Speech & Audio'
-EMAIL = 'defossez@meta.com'
-REQUIRES_PYTHON = '>=3.8.0'
-
-for line in open('audiocraft/__init__.py'):
- line = line.strip()
- if '__version__' in line:
- context = {}
- exec(line, context)
- VERSION = context['__version__']
-
-HERE = Path(__file__).parent
-
-try:
- with open(HERE / "README.md", encoding='utf-8') as f:
- long_description = '\n' + f.read()
-except FileNotFoundError:
- long_description = DESCRIPTION
-
-REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')]
-
-setup(
- name=NAME,
- version=VERSION,
- description=DESCRIPTION,
- author_email=EMAIL,
- long_description=long_description,
- long_description_content_type='text/markdown',
- author=AUTHOR,
- url=URL,
- python_requires=REQUIRES_PYTHON,
- install_requires=REQUIRED,
- extras_require={
- 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'],
- },
- packages=find_packages(),
- package_data={'audiocraft': ['py.typed']},
- include_package_data=True,
- license='MIT License',
- classifiers=[
- # Trove classifiers
- # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
- 'License :: OSI Approved :: MIT License',
- 'Topic :: Multimedia :: Sound/Audio',
- 'Topic :: Scientific/Engineering :: Artificial Intelligence',
- ],
-)
diff --git a/spaces/AchyuthGamer/OpenGPT-v1/Dockerfile b/spaces/AchyuthGamer/OpenGPT-v1/Dockerfile
deleted file mode 100644
index 00d58eb13e53cf49f4cbda825fb91eff58078641..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-v1/Dockerfile
+++ /dev/null
@@ -1,15 +0,0 @@
-# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
-# you will also find guides on how best to write your Dockerfile
-
-FROM python:3.10
-
-COPY app.py .
-COPY requirements.txt .
-
-RUN python -m venv venv
-RUN ./venv/bin/pip install -r requirements.txt
-
-ENV H2O_WAVE_LISTEN=":7860"
-ENV H2O_WAVE_ADDRESS="http://127.0.0.1:7860"
-
-CMD ["./venv/bin/wave", "run", "app.py", "--no-reload"]
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Forefront.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Forefront.py
deleted file mode 100644
index e7e89831cc4ec6dc37ea094d9828a7582e981ff1..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Forefront.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import os
-import json
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://forefront.com'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- json_data = {
- 'text': messages[-1]['content'],
- 'action': 'noauth',
- 'id': '',
- 'parentId': '',
- 'workspaceId': '',
- 'messagePersona': '607e41fe-95be-497e-8e97-010a59b2e2c0',
- 'model': 'gpt-4',
- 'messages': messages[:-1] if len(messages) > 1 else [],
- 'internetMode': 'auto'
- }
- response = requests.post( 'https://streaming.tenant-forefront-default.knative.chi.coreweave.com/free-chat',
- json=json_data, stream=True)
- for token in response.iter_lines():
- if b'delta' in token:
- token = json.loads(token.decode().split('data: ')[1])['delta']
- yield (token)
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch/Factory.d.ts
deleted file mode 100644
index d852a6d46b1fb9350422c112dc351560661a057e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch/Factory.d.ts
+++ /dev/null
@@ -1,39 +0,0 @@
-import NinePatch from "./NinePatch";
-
-export default function (
- config?: NinePatch.IConfig
-): NinePatch;
-
-export default function (
- x: number, y: number,
- config?: NinePatch.IConfig
-): NinePatch;
-
-export default function (
- x: number, y: number,
- width: number, height: number,
- config?: NinePatch.IConfig
-): NinePatch;
-
-export default function (
- x: number, y: number,
- width: number, height: number,
- key: string,
- config?: NinePatch.IConfig
-): NinePatch;
-
-export default function (
- x: number, y: number,
- width: number, height: number,
- key: string,
- columns: (number | undefined)[], rows: (number | undefined)[],
- config?: NinePatch.IConfig
-): NinePatch;
-
-export default function (
- x: number, y: number,
- width: number, height: number,
- key: string, baseFrame: string,
- columns: (number | undefined)[], rows: (number | undefined)[],
- config?: NinePatch.IConfig
-): NinePatch;
\ No newline at end of file
diff --git a/spaces/Alealejandrooo/deathCertReader/app.py b/spaces/Alealejandrooo/deathCertReader/app.py
deleted file mode 100644
index 9fda7683b765a84440725ed00901b947101ae8eb..0000000000000000000000000000000000000000
--- a/spaces/Alealejandrooo/deathCertReader/app.py
+++ /dev/null
@@ -1,299 +0,0 @@
-import re
-import cv2
-import numpy as np
-from paddleocr import PaddleOCR
-from PIL import Image
-import matplotlib.pyplot as plt
-import pandas as pd
-import onnxruntime
-import gradio as gr
-
-# initialize the OCR
-ocr = PaddleOCR(lang='sl',
- enable_mkldnn=True,
- cls=False,
- show_log= False)
-
-# initialize the models
-model_deskew = onnxruntime.InferenceSession("./models/CNN_deskew_v0.0.2.onnx")
-model_denoise = onnxruntime.InferenceSession("./models/autoencoder_denoise_v0.0.2.onnx")
-
-##### All Functions #####
-
-def preprocess_image(image):
- '''
- Function: preprocess image to make it lighter to work on
-    Input: image
-    Output: downscaled image
- '''
- image = np.array(image)
- scale = 1.494
- width = int(image.shape[1] / scale)
- height = int(image.shape[0] / scale)
- dim = (width, height)
- image = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
- return image
-
-
-def deskew(image, model):
- '''
- Function: deskew an image
- Input: takes an image as an array
- Output: deskewed image
- '''
- # map the model classes to the actual degree of skew
- map = { 0: '-1', 1: '-10', 2: '-11', 3: '-12', 4: '-13',
- 5: '-14',6: '-15', 7: '-2', 8: '-3', 9: '-4',
- 10: '-5',11: '-6',12: '-7', 13: '-8', 14: '-9',
- 15: '0', 16: '1', 17: '10', 18: '11', 19: '12',
- 20: '13',21: '14',22: '15', 23: '180',24: '2',
- 25: '270',26: '3',27: '4', 28: '5', 29: '6',
- 30: '7', 31: '8',32: '9', 33: '90'}
-
- image_d = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
- width = int(image_d.shape[1] * 0.2)
- height = int(image_d.shape[0] * 0.2)
- dim = (width, height)
- # resize image
- res = cv2.resize(image_d, dim, interpolation = cv2.INTER_AREA)
- resized = cv2.resize(res, (200, 200))
- # add two dimensions to feed to the model
- resized = resized.astype('float32').reshape(1, 200, 200 ,1)
- # normalize
- resized = resized/255
- # predictions
- predictions = model.run(None, {'conv2d_input': resized})
- # best prediction
- pred = predictions[0].argmax()
- # angle of skew
- angle = int(map[pred])
- skew_confidence = predictions[0][0][pred] * 100
- # deskew original image
- if angle == 90:
- deskewed_image = cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
- return deskewed_image, angle, skew_confidence
- if angle == 270:
- deskewed_image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
- return deskewed_image, angle, skew_confidence
-
- (h, w) = image.shape[:2]
- center = (w // 2, h // 2)
- M = cv2.getRotationMatrix2D(center, -angle, 1.0)
- deskewed_image = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC,
- borderMode=cv2.BORDER_REPLICATE)
- return deskewed_image, angle, skew_confidence
-
-
-def prepare_image_to_autoencoder(image):
- '''
- Function: prepare the image to be passed to the autoencoder.
- Input: image (_type_): deskewed image
- Output: resized image to be passed to the autoencoder
- '''
- height, width = image.shape[:2]
- target_height = 600
- target_width = 600
- image = image[int(height/3.6): int(height/1.87), int(width/3.67): int(width/1.575)]
- # reshape image to fixed size
- image = cv2.resize(image, (target_width, target_height))
- image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
- # normalize images
- image = image / 255.0
- # reshape to pass image to autoencoder
- image = image.reshape(target_height, target_width, 1)
- return image
-
-
-def autoencode_ONNX(image, model):
- '''
- Function: remove noise from image
- Input: image and autoencoder model
- Output: image
- '''
- image = image.astype(np.float32).reshape(1, 600, 600, 1)
- image = model.run(None, {'input_2': image})
- image = image[0]
- image = image.squeeze()
- image = image * 255
- image = image.astype('uint8')
- return image
-
-def extract_detected_entries_pdl(image):
- """
- Extracts text, scores, and boundary boxes from an image using OCR and returns a DataFrame.
-
- This function takes an input image, applies OCR to detect text in the image, and then extracts
- the detected text, confidence scores, and boundary boxes for each text entry. The extracted
- information is returned in a DataFrame with columns "Text", "Score", and "Boundary Box".
-
- Parameters
- ----------
- image : numpy.ndarray
- The input image to be processed.
-
- Returns
- -------
- pandas.DataFrame
- A DataFrame containing the extracted text, confidence scores, and boundary boxes
- for each detected text entry. The DataFrame has the following columns:
- - "Text": The detected text.
- - "Score": The confidence score for the detected text.
- - "Boundary Box": The coordinates of the boundary box for the detected text.
- """
- # run the OCR
- result = ocr.ocr(image)
- # creates Pandas Dataframe
- txt = []
- scores = []
- boxes = []
- for r in result[0]:
- txt.append(cleanString_basic(r[-1][0]))
- scores.append(r[-1][1])
- boxes.append(tuple(map(tuple, r[0])))
-
- return pd.DataFrame({"Text": txt, "Score": scores, "Boundary Box": boxes})
-
-def cleanString_basic(word):
- word = word.replace("$", "s")
- return word
-
-def clean_string_start(string: 'str'):
-
- names_flags = "√"
- chars_to_remove = ['!', "'", '[', ']', '*', '|', '.', ':', '\\', '/']
- if string.startswith(tuple(chars_to_remove)):
- names_flags = string[0]
- string = string[1:]
- return string, names_flags
-
-def clean_string_end(string: 'str'):
-
- names_flags = "√"
- chars_to_remove = ['!', "'", '[', ']', '*', '|', '.', ':', '\\', '/']
- if string.endswith(tuple(chars_to_remove)):
- names_flags = string[-1]
- string = string[:-1]
- return string, names_flags
-
-def clean_dates(date: 'str'):
- '''
- Function: cleans the fields "datum smrti" and returns the char removed.
- Input: date (string format)
- Output: cleaned frame
- '''
-
- date_flags = "Y"
- # finds special characters in the string
- special_char = re.findall(r'[a-zA-Z!\[\|]', date)
- if len(special_char) > 0:
- date_flags = special_char
- # remove special characters in the string
- string = re.sub(r'[a-zA-Z!\[\|]', '', date)
- return string, date_flags
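-For illustration, here is a minimal, self-contained sketch of how the cleaners above behave (the definitions are condensed copies and the inputs are hypothetical). Each helper returns the cleaned string together with a flag recording what was stripped; "√" (or "Y" for dates) means nothing was removed:

```python
import re

# condensed copies of the cleaners defined above, for illustration only
CHARS_TO_REMOVE = ('!', "'", '[', ']', '*', '|', '.', ':', '\\', '/')

def clean_string_start(s: str):
    flag = "√"                     # "√" means nothing was stripped
    if s.startswith(CHARS_TO_REMOVE):
        flag, s = s[0], s[1:]
    return s, flag

def clean_dates(date: str):
    flag = "Y"                     # "Y" means the date was already clean
    special = re.findall(r'[a-zA-Z!\[\|]', date)
    if special:
        flag = special
    return re.sub(r'[a-zA-Z!\[\|]', '', date), flag

print(clean_string_start("!Novak"))   # ('Novak', '!')
print(clean_string_start("Novak"))    # ('Novak', '√')
print(clean_dates("12.a3.1944"))      # ('12.3.1944', ['a'])
```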
-
-
-##### Main Function #####
-
-def pdf_extract_gr(image):
- extractimg = preprocess_image(image)
- #extractimg = np.array(image)
- # deskew the image
- deskewed_image, angle, skew_confidence = deskew(extractimg, model_deskew)
- # prepare the image for the autoencoder
- cleanimg = prepare_image_to_autoencoder(deskewed_image)
- # clean the image
- img = autoencode_ONNX(cleanimg, model_denoise)
- # extract the entries from the image
- df = extract_detected_entries_pdl(img)
- # first name
- firstnamerow = df.iloc[0]
- firstname = firstnamerow[0]
- firstnameconfidence = round(float(firstnamerow[1]) * 100,3)
- firstnameconfidence = f"{firstnameconfidence}%"
- # surname
- surnamerow = df.iloc[1]
- surname = surnamerow[0]
- surnameconfidence = round(float(surnamerow[1]) * 100,3)
- surnameconfidence = f"{surnameconfidence}%"
- # death date confidence
- dodrow = df.iloc[2]
- dodname = dodrow[0]
- dodconfidence = round(float(dodrow[1]) * 100,3)
- dodconfidence = f"{dodconfidence}%"
- # return all the results
- return df, deskewed_image, angle, skew_confidence, img, firstname, firstnameconfidence, surname, surnameconfidence, dodname, dodconfidence
-
-
-##### Gradio Style #####
-
-css = """
-.run_container {
- display: flex;
- flex-direction: column;
- align-items: center;
- gap: 10px;
-}
-.run_btn {
- margin: auto;
- width: 50%;
- display: flex;
-}
-.upload_cell {
- margin: auto;
- display: flex;
-}
-.results_container {
- display: flex;
- justify-content: space-evenly;
-}
-.results_cell {
-}
-"""
-
-##### Gradio Blocks #####
-
-with gr.Blocks(css = css) as demo:
- gr.Markdown("""
- # Death Certificate Extraction
- """, elem_classes = "h1")
- gr.Markdown("Upload a PDF, extract data")
- with gr.Box(elem_classes = "run_container"):
- # ExtractInput = gr.File(label = "Death Certificate", elem_classes="upload_cell")
- ExtractButton = gr.Button("Extract", elem_classes="run_btn")
- with gr.Row(elem_id = "hide"):
- with gr.Column():
- ExtractInput = gr.Image()
- with gr.Column():
- # ExtractResult = gr.Image(label = "result")
- with gr.Row(elem_classes = "results_container"):
- FirstNameBox = gr.Textbox(label = "First Name", elem_classes = "results_cell")
- FirstNameConfidenceBox = gr.Textbox(label = "First Name Confidence", elem_classes = "results_cell")
- with gr.Row(elem_classes = "results_container"):
- SurnameNameBox = gr.Textbox(label = "Surname", elem_classes = "results_cell")
- SurnameNameConfidenceBox = gr.Textbox(label = "Surname Confidence", elem_classes = "results_cell")
- with gr.Row(elem_classes = "results_container"):
- DODBox = gr.Textbox(label = "Date of Death", elem_classes = "results_cell")
- DODConfidenceBox = gr.Textbox(label = "Date of Death Confidence", elem_classes = "results_cell")
-
- with gr.Accordion("Full Results", open = False):
- ExtractDF = gr.Dataframe(label = "Results")
-
- with gr.Accordion("Clean Image", open = False):
- CleanOutput = gr.Image()
-
- with gr.Accordion("Deskew", open = False):
- DeskewOutput = gr.Image()
- with gr.Column():
- DeskewAngle = gr.Number(label = "Angle")
- with gr.Column():
- DeskewConfidence = gr.Number(label = "Confidence")
-
- ExtractButton.click(fn=pdf_extract_gr,
- inputs = ExtractInput,
- outputs = [ExtractDF, DeskewOutput, DeskewAngle,
- DeskewConfidence, CleanOutput, FirstNameBox,
- FirstNameConfidenceBox, SurnameNameBox,
- SurnameNameConfidenceBox, DODBox, DODConfidenceBox])
-
-demo.launch(show_api=True, share=False, debug=True)
\ No newline at end of file
diff --git a/spaces/Aleistair/anything5/README.md b/spaces/Aleistair/anything5/README.md
deleted file mode 100644
index 2d1e591213ba10bc36696f230e03dfda2e81e5e6..0000000000000000000000000000000000000000
--- a/spaces/Aleistair/anything5/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Anything V3.0
-emoji: 🏃
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-duplicated_from: vntonie/anything-v3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/utils.py b/spaces/AlexWang/lama/saicinpainting/evaluation/utils.py
deleted file mode 100644
index 6d7c15c9242ed8a9bc59fbb3b450cca394720bb8..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/evaluation/utils.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from enum import Enum
-
-import yaml
-from easydict import EasyDict as edict
-import torch.nn as nn
-import torch
-
-
-def load_yaml(path):
- with open(path, 'r') as f:
- return edict(yaml.safe_load(f))
-
-
-def move_to_device(obj, device):
- if isinstance(obj, nn.Module):
- return obj.to(device)
- if torch.is_tensor(obj):
- return obj.to(device)
- if isinstance(obj, (tuple, list)):
- return [move_to_device(el, device) for el in obj]
- if isinstance(obj, dict):
- return {name: move_to_device(val, device) for name, val in obj.items()}
- raise ValueError(f'Unexpected type {type(obj)}')
-
-
-class SmallMode(Enum):
- DROP = "drop"
- UPSCALE = "upscale"
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/optimization/mps.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/optimization/mps.md
deleted file mode 100644
index cd04d6d1103d5ecd83d7c983a99110928eb85c7e..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/optimization/mps.md
+++ /dev/null
@@ -1,71 +0,0 @@
-
-
-# How to use Stable Diffusion on Apple Silicon (M1/M2)
-
-Diffusers is compatible with Apple silicon via PyTorch's `mps` backend for Stable Diffusion inference. These are the steps to follow to use Stable Diffusion on an M1 or M2 computer.
-
-## Requirements
-
-- A Mac computer with Apple silicon (M1/M2) hardware.
-- macOS 12.6 or later (13.0 or later recommended).
-- An arm64 build of Python.
-- PyTorch 2.0 (recommended) or 1.13 (the minimum version that supports `mps`). You can install it with `pip` or `conda` following the instructions at https://pytorch.org/get-started/locally/.
-
-
-## Inference Pipeline
-
-The code below shows how to move a Stable Diffusion pipeline to an M1 or M2 device with the `mps` backend, using the familiar `to()` interface.
-
-
-
-
-**If you are using PyTorch 1.13,** we recommend "priming" the pipeline with an additional one-time pass. This is a temporary workaround for an odd issue we discovered: the first inference pass produces slightly different results than subsequent ones. The pass only needs to be performed once, and it is fine to use just a single inference step and discard the result.
-
-
-
-We recommend using PyTorch 2 or later, as it resolves a number of issues, including the one described in the previous tip.
-
-
-```python
-# make sure you are logged in with `huggingface-cli login`
-from diffusers import DiffusionPipeline
-
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-pipe = pipe.to("mps")
-
-# recommended if your computer has 64 GB of RAM or less
-pipe.enable_attention_slicing()
-
-prompt = "a photo of an astronaut riding a horse on mars"
-
-# first-time "warmup" pass (see the explanation above)
-_ = pipe(prompt, num_inference_steps=1)
-
-# after the warmup pass, results match those from the CPU device
-image = pipe(prompt).images[0]
-```
-
-## Performance Recommendations
-
-M1/M2 performance is very sensitive to memory pressure. The system swaps automatically when needed, but performance degrades significantly when it does.
-
-
-We especially recommend using *attention slicing* to reduce memory pressure and avoid swapping during inference if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in several steps instead of all at once. It usually has a performance impact of ~20% on computers without universal memory, but we have observed *better performance* on most Apple Silicon computers unless you have 64 GB or more.
-
-```python
-pipeline.enable_attention_slicing()
-```
-
-## Known Issues
-
-- Generating several prompts in a batch [crashes or doesn't work reliably](https://github.com/huggingface/diffusers/issues/363). We believe this is related to the [PyTorch `mps` backend](https://github.com/pytorch/pytorch/issues/84039). It is being resolved, but for now we recommend iterating instead of batching.
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py
deleted file mode 100644
index e3e44ee6121f3c8b5f83f263303bbcd4370eea71..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py
+++ /dev/null
@@ -1,140 +0,0 @@
-_base_ = [
- '../_base_/models/cascade_mask_rcnn_swin_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- backbone=dict(
- embed_dim=128,
- depths=[2, 2, 18, 2],
- num_heads=[4, 8, 16, 32],
- window_size=7,
- ape=False,
- drop_path_rate=0.3,
- patch_norm=True,
- use_checkpoint=False
- ),
- neck=dict(in_channels=[128, 256, 512, 1024]),
- roi_head=dict(
- bbox_head=[
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.05, 0.05, 0.1, 0.1]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.033, 0.033, 0.067, 0.067]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0))
- ]))
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR / Sparse RCNN
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='AutoAugment',
- policies=[
- [
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]
- ]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-
-optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-lr_config = dict(step=[27, 33])
-runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
-
-# do not use mmdet version fp16
-fp16 = None
-optimizer_config = dict(
- type="DistOptimizerHook",
- update_interval=1,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- use_fp16=True,
-)
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/__init__.py
deleted file mode 100644
index a3537297f57e4c3670afdb97b5fcb1b2d775e5f3..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from .assigners import (AssignResult, BaseAssigner, CenterRegionAssigner,
- MaxIoUAssigner, RegionAssigner)
-from .builder import build_assigner, build_bbox_coder, build_sampler
-from .coder import (BaseBBoxCoder, DeltaXYWHBBoxCoder, PseudoBBoxCoder,
- TBLRBBoxCoder)
-from .iou_calculators import BboxOverlaps2D, bbox_overlaps
-from .samplers import (BaseSampler, CombinedSampler,
- InstanceBalancedPosSampler, IoUBalancedNegSampler,
- OHEMSampler, PseudoSampler, RandomSampler,
- SamplingResult, ScoreHLRSampler)
-from .transforms import (bbox2distance, bbox2result, bbox2roi,
- bbox_cxcywh_to_xyxy, bbox_flip, bbox_mapping,
- bbox_mapping_back, bbox_rescale, bbox_xyxy_to_cxcywh,
- distance2bbox, roi2bbox)
-
-__all__ = [
- 'bbox_overlaps', 'BboxOverlaps2D', 'BaseAssigner', 'MaxIoUAssigner',
- 'AssignResult', 'BaseSampler', 'PseudoSampler', 'RandomSampler',
- 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler',
- 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'build_assigner',
- 'build_sampler', 'bbox_flip', 'bbox_mapping', 'bbox_mapping_back',
- 'bbox2roi', 'roi2bbox', 'bbox2result', 'distance2bbox', 'bbox2distance',
- 'build_bbox_coder', 'BaseBBoxCoder', 'PseudoBBoxCoder',
- 'DeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'CenterRegionAssigner',
- 'bbox_rescale', 'bbox_cxcywh_to_xyxy', 'bbox_xyxy_to_cxcywh',
- 'RegionAssigner'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/gcnet_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/gcnet_r50-d8.py
deleted file mode 100644
index 3d2ad69f5c22adfe79d5fdabf920217628987166..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/gcnet_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='GCHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- ratio=1 / 4.,
- pooling_type='att',
- fusion_types=('channel_add', ),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/tin_shift.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/tin_shift.py
deleted file mode 100644
index 472c9fcfe45a124e819b7ed5653e585f94a8811e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/tin_shift.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Code reference from "Temporal Interlacing Network"
-# https://github.com/deepcs233/TIN/blob/master/cuda_shift/rtc_wrap.py
-# Hao Shao, Shengju Qian, Yu Liu
-# shaoh19@mails.tsinghua.edu.cn, sjqian@cse.cuhk.edu.hk, yuliu@ee.cuhk.edu.hk
-
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext',
- ['tin_shift_forward', 'tin_shift_backward'])
-
-
-class TINShiftFunction(Function):
-
- @staticmethod
- def forward(ctx, input, shift):
- C = input.size(2)
- num_segments = shift.size(1)
- if C // num_segments <= 0 or C % num_segments != 0:
- raise ValueError('C should be a multiple of num_segments, '
- f'but got C={C} and num_segments={num_segments}.')
-
- ctx.save_for_backward(shift)
-
- out = torch.zeros_like(input)
- ext_module.tin_shift_forward(input, shift, out)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
-
- shift = ctx.saved_tensors[0]
- data_grad_input = grad_output.new(*grad_output.size()).zero_()
- shift_grad_input = shift.new(*shift.size()).zero_()
- ext_module.tin_shift_backward(grad_output, shift, data_grad_input)
-
- return data_grad_input, shift_grad_input
-
-
-tin_shift = TINShiftFunction.apply
-
-
-class TINShift(nn.Module):
- """Temporal Interlace Shift.
-
- Temporal Interlace shift is a differentiable temporal-wise frame shifting
- which is proposed in "Temporal Interlacing Network"
-
- Please refer to https://arxiv.org/abs/2001.06499 for more details.
- Code is modified from https://github.com/mit-han-lab/temporal-shift-module
- """
-
- def forward(self, input, shift):
- """Perform temporal interlace shift.
-
- Args:
- input (Tensor): Feature map with shape [N, num_segments, C, H * W].
- shift (Tensor): Shift tensor with shape [N, num_segments].
-
- Returns:
- Feature map after temporal interlace shift.
- """
- return tin_shift(input, shift)
diff --git a/spaces/Arsenii2023/Demo1/logistic.py b/spaces/Arsenii2023/Demo1/logistic.py
deleted file mode 100644
index 7f95f5d5a3887a815b4ac1a207ef08ef5f2c4f3c..0000000000000000000000000000000000000000
--- a/spaces/Arsenii2023/Demo1/logistic.py
+++ /dev/null
@@ -1,40 +0,0 @@
-#Author: Arsenii Kostenko
-import numpy as np
-from sklearn.linear_model import LogisticRegression
-import gradio as gr
-
-# Training data for the model
-x_train = np.array([[0, 0], [1, 1], [2, 2]])
-y_train = np.array([0, 1, 2])
-
-# Train the model
-model = LogisticRegression()
-model.fit(x_train, y_train)
-
-# Function that predicts values
-def predict(x, y):
- # Convert the strings into nested lists
- x_nested_list = [list(map(int, sublist.split(","))) for sublist in x.split(";")]
- y_nested_list = [list(map(int, sublist.split(","))) for sublist in y.split(";")]
-
- # Convert the nested lists into numpy arrays
- x_array = np.array(x_nested_list)
- y_array = np.array(y_nested_list)
-
- # Check that the inputs have matching shapes
- if x_array.shape != y_array.shape:
- return "Error: x and y must have the same dimensions"
-
- # Predict values
- predictions = model.predict(x_array)
-
- return predictions
-
-# Create the gradio interface
-iface = gr.Interface(
- fn=predict,
- inputs=["text", "text"],
- outputs="text"
-)
-
-iface.launch(debug=True)
\ No newline at end of file
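The `x`/`y` string format that `predict` above expects — rows separated by semicolons, values by commas — can be sketched in isolation (the helper name is hypothetical):

```python
def parse_matrix(text: str) -> list:
    # "0,0;1,1;2,2" -> [[0, 0], [1, 1], [2, 2]]
    return [list(map(int, row.split(","))) for row in text.split(";")]

print(parse_matrix("0,0;1,1;2,2"))  # [[0, 0], [1, 1], [2, 2]]
```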
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/bcppcompiler.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/bcppcompiler.py
deleted file mode 100644
index 80b6bd852269afc075e38a4280c728f0777c923f..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/bcppcompiler.py
+++ /dev/null
@@ -1,408 +0,0 @@
-"""distutils.bcppcompiler
-
-Contains BorlandCCompiler, an implementation of the abstract CCompiler class
-for the Borland C++ compiler.
-"""
-
-# This implementation by Lyle Johnson, based on the original msvccompiler.py
-# module and using the directions originally published by Gordon Williams.
-
-# XXX looks like there's a LOT of overlap between these two classes:
-# someone should sit down and factor out the common code as
-# WindowsCCompiler! --GPW
-
-
-import os
-import warnings
-
-from distutils.errors import (
- DistutilsExecError,
- CompileError,
- LibError,
- LinkError,
- UnknownFileError,
-)
-from distutils.ccompiler import CCompiler, gen_preprocess_options
-from distutils.file_util import write_file
-from distutils.dep_util import newer
-from distutils import log
-
-
-warnings.warn(
- "bcppcompiler is deprecated and slated to be removed "
- "in the future. Please discontinue use or file an issue "
- "with pypa/distutils describing your use case.",
- DeprecationWarning,
-)
-
-
-class BCPPCompiler(CCompiler):
- """Concrete class that implements an interface to the Borland C/C++
- compiler, as defined by the CCompiler abstract class.
- """
-
- compiler_type = 'bcpp'
-
- # Just set this so CCompiler's constructor doesn't barf. We currently
- # don't use the 'set_executables()' bureaucracy provided by CCompiler,
- # as it really isn't necessary for this sort of single-compiler class.
- # Would be nice to have a consistent interface with UnixCCompiler,
- # though, so it's worth thinking about.
- executables = {}
-
- # Private class data (need to distinguish C from C++ source for compiler)
- _c_extensions = ['.c']
- _cpp_extensions = ['.cc', '.cpp', '.cxx']
-
- # Needed for the filename generation methods provided by the
- # base class, CCompiler.
- src_extensions = _c_extensions + _cpp_extensions
- obj_extension = '.obj'
- static_lib_extension = '.lib'
- shared_lib_extension = '.dll'
- static_lib_format = shared_lib_format = '%s%s'
- exe_extension = '.exe'
-
- def __init__(self, verbose=0, dry_run=0, force=0):
-
- super().__init__(verbose, dry_run, force)
-
- # These executables are assumed to all be in the path.
- # Borland doesn't seem to use any special registry settings to
- # indicate their installation locations.
-
- self.cc = "bcc32.exe"
- self.linker = "ilink32.exe"
- self.lib = "tlib.exe"
-
- self.preprocess_options = None
- self.compile_options = ['/tWM', '/O2', '/q', '/g0']
- self.compile_options_debug = ['/tWM', '/Od', '/q', '/g0']
-
- self.ldflags_shared = ['/Tpd', '/Gn', '/q', '/x']
- self.ldflags_shared_debug = ['/Tpd', '/Gn', '/q', '/x']
- self.ldflags_static = []
- self.ldflags_exe = ['/Gn', '/q', '/x']
- self.ldflags_exe_debug = ['/Gn', '/q', '/x', '/r']
-
- # -- Worker methods ------------------------------------------------
-
- def compile( # noqa: C901
- self,
- sources,
- output_dir=None,
- macros=None,
- include_dirs=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- depends=None,
- ):
-
- macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
- output_dir, macros, include_dirs, sources, depends, extra_postargs
- )
- compile_opts = extra_preargs or []
- compile_opts.append('-c')
- if debug:
- compile_opts.extend(self.compile_options_debug)
- else:
- compile_opts.extend(self.compile_options)
-
- for obj in objects:
- try:
- src, ext = build[obj]
- except KeyError:
- continue
- # XXX why do the normpath here?
- src = os.path.normpath(src)
- obj = os.path.normpath(obj)
- # XXX _setup_compile() did a mkpath() too but before the normpath.
- # Is it possible to skip the normpath?
- self.mkpath(os.path.dirname(obj))
-
- if ext == '.res':
- # This is already a binary file -- skip it.
- continue # the 'for' loop
- if ext == '.rc':
- # This needs to be compiled to a .res file -- do it now.
- try:
- self.spawn(["brcc32", "-fo", obj, src])
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue # the 'for' loop
-
- # The next two are both for the real compiler.
- if ext in self._c_extensions:
- input_opt = ""
- elif ext in self._cpp_extensions:
- input_opt = "-P"
- else:
- # Unknown file type -- no extra options. The compiler
- # will probably fail, but let it just in case this is a
- # file the compiler recognizes even if we don't.
- input_opt = ""
-
- output_opt = "-o" + obj
-
- # Compiler command line syntax is: "bcc32 [options] file(s)".
- # Note that the source file names must appear at the end of
- # the command line.
- try:
- self.spawn(
- [self.cc]
- + compile_opts
- + pp_opts
- + [input_opt, output_opt]
- + extra_postargs
- + [src]
- )
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- return objects
-
- # compile ()
-
- def create_static_lib(
- self, objects, output_libname, output_dir=None, debug=0, target_lang=None
- ):
-
- (objects, output_dir) = self._fix_object_args(objects, output_dir)
- output_filename = self.library_filename(output_libname, output_dir=output_dir)
-
- if self._need_link(objects, output_filename):
- lib_args = [output_filename, '/u'] + objects
- if debug:
- pass # XXX what goes here?
- try:
- self.spawn([self.lib] + lib_args)
- except DistutilsExecError as msg:
- raise LibError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- # create_static_lib ()
-
- def link( # noqa: C901
- self,
- target_desc,
- objects,
- output_filename,
- output_dir=None,
- libraries=None,
- library_dirs=None,
- runtime_library_dirs=None,
- export_symbols=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- build_temp=None,
- target_lang=None,
- ):
-
- # XXX this ignores 'build_temp'! should follow the lead of
- # msvccompiler.py
-
- (objects, output_dir) = self._fix_object_args(objects, output_dir)
- (libraries, library_dirs, runtime_library_dirs) = self._fix_lib_args(
- libraries, library_dirs, runtime_library_dirs
- )
-
- if runtime_library_dirs:
- log.warn(
- "I don't know what to do with 'runtime_library_dirs': %s",
- str(runtime_library_dirs),
- )
-
- if output_dir is not None:
- output_filename = os.path.join(output_dir, output_filename)
-
- if self._need_link(objects, output_filename):
-
- # Figure out linker args based on type of target.
- if target_desc == CCompiler.EXECUTABLE:
- startup_obj = 'c0w32'
- if debug:
- ld_args = self.ldflags_exe_debug[:]
- else:
- ld_args = self.ldflags_exe[:]
- else:
- startup_obj = 'c0d32'
- if debug:
- ld_args = self.ldflags_shared_debug[:]
- else:
- ld_args = self.ldflags_shared[:]
-
- # Create a temporary exports file for use by the linker
- if export_symbols is None:
- def_file = ''
- else:
- head, tail = os.path.split(output_filename)
- modname, ext = os.path.splitext(tail)
- temp_dir = os.path.dirname(objects[0]) # preserve tree structure
- def_file = os.path.join(temp_dir, '%s.def' % modname)
- contents = ['EXPORTS']
- for sym in export_symbols or []:
- contents.append(' {}=_{}'.format(sym, sym))
- self.execute(write_file, (def_file, contents), "writing %s" % def_file)
-
- # Borland C++ has problems with '/' in paths
- objects2 = map(os.path.normpath, objects)
- # split objects in .obj and .res files
- # Borland C++ needs them at different positions in the command line
- objects = [startup_obj]
- resources = []
- for file in objects2:
- (base, ext) = os.path.splitext(os.path.normcase(file))
- if ext == '.res':
- resources.append(file)
- else:
- objects.append(file)
-
- for ell in library_dirs:
- ld_args.append("/L%s" % os.path.normpath(ell))
- ld_args.append("/L.") # we sometimes use relative paths
-
- # list of object files
- ld_args.extend(objects)
-
- # XXX the command-line syntax for Borland C++ is a bit wonky;
- # certain filenames are jammed together in one big string, but
- # comma-delimited. This doesn't mesh too well with the
- # Unix-centric attitude (with a DOS/Windows quoting hack) of
- # 'spawn()', so constructing the argument list is a bit
- # awkward. Note that doing the obvious thing and jamming all
- # the filenames and commas into one argument would be wrong,
- # because 'spawn()' would quote any filenames with spaces in
- # them. Arghghh!. Apparently it works fine as coded...
-
- # name of dll/exe file
- ld_args.extend([',', output_filename])
- # no map file and start libraries
- ld_args.append(',,')
-
- for lib in libraries:
- # see if we find it and if there is a bcpp specific lib
- # (xxx_bcpp.lib)
- libfile = self.find_library_file(library_dirs, lib, debug)
- if libfile is None:
- ld_args.append(lib)
- # probably a BCPP internal library -- don't warn
- else:
- # full name which prefers bcpp_xxx.lib over xxx.lib
- ld_args.append(libfile)
-
- # some default libraries
- ld_args.append('import32')
- ld_args.append('cw32mt')
-
- # def file for export symbols
- ld_args.extend([',', def_file])
- # add resource files
- ld_args.append(',')
- ld_args.extend(resources)
-
- if extra_preargs:
- ld_args[:0] = extra_preargs
- if extra_postargs:
- ld_args.extend(extra_postargs)
-
- self.mkpath(os.path.dirname(output_filename))
- try:
- self.spawn([self.linker] + ld_args)
- except DistutilsExecError as msg:
- raise LinkError(msg)
-
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- # link ()
-
- # -- Miscellaneous methods -----------------------------------------
-
- def find_library_file(self, dirs, lib, debug=0):
- # List of effective library names to try, in order of preference:
- # xxx_bcpp.lib is better than xxx.lib
- # and xxx_d.lib is better than xxx.lib if debug is set
- #
- # The "_bcpp" suffix is to handle a Python installation for people
- # with multiple compilers (primarily Distutils hackers, I suspect
- # ;-). The idea is they'd have one static library for each
- # compiler they care about, since (almost?) every Windows compiler
- # seems to have a different format for static libraries.
- if debug:
- dlib = lib + "_d"
- try_names = (dlib + "_bcpp", lib + "_bcpp", dlib, lib)
- else:
- try_names = (lib + "_bcpp", lib)
-
- for dir in dirs:
- for name in try_names:
- libfile = os.path.join(dir, self.library_filename(name))
- if os.path.exists(libfile):
- return libfile
- else:
- # Oops, didn't find it in *any* of 'dirs'
- return None
-
- # overwrite the one from CCompiler to support rc and res-files
- def object_filenames(self, source_filenames, strip_dir=0, output_dir=''):
- if output_dir is None:
- output_dir = ''
- obj_names = []
- for src_name in source_filenames:
- # use normcase to make sure '.rc' is really '.rc' and not '.RC'
- (base, ext) = os.path.splitext(os.path.normcase(src_name))
- if ext not in (self.src_extensions + ['.rc', '.res']):
- raise UnknownFileError(
- "unknown file type '{}' (from '{}')".format(ext, src_name)
- )
- if strip_dir:
- base = os.path.basename(base)
- if ext == '.res':
- # these can go unchanged
- obj_names.append(os.path.join(output_dir, base + ext))
- elif ext == '.rc':
- # these need to be compiled to .res-files
- obj_names.append(os.path.join(output_dir, base + '.res'))
- else:
- obj_names.append(os.path.join(output_dir, base + self.obj_extension))
- return obj_names
-
- # object_filenames ()
-
- def preprocess(
- self,
- source,
- output_file=None,
- macros=None,
- include_dirs=None,
- extra_preargs=None,
- extra_postargs=None,
- ):
-
- (_, macros, include_dirs) = self._fix_compile_args(None, macros, include_dirs)
- pp_opts = gen_preprocess_options(macros, include_dirs)
- pp_args = ['cpp32.exe'] + pp_opts
- if output_file is not None:
- pp_args.append('-o' + output_file)
- if extra_preargs:
- pp_args[:0] = extra_preargs
- if extra_postargs:
- pp_args.extend(extra_postargs)
- pp_args.append(source)
-
- # We need to preprocess: either we're being forced to, or the
- # source file is newer than the target (or the target doesn't
- # exist).
- if self.force or output_file is None or newer(source, output_file):
- if output_file:
- self.mkpath(os.path.dirname(output_file))
- try:
- self.spawn(pp_args)
- except DistutilsExecError as msg:
- print(msg)
- raise CompileError(msg)
-
- # preprocess()
diff --git a/spaces/Audio-AGI/WavJourney/examples/examples.py b/spaces/Audio-AGI/WavJourney/examples/examples.py
deleted file mode 100644
index 9a44a090cadc95d594eac3c2d47c1ac7a2c10466..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/WavJourney/examples/examples.py
+++ /dev/null
@@ -1,87 +0,0 @@
-
-example1 = {
- 'text': "An introduction to AI-assisted audio content creation.",
- 'table_script': """
-| Audio Type | Layout | ID | Character | Action | Volume | Description | Length |
-|--------------|------------|----|-----------|--------|--------|------------------------------------------------------------------|--------|
-| music | background | 1 | N/A | begin | -35 | Inspirational technology-themed music | Auto |
-| speech | foreground | N/A| Narrator | N/A | -15 | Welcome to the future of audio content creation. | Auto |
-| sound_effect | foreground | N/A| N/A | N/A | -35 | Digital startup sound | 2 |
-| speech | foreground | N/A| Narrator | N/A | -15 | With evolving technology, we are introducing AI-assisted tools for pristine audio production. | Auto |
-| sound_effect | foreground | N/A| N/A | N/A | -35 | Keyboard typing noise | 3 |
-| speech | foreground | N/A| Narrator | N/A | -15 | Imagine crafting audio content with the power of AI at your fingertips. | Auto |
-| sound_effect | background | 2 | N/A | begin | -35 | Ambiance of a busy control room | Auto |
-| speech | foreground | N/A| Narrator | N/A | -15 | Enhanced quality, efficient production and limitless creativity, all under one roof. | Auto |
-| sound_effect | background | 2 | N/A | end | N/A | N/A | Auto |
-| speech | foreground | N/A| Narrator | N/A | -15 | Unleash your potential with AI-assisted audio content creation. | Auto |
-| music | background | 1 | N/A | end | N/A | N/A | Auto |
-
-""",
- 'table_voice': """
-| Character | Voice |
-|-------------|-----------|
-| Narrator | News_Male_En |
-
-""",
- 'wav_file': 'examples/1.mp4',
-}
-
-example2 = {
- 'text': "A couple dating in a cafe.",
- 'table_script': """
-| Audio Type | Layout | ID | Character | Action | Volume | Description | Length |
-|--------------|------------|----|-----------|--------|--------|-----------------------------------------------|--------|
-| sound_effect | background | 1 | N/A | begin | -35 | Soft chattering in a cafe | Auto |
-| sound_effect | background | 2 | N/A | begin | -38 | Coffee brewing noises | Auto |
-| music | background | 3 | N/A | begin | -35 | Soft jazz playing in the background | Auto |
-| speech | foreground | N/A| Man | N/A | -15 | It’s really nice to finally get out and relax a little, isn’t it? | Auto |
-| speech | foreground | N/A| Woman | N/A | -15 | I know, right? We should do this more often. | Auto |
-| sound_effect | background | 2 | N/A | end | N/A | N/A | Auto |
-| speech | foreground | N/A| Man | N/A | -15 | Here’s your coffee, just as you like it. | Auto |
-| speech | foreground | N/A| Woman | N/A | -15 | Thank you, it smells wonderful. | Auto |
-| music | background | 3 | N/A | end | N/A | N/A | Auto |
-| sound_effect | background | 1 | N/A | end | N/A | N/A | Auto |
-
-""",
- 'table_voice': """
-| Character | Voice |
-|-------------|-----------|
-| Man | Male1_En |
-| Woman | Female1_En |
-
-""",
- 'wav_file': 'examples/2.mp4',
-}
-
-
-example3 = {
- 'text': "A child is participating in a farting contest.",
- 'table_script': """
-| Audio Type | Layout | ID | Character | Action | Volume | Description | Length |
-|--------------|------------|----|-----------|--------|--------|------------------------------------------------------|--------|
-| sound_effect | background | 1 | N/A | begin | -35 | Outdoor park ambiance, people chattering | Auto |
-| music | background | 2 | N/A | begin | -35 | Light comedy theme music, quirky | Auto |
-| speech | foreground | N/A| Host | N/A | -15 | Welcome to the annual Fart Competition. | Auto |
-| speech | foreground | N/A| Host | N/A | -15 | Now, let’s welcome our youngest participant. | Auto |
-| sound_effect | foreground | N/A| N/A | N/A | -35 | Clapping sound | 2 |
-| speech | foreground | N/A| Child | N/A | -15 | Hi, I’m excited to be here. | Auto |
-| sound_effect | foreground | N/A| N/A | N/A | -35 | Short, cartoonish duration of a fart sound | 4 |
-| sound_effect | foreground | N/A| N/A | N/A | -35 | Audience laughing and applauding | 2 |
-| speech | foreground | N/A| Host | N/A | -15 | Wow, that was impressive! Let’s give another round of applause! | Auto |
-| sound_effect | foreground | N/A| N/A | N/A | -35 | Audience clapping and cheering | 3 |
-| music | background | 2 | N/A | end | N/A | N/A | Auto |
-| sound_effect | background | 1 | N/A | end | N/A | N/A | Auto |
-""",
- 'table_voice': """
-| Character | Voice |
-|-------------|-----------|
-| Host | Male1_En |
-| Child | Child_En |
-
-""",
- 'wav_file': 'examples/3.mp4',
-}
-
-
-
-examples = [example1, example2, example3]
\ No newline at end of file
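Each `table_script` above is a markdown pipe table. A minimal stdlib sketch of how such a table could be parsed into row dicts (the parser name is an assumption; WavJourney's own parsing may differ):

```python
def parse_table(md):
    # Split a markdown pipe table into {header: cell} dicts,
    # skipping the |---|---| separator row.
    lines = [l.strip() for l in md.strip().splitlines() if l.strip().startswith("|")]
    rows = [[c.strip() for c in l.strip("|").split("|")] for l in lines]
    header = rows[0]
    body = [r for r in rows[1:] if not set("".join(r)) <= set("-")]
    return [dict(zip(header, r)) for r in body]
```

For example, a one-row script table yields a single dict keyed by `Audio Type`, `Layout`, and so on.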
diff --git a/spaces/Awesimo/jojogan/e4e/criteria/id_loss.py b/spaces/Awesimo/jojogan/e4e/criteria/id_loss.py
deleted file mode 100644
index bab806172eff18c0630536ae96817508c3197b8b..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/criteria/id_loss.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import torch
-from torch import nn
-from configs.paths_config import model_paths
-from models.encoders.model_irse import Backbone
-
-
-class IDLoss(nn.Module):
- def __init__(self):
- super(IDLoss, self).__init__()
- print('Loading ResNet ArcFace')
- self.facenet = Backbone(input_size=112, num_layers=50, drop_ratio=0.6, mode='ir_se')
- self.facenet.load_state_dict(torch.load(model_paths['ir_se50']))
- self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112))
- self.facenet.eval()
- for module in [self.facenet, self.face_pool]:
- for param in module.parameters():
- param.requires_grad = False
-
- def extract_feats(self, x):
- x = x[:, :, 35:223, 32:220] # Crop interesting region
- x = self.face_pool(x)
- x_feats = self.facenet(x)
- return x_feats
-
- def forward(self, y_hat, y, x):
- n_samples = x.shape[0]
- x_feats = self.extract_feats(x)
- y_feats = self.extract_feats(y)  # target identity features
- y_hat_feats = self.extract_feats(y_hat)
- y_feats = y_feats.detach()
- loss = 0
- sim_improvement = 0
- id_logs = []
- count = 0
- for i in range(n_samples):
- diff_target = y_hat_feats[i].dot(y_feats[i])
- diff_input = y_hat_feats[i].dot(x_feats[i])
- diff_views = y_feats[i].dot(x_feats[i])
- id_logs.append({'diff_target': float(diff_target),
- 'diff_input': float(diff_input),
- 'diff_views': float(diff_views)})
- loss += 1 - diff_target
- id_diff = float(diff_target) - float(diff_views)
- sim_improvement += id_diff
- count += 1
-
- return loss / count, sim_improvement / count, id_logs
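The per-sample loop in `forward` above reduces to simple bookkeeping: accumulate `1 - <y_hat_i, y_i>` as the loss and track how much closer the output identity is to the target than the input was. A pure-Python sketch of that accumulation, with plain lists standing in for the (assumed unit-norm) ArcFace embeddings:

```python
def id_loss_stats(y_hat_feats, y_feats, x_feats):
    # Same accumulation as IDLoss.forward: dot products between
    # (assumed unit-norm) embeddings, averaged over the batch.
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    loss = sim_improvement = 0.0
    for yh, y, x in zip(y_hat_feats, y_feats, x_feats):
        diff_target = dot(yh, y)  # output vs. target identity
        diff_views = dot(y, x)    # target vs. input identity
        loss += 1 - diff_target
        sim_improvement += diff_target - diff_views
    n = len(y_feats)
    return loss / n, sim_improvement / n
```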
diff --git a/spaces/Bart92/RVC_HF/tools/torchgate/torchgate.py b/spaces/Bart92/RVC_HF/tools/torchgate/torchgate.py
deleted file mode 100644
index 086f2ab38e4ad79e432a51c38ed7e59defae0acd..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/tools/torchgate/torchgate.py
+++ /dev/null
@@ -1,264 +0,0 @@
-import torch
-from torch.nn.functional import conv1d, conv2d
-from typing import Union, Optional
-from .utils import linspace, temperature_sigmoid, amp_to_db
-
-
-class TorchGate(torch.nn.Module):
- """
- A PyTorch module that applies a spectral gate to an input signal.
-
- Arguments:
- sr {int} -- Sample rate of the input signal.
- nonstationary {bool} -- Whether to use non-stationary or stationary masking (default: {False}).
- n_std_thresh_stationary {float} -- Number of standard deviations above mean to threshold noise for
- stationary masking (default: {1.5}).
- n_thresh_nonstationary {float} -- Multiplier over the smoothed magnitude spectrogram used as the
- threshold for non-stationary masking (default: {1.3}).
- temp_coeff_nonstationary {float} -- Temperature coefficient for non-stationary masking (default: {0.1}).
- n_movemean_nonstationary {int} -- Number of samples for moving average smoothing in non-stationary masking
- (default: {20}).
- prop_decrease {float} -- Proportion to decrease signal by where the mask is zero (default: {1.0}).
- n_fft {int} -- Size of FFT for STFT (default: {1024}).
- win_length {[int]} -- Window length for STFT. If None, defaults to `n_fft` (default: {None}).
- hop_length {[int]} -- Hop length for STFT. If None, defaults to `win_length` // 4 (default: {None}).
- freq_mask_smooth_hz {float} -- Frequency smoothing width for mask (in Hz). If None, no smoothing is applied
- (default: {500}).
- time_mask_smooth_ms {float} -- Time smoothing width for mask (in ms). If None, no smoothing is applied
- (default: {50}).
- """
-
- @torch.no_grad()
- def __init__(
- self,
- sr: int,
- nonstationary: bool = False,
- n_std_thresh_stationary: float = 1.5,
- n_thresh_nonstationary: float = 1.3,
- temp_coeff_nonstationary: float = 0.1,
- n_movemean_nonstationary: int = 20,
- prop_decrease: float = 1.0,
- n_fft: int = 1024,
- win_length: Optional[int] = None,
- hop_length: Optional[int] = None,
- freq_mask_smooth_hz: float = 500,
- time_mask_smooth_ms: float = 50,
- ):
- super().__init__()
-
- # General Params
- self.sr = sr
- self.nonstationary = nonstationary
- assert 0.0 <= prop_decrease <= 1.0
- self.prop_decrease = prop_decrease
-
- # STFT Params
- self.n_fft = n_fft
- self.win_length = self.n_fft if win_length is None else win_length
- self.hop_length = self.win_length // 4 if hop_length is None else hop_length
-
- # Stationary Params
- self.n_std_thresh_stationary = n_std_thresh_stationary
-
- # Non-Stationary Params
- self.temp_coeff_nonstationary = temp_coeff_nonstationary
- self.n_movemean_nonstationary = n_movemean_nonstationary
- self.n_thresh_nonstationary = n_thresh_nonstationary
-
- # Smooth Mask Params
- self.freq_mask_smooth_hz = freq_mask_smooth_hz
- self.time_mask_smooth_ms = time_mask_smooth_ms
- self.register_buffer("smoothing_filter", self._generate_mask_smoothing_filter())
-
- @torch.no_grad()
- def _generate_mask_smoothing_filter(self) -> Union[torch.Tensor, None]:
- """
- Builds the 2D filter used to smooth the spectral gate mask along frequency and time.
-
- Returns:
- smoothing_filter (torch.Tensor): a 2D tensor representing the smoothing filter,
- with shape (n_grad_freq, n_grad_time), where n_grad_freq is the number of frequency
- bins to smooth and n_grad_time is the number of time frames to smooth.
- If both self.freq_mask_smooth_hz and self.time_mask_smooth_ms are None, returns None.
- """
- if self.freq_mask_smooth_hz is None and self.time_mask_smooth_ms is None:
- return None
-
- n_grad_freq = (
- 1
- if self.freq_mask_smooth_hz is None
- else int(self.freq_mask_smooth_hz / (self.sr / (self.n_fft / 2)))
- )
- if n_grad_freq < 1:
- raise ValueError(
- f"freq_mask_smooth_hz needs to be at least {int((self.sr / (self._n_fft / 2)))} Hz"
- )
-
- n_grad_time = (
- 1
- if self.time_mask_smooth_ms is None
- else int(self.time_mask_smooth_ms / ((self.hop_length / self.sr) * 1000))
- )
- if n_grad_time < 1:
- raise ValueError(
- f"time_mask_smooth_ms needs to be at least {int((self.hop_length / self.sr) * 1000)} ms"
- )
-
- if n_grad_time == 1 and n_grad_freq == 1:
- return None
-
- v_f = torch.cat(
- [
- linspace(0, 1, n_grad_freq + 1, endpoint=False),
- linspace(1, 0, n_grad_freq + 2),
- ]
- )[1:-1]
- v_t = torch.cat(
- [
- linspace(0, 1, n_grad_time + 1, endpoint=False),
- linspace(1, 0, n_grad_time + 2),
- ]
- )[1:-1]
- smoothing_filter = torch.outer(v_f, v_t).unsqueeze(0).unsqueeze(0)
-
- return smoothing_filter / smoothing_filter.sum()
-
- @torch.no_grad()
- def _stationary_mask(
- self, X_db: torch.Tensor, xn: Optional[torch.Tensor] = None
- ) -> torch.Tensor:
- """
- Computes a stationary binary mask to filter out noise in a log-magnitude spectrogram.
-
- Arguments:
- X_db (torch.Tensor): tensor of shape (batch, freq_bins, frames) containing the log-magnitude spectrogram.
- xn (torch.Tensor): 1D tensor containing the audio signal corresponding to X_db.
-
- Returns:
- sig_mask (torch.Tensor): Binary mask of the same shape as X_db, where values greater than the threshold
- are set to 1, and the rest are set to 0.
- """
- if xn is not None:
- XN = torch.stft(
- xn,
- n_fft=self.n_fft,
- hop_length=self.hop_length,
- win_length=self.win_length,
- return_complex=True,
- pad_mode="constant",
- center=True,
- window=torch.hann_window(self.win_length).to(xn.device),
- )
-
- XN_db = amp_to_db(XN).to(dtype=X_db.dtype)
- else:
- XN_db = X_db
-
- # calculate mean and standard deviation along the frequency axis
- std_freq_noise, mean_freq_noise = torch.std_mean(XN_db, dim=-1)
-
- # compute noise threshold
- noise_thresh = mean_freq_noise + std_freq_noise * self.n_std_thresh_stationary
-
- # create binary mask by thresholding the spectrogram
- sig_mask = X_db > noise_thresh.unsqueeze(2)
- return sig_mask
-
- @torch.no_grad()
- def _nonstationary_mask(self, X_abs: torch.Tensor) -> torch.Tensor:
- """
- Computes a non-stationary binary mask to filter out noise in a log-magnitude spectrogram.
-
- Arguments:
- X_abs (torch.Tensor): tensor of shape (batch, freq_bins, frames) containing the magnitude spectrogram.
-
- Returns:
- sig_mask (torch.Tensor): Binary mask of the same shape as X_abs, where values greater than the threshold
- are set to 1, and the rest are set to 0.
- """
- X_smoothed = (
- conv1d(
- X_abs.reshape(-1, 1, X_abs.shape[-1]),
- torch.ones(
- self.n_movemean_nonstationary,
- dtype=X_abs.dtype,
- device=X_abs.device,
- ).view(1, 1, -1),
- padding="same",
- ).view(X_abs.shape)
- / self.n_movemean_nonstationary
- )
-
- # Compute slowness ratio and apply temperature sigmoid
- slowness_ratio = (X_abs - X_smoothed) / (X_smoothed + 1e-6)
- sig_mask = temperature_sigmoid(
- slowness_ratio, self.n_thresh_nonstationary, self.temp_coeff_nonstationary
- )
-
- return sig_mask
-
- def forward(
- self, x: torch.Tensor, xn: Optional[torch.Tensor] = None
- ) -> torch.Tensor:
- """
- Apply the proposed algorithm to the input signal.
-
- Arguments:
- x (torch.Tensor): The input audio signal, with shape (batch_size, signal_length).
- xn (Optional[torch.Tensor]): The noise signal used for stationary noise reduction. If `None`, the input
- signal is used as the noise signal. Default: `None`.
-
- Returns:
- torch.Tensor: The denoised audio signal, with the same shape as the input signal.
- """
- assert x.ndim == 2
- if x.shape[-1] < self.win_length * 2:
- raise Exception(f"x must be bigger than {self.win_length * 2}")
-
- assert xn is None or xn.ndim == 1 or xn.ndim == 2
- if xn is not None and xn.shape[-1] < self.win_length * 2:
- raise Exception(f"xn must be bigger than {self.win_length * 2}")
-
- # Compute short-time Fourier transform (STFT)
- X = torch.stft(
- x,
- n_fft=self.n_fft,
- hop_length=self.hop_length,
- win_length=self.win_length,
- return_complex=True,
- pad_mode="constant",
- center=True,
- window=torch.hann_window(self.win_length).to(x.device),
- )
-
- # Compute signal mask based on stationary or nonstationary assumptions
- if self.nonstationary:
- sig_mask = self._nonstationary_mask(X.abs())
- else:
- sig_mask = self._stationary_mask(amp_to_db(X), xn)
-
- # Propagate decrease in signal power
- sig_mask = self.prop_decrease * (sig_mask * 1.0 - 1.0) + 1.0
-
- # Smooth signal mask with 2D convolution
- if self.smoothing_filter is not None:
- sig_mask = conv2d(
- sig_mask.unsqueeze(1),
- self.smoothing_filter.to(sig_mask.dtype),
- padding="same",
- )
-
- # Apply signal mask to STFT magnitude and phase components
- Y = X * sig_mask.squeeze(1)
-
- # Inverse STFT to obtain time-domain signal
- y = torch.istft(
- Y,
- n_fft=self.n_fft,
- hop_length=self.hop_length,
- win_length=self.win_length,
- center=True,
- window=torch.hann_window(self.win_length).to(Y.device),
- )
-
- return y.to(dtype=x.dtype)
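The stationary gate above boils down to a per-frequency rule: threshold = mean + `n_std` x standard deviation of the (noise) log-magnitudes, then keep only bins above it. A dependency-free sketch of that rule for a single frequency bin over time — not the module above, just the thresholding idea, with the signal itself used as the noise estimate (as in `_stationary_mask` when `xn` is `None`):

```python
from statistics import mean, stdev

def stationary_mask(frames_db, n_std=1.5):
    # Threshold = mean + n_std * std of the noise estimate over time;
    # frames above the threshold are kept (mask value True).
    thresh = mean(frames_db) + n_std * stdev(frames_db)
    return [v > thresh for v in frames_db]
```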
diff --git a/spaces/Benson/text-generation/Examples/9 Yukle Apps.md b/spaces/Benson/text-generation/Examples/9 Yukle Apps.md
deleted file mode 100644
index 0ad32a151bc547bcb46d61fccb1a966e17e29aa4..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/9 Yukle Apps.md
+++ /dev/null
@@ -1,167 +0,0 @@
-
-9apps yukle: How to Download and Use the Best App Store for Android
-If you are looking for a way to discover and download the best apps and games for your Android device, you may want to try 9apps yukle. It is a powerful and versatile app store that offers a wide range of useful apps, from entertainment to productivity, from social networking to education, and more. In this article, we will show you what 9apps yukle is, why you need it, how to download and use it on your Android device, and how to run it on your PC or Mac with BlueStacks. We will also share some of the best apps and games you can find on 9apps yukle, so you can enjoy them on your device or computer.
- What is 9apps yukle and why you need it
-9apps yukle is a tools app developed by 9Apps that serves as an alternative app store for Android users. Unlike the default Google Play Store, 9apps yukle gives you more choices, more features, and more benefits when it comes to finding and downloading apps and games. Here are some of the reasons why you need 9apps yukle:
-
-It has a large collection of apps and games across many categories and genres, so you can always find something that suits your needs and preferences.
- It has a smart recommendation system that suggests the best apps and games based on your interests, habits, and ratings.
- It has a fast and easy download process that saves you time and data. You can also pause and resume your downloads at any time.
- It has a user-friendly interface that makes it easy to browse, search, and manage your apps and games.
- It has regular updates that keep your apps and games current with the latest features and bug fixes.
- It has a secure platform that protects your device from malware, viruses, and other threats.
-
- How to download and install 9apps yukle on your Android device
-
-
-Go to the official 9apps yukle website (https://www.9appsyukle.com/) or scan the QR code below with your device's camera.
-
-Tap the "Download" button to start downloading the 9apps yukle APK file.
-Once the download is complete, open the APK file from your device's file manager or notification bar.
-If prompted, enable the "Unknown sources" option in your device settings to allow installing apps from sources other than the Google Play Store. Follow the on-screen instructions to complete the installation of 9apps yukle.
-Launch 9apps yukle from your device's app drawer or home screen.
-
-Congratulations! You have successfully installed 9apps yukle on your Android device. You can now start exploring and downloading the best apps and games for your device.
- How to use 9apps yukle to find and download the best apps and games
-Using 9apps yukle to find and download the best apps and games is easy and fun. Here are some tips:
-
-On the 9apps yukle home screen, you can see featured apps and games, the latest updates, hit lists, and categories. Swipe left or right to browse through them.
- You can also use the search bar at the top to type the name or a keyword of the app or game you are looking for.
-When you find an app or game that interests you, tap it to see more details, such as the description, screenshots, ratings, reviews, and related apps and games.
- If you want to download an app or game, tap the "Download" button at the bottom. You can also tap the "Share" button to share the app or game with your friends via social media, email, or other apps.
-
-You can manage your downloaded apps and games by tapping the "Apps" icon in the bottom-left corner of the screen. You can also uninstall, update, or move your apps and games from there.
-
-That's it! You have learned how to use 9apps yukle to find and download the best apps and games for your Android device. Enjoy!
- The advantages of using 9apps yukle on PC and Mac
-If you want to enjoy the best apps and games on a bigger screen, with better graphics, sound, and performance, try using 9apps yukle on your PC or Mac. This is possible with the help of an Android emulator called BlueStacks, software that lets you run Android apps and games on your PC or Mac as if they were native applications. Here are some of the advantages of using 9apps yukle on PC and Mac with BlueStacks:
-
- You can access a larger collection of apps and games that are not available for, or compatible with, your Android device.
- You can play Android games with better graphics, sound, and performance, without lag or crashes.
-You can use your keyboard, mouse, or gamepad to control your Android games, which can give you an edge over other players.
- You can multitask and run several apps and games at the same time in different windows or tabs.
-You can back up and sync your data and settings between your Android device and your PC or Mac with Google Play Services.
- How to run 9apps yukle on your PC or Mac with BlueStacks
-Running 9apps yukle on your PC or Mac with BlueStacks is easy and convenient. Just follow these steps:
-
-
-Download and install BlueStacks on your PC or Mac from the official website (https://www.bluestacks.com/) or from the links below.
-Launch BlueStacks and sign in with your Google account or create a new one.
-
-Select the 9apps yukle APK file you downloaded earlier and wait for it to install.
-Once the installation is complete, you will see 9apps yukle on the BlueStacks home screen. Click it to launch it.
-
-Congratulations! You are now running 9apps yukle on your PC or Mac with BlueStacks and can enjoy the best apps and games on your computer.
- The benefits of using BlueStacks to play Android games on your PC or Mac
-Using BlueStacks to play Android games on your PC or Mac has many benefits that can improve your gaming experience. Here are some of them:
-
-You can use the BlueStacks Game Controls feature to customize your keyboard, mouse, or gamepad settings for each game. You can also use predefined controls for popular games or create your own.
-You can use the BlueStacks Eco Mode feature to optimize CPU and RAM usage and reduce power consumption while playing several games at once.
-You can use the BlueStacks Multi-Instance feature to create and run several BlueStacks instances with different accounts, settings, and apps, and sync your actions across them with the Multi-Instance Sync feature.
-You can use the BlueStacks Macros feature to record and replay your actions in any game with a single keystroke. You can also edit, share, and import macros from other users.
-You can use the BlueStacks Screen Recorder feature to capture and save your gameplay videos in high quality, and stream your gameplay live to Twitch, YouTube, Facebook, or other platforms with the BlueStacks Streaming Mode feature.
-
- How to customize your BlueStacks settings for optimal performance and experience
-To customize your BlueStacks settings for optimal performance and experience, follow these tips:
-
-
-For display settings, you can choose the resolution, orientation, and DPI of the BlueStacks window, and enable or disable full-screen mode, high frame rates, and notifications.
- For sound settings, you can adjust the speaker and microphone volume, and enable or disable sound effects and voice chat.
-For engine settings, you can choose the performance mode, graphics mode, graphics engine, and memory allocation of BlueStacks, and enable or disable virtualization technology, ASTC textures, and the ABI setting.
-For preference settings, you can choose the language, location, keyboard layout, and time zone of BlueStacks, and enable or disable automatic updates, app notifications, App Center recommendations, data backups, and disk cleanup.
-For game control settings, you can customize your keyboard, mouse, or gamepad configuration for each game, and enable or disable the game guide, smart controls, MOBA mode, shooting mode, and aim-panning mode.
- The best apps and games you can find on 9apps yukle
-One of the best things about 9apps yukle is its large collection of apps and games across many categories and genres. You can find apps and games for entertainment, productivity, social networking, education, health, lifestyle, and more, for different age groups, interests, and skill levels. Here are some of the best apps and games you can find on 9apps yukle:
- The top categories and genres of apps and games on 9apps yukle
-According to 9apps yukle statistics, the top categories and genres of apps and games on 9apps yukle are as follows:
-
-| Category | Genre | Examples |
-|-----------|--------|----------|
-| Entertainment | Video players and editors | VidMate, MX Player, KineMaster |
-| Productivity | Tools | SHAREit, Xender, CamScanner |
-| Social networking | Communication | WhatsApp, Facebook, Instagram |
-| Education | Education | Duolingo, Khan Academy, Udemy |
-| Health | Fitness | Noom, Fitbit, Calm |
-| Lifestyle | Shopping | Amazon, Flipkart, AliExpress |
-| Games | Action | PUBG Mobile, Free Fire, Call of Duty Mobile |
-| Games | Casual | Candy Crush Saga, Subway Surfers, Temple Run 2 |
-| Games | Puzzle | Toon Blast, Brain Out, Cut the Rope 2 |
-
-These are some of the most popular and widely used categories and genres of apps and games on 9apps yukle. You can explore more by tapping the "More" button on the 9apps yukle home screen.
- The most popular and trending apps and games on 9apps yukle
-Another way to find the best apps and games on 9apps yukle is to check its most popular and trending apps and games: those with the most downloads, ratings, reviews, and recommendations from other users. You can see them by tapping the "Top" button on the 9apps yukle home screen. Here are some of them:
-
-VidMate: A powerful video downloader that lets you download videos from YouTube, Facebook, Instagram, and other platforms in various formats and resolutions.
-PUBG Mobile: A thrilling battle royale game that pits you against 99 other players in a fight for survival. You can play solo, duo, or squad mode, and customize your weapons, vehicles, outfits, and more.
-
-Candy Crush Saga: A sweet and addictive puzzle game that challenges you to match three or more candies of the same color and clear the board. You can also play with your friends and compete for the highest score.
-Duolingo: A fun and effective language-learning app that teaches you a new language through lessons, games, quizzes, and stories. You can choose from more than 30 languages and track your progress.
-
- The hidden gems and underrated apps and games on 9apps yukle
-Besides the most popular and trending titles, 9apps yukle also has some hidden gems and underrated apps and games that deserve your attention: apps and games with great quality, features, and potential that are not as well known or appreciated as they should be. You can discover them by tapping the "Discover" button on the 9apps yukle home screen. Here are some of them:
-
-CamScanner: A handy scanner app that turns your device's camera into a scanner. You can scan documents, receipts, notes, photos, and more, save them as PDF or JPG files, and edit, share, print, or sync your scans.
-Free Fire: A fast-paced battle royale game that offers a 10-minute survival experience. You can choose your landing spot, loot weapons and items, shoot enemies, and be the last one standing.
-Instagram: A popular social media app that lets you share your photos and videos with your followers. You can also apply filters, stickers, effects, and more to your posts, and follow your favorite celebrities, brands, and influencers.
-
-Udemy: A learning platform that offers thousands of courses on topics such as business, design, photography, programming, and personal development. You can learn from expert instructors at your own pace.
-
- Conclusion
-In conclusion, 9apps yukle is a great app store for Android users who want to discover and download the best apps and games for their device. It has a large collection of apps and games across many categories and genres, a smart recommendation system, a fast and easy download process, a user-friendly interface, regular updates, and a secure platform. It also lets you run Android apps and games on your PC or Mac with BlueStacks, which offers benefits such as better graphics, sound, and performance, keyboard, mouse, and gamepad controls, multitasking, and backup and sync. You can also find the best apps and games on 9apps yukle by checking the top categories and genres, the most popular and trending titles, and the hidden gems. We hope you enjoyed this article and learned something new about 9apps yukle. If you want to try 9apps yukle and BlueStacks, you can download them from the links below. Happy downloading!
- Frequently asked questions
-Here are some of the most frequently asked questions about 9apps yukle and BlueStacks:
-
- Is 9apps yukle free to use?
-Yes, 9apps yukle is free to use. You can download and install it on your Android device at no cost, and download and use any app or game on it for free.
- Is 9apps yukle safe to use?
-Yes, 9apps yukle is safe to use. It has a strict security system that scans and verifies every app and game before it is published on the platform, and it protects your device from malware, viruses, and other threats.
- Is BlueStacks free to use?
-
-Is BlueStacks safe to use?
-Yes, BlueStacks is safe to use. It has a reliable security system that protects your privacy and data, and it complies with Google Play policies and terms of service.
- How can I contact the 9apps yukle or BlueStacks support team?
-If you have any questions, issues, or feedback about 9apps yukle or BlueStacks, you can contact their support teams by visiting their official websites (https://www.9appsyukle.com/ or https://www.bluestacks.com/) and clicking the "Contact Us" or "Support" button. You can also email them at support@9appsyukle.com or support@bluestacks.com.
-
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/coco.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/coco.py
deleted file mode 100644
index 2b2f7838448cb63dcf96daffe9470d58566d975a..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/coco.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import os
-import json
-import albumentations
-import numpy as np
-from PIL import Image
-from tqdm import tqdm
-from torch.utils.data import Dataset
-
-from taming.data.sflckr import SegmentationBase # for examples included in repo
-
-
-class Examples(SegmentationBase):
- def __init__(self, size=256, random_crop=False, interpolation="bicubic"):
- super().__init__(data_csv="data/coco_examples.txt",
- data_root="data/coco_images",
- segmentation_root="data/coco_segmentations",
- size=size, random_crop=random_crop,
- interpolation=interpolation,
- n_labels=183, shift_segmentation=True)
-
-
-class CocoBase(Dataset):
-    """needed for (image, caption, segmentation) triplets"""
- def __init__(self, size=None, dataroot="", datajson="", onehot_segmentation=False, use_stuffthing=False,
- crop_size=None, force_no_crop=False, given_files=None):
- self.split = self.get_split()
- self.size = size
- if crop_size is None:
- self.crop_size = size
- else:
- self.crop_size = crop_size
-
- self.onehot = onehot_segmentation # return segmentation as rgb or one hot
- self.stuffthing = use_stuffthing # include thing in segmentation
- if self.onehot and not self.stuffthing:
-            raise NotImplementedError("One hot mode is only supported for the "
-                                      "stuffthings version because labels are stored "
-                                      "a bit differently.")
-
- data_json = datajson
- with open(data_json) as json_file:
- self.json_data = json.load(json_file)
- self.img_id_to_captions = dict()
- self.img_id_to_filepath = dict()
- self.img_id_to_segmentation_filepath = dict()
-
- assert data_json.split("/")[-1] in ["captions_train2017.json",
- "captions_val2017.json"]
- if self.stuffthing:
- self.segmentation_prefix = (
- "data/cocostuffthings/val2017" if
- data_json.endswith("captions_val2017.json") else
- "data/cocostuffthings/train2017")
- else:
- self.segmentation_prefix = (
- "data/coco/annotations/stuff_val2017_pixelmaps" if
- data_json.endswith("captions_val2017.json") else
- "data/coco/annotations/stuff_train2017_pixelmaps")
-
- imagedirs = self.json_data["images"]
- self.labels = {"image_ids": list()}
- for imgdir in tqdm(imagedirs, desc="ImgToPath"):
- self.img_id_to_filepath[imgdir["id"]] = os.path.join(dataroot, imgdir["file_name"])
- self.img_id_to_captions[imgdir["id"]] = list()
- pngfilename = imgdir["file_name"].replace("jpg", "png")
- self.img_id_to_segmentation_filepath[imgdir["id"]] = os.path.join(
- self.segmentation_prefix, pngfilename)
- if given_files is not None:
- if pngfilename in given_files:
- self.labels["image_ids"].append(imgdir["id"])
- else:
- self.labels["image_ids"].append(imgdir["id"])
-
- capdirs = self.json_data["annotations"]
- for capdir in tqdm(capdirs, desc="ImgToCaptions"):
-        # there are on average 5 captions per image
- self.img_id_to_captions[capdir["image_id"]].append(np.array([capdir["caption"]]))
-
- self.rescaler = albumentations.SmallestMaxSize(max_size=self.size)
- if self.split=="validation":
- self.cropper = albumentations.CenterCrop(height=self.crop_size, width=self.crop_size)
- else:
- self.cropper = albumentations.RandomCrop(height=self.crop_size, width=self.crop_size)
- self.preprocessor = albumentations.Compose(
- [self.rescaler, self.cropper],
- additional_targets={"segmentation": "image"})
- if force_no_crop:
- self.rescaler = albumentations.Resize(height=self.size, width=self.size)
- self.preprocessor = albumentations.Compose(
- [self.rescaler],
- additional_targets={"segmentation": "image"})
-
- def __len__(self):
- return len(self.labels["image_ids"])
-
- def preprocess_image(self, image_path, segmentation_path):
- image = Image.open(image_path)
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
-
- segmentation = Image.open(segmentation_path)
- if not self.onehot and not segmentation.mode == "RGB":
- segmentation = segmentation.convert("RGB")
- segmentation = np.array(segmentation).astype(np.uint8)
- if self.onehot:
- assert self.stuffthing
- # stored in caffe format: unlabeled==255. stuff and thing from
- # 0-181. to be compatible with the labels in
- # https://github.com/nightrome/cocostuff/blob/master/labels.txt
- # we shift stuffthing one to the right and put unlabeled in zero
- # as long as segmentation is uint8 shifting to right handles the
- # latter too
- assert segmentation.dtype == np.uint8
- segmentation = segmentation + 1
-
- processed = self.preprocessor(image=image, segmentation=segmentation)
- image, segmentation = processed["image"], processed["segmentation"]
- image = (image / 127.5 - 1.0).astype(np.float32)
-
- if self.onehot:
- assert segmentation.dtype == np.uint8
- # make it one hot
- n_labels = 183
- flatseg = np.ravel(segmentation)
-            onehot = np.zeros((flatseg.size, n_labels), dtype=bool)
- onehot[np.arange(flatseg.size), flatseg] = True
- onehot = onehot.reshape(segmentation.shape + (n_labels,)).astype(int)
- segmentation = onehot
- else:
- segmentation = (segmentation / 127.5 - 1.0).astype(np.float32)
- return image, segmentation
-
- def __getitem__(self, i):
- img_path = self.img_id_to_filepath[self.labels["image_ids"][i]]
- seg_path = self.img_id_to_segmentation_filepath[self.labels["image_ids"][i]]
- image, segmentation = self.preprocess_image(img_path, seg_path)
- captions = self.img_id_to_captions[self.labels["image_ids"][i]]
- # randomly draw one of all available captions per image
- caption = captions[np.random.randint(0, len(captions))]
- example = {"image": image,
- "caption": [str(caption[0])],
- "segmentation": segmentation,
- "img_path": img_path,
- "seg_path": seg_path,
- "filename_": img_path.split(os.sep)[-1]
- }
- return example
-
-
-class CocoImagesAndCaptionsTrain(CocoBase):
- """returns a pair of (image, caption)"""
- def __init__(self, size, onehot_segmentation=False, use_stuffthing=False, crop_size=None, force_no_crop=False):
- super().__init__(size=size,
- dataroot="data/coco/train2017",
- datajson="data/coco/annotations/captions_train2017.json",
- onehot_segmentation=onehot_segmentation,
- use_stuffthing=use_stuffthing, crop_size=crop_size, force_no_crop=force_no_crop)
-
- def get_split(self):
- return "train"
-
-
-class CocoImagesAndCaptionsValidation(CocoBase):
- """returns a pair of (image, caption)"""
- def __init__(self, size, onehot_segmentation=False, use_stuffthing=False, crop_size=None, force_no_crop=False,
- given_files=None):
- super().__init__(size=size,
- dataroot="data/coco/val2017",
- datajson="data/coco/annotations/captions_val2017.json",
- onehot_segmentation=onehot_segmentation,
- use_stuffthing=use_stuffthing, crop_size=crop_size, force_no_crop=force_no_crop,
- given_files=given_files)
-
- def get_split(self):
- return "validation"
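The label handling in `CocoBase.preprocess_image` above (shift the caffe-format stuffthing ids by one so unlabeled `255` wraps to `0`, then one-hot encode) can be sketched in isolation. This is a minimal illustration, not the repo's code path; the tiny 2x2 label map is made up:

```python
import numpy as np

def to_onehot(segmentation: np.ndarray, n_labels: int = 183) -> np.ndarray:
    """One-hot encode a uint8 label map, mirroring CocoBase.preprocess_image.

    In the caffe-style maps, unlabeled == 255 and classes run 0-181; adding 1
    with uint8 wraparound sends unlabeled to channel 0 and classes to 1-182.
    """
    assert segmentation.dtype == np.uint8
    shifted = segmentation + 1          # uint8 overflow: 255 -> 0
    flat = np.ravel(shifted)
    onehot = np.zeros((flat.size, n_labels), dtype=bool)
    onehot[np.arange(flat.size), flat] = True
    return onehot.reshape(shifted.shape + (n_labels,)).astype(int)

seg = np.array([[0, 255], [181, 1]], dtype=np.uint8)
oh = to_onehot(seg)
print(oh.shape)  # (2, 2, 183)
```

Note that using the builtin `bool` as the dtype sidesteps the removed `np.bool` alias while producing the same boolean mask.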
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/wheel.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/wheel.py
deleted file mode 100644
index a8cd1330f0f73ac76832bdbd6b455b10bd91ba83..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/wheel.py
+++ /dev/null
@@ -1,740 +0,0 @@
-"""Support for installing and building the "wheel" binary package format.
-"""
-
-import collections
-import compileall
-import contextlib
-import csv
-import importlib
-import logging
-import os.path
-import re
-import shutil
-import sys
-import warnings
-from base64 import urlsafe_b64encode
-from email.message import Message
-from itertools import chain, filterfalse, starmap
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- BinaryIO,
- Callable,
- Dict,
- Generator,
- Iterable,
- Iterator,
- List,
- NewType,
- Optional,
- Sequence,
- Set,
- Tuple,
- Union,
- cast,
-)
-from zipfile import ZipFile, ZipInfo
-
-from pip._vendor.distlib.scripts import ScriptMaker
-from pip._vendor.distlib.util import get_export_entry
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.exceptions import InstallationError
-from pip._internal.locations import get_major_minor_version
-from pip._internal.metadata import (
- BaseDistribution,
- FilesystemWheel,
- get_wheel_distribution,
-)
-from pip._internal.models.direct_url import DIRECT_URL_METADATA_NAME, DirectUrl
-from pip._internal.models.scheme import SCHEME_KEYS, Scheme
-from pip._internal.utils.filesystem import adjacent_tmp_file, replace
-from pip._internal.utils.misc import captured_stdout, ensure_dir, hash_file, partition
-from pip._internal.utils.unpacking import (
- current_umask,
- is_within_directory,
- set_extracted_file_to_default_mode_plus_executable,
- zip_item_is_executable,
-)
-from pip._internal.utils.wheel import parse_wheel
-
-if TYPE_CHECKING:
- from typing import Protocol
-
- class File(Protocol):
- src_record_path: "RecordPath"
- dest_path: str
- changed: bool
-
- def save(self) -> None:
- pass
-
-
-logger = logging.getLogger(__name__)
-
-RecordPath = NewType("RecordPath", str)
-InstalledCSVRow = Tuple[RecordPath, str, Union[int, str]]
-
-
-def rehash(path: str, blocksize: int = 1 << 20) -> Tuple[str, str]:
- """Return (encoded_digest, length) for path using hashlib.sha256()"""
- h, length = hash_file(path, blocksize)
- digest = "sha256=" + urlsafe_b64encode(h.digest()).decode("latin1").rstrip("=")
- return (digest, str(length))
-
-
-def csv_io_kwargs(mode: str) -> Dict[str, Any]:
- """Return keyword arguments to properly open a CSV file
- in the given mode.
- """
- return {"mode": mode, "newline": "", "encoding": "utf-8"}
-
-
-def fix_script(path: str) -> bool:
- """Replace #!python with #!/path/to/python
- Return True if file was changed.
- """
- # XXX RECORD hashes will need to be updated
- assert os.path.isfile(path)
-
- with open(path, "rb") as script:
- firstline = script.readline()
- if not firstline.startswith(b"#!python"):
- return False
- exename = sys.executable.encode(sys.getfilesystemencoding())
- firstline = b"#!" + exename + os.linesep.encode("ascii")
- rest = script.read()
- with open(path, "wb") as script:
- script.write(firstline)
- script.write(rest)
- return True
-
-
-def wheel_root_is_purelib(metadata: Message) -> bool:
- return metadata.get("Root-Is-Purelib", "").lower() == "true"
-
-
-def get_entrypoints(dist: BaseDistribution) -> Tuple[Dict[str, str], Dict[str, str]]:
- console_scripts = {}
- gui_scripts = {}
- for entry_point in dist.iter_entry_points():
- if entry_point.group == "console_scripts":
- console_scripts[entry_point.name] = entry_point.value
- elif entry_point.group == "gui_scripts":
- gui_scripts[entry_point.name] = entry_point.value
- return console_scripts, gui_scripts
-
-
-def message_about_scripts_not_on_PATH(scripts: Sequence[str]) -> Optional[str]:
- """Determine if any scripts are not on PATH and format a warning.
- Returns a warning message if one or more scripts are not on PATH,
- otherwise None.
- """
- if not scripts:
- return None
-
- # Group scripts by the path they were installed in
- grouped_by_dir: Dict[str, Set[str]] = collections.defaultdict(set)
- for destfile in scripts:
- parent_dir = os.path.dirname(destfile)
- script_name = os.path.basename(destfile)
- grouped_by_dir[parent_dir].add(script_name)
-
- # We don't want to warn for directories that are on PATH.
- not_warn_dirs = [
- os.path.normcase(os.path.normpath(i)).rstrip(os.sep)
- for i in os.environ.get("PATH", "").split(os.pathsep)
- ]
- # If an executable sits with sys.executable, we don't warn for it.
- # This covers the case of venv invocations without activating the venv.
- not_warn_dirs.append(
- os.path.normcase(os.path.normpath(os.path.dirname(sys.executable)))
- )
- warn_for: Dict[str, Set[str]] = {
- parent_dir: scripts
- for parent_dir, scripts in grouped_by_dir.items()
- if os.path.normcase(os.path.normpath(parent_dir)) not in not_warn_dirs
- }
- if not warn_for:
- return None
-
- # Format a message
- msg_lines = []
- for parent_dir, dir_scripts in warn_for.items():
- sorted_scripts: List[str] = sorted(dir_scripts)
- if len(sorted_scripts) == 1:
- start_text = "script {} is".format(sorted_scripts[0])
- else:
- start_text = "scripts {} are".format(
- ", ".join(sorted_scripts[:-1]) + " and " + sorted_scripts[-1]
- )
-
- msg_lines.append(
- "The {} installed in '{}' which is not on PATH.".format(
- start_text, parent_dir
- )
- )
-
- last_line_fmt = (
- "Consider adding {} to PATH or, if you prefer "
- "to suppress this warning, use --no-warn-script-location."
- )
- if len(msg_lines) == 1:
- msg_lines.append(last_line_fmt.format("this directory"))
- else:
- msg_lines.append(last_line_fmt.format("these directories"))
-
- # Add a note if any directory starts with ~
- warn_for_tilde = any(
- i[0] == "~" for i in os.environ.get("PATH", "").split(os.pathsep) if i
- )
- if warn_for_tilde:
- tilde_warning_msg = (
- "NOTE: The current PATH contains path(s) starting with `~`, "
- "which may not be expanded by all applications."
- )
- msg_lines.append(tilde_warning_msg)
-
- # Returns the formatted multiline message
- return "\n".join(msg_lines)
-
-
-def _normalized_outrows(
- outrows: Iterable[InstalledCSVRow],
-) -> List[Tuple[str, str, str]]:
- """Normalize the given rows of a RECORD file.
-
- Items in each row are converted into str. Rows are then sorted to make
- the value more predictable for tests.
-
- Each row is a 3-tuple (path, hash, size) and corresponds to a record of
- a RECORD file (see PEP 376 and PEP 427 for details). For the rows
- passed to this function, the size can be an integer as an int or string,
- or the empty string.
- """
- # Normally, there should only be one row per path, in which case the
- # second and third elements don't come into play when sorting.
- # However, in cases in the wild where a path might happen to occur twice,
- # we don't want the sort operation to trigger an error (but still want
- # determinism). Since the third element can be an int or string, we
- # coerce each element to a string to avoid a TypeError in this case.
- # For additional background, see--
- # https://github.com/pypa/pip/issues/5868
- return sorted(
- (record_path, hash_, str(size)) for record_path, hash_, size in outrows
- )
-
-
-def _record_to_fs_path(record_path: RecordPath, lib_dir: str) -> str:
- return os.path.join(lib_dir, record_path)
-
-
-def _fs_to_record_path(path: str, lib_dir: str) -> RecordPath:
- # On Windows, do not handle relative paths if they belong to different
- # logical disks
- if os.path.splitdrive(path)[0].lower() == os.path.splitdrive(lib_dir)[0].lower():
- path = os.path.relpath(path, lib_dir)
-
- path = path.replace(os.path.sep, "/")
- return cast("RecordPath", path)
-
-
-def get_csv_rows_for_installed(
- old_csv_rows: List[List[str]],
- installed: Dict[RecordPath, RecordPath],
- changed: Set[RecordPath],
- generated: List[str],
- lib_dir: str,
-) -> List[InstalledCSVRow]:
- """
- :param installed: A map from archive RECORD path to installation RECORD
- path.
- """
- installed_rows: List[InstalledCSVRow] = []
- for row in old_csv_rows:
- if len(row) > 3:
- logger.warning("RECORD line has more than three elements: %s", row)
- old_record_path = cast("RecordPath", row[0])
- new_record_path = installed.pop(old_record_path, old_record_path)
- if new_record_path in changed:
- digest, length = rehash(_record_to_fs_path(new_record_path, lib_dir))
- else:
- digest = row[1] if len(row) > 1 else ""
- length = row[2] if len(row) > 2 else ""
- installed_rows.append((new_record_path, digest, length))
- for f in generated:
- path = _fs_to_record_path(f, lib_dir)
- digest, length = rehash(f)
- installed_rows.append((path, digest, length))
- for installed_record_path in installed.values():
- installed_rows.append((installed_record_path, "", ""))
- return installed_rows
-
-
-def get_console_script_specs(console: Dict[str, str]) -> List[str]:
- """
- Given the mapping from entrypoint name to callable, return the relevant
- console script specs.
- """
- # Don't mutate caller's version
- console = console.copy()
-
- scripts_to_generate = []
-
- # Special case pip and setuptools to generate versioned wrappers
- #
- # The issue is that some projects (specifically, pip and setuptools) use
- # code in setup.py to create "versioned" entry points - pip2.7 on Python
- # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
- # the wheel metadata at build time, and so if the wheel is installed with
- # a *different* version of Python the entry points will be wrong. The
- # correct fix for this is to enhance the metadata to be able to describe
- # such versioned entry points, but that won't happen till Metadata 2.0 is
- # available.
- # In the meantime, projects using versioned entry points will either have
- # incorrect versioned entry points, or they will not be able to distribute
- # "universal" wheels (i.e., they will need a wheel per Python version).
- #
- # Because setuptools and pip are bundled with _ensurepip and virtualenv,
- # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we
- # override the versioned entry points in the wheel and generate the
- # correct ones. This code is purely a short-term measure until Metadata 2.0
- # is available.
- #
- # To add the level of hack in this section of code, in order to support
- # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment
- # variable which will control which version scripts get installed.
- #
- # ENSUREPIP_OPTIONS=altinstall
- # - Only pipX.Y and easy_install-X.Y will be generated and installed
- # ENSUREPIP_OPTIONS=install
- # - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note
- # that this option is technically if ENSUREPIP_OPTIONS is set and is
- # not altinstall
- # DEFAULT
- # - The default behavior is to install pip, pipX, pipX.Y, easy_install
- # and easy_install-X.Y.
- pip_script = console.pop("pip", None)
- if pip_script:
- if "ENSUREPIP_OPTIONS" not in os.environ:
- scripts_to_generate.append("pip = " + pip_script)
-
- if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall":
- scripts_to_generate.append(
- "pip{} = {}".format(sys.version_info[0], pip_script)
- )
-
- scripts_to_generate.append(f"pip{get_major_minor_version()} = {pip_script}")
- # Delete any other versioned pip entry points
- pip_ep = [k for k in console if re.match(r"pip(\d+(\.\d+)?)?$", k)]
- for k in pip_ep:
- del console[k]
- easy_install_script = console.pop("easy_install", None)
- if easy_install_script:
- if "ENSUREPIP_OPTIONS" not in os.environ:
- scripts_to_generate.append("easy_install = " + easy_install_script)
-
- scripts_to_generate.append(
- "easy_install-{} = {}".format(
- get_major_minor_version(), easy_install_script
- )
- )
- # Delete any other versioned easy_install entry points
- easy_install_ep = [
- k for k in console if re.match(r"easy_install(-\d+\.\d+)?$", k)
- ]
- for k in easy_install_ep:
- del console[k]
-
- # Generate the console entry points specified in the wheel
- scripts_to_generate.extend(starmap("{} = {}".format, console.items()))
-
- return scripts_to_generate
-
-
-class ZipBackedFile:
- def __init__(
- self, src_record_path: RecordPath, dest_path: str, zip_file: ZipFile
- ) -> None:
- self.src_record_path = src_record_path
- self.dest_path = dest_path
- self._zip_file = zip_file
- self.changed = False
-
- def _getinfo(self) -> ZipInfo:
- return self._zip_file.getinfo(self.src_record_path)
-
- def save(self) -> None:
- # directory creation is lazy and after file filtering
- # to ensure we don't install empty dirs; empty dirs can't be
- # uninstalled.
- parent_dir = os.path.dirname(self.dest_path)
- ensure_dir(parent_dir)
-
- # When we open the output file below, any existing file is truncated
- # before we start writing the new contents. This is fine in most
- # cases, but can cause a segfault if pip has loaded a shared
- # object (e.g. from pyopenssl through its vendored urllib3)
- # Since the shared object is mmap'd an attempt to call a
- # symbol in it will then cause a segfault. Unlinking the file
- # allows writing of new contents while allowing the process to
- # continue to use the old copy.
- if os.path.exists(self.dest_path):
- os.unlink(self.dest_path)
-
- zipinfo = self._getinfo()
-
- with self._zip_file.open(zipinfo) as f:
- with open(self.dest_path, "wb") as dest:
- shutil.copyfileobj(f, dest)
-
- if zip_item_is_executable(zipinfo):
- set_extracted_file_to_default_mode_plus_executable(self.dest_path)
-
-
-class ScriptFile:
- def __init__(self, file: "File") -> None:
- self._file = file
- self.src_record_path = self._file.src_record_path
- self.dest_path = self._file.dest_path
- self.changed = False
-
- def save(self) -> None:
- self._file.save()
- self.changed = fix_script(self.dest_path)
-
-
-class MissingCallableSuffix(InstallationError):
- def __init__(self, entry_point: str) -> None:
- super().__init__(
- "Invalid script entry point: {} - A callable "
- "suffix is required. Cf https://packaging.python.org/"
- "specifications/entry-points/#use-for-scripts for more "
- "information.".format(entry_point)
- )
-
-
-def _raise_for_invalid_entrypoint(specification: str) -> None:
- entry = get_export_entry(specification)
- if entry is not None and entry.suffix is None:
- raise MissingCallableSuffix(str(entry))
-
-
-class PipScriptMaker(ScriptMaker):
- def make(
- self, specification: str, options: Optional[Dict[str, Any]] = None
- ) -> List[str]:
- _raise_for_invalid_entrypoint(specification)
- return super().make(specification, options)
-
-
-def _install_wheel(
- name: str,
- wheel_zip: ZipFile,
- wheel_path: str,
- scheme: Scheme,
- pycompile: bool = True,
- warn_script_location: bool = True,
- direct_url: Optional[DirectUrl] = None,
- requested: bool = False,
-) -> None:
- """Install a wheel.
-
- :param name: Name of the project to install
- :param wheel_zip: open ZipFile for wheel being installed
- :param scheme: Distutils scheme dictating the install directories
- :param req_description: String used in place of the requirement, for
- logging
- :param pycompile: Whether to byte-compile installed Python files
- :param warn_script_location: Whether to check that scripts are installed
- into a directory on PATH
- :raises UnsupportedWheel:
- * when the directory holds an unpacked wheel with incompatible
- Wheel-Version
- * when the .dist-info dir does not match the wheel
- """
- info_dir, metadata = parse_wheel(wheel_zip, name)
-
- if wheel_root_is_purelib(metadata):
- lib_dir = scheme.purelib
- else:
- lib_dir = scheme.platlib
-
- # Record details of the files moved
- # installed = files copied from the wheel to the destination
- # changed = files changed while installing (scripts #! line typically)
- # generated = files newly generated during the install (script wrappers)
- installed: Dict[RecordPath, RecordPath] = {}
- changed: Set[RecordPath] = set()
- generated: List[str] = []
-
- def record_installed(
- srcfile: RecordPath, destfile: str, modified: bool = False
- ) -> None:
- """Map archive RECORD paths to installation RECORD paths."""
- newpath = _fs_to_record_path(destfile, lib_dir)
- installed[srcfile] = newpath
- if modified:
- changed.add(newpath)
-
- def is_dir_path(path: RecordPath) -> bool:
- return path.endswith("/")
-
- def assert_no_path_traversal(dest_dir_path: str, target_path: str) -> None:
- if not is_within_directory(dest_dir_path, target_path):
- message = (
- "The wheel {!r} has a file {!r} trying to install"
- " outside the target directory {!r}"
- )
- raise InstallationError(
- message.format(wheel_path, target_path, dest_dir_path)
- )
-
- def root_scheme_file_maker(
- zip_file: ZipFile, dest: str
- ) -> Callable[[RecordPath], "File"]:
- def make_root_scheme_file(record_path: RecordPath) -> "File":
- normed_path = os.path.normpath(record_path)
- dest_path = os.path.join(dest, normed_path)
- assert_no_path_traversal(dest, dest_path)
- return ZipBackedFile(record_path, dest_path, zip_file)
-
- return make_root_scheme_file
-
- def data_scheme_file_maker(
- zip_file: ZipFile, scheme: Scheme
- ) -> Callable[[RecordPath], "File"]:
- scheme_paths = {key: getattr(scheme, key) for key in SCHEME_KEYS}
-
- def make_data_scheme_file(record_path: RecordPath) -> "File":
- normed_path = os.path.normpath(record_path)
- try:
- _, scheme_key, dest_subpath = normed_path.split(os.path.sep, 2)
- except ValueError:
- message = (
- "Unexpected file in {}: {!r}. .data directory contents"
-                    " should be named like: '<scheme key>/<path>'."
- ).format(wheel_path, record_path)
- raise InstallationError(message)
-
- try:
- scheme_path = scheme_paths[scheme_key]
- except KeyError:
- valid_scheme_keys = ", ".join(sorted(scheme_paths))
- message = (
- "Unknown scheme key used in {}: {} (for file {!r}). .data"
- " directory contents should be in subdirectories named"
- " with a valid scheme key ({})"
- ).format(wheel_path, scheme_key, record_path, valid_scheme_keys)
- raise InstallationError(message)
-
- dest_path = os.path.join(scheme_path, dest_subpath)
- assert_no_path_traversal(scheme_path, dest_path)
- return ZipBackedFile(record_path, dest_path, zip_file)
-
- return make_data_scheme_file
-
- def is_data_scheme_path(path: RecordPath) -> bool:
- return path.split("/", 1)[0].endswith(".data")
-
- paths = cast(List[RecordPath], wheel_zip.namelist())
- file_paths = filterfalse(is_dir_path, paths)
- root_scheme_paths, data_scheme_paths = partition(is_data_scheme_path, file_paths)
-
- make_root_scheme_file = root_scheme_file_maker(wheel_zip, lib_dir)
- files: Iterator[File] = map(make_root_scheme_file, root_scheme_paths)
-
- def is_script_scheme_path(path: RecordPath) -> bool:
- parts = path.split("/", 2)
- return len(parts) > 2 and parts[0].endswith(".data") and parts[1] == "scripts"
-
- other_scheme_paths, script_scheme_paths = partition(
- is_script_scheme_path, data_scheme_paths
- )
-
- make_data_scheme_file = data_scheme_file_maker(wheel_zip, scheme)
- other_scheme_files = map(make_data_scheme_file, other_scheme_paths)
- files = chain(files, other_scheme_files)
-
- # Get the defined entry points
- distribution = get_wheel_distribution(
- FilesystemWheel(wheel_path),
- canonicalize_name(name),
- )
- console, gui = get_entrypoints(distribution)
-
- def is_entrypoint_wrapper(file: "File") -> bool:
- # EP, EP.exe and EP-script.py are scripts generated for
- # entry point EP by setuptools
- path = file.dest_path
- name = os.path.basename(path)
- if name.lower().endswith(".exe"):
- matchname = name[:-4]
- elif name.lower().endswith("-script.py"):
- matchname = name[:-10]
- elif name.lower().endswith(".pya"):
- matchname = name[:-4]
- else:
- matchname = name
- # Ignore setuptools-generated scripts
- return matchname in console or matchname in gui
-
- script_scheme_files: Iterator[File] = map(
- make_data_scheme_file, script_scheme_paths
- )
- script_scheme_files = filterfalse(is_entrypoint_wrapper, script_scheme_files)
- script_scheme_files = map(ScriptFile, script_scheme_files)
- files = chain(files, script_scheme_files)
-
- for file in files:
- file.save()
- record_installed(file.src_record_path, file.dest_path, file.changed)
-
- def pyc_source_file_paths() -> Generator[str, None, None]:
- # We de-duplicate installation paths, since there can be overlap (e.g.
- # file in .data maps to same location as file in wheel root).
- # Sorting installation paths makes it easier to reproduce and debug
- # issues related to permissions on existing files.
- for installed_path in sorted(set(installed.values())):
- full_installed_path = os.path.join(lib_dir, installed_path)
- if not os.path.isfile(full_installed_path):
- continue
- if not full_installed_path.endswith(".py"):
- continue
- yield full_installed_path
-
- def pyc_output_path(path: str) -> str:
- """Return the path the pyc file would have been written to."""
- return importlib.util.cache_from_source(path)
-
- # Compile all of the pyc files for the installed files
- if pycompile:
- with captured_stdout() as stdout:
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore")
- for path in pyc_source_file_paths():
- success = compileall.compile_file(path, force=True, quiet=True)
- if success:
- pyc_path = pyc_output_path(path)
- assert os.path.exists(pyc_path)
- pyc_record_path = cast(
- "RecordPath", pyc_path.replace(os.path.sep, "/")
- )
- record_installed(pyc_record_path, pyc_path)
- logger.debug(stdout.getvalue())
-
- maker = PipScriptMaker(None, scheme.scripts)
-
- # Ensure old scripts are overwritten.
- # See https://github.com/pypa/pip/issues/1800
- maker.clobber = True
-
- # Ensure we don't generate any variants for scripts because this is almost
- # never what somebody wants.
- # See https://bitbucket.org/pypa/distlib/issue/35/
- maker.variants = {""}
-
- # This is required because otherwise distlib creates scripts that are not
- # executable.
- # See https://bitbucket.org/pypa/distlib/issue/32/
- maker.set_mode = True
-
- # Generate the console and GUI entry points specified in the wheel
- scripts_to_generate = get_console_script_specs(console)
-
- gui_scripts_to_generate = list(starmap("{} = {}".format, gui.items()))
-
- generated_console_scripts = maker.make_multiple(scripts_to_generate)
- generated.extend(generated_console_scripts)
-
- generated.extend(maker.make_multiple(gui_scripts_to_generate, {"gui": True}))
-
- if warn_script_location:
- msg = message_about_scripts_not_on_PATH(generated_console_scripts)
- if msg is not None:
- logger.warning(msg)
-
- generated_file_mode = 0o666 & ~current_umask()
-
- @contextlib.contextmanager
- def _generate_file(path: str, **kwargs: Any) -> Generator[BinaryIO, None, None]:
- with adjacent_tmp_file(path, **kwargs) as f:
- yield f
- os.chmod(f.name, generated_file_mode)
- replace(f.name, path)
-
- dest_info_dir = os.path.join(lib_dir, info_dir)
-
- # Record pip as the installer
- installer_path = os.path.join(dest_info_dir, "INSTALLER")
- with _generate_file(installer_path) as installer_file:
- installer_file.write(b"pip\n")
- generated.append(installer_path)
-
- # Record the PEP 610 direct URL reference
- if direct_url is not None:
- direct_url_path = os.path.join(dest_info_dir, DIRECT_URL_METADATA_NAME)
- with _generate_file(direct_url_path) as direct_url_file:
- direct_url_file.write(direct_url.to_json().encode("utf-8"))
- generated.append(direct_url_path)
-
- # Record the REQUESTED file
- if requested:
- requested_path = os.path.join(dest_info_dir, "REQUESTED")
- with open(requested_path, "wb"):
- pass
- generated.append(requested_path)
-
- record_text = distribution.read_text("RECORD")
- record_rows = list(csv.reader(record_text.splitlines()))
-
- rows = get_csv_rows_for_installed(
- record_rows,
- installed=installed,
- changed=changed,
- generated=generated,
- lib_dir=lib_dir,
- )
-
- # Record details of all files installed
- record_path = os.path.join(dest_info_dir, "RECORD")
-
- with _generate_file(record_path, **csv_io_kwargs("w")) as record_file:
- # Explicitly cast to typing.IO[str] as a workaround for the mypy error:
- # "writer" has incompatible type "BinaryIO"; expected "_Writer"
- writer = csv.writer(cast("IO[str]", record_file))
- writer.writerows(_normalized_outrows(rows))
-
-
-@contextlib.contextmanager
-def req_error_context(req_description: str) -> Generator[None, None, None]:
- try:
- yield
- except InstallationError as e:
- message = "For req: {}. {}".format(req_description, e.args[0])
- raise InstallationError(message) from e
-
-
-def install_wheel(
- name: str,
- wheel_path: str,
- scheme: Scheme,
- req_description: str,
- pycompile: bool = True,
- warn_script_location: bool = True,
- direct_url: Optional[DirectUrl] = None,
- requested: bool = False,
-) -> None:
- with ZipFile(wheel_path, allowZip64=True) as z:
- with req_error_context(req_description):
- _install_wheel(
- name=name,
- wheel_zip=z,
- wheel_path=wheel_path,
- scheme=scheme,
- pycompile=pycompile,
- warn_script_location=warn_script_location,
- direct_url=direct_url,
- requested=requested,
- )
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_egg_info.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_egg_info.py
deleted file mode 100644
index d5e68a6e47199372c79ec094e0385f49a6600f22..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_egg_info.py
+++ /dev/null
@@ -1,91 +0,0 @@
-"""
-distutils.command.install_egg_info
-
-Implements the Distutils 'install_egg_info' command, for installing
-a package's PKG-INFO metadata.
-"""
-
-import os
-import sys
-import re
-
-from distutils.cmd import Command
-from distutils import log, dir_util
-
-
-class install_egg_info(Command):
- """Install an .egg-info file for the package"""
-
- description = "Install package's PKG-INFO metadata as an .egg-info file"
- user_options = [
- ('install-dir=', 'd', "directory to install to"),
- ]
-
- def initialize_options(self):
- self.install_dir = None
-
- @property
- def basename(self):
- """
- Allow basename to be overridden by child class.
- Ref pypa/distutils#2.
- """
- return "%s-%s-py%d.%d.egg-info" % (
- to_filename(safe_name(self.distribution.get_name())),
- to_filename(safe_version(self.distribution.get_version())),
- *sys.version_info[:2],
- )
-
- def finalize_options(self):
- self.set_undefined_options('install_lib', ('install_dir', 'install_dir'))
- self.target = os.path.join(self.install_dir, self.basename)
- self.outputs = [self.target]
-
- def run(self):
- target = self.target
- if os.path.isdir(target) and not os.path.islink(target):
- dir_util.remove_tree(target, dry_run=self.dry_run)
- elif os.path.exists(target):
- self.execute(os.unlink, (self.target,), "Removing " + target)
- elif not os.path.isdir(self.install_dir):
- self.execute(
- os.makedirs, (self.install_dir,), "Creating " + self.install_dir
- )
- log.info("Writing %s", target)
- if not self.dry_run:
- with open(target, 'w', encoding='UTF-8') as f:
- self.distribution.metadata.write_pkg_file(f)
-
- def get_outputs(self):
- return self.outputs
-
-
-# The following routines are taken from setuptools' pkg_resources module and
-# can be replaced by importing them from pkg_resources once it is included
-# in the stdlib.
-
-
-def safe_name(name):
- """Convert an arbitrary string to a standard distribution name
-
- Any runs of non-alphanumeric/. characters are replaced with a single '-'.
- """
- return re.sub('[^A-Za-z0-9.]+', '-', name)
-
-
-def safe_version(version):
- """Convert an arbitrary string to a standard version string
-
- Spaces become dots, and all other non-alphanumeric characters become
- dashes, with runs of multiple dashes condensed to a single dash.
- """
- version = version.replace(' ', '.')
- return re.sub('[^A-Za-z0-9.]+', '-', version)
-
-
-def to_filename(name):
- """Convert a project or version name to its filename-escaped form
-
- Any '-' characters are currently replaced with '_'.
- """
- return name.replace('-', '_')
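The three helpers above compose into the `.egg-info` basename built in `install_egg_info.basename`. A quick demonstration of how they interact (the `egg_info_basename` wrapper is added here for illustration; it mirrors the `basename` property rather than replacing it):

```python
import re
import sys


def safe_name(name):
    # Runs of non-alphanumeric/. characters collapse to a single '-'
    return re.sub('[^A-Za-z0-9.]+', '-', name)


def safe_version(version):
    # Spaces become dots, other punctuation becomes dashes
    version = version.replace(' ', '.')
    return re.sub('[^A-Za-z0-9.]+', '-', version)


def to_filename(name):
    # Filename-escape: '-' becomes '_'
    return name.replace('-', '_')


def egg_info_basename(name, version):
    """Mirrors install_egg_info.basename from the command above."""
    return "%s-%s-py%d.%d.egg-info" % (
        to_filename(safe_name(name)),
        to_filename(safe_version(version)),
        *sys.version_info[:2],
    )
```

For example, `egg_info_basename("my pkg", "1.0-beta")` yields something like `my_pkg-1.0_beta-py3.11.egg-info`, with the Python version depending on the interpreter.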
diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/models/__init__.py b/spaces/Billyosoro/ESRGAN/realesrgan/models/__init__.py
deleted file mode 100644
index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/realesrgan/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import importlib
-from basicsr.utils import scandir
-from os import path as osp
-
-# automatically scan and import model modules for registry
-# scan all the files that end with '_model.py' under the model folder
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
-# import all the model modules
-_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
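The auto-import idiom above hinges on one transformation: directory entries ending in `_model.py` become dotted module names for `importlib.import_module`. That filename-to-module step can be isolated and checked without importing anything (the function name here is illustrative, not part of the original package):

```python
from os import path as osp


def model_module_names(filenames, package="realesrgan.models"):
    """Return dotted module names for entries ending in '_model.py',
    mirroring the scan-and-import pattern above."""
    stems = [osp.splitext(osp.basename(v))[0]
             for v in filenames if v.endswith('_model.py')]
    return [f"{package}.{stem}" for stem in stems]
```

Each resulting name would then be passed to `importlib.import_module`, which triggers the registry decorators at module import time.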
diff --git a/spaces/CAMP-ViL/Xplainer/descriptors.py b/spaces/CAMP-ViL/Xplainer/descriptors.py
deleted file mode 100644
index 37f78bb6fef329e93dee0ae8f19a87141da48065..0000000000000000000000000000000000000000
--- a/spaces/CAMP-ViL/Xplainer/descriptors.py
+++ /dev/null
@@ -1,204 +0,0 @@
-disease_descriptors_chexpert = {
- "No Finding": [
- "Clear lung fields",
- "Normal heart size and shape",
-        "No abnormal fluid buildup",
-        "No visible tumors or masses",
-        "No signs of bone fractures or dislocations"
- ],
- "Enlarged Cardiomediastinum": [
- "Increased width of the heart shadow",
- "Widened mediastinum",
- "Abnormal contour of the heart border",
- "Fluid or air within the pericardium",
- "Mass within the mediastinum",
- ],
- "Cardiomegaly": [
- "Increased size of the heart shadow",
- "Enlargement of the heart silhouette",
- "Increased diameter of the heart border",
- "Increased cardiothoracic ratio",
- ],
- "Lung Opacity": [
- "Increased density in the lung field",
- "Whitish or grayish area in the lung field",
- "Obscured or blurred margins of the lung field",
- "Loss of normal lung markings within the opacity",
- "Air bronchograms within the opacity",
- "Fluid levels within the opacity",
- "Silhouette sign loss with adjacent structures",
-
- ],
- "Lung Lesion": [
- "Consolidation of lung tissue",
- "Pleural effusion",
- "Cavities or abscesses in the lung",
- "Abnormal opacity or shadow in the lung",
- "Irregular or blurred margins of the lung",
-
- ],
- "Edema": [
- "Blurry vascular markings in the lungs",
- "Enlarged heart",
- "Kerley B lines",
- "Increased interstitial markings in the lungs",
- "Widening of interstitial spaces",
- ],
- "Consolidation": [
- "Loss of lung volume",
- "Increased density of lung tissue",
- "Obliteration of the diaphragmatic silhouette",
- "Presence of opacities",
- ],
- "Pneumonia": [
- "Consolidation of lung tissue",
- "Air bronchograms",
- "Cavitation",
- "Interstitial opacities",
- ],
- "Atelectasis": [
- "Increased opacity",
- "Volume loss of the affected lung region",
- "Blunting of the costophrenic angle",
- "Shifting of the mediastinum",
- ],
- "Pneumothorax": [
- "Tracheal deviation",
- "Deep sulcus sign",
- "Increased radiolucency",
- "Flattening of the hemidiaphragm",
- "Absence of lung markings",
- "Shifting of the mediastinum"
- ],
- "Pleural Effusion": [
- "Blunting of costophrenic angles",
- "Opacity in the lower lung fields",
- "Mediastinal shift",
- "Reduced lung volume",
- "Presence of meniscus sign or veil-like appearance"
- ],
- "Pleural Other": [
- "Pleural thickening",
- "Pleural calcification",
- "Pleural masses or nodules",
- "Pleural empyema",
- "Pleural fibrosis",
- "Pleural adhesions"
- ],
- "Fracture": [
- "Visible breaks in the continuity of the bone",
- "Misalignments of bone fragments",
- "Displacements of bone fragments",
- "Disruptions of the cortex or outer layer of the bone",
- "Visible callus or healing tissue",
- "Fracture lines that are jagged or irregular in shape",
- "Multiple fracture lines that intersect at different angles"
- ],
- "Support Devices": [
- "Artificial joints or implants",
- "Pacemakers or cardiac devices",
- "Stents or other vascular devices",
- "Prosthetic devices or limbs",
- "Breast implants",
- "Radiotherapy markers or seeds"
- ]
- }
-
-disease_descriptors_chestxray14 = {
-
- "No Finding": ["No Finding"],
- "Cardiomegaly": [
- "Increased size of the heart shadow",
- "Enlargement of the heart silhouette",
- "Increased diameter of the heart border",
- "Increased cardiothoracic ratio"
- ],
- "Edema": [
- "Blurry vascular markings in the lungs",
- "Kerley B lines",
- "Increased interstitial markings in the lungs",
- "Widening of interstitial spaces"
- ],
- "Consolidation": [
- "Loss of lung volume",
- "Increased density of lung tissue",
- "Obliteration of the diaphragmatic silhouette",
- "Presence of opacities"
- ],
- "Pneumonia": [
- "Consolidation of lung tissue",
- "Air bronchograms",
- "Cavitation",
- "Interstitial opacities"
- ],
- "Atelectasis": [
- "Increased opacity",
- "Volume loss of the affected lung region",
- "Displacement of the diaphragm",
- "Blunting of the costophrenic angle",
- "Shifting of the mediastinum"
- ],
- "Pneumothorax": [
- "Tracheal deviation",
- "Deep sulcus sign",
- "Increased radiolucency",
- "Flattening of the hemidiaphragm",
- "Absence of lung markings",
- "Shifting of the mediastinum"
- ],
- "Pleural Effusion": [
- "Blunting of costophrenic angles",
- "Opacity in the lower lung fields",
- "Mediastinal shift",
- "Reduced lung volume",
- "Meniscus sign or veil-like appearance"
- ],
- "Infiltration": [
- "Irregular or fuzzy borders around white areas",
- "Blurring",
- "Hazy or cloudy areas",
- "Increased density or opacity of lung tissue",
- "Air bronchograms",
- ],
- "Mass": [
- "Calcifications or mineralizations",
- "Shadowing",
- "Distortion or compression of tissues",
- "Anomalous structure or irregularity in shape"
- ],
- "Nodule": [
- "Nodular shape that protrudes into a cavity or airway",
- "Distinct edges or borders",
- "Calcifications or speckled areas",
-        "Small round or oval shaped spots",
- "White shadows"
- ],
- "Emphysema": [
- "Flattened hemidiaphragm",
- "Pulmonary bullae",
- "Hyperlucent lungs",
- "Horizontalisation of ribs",
- "Barrel Chest",
- ],
- "Fibrosis": [
- "Reticular shadowing of the lung peripheries",
- "Volume loss",
- "Thickened and irregular interstitial markings",
- "Bronchial dilation",
- "Shaggy heart borders"
- ],
- "Pleural Thickening": [
- "Thickened pleural line",
- "Loss of sharpness of the mediastinal border",
- "Calcifications on the pleura",
- "Lobulated peripheral shadowing",
- "Loss of lung volume",
- ],
- "Hernia": [
- "Bulge or swelling in the abdominal wall",
- "Protrusion of intestine or other abdominal tissue",
- "Swelling or enlargement of the herniated sac or surrounding tissues",
- "Retro-cardiac air-fluid level",
- "Thickening of intestinal folds"
- ]
- }
\ No newline at end of file
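In Xplainer-style pipelines, descriptor dictionaries like the two above are typically expanded into natural-language prompts for a vision-language model. A hedged sketch of that expansion (the exact template string is an assumption for illustration; the real project's prompt wording may differ):

```python
def build_prompts(descriptors, template="There is {desc}, indicating {disease}."):
    """Expand {disease: [descriptor, ...]} into per-disease prompt lists.

    NOTE: the template here is hypothetical, not the project's actual one.
    """
    prompts = {}
    for disease, descs in descriptors.items():
        prompts[disease] = [
            template.format(desc=d.lower(), disease=disease.lower())
            for d in descs
        ]
    return prompts
```

Calling this on `disease_descriptors_chexpert` would produce one prompt per observation, ready for scoring against an image embedding.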
diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/__init__.py b/spaces/CVH-vn1210/make_hair/minigpt4/__init__.py
deleted file mode 100644
index ec06cef0e2e4e39e450746b0f3136776f6bcf143..0000000000000000000000000000000000000000
--- a/spaces/CVH-vn1210/make_hair/minigpt4/__init__.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import os
-import sys
-
-from omegaconf import OmegaConf
-
-from minigpt4.common.registry import registry
-
-from minigpt4.datasets.builders import *
-from minigpt4.models import *
-from minigpt4.processors import *
-from minigpt4.tasks import *
-
-
-root_dir = os.path.dirname(os.path.abspath(__file__))
-default_cfg = OmegaConf.load(os.path.join(root_dir, "configs/default.yaml"))
-
-registry.register_path("library_root", root_dir)
-repo_root = os.path.join(root_dir, "..")
-registry.register_path("repo_root", repo_root)
-cache_root = os.path.join(repo_root, default_cfg.env.cache_root)
-registry.register_path("cache_root", cache_root)
-
-registry.register("MAX_INT", sys.maxsize)
-registry.register("SPLIT_NAMES", ["train", "val", "test"])
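The package initializer above relies on a global registry exposing `register_path` and `register`. A minimal sketch of the contract those calls assume (the real `minigpt4.common.registry.Registry` also registers builders, models, and tasks; this reduction covers only the path/state API used here):

```python
class Registry:
    """Reduced sketch of the registry API assumed by the initializer above."""
    mapping = {"paths": {}, "state": {}}

    @classmethod
    def register_path(cls, name, path):
        # Paths must be plain strings so downstream os.path calls work
        assert isinstance(path, str), "All paths must be str."
        cls.mapping["paths"][name] = path

    @classmethod
    def register(cls, name, obj):
        # Arbitrary shared state, e.g. MAX_INT or split names
        cls.mapping["state"][name] = obj

    @classmethod
    def get_path(cls, name):
        return cls.mapping["paths"].get(name)

    @classmethod
    def get(cls, name, default=None):
        return cls.mapping["state"].get(name, default)
```

With this in place, `Registry.register_path("cache_root", cache_root)` at import time lets any dataset builder later resolve the cache directory without re-reading the config.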
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/model_zoo/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/model_zoo/__init__.py
deleted file mode 100644
index 886616f8e11ef31ea85d7a7ba9a75308befceedf..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/model_zoo/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Model Zoo API for Detectron2: a collection of functions to create common model architectures and
-optionally load pre-trained weights as released in
-`MODEL_ZOO.md <https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md>`_.
-"""
-from .model_zoo import get, get_config_file, get_checkpoint_url
-
-__all__ = ["get_checkpoint_url", "get", "get_config_file"]
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_enum.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_enum.cpp
deleted file mode 100644
index 3153089208c964346e2fc39cafad8d0b372f1154..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_enum.cpp
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- tests/test_enums.cpp -- enumerations
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-
-TEST_SUBMODULE(enums, m) {
- // test_unscoped_enum
- enum UnscopedEnum {
- EOne = 1,
- ETwo,
- EThree
- };
-    py::enum_<UnscopedEnum>(m, "UnscopedEnum", py::arithmetic(), "An unscoped enumeration")
- .value("EOne", EOne, "Docstring for EOne")
- .value("ETwo", ETwo, "Docstring for ETwo")
- .value("EThree", EThree, "Docstring for EThree")
- .export_values();
-
- // test_scoped_enum
- enum class ScopedEnum {
- Two = 2,
- Three
- };
-    py::enum_<ScopedEnum>(m, "ScopedEnum", py::arithmetic())
- .value("Two", ScopedEnum::Two)
- .value("Three", ScopedEnum::Three);
-
- m.def("test_scoped_enum", [](ScopedEnum z) {
- return "ScopedEnum::" + std::string(z == ScopedEnum::Two ? "Two" : "Three");
- });
-
- // test_binary_operators
- enum Flags {
- Read = 4,
- Write = 2,
- Execute = 1
- };
-    py::enum_<Flags>(m, "Flags", py::arithmetic())
- .value("Read", Flags::Read)
- .value("Write", Flags::Write)
- .value("Execute", Flags::Execute)
- .export_values();
-
- // test_implicit_conversion
- class ClassWithUnscopedEnum {
- public:
- enum EMode {
- EFirstMode = 1,
- ESecondMode
- };
-
- static EMode test_function(EMode mode) {
- return mode;
- }
- };
-    py::class_<ClassWithUnscopedEnum> exenum_class(m, "ClassWithUnscopedEnum");
- exenum_class.def_static("test_function", &ClassWithUnscopedEnum::test_function);
-    py::enum_<ClassWithUnscopedEnum::EMode>(exenum_class, "EMode")
- .value("EFirstMode", ClassWithUnscopedEnum::EFirstMode)
- .value("ESecondMode", ClassWithUnscopedEnum::ESecondMode)
- .export_values();
-
- // test_enum_to_int
- m.def("test_enum_to_int", [](int) { });
- m.def("test_enum_to_uint", [](uint32_t) { });
- m.def("test_enum_to_long_long", [](long long) { });
-
- // test_duplicate_enum_name
- enum SimpleEnum
- {
- ONE, TWO, THREE
- };
-
- m.def("register_bad_enum", [m]() {
-        py::enum_<SimpleEnum>(m, "SimpleEnum")
- .value("ONE", SimpleEnum::ONE) //NOTE: all value function calls are called with the same first parameter value
- .value("ONE", SimpleEnum::TWO)
- .value("ONE", SimpleEnum::THREE)
- .export_values();
- });
-}
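The `.export_values()` calls in the test module above copy each enumerator into the enclosing scope, matching C-style unscoped-enum semantics. A rough Python analogue using the standard `enum` module (the `export_values` helper here is an illustration of the behavior, not pybind11 API):

```python
from enum import IntEnum


class UnscopedEnum(IntEnum):
    """Python stand-in for the C++ UnscopedEnum bound above."""
    EOne = 1
    ETwo = 2
    EThree = 3


def export_values(enum_cls, namespace):
    """Rough analogue of pybind11's .export_values(): copy each enumerator
    into the enclosing namespace, as an unscoped C enum would."""
    for member in enum_cls:
        namespace[member.name] = member
    return namespace


ns = export_values(UnscopedEnum, {})
```

After binding, Python callers can then write `EOne` directly (as with the exported `Flags` values) instead of the scoped `UnscopedEnum.EOne`; scoped enums like `ScopedEnum` deliberately omit `.export_values()` to keep their members namespaced.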
diff --git a/spaces/CVPR/LIVE/thrust/thrust/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/execution_policy.h
deleted file mode 100644
index 60a4caba0f3bdb5215a5642c82ef1efc668dfda3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/execution_policy.h
+++ /dev/null
@@ -1,396 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file thrust/execution_policy.h
- * \brief Thrust execution policies.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/detail/execute_with_allocator.h>
-#include <thrust/detail/seq.h>
-
-//! \cond
-
-// #include the host system's execution_policy header
-#define __THRUST_HOST_SYSTEM_EXECUTION_POLICY_HEADER <__THRUST_HOST_SYSTEM_ROOT/execution_policy.h>
-#include __THRUST_HOST_SYSTEM_EXECUTION_POLICY_HEADER
-#undef __THRUST_HOST_SYSTEM_EXECUTION_POLICY_HEADER
-
-// #include the device system's execution_policy.h header
-#define __THRUST_DEVICE_SYSTEM_EXECUTION_POLICY_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/execution_policy.h>
-#include __THRUST_DEVICE_SYSTEM_EXECUTION_POLICY_HEADER
-#undef __THRUST_DEVICE_SYSTEM_EXECUTION_POLICY_HEADER
-
-//! \endcond
-
-namespace thrust
-{
-
-
-/*! \cond
- */
-
-
-namespace detail
-{
-
-
-typedef thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::detail::par_t host_t;
-
-
-typedef thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::detail::par_t device_t;
-
-
-} // end detail
-
-
-/*! \endcond
- */
-
-
-/*! \addtogroup execution_policies Parallel Execution Policies
- * \{
- */
-
-
-// define execution_policy for the purpose of Doxygenating it
-// it is actually defined elsewhere
-#if 0
-/*! \p execution_policy is the base class for all Thrust parallel execution policies
- * like \p thrust::host, \p thrust::device, and each backend system's tag type.
- *
- * Custom user-defined backends should derive a policy from this type in order to
- * interoperate with Thrust algorithm dispatch.
- *
- * The following code snippet demonstrates how to derive a standalone custom execution policy
- * from \p thrust::execution_policy to implement a backend which only implements \p for_each:
- *
- * \code
- * #include <thrust/execution_policy.h>
- * #include <iostream>
- *
- * // define a type derived from thrust::execution_policy to distinguish our custom execution policy:
- * struct my_policy : thrust::execution_policy {};
- *
- * // overload for_each on my_policy
- * template<typename Iterator, typename Function>
- * Iterator for_each(my_policy, Iterator first, Iterator last, Function f)
- * {
- * std::cout << "Hello, world from for_each(my_policy)!" << std::endl;
- *
- * for(; first < last; ++first)
- * {
- * f(*first);
- * }
- *
- * return first;
- * }
- *
- * struct ignore_argument
- * {
- * void operator()(int) {}
- * };
- *
- * int main()
- * {
- * int data[4];
- *
- * // dispatch thrust::for_each using our custom policy:
- * my_policy exec;
- * thrust::for_each(exec, data, data + 4, ignore_argument());
- *
- * // can't dispatch thrust::transform because no overload exists for my_policy:
- * //thrust::transform(exec, data, data + 4, data, thrust::identity<int>()); // error!
- *
- * return 0;
- * }
- * \endcode
- *
- * \see host_execution_policy
- * \see device_execution_policy
- */
-template<typename DerivedPolicy>
-struct execution_policy : thrust::detail::execution_policy_base<DerivedPolicy>
-{};
-#endif
-
-
-/*! \p host_execution_policy is the base class for all Thrust parallel execution policies
- * which are derived from Thrust's default host backend system configured with the \p THRUST_HOST_SYSTEM
- * macro.
- *
- * Custom user-defined backends which wish to inherit the functionality of Thrust's host backend system
- * should derive a policy from this type in order to interoperate with Thrust algorithm dispatch.
- *
- * The following code snippet demonstrates how to derive a standalone custom execution policy from
- * \p thrust::host_execution_policy to implement a backend which specializes \p for_each while inheriting
- * the behavior of every other algorithm from the host system:
- *
- * \code
- * #include <thrust/execution_policy.h>
- * #include <iostream>
- *
- * // define a type derived from thrust::host_execution_policy to distinguish our custom execution policy:
- * struct my_policy : thrust::host_execution_policy {};
- *
- * // overload for_each on my_policy
- * template<typename Iterator, typename Function>
- * Iterator for_each(my_policy, Iterator first, Iterator last, Function f)
- * {
- * std::cout << "Hello, world from for_each(my_policy)!" << std::endl;
- *
- * for(; first < last; ++first)
- * {
- * f(*first);
- * }
- *
- * return first;
- * }
- *
- * struct ignore_argument
- * {
- * void operator()(int) {}
- * };
- *
- * int main()
- * {
- * int data[4];
- *
- * // dispatch thrust::for_each using our custom policy:
- * my_policy exec;
- * thrust::for_each(exec, data, data + 4, ignore_argument());
- *
- * // dispatch thrust::transform whose behavior our policy inherits
- * thrust::transform(exec, data, data + 4, data, thrust::identity<int>());
- *
- * return 0;
- * }
- * \endcode
- *
- * \see execution_policy
- * \see device_execution_policy
- */
-template<typename DerivedPolicy>
- struct host_execution_policy
- : thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::execution_policy<DerivedPolicy>
-{};
-
-
-/*! \p device_execution_policy is the base class for all Thrust parallel execution policies
- * which are derived from Thrust's default device backend system configured with the \p THRUST_DEVICE_SYSTEM
- * macro.
- *
- * Custom user-defined backends which wish to inherit the functionality of Thrust's device backend system
- * should derive a policy from this type in order to interoperate with Thrust algorithm dispatch.
- *
- * The following code snippet demonstrates how to derive a standalone custom execution policy from
- * \p thrust::device_execution_policy to implement a backend which specializes \p for_each while inheriting
- * the behavior of every other algorithm from the device system:
- *
- * \code
- * #include <thrust/execution_policy.h>
- * #include <iostream>
- *
- * // define a type derived from thrust::device_execution_policy to distinguish our custom execution policy:
- * struct my_policy : thrust::device_execution_policy {};
- *
- * // overload for_each on my_policy
- * template<typename Iterator, typename Function>
- * Iterator for_each(my_policy, Iterator first, Iterator last, Function f)
- * {
- * std::cout << "Hello, world from for_each(my_policy)!" << std::endl;
- *
- * for(; first < last; ++first)
- * {
- * f(*first);
- * }
- *
- * return first;
- * }
- *
- * struct ignore_argument
- * {
- * void operator()(int) {}
- * };
- *
- * int main()
- * {
- * int data[4];
- *
- * // dispatch thrust::for_each using our custom policy:
- * my_policy exec;
- * thrust::for_each(exec, data, data + 4, ignore_argument());
- *
- * // dispatch thrust::transform whose behavior our policy inherits
- * thrust::transform(exec, data, data + 4, data, thrust::identity<int>());
- *
- * return 0;
- * }
- * \endcode
- *
- * \see execution_policy
- * \see host_execution_policy
- */
-template<typename DerivedPolicy>
- struct device_execution_policy
- : thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::execution_policy<DerivedPolicy>
-{};
-
-
-/*! \p thrust::host is the default parallel execution policy associated with Thrust's host backend system
- * configured by the \p THRUST_HOST_SYSTEM macro.
- *
- * Instead of relying on implicit algorithm dispatch through iterator system tags, users may directly target
- * algorithm dispatch at Thrust's host system by providing \p thrust::host as an algorithm parameter.
- *
- * Explicit dispatch can be useful in avoiding the introduction of data copies into containers such as
- * \p thrust::host_vector.
- *
- * Note that even though \p thrust::host targets the host CPU, it is a parallel execution policy. That is,
- * the order that an algorithm invokes functors or dereferences iterators is not defined.
- *
- * The type of \p thrust::host is implementation-defined.
- *
- * The following code snippet demonstrates how to use \p thrust::host to explicitly dispatch an invocation
- * of \p thrust::for_each to the host backend system:
- *
- * \code
- * #include <thrust/for_each.h>
- * #include <thrust/execution_policy.h>
- * #include <vector>
- * #include <cstdio>
- *
- * struct printf_functor
- * {
- * __host__ __device__
- * void operator()(int x)
- * {
- * printf("%d\n", x);
- * }
- * };
- * ...
- * std::vector<int> vec(3);
- * vec[0] = 0; vec[1] = 1; vec[2] = 2;
- *
- * thrust::for_each(thrust::host, vec.begin(), vec.end(), printf_functor());
- *
- * // 0 1 2 is printed to standard output in some unspecified order
- * \endcode
- *
- * \see host_execution_policy
- * \see thrust::device
- */
-static const detail::host_t host;
-
-
-/*! \p thrust::device is the default parallel execution policy associated with Thrust's device backend system
- * configured by the \p THRUST_DEVICE_SYSTEM macro.
- *
- * Instead of relying on implicit algorithm dispatch through iterator system tags, users may directly target
- * algorithm dispatch at Thrust's device system by providing \p thrust::device as an algorithm parameter.
- *
- * Explicit dispatch can be useful in avoiding the introduction of data copies into containers such as
- * \p thrust::device_vector or to avoid wrapping e.g. raw pointers allocated by the CUDA API with types
- * such as \p thrust::device_ptr.
- *
- * The user must take care to guarantee that the iterators provided to an algorithm are compatible with
- * the device backend system. For example, raw pointers allocated by std::malloc typically
- * cannot be dereferenced by a GPU. For this reason, raw pointers allocated by host APIs should not be mixed
- * with a \p thrust::device algorithm invocation when the device backend is CUDA.
- *
- * The type of \p thrust::device is implementation-defined.
- *
- * The following code snippet demonstrates how to use \p thrust::device to explicitly dispatch an invocation
- * of \p thrust::for_each to the device backend system:
- *
- * \code
- * #include <thrust/for_each.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * #include <cstdio>
- *
- * struct printf_functor
- * {
- * __host__ __device__
- * void operator()(int x)
- * {
- * printf("%d\n", x);
- * }
- * };
- * ...
- * thrust::device_vector<int> vec(3);
- * vec[0] = 0; vec[1] = 1; vec[2] = 2;
- *
- * thrust::for_each(thrust::device, vec.begin(), vec.end(), printf_functor());
- *
- * // 0 1 2 is printed to standard output in some unspecified order
- * \endcode
- *
- * \see device_execution_policy
- * \see thrust::host
- */
-THRUST_INLINE_CONSTANT detail::device_t device;
-
-
-// define seq for the purpose of Doxygenating it
-// it is actually defined elsewhere
-#if 0
-/*! \p thrust::seq is an execution policy which requires an algorithm invocation to execute sequentially
- * in the current thread. It can not be configured by a compile-time macro.
- *
- * The type of \p thrust::seq is implementation-defined.
- *
- * The following code snippet demonstrates how to use \p thrust::seq to explicitly execute an invocation
- * of \p thrust::for_each sequentially:
- *
- * \code
- * #include <thrust/for_each.h>
- * #include <thrust/execution_policy.h>
- * #include <vector>
- * #include <cstdio>
- *
- * struct printf_functor
- * {
- * __host__ __device__
- * void operator()(int x)
- * {
- * printf("%d\n", x);
- * }
- * };
- * ...
- * std::vector<int> vec(3);
- * vec[0] = 0; vec[1] = 1; vec[2] = 2;
- *
- * thrust::for_each(thrust::seq, vec.begin(), vec.end(), printf_functor());
- *
- * // 0 1 2 is printed to standard output in sequential order
- * \endcode
- *
- * \see thrust::host
- * \see thrust::device
- */
-static const detail::seq_t seq;
-#endif
-
-
-/*! \}
- */
-
-
-} // end thrust
-
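The core idea documented above is tag dispatch: `thrust::host` and `thrust::device` are values of distinct policy types, and algorithm overloads are selected by the policy's type. The same pattern can be sketched in Python with `functools.singledispatch` (an analogy only; the names and the "device" behavior below are invented for illustration):

```python
from functools import singledispatch


# Policy tags standing in for thrust::host / thrust::device (illustrative only)
class HostPolicy:
    pass


class DevicePolicy:
    pass


host = HostPolicy()
device = DevicePolicy()


@singledispatch
def for_each(policy, seq, fn):
    # No overload registered for this policy type
    raise NotImplementedError(f"no overload for {type(policy).__name__}")


@for_each.register
def _(policy: HostPolicy, seq, fn):
    # "Host" backend: plain sequential traversal
    return [fn(x) for x in seq]


@for_each.register
def _(policy: DevicePolicy, seq, fn):
    # A real device backend would launch kernels; here we merely simulate
    # that element order is unspecified by traversing in reverse
    return [fn(x) for x in reversed(seq)]
```

As with Thrust, adding a new backend means defining a new tag type and registering overloads for it; algorithms with no overload for a given policy fail at dispatch time, mirroring the "error!" comment in the custom-policy snippet above.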
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/clip_datasets/clip_img_txt_pair_tsv.py b/spaces/CVPR/regionclip-demo/detectron2/data/clip_datasets/clip_img_txt_pair_tsv.py
deleted file mode 100644
index 49487b81952459b11d37684fb2e6b9fefead0d9f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/clip_datasets/clip_img_txt_pair_tsv.py
+++ /dev/null
@@ -1,602 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import os
-from io import BytesIO
-import json
-import logging
-import base64
-import threading
-import random
-import numpy as np
-from typing import Callable, List, Tuple, Union
-from PIL import Image
-from PIL import ImageFile
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-import torch
-import torch.utils.data as data
-from .oscar_tsv import InputExample, convert_example_to_features, convert_example_to_features_bpe
-from detectron2.structures.tsv_file import TSVFile, CompositeTSVFile
-from detectron2.data.clip_datasets.clip_prompt_engineering import get_prompt_templates, prompt_engineering
-#import spacy
-
-def pre_fetch(tsv_filename: str):
- logging.info('Pre-loading %s ...' % tsv_filename)
- with open(tsv_filename, 'r'):
- logging.info('Pre-loading %s ended.' % tsv_filename)
-
-class CLIPImgTxtPairTSVDataset(data.Dataset):
- """
- This class is intended for encapsulating Image/Text pair data for contrastive learning described in
- the following paper,
- "Learning Transferable Visual Models From Natural Language Supervision" (a.k.a CLIP)
-    Specifically, it is used to accommodate the tsv data format from Azure Cognition Service Group.
- """
- def __init__(self,
- image_tsv_file: Union[str, List[str]],
- text_tsv_file: Union[str, List[str]],
- transforms: Callable = None,
- tokenizer: Callable = None,
- seq_len = 0, context_length = 77, target_offset=0,
- args = None,
- dataset_name = "",
- tokenizer_type = "bert",
- is_train = True,
- map_file = None,
- filtered_datasets = ''):
- self.args = args
- self.is_train = is_train
- self.dataset_names = dataset_name
- self.tokenizer_type = tokenizer_type
- self.target_offset = target_offset
- self.seq_len = seq_len
-
- self.transforms = transforms
- self.tokenizer = tokenizer
- self._chunk_sizes = None
- self.context_length = context_length
-
- self.prompt_templates = get_prompt_templates() # [:2]
- self.spacy_nlp = None # spacy.load('en_core_web_sm')
-
- self.class_selector = None
- # self.class_selector = list(self.label2idx.keys()) if self.label2idx else None
-
- self.label2idx = {}
- self.idx2label = {}
- self.classnames = {}
- self.dataset_target_offsets = {}; offset = 0
-
- self.num_classes = sum([len(val) for val in self.classnames.values()])
-
- self.filtered_classnames = []
-
- if isinstance(image_tsv_file, str) and isinstance(text_tsv_file, str):
- # single tsv file
- if (
- os.path.splitext(image_tsv_file)[1].lower() == '.tsv'
- and os.path.splitext(text_tsv_file)[1].lower() == '.tsv'
- ):
- self.image_tsv_file = TSVFile(image_tsv_file, if_generate_lineidx=True)
- self.text_tsv_file = TSVFile(text_tsv_file, if_generate_lineidx=True)
- # multiple tsv files specified in a text file
- elif (
- os.path.splitext(image_tsv_file)[1].lower() == '.txt'
- and os.path.splitext(text_tsv_file)[1].lower() == '.txt'
- ):
- self.image_tsv_file = CompositeTSVFile(image_tsv_file)
- self.text_tsv_file = CompositeTSVFile(text_tsv_file)
- self._chunk_sizes = self.image_tsv_file.get_chunk_size()
- else:
- raise ValueError("Invalid input! Please check the tsv filenames.")
- # multiple tsv files specified in a list
- elif (
- isinstance(image_tsv_file, list)
- and isinstance(text_tsv_file, list)
- ):
- assert len(image_tsv_file) == len(text_tsv_file), \
- "Inconsistent number of Image/Text tsv files!"
- self.image_tsv_path = image_tsv_file
- self.text_tsv_path = text_tsv_file
- self.image_tsv_file = CompositeTSVFile(image_tsv_file, class_selector=self.class_selector)
- self.text_tsv_file = CompositeTSVFile(text_tsv_file, class_selector=self.class_selector)
- self._chunk_sizes = self.image_tsv_file.get_chunk_size()
- self._accumulated_chunk_sizes = np.cumsum(self._chunk_sizes).tolist()
- else:
- raise ValueError("Invalid input! Please check the tsv filenames.")
-
- assert len(self.image_tsv_file) == len(self.text_tsv_file), \
- "Inconsistent size of Image/Text ({}/{}) data!".format(
- len(self.image_tsv_file), len(self.text_tsv_file)
- )
-
- def get_chunk_sizes(self):
- return self._chunk_sizes
-
- def get_class_boundaries(self):
- # The samples of each class are organized class-by-class.
- # _class_boundaries stores the lower- and upper-bound of each class.
- return self.image_tsv_file.get_class_boundaries()
-
- def _load_map(self, map_file: str):
- if not map_file:
- return None
-
- label2idx = {}
- with open(map_file) as f:
- for line in f:
- items = line.strip().split('\t')
- label2idx[items[0]] = int(items[1])
-
- return label2idx
-
- def _load_darknet_map(self, map_file):
- if not map_file:
- return None
-
- label2idx = {}
- with open(map_file) as f:
- linenum = 0
- for l in f:
- item = l.strip()
- label2idx[item] = linenum
- linenum += 1
-
- return label2idx
-
- def _pre_tokenize(self):
- """
- pre-tokenize class names
- """
- input_ids_all = []
- input_masks_all = []
- segment_ids_all = []
- for k in range(len(self.classnames["imagenet"])):
- cur_id = 0; img_id = 0
- scale = 1.0
-
- v = self.classnames["imagenet"].label_to_name(k)
- if isinstance(v, str):
- vs = [v]
-            elif isinstance(v, list):
-                vs = v
-            else:
-                raise ValueError("class name must be a str or a list of str")
- t1s = []
- t2s = []
- for v in vs:
- for pt in self.prompt_templates:
- t1s.append(prompt_engineering(v, template=pt))
- t2s.append("")
- input_ids = []
- input_masks = []
- segment_ids = []
- is_next_labels = [0] * len(t1s)
- is_img_matchs = [1] * len(t1s)
- img_feat_len = 0
- for t1, t2, is_next_label, is_img_match in zip(t1s, t2s, is_next_labels, is_img_matchs):
- if self.tokenizer_type == "bert":
- # tokenize
- tokens_a = self.tokenizer.tokenize(t1)
- tokens_b = None
-
- # combine to one sample
- cur_example = InputExample(guid=cur_id, tokens_a=tokens_a,
- tokens_b=tokens_b, is_next=is_next_label,
- img_id=img_id, is_img_match=is_img_match)
-
- # transform sample to features
- cur_features = convert_example_to_features(self.args, cur_example,
- self.seq_len, self.tokenizer,
- img_feat_len)
-
- input_ids.append(torch.tensor(cur_features.input_ids, dtype=torch.long))
- input_masks.append(torch.tensor(cur_features.input_mask, dtype=torch.long))
- segment_ids.append(torch.tensor(cur_features.segment_ids, dtype=torch.long))
-
- elif self.tokenizer_type == "bpe":
- tokens_a = t1; tokens_b = None
- # combine to one sample
- cur_example = InputExample(guid=cur_id, tokens_a=tokens_a,
- tokens_b=tokens_b, is_next=is_next_label,
- img_id=img_id, is_img_match=is_img_match)
-
- # transform sample to features
- cur_features = convert_example_to_features_bpe(self.args, cur_example,
- self.seq_len, self.tokenizer,
- img_feat_len)
-
- input_ids.append(torch.tensor(cur_features.input_ids, dtype=torch.long))
- input_masks.append(torch.tensor(cur_features.input_mask, dtype=torch.long))
- segment_ids.append(torch.tensor(cur_features.segment_ids, dtype=torch.long))
-
- else:
- raise NotImplementedError
- input_ids_all.append(torch.stack(input_ids, 0))
- input_masks_all.append(torch.stack(input_masks, 0))
- segment_ids_all.append(torch.stack(segment_ids, 0))
-
- self.input_ids_all_classes = torch.stack(input_ids_all, 0)
- self.input_mask_all_classes = torch.stack(input_masks_all, 0)
- self.segment_ids_all_classes = torch.stack(segment_ids_all, 0)
-
- def _online_tokenize(self, text):
-
- # random select a prompt template
- temp_idx = np.random.randint(len(self.prompt_templates))
- pt = self.prompt_templates[temp_idx]
-
- names = text.split(";")
- num_names = np.random.randint(len(names)) + 1
- names_sampled = random.sample(names, num_names)
- text = ", ".join(names_sampled)
-
- t1 = prompt_engineering(text, template=pt)
-
- cur_id = 0; img_id = 0; scale = 1.0
- is_next_label = 0; is_img_match = 1
- img_feat_len = 0
-
- if self.tokenizer_type == "bert":
- # tokenize
- tokens_a = self.tokenizer.tokenize(t1)
- tokens_b = None
-
- # combine to one sample
- cur_example = InputExample(guid=cur_id, tokens_a=tokens_a,
- tokens_b=tokens_b, is_next=is_next_label,
- img_id=img_id, is_img_match=is_img_match)
-
- # transform sample to features
- cur_features = convert_example_to_features(self.args, cur_example,
- self.context_length, self.tokenizer,
- img_feat_len)
-
-
- elif self.tokenizer_type == "bpe":
- tokens_a = t1; tokens_b = None
- # combine to one sample
- cur_example = InputExample(guid=cur_id, tokens_a=tokens_a,
- tokens_b=tokens_b, is_next=is_next_label,
- img_id=img_id, is_img_match=is_img_match)
-
- # transform sample to features
- cur_features = convert_example_to_features_bpe(self.args, cur_example,
- self.context_length, self.tokenizer,
- img_feat_len)
-
- return torch.tensor(cur_features.input_ids, dtype=torch.long), \
- torch.tensor(cur_features.input_mask, dtype=torch.long), \
- torch.tensor(cur_features.segment_ids, dtype=torch.long)
-
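The synonym/template sampling at the top of `_online_tokenize` can be exercised in isolation. The templates and the `str.format`-based fill below are stand-ins for the real `prompt_templates` and `prompt_engineering`:

```python
import random

random.seed(0)  # deterministic for this sketch
prompt_templates = ["a photo of a {}.", "an image of {}."]  # hypothetical templates

def sample_text(text):
    # pick a random template, then a random non-empty subset of the
    # ';'-separated synonyms, joined with ', ' as in _online_tokenize
    pt = random.choice(prompt_templates)
    names = text.split(";")
    names_sampled = random.sample(names, random.randint(1, len(names)))
    return pt.format(", ".join(names_sampled))

out = sample_text("tabby;tiger cat")
assert out.endswith(".")
assert ("tabby" in out) or ("tiger cat" in out)
```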
- def get_dataset_name(self, index):
- """
- get dataset name according to index
- """
-        assert index < self._accumulated_chunk_sizes[-1], "index must be in the range of the accumulated data size"
- for k, boundary in enumerate(self._accumulated_chunk_sizes):
- if index < boundary:
- return self.dataset_names[k], k
-
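The linear scan over `_accumulated_chunk_sizes` above maps a global sample index to its source dataset. The same lookup can be sketched with toy chunk sizes (the dataset names are placeholders); `bisect_right` gives the identical answer in O(log n):

```python
from bisect import bisect_right
from itertools import accumulate

chunk_sizes = [4, 2, 3]                      # hypothetical per-dataset sample counts
dataset_names = ["cc3m", "imagenet", "sbu"]  # placeholder names
boundaries = list(accumulate(chunk_sizes))   # [4, 6, 9]

def get_dataset_name(index):
    # same semantics as the method above: first boundary strictly greater
    # than index identifies the dataset
    assert index < boundaries[-1], "index must be in the range of the accumulated data size"
    k = bisect_right(boundaries, index)
    return dataset_names[k], k

assert get_dataset_name(0) == ("cc3m", 0)
assert get_dataset_name(5) == ("imagenet", 1)
assert get_dataset_name(8) == ("sbu", 2)
```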
- def get_target_offset(self, dataset_name):
- return self.dataset_target_offsets[dataset_name]
-
- def get_img_label_pair(self, items_image, index):
- dataset_name, chunk_id = self.get_dataset_name(index)
- target_offset = self.get_target_offset(dataset_name)
- _, target, img = self._decode_data(items_image, dataset_name)
-
- if self.transforms:
- img = self.transforms(img)
-
- if target == -1:
- input_ids, input_mask, segment_ids = \
- self._online_tokenize("uncovered image")
- else:
- classname = self.classnames[dataset_name].labels2names[self.idx2label[dataset_name][target]]
- if classname in self.filtered_classnames:
- # we filter these classnames for training
- target = -1
- input_ids, input_mask, segment_ids = \
- self._online_tokenize("uncovered image")
- else:
- input_ids, input_mask, segment_ids = \
- self._online_tokenize(classname)
- target += target_offset
- return img, \
- input_ids, \
- input_mask, \
- segment_ids, \
- torch.LongTensor([target]), \
- dataset_name
-
- def get_img_txt_pair(self, items_image, items_text, index):
- dataset_name, chunk_id = self.get_dataset_name(index)
-        assert items_text[0] == items_image[0], \
-            'keys do not match for image ({}) and text ({}) for {} at chunk {}-{}'.format(
-                items_image[0], items_text[0], dataset_name, chunk_id, self.image_tsv_path[chunk_id]
-            )
-
- img = self._decode_image(items_image, dataset_name)
- # print("index {}, chunk id {}, name {}".format(index, chunk_id, self.image_tsv_path[chunk_id]))
- # raise TypeError("cannot decode current item")
- img_width, img_height = img.size # img_height, img_width = np.array(img).shape
-
- txts = self._decode_text(items_text)
- if self.spacy_nlp is not None:
- np_input_ids, np_input_masks, np_segment_ids = self.create_phrase_text(txts)
-
- if self.transforms:
- img = self.transforms(img)
-
- if isinstance(txts, str):
- input_ids, input_masks, segment_ids = \
- convert_txt_to_tokens_bpe(txts, self.tokenizer, self.context_length)
- all_str2id_links = []
- elif isinstance(txts, list):
- input_ids = []
- input_masks = []
- segment_ids = []
- all_str2id_links = []
- for txt in txts:
- input_id, input_mask, segment_id, str2id_links = \
- convert_txt_to_tokens_bpe(txt, self.tokenizer, self.context_length, return_link=True)
- input_ids += input_id
- input_masks += input_mask
- segment_ids += segment_id
- all_str2id_links += [str2id_links]
- scale = 1.0
- img_id = 0
-
- if self.spacy_nlp is not None:
- return img, \
- torch.tensor(input_ids).long().view(-1), \
- torch.tensor(input_masks).long().view(-1), \
- torch.tensor(segment_ids).long().view(-1), \
- torch.LongTensor([1e5]), \
- dataset_name, \
- torch.tensor(np_input_ids).long().view(-1), \
- torch.tensor(np_input_masks).long().view(-1), \
- torch.tensor(np_segment_ids).long().view(-1)
- else:
- return img, \
- torch.tensor(input_ids).long().view(-1), \
- torch.tensor(input_masks).long().view(-1), \
- torch.tensor(segment_ids).long().view(-1), \
- torch.LongTensor([1e5]), \
- (dataset_name, items_text[0], (img_height, img_width), all_str2id_links) # dataset name, image id, image height&width, links bet string and tokenized texts
-
- def create_phrase_text(self, txt_list):
- """ Use NLP tool to detect noun phrases in captions, fill each identified phrase into a random prompt to create a sentence,
- and convert each sentence to bpe tokens
- """
- if isinstance(txt_list, str):
- txt_list = [txt_list]
- # detect noun phrase
- noun_phrase = []
- for txt in txt_list:
- doc = self.spacy_nlp(txt.lower())
- this_text = [nc.text for nc in doc.noun_chunks]
- this_text = [nc.replace('a ', '').replace('the ', '') for nc in this_text]
- noun_phrase.extend(this_text)
- noun_phrase = list(set(noun_phrase))
- # fill each phrase into a random prompt
- text_list = []
-        # sample with replacement: there may be more noun phrases than templates
-        pts = random.choices(self.prompt_templates, k=len(noun_phrase))
-        for i, phrase in enumerate(noun_phrase):  # avoid shadowing the numpy alias `np`
-            text_list.append(prompt_engineering(phrase, pts[i]))
- # convert string into bpe tokens
- input_ids = []
- input_masks = []
- segment_ids = []
- for txt in text_list:
- input_id, input_mask, segment_id = \
- convert_txt_to_tokens_bpe(txt, self.tokenizer, self.context_length)
- input_ids += input_id
- input_masks += input_mask
- segment_ids += segment_id
- return input_ids, input_masks, segment_ids
-
- def __getitem__(self, index: Union[int, Tuple[int, int]]):
- if isinstance(index, tuple):
- items_image = self.image_tsv_file[index[0]]
- items_text = self.text_tsv_file[index[0]]
- if index[1] >= 0:
- tsv_filename = self.image_tsv_file.file_list[index[1]]
-
- # Python threads are not truly parallel. Spawn a new process instead.
- # logging.info('Pre-loading %s ...' % tsv_filename)
- # os.system('cat ' + tsv_filename + ' > /dev/null &')
- x = threading.Thread(
- target=pre_fetch, args=(tsv_filename,), daemon=True
- )
- x.start()
- curr_index = index[0]
- else:
- items_image = self.image_tsv_file[index]
- items_text = self.text_tsv_file[index]
- curr_index = index
-
-        # NOTE: since we duplicate the image tsv as the text tsv for image-label data,
-        # we can determine whether the current instance is an image-label pair or
-        # an image-text pair based on whether items_image is identical to items_text.
- if items_image == items_text:
- return self.get_img_label_pair(items_image, curr_index)
- else:
- return self.get_img_txt_pair(items_image, items_text, curr_index)
-
- def _decode_image(self, items: Tuple[str, str], dataset_name=""):
- key = items[0]
- image = Image.open(BytesIO(base64.b64decode(items[1]))).convert('RGB')
- return image
-
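`_decode_image` assumes each TSV row is a `(key, base64-payload)` pair. A stdlib-only round trip of that row layout (with a stub byte payload standing in for real encoded image bytes) looks like:

```python
import base64

raw_bytes = b"\x89PNG-stub"  # stand-in for real encoded image bytes
row = ("img_0001", base64.b64encode(raw_bytes).decode("ascii"))

# the decode path; in the real method, Image.open(BytesIO(payload)) follows
key, payload = row[0], base64.b64decode(row[1])
assert key == "img_0001"
assert payload == raw_bytes
```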
- def _decode_text(self, items: Tuple[str, Union[str, dict]]):
- key = items[0]
- text = ''
- if isinstance(items[1], str):
- try:
- str_dict = json.loads(items[1])
- # in this dict, it may contain either "tags" or "captions" or both
- keys = [key for key in str_dict.keys()]
- selected_key = random.sample(keys, 1)[0]
- if selected_key == "captions":
-                        # keep at most the first five captions
- captions = str_dict[selected_key]
- text = captions[:5]
- # text = random.sample(captions, 1)[0]
- elif selected_key == "tags":
- # for tags, we randomly disorder it
- tags = str_dict[selected_key]
- tag_words = tags.split(' ')
- random.shuffle(tag_words)
- tags_shuffled = " ".join(tag_words)
- # add prompt template
- pt = random.sample(self.prompt_templates, 1)[0]
- text = prompt_engineering(tags_shuffled, pt)
-            except Exception:
-                # not a parsable JSON annotation; fall back to the raw string
-                text = items[1]
- elif isinstance(items[1], dict):
- assert 'captions' in items[1], '"captions" does not in {}'.format(items[1])
- captions = items[1]['captions']
- if isinstance(captions, list):
- text = random.choice(captions)
- elif isinstance(captions, str):
- text = captions
- else:
- raise ValueError('captions should be str or list')
-
- return text
-
- def _decode_data(self, items, dataset_name):
- key = items[0]
- label = self._get_label(items[1], dataset_name)
- try:
- image = Image.open(BytesIO(base64.b64decode(items[2])))
-        except Exception:
- return None
-
- return key, label, image.convert('RGB')
-
- def _get_label(self, item, dataset_name):
- if not self.label2idx[dataset_name]:
- return int(item)
-
- if item in self.label2idx[dataset_name]:
- return self.label2idx[dataset_name][item]
-
- label = json.loads(item)[0]['class']
- if label in self.label2idx[dataset_name]:
- return self.label2idx[dataset_name][label]
- else:
- return -1
-
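`_get_label` resolves a raw TSV item through three fallbacks: no map means the item is already an integer index, a direct hit uses the map, and otherwise the item is parsed as a JSON annotation whose class either maps or becomes -1. A self-contained sketch (the `label2idx` map is hypothetical):

```python
import json

def get_label(item, label2idx):
    # mirrors _get_label's three-way fallback
    if not label2idx:
        return int(item)
    if item in label2idx:
        return label2idx[item]
    label = json.loads(item)[0]['class']
    return label2idx.get(label, -1)

label2idx = {"cat": 0, "dog": 1}  # hypothetical label map
assert get_label("3", {}) == 3
assert get_label("dog", label2idx) == 1
assert get_label('[{"class": "cat"}]', label2idx) == 0
assert get_label('[{"class": "fox"}]', label2idx) == -1
```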
- def __len__(self):
- return len(self.image_tsv_file)
-
-def convert_txt_to_tokens_bpe(text, tokenizer, context_length, return_link=False):
-
- sot_token = tokenizer.encoder["<|startoftext|>"]
- eot_token = tokenizer.encoder["<|endoftext|>"]
- if return_link:
- bpe_tokens, str2id_links = tokenizer.encode(text, return_link=return_link)
- str2id_links = [["<|startoftext|>", [sot_token]]] + str2id_links + [["<|endoftext|>", [eot_token]]]
- else:
- bpe_tokens = tokenizer.encode(text, return_link=return_link)
- input_ids = [sot_token] + bpe_tokens + [eot_token]
-
- if len(input_ids) > context_length:
- input_ids = input_ids[:context_length]
- segment_ids = [0] * len(input_ids)
- lm_label_ids = [-1] * len(input_ids)
-
- # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.
- input_mask = [1] * len(input_ids)
-
- # Zero-pad up to the sequence length.
- while len(input_ids) < context_length:
- input_ids.append(0)
- input_mask.append(0)
- segment_ids.append(0)
- lm_label_ids.append(-1)
-
- assert len(input_ids) == context_length
- assert len(input_mask) == context_length
- assert len(segment_ids) == context_length
- assert len(lm_label_ids) == context_length
-
- if return_link:
- return input_ids, input_mask, segment_ids, str2id_links
- return input_ids, input_mask, segment_ids
-
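The wrap/truncate/pad logic in `convert_txt_to_tokens_bpe` can be checked with plain integer token ids; the `sot`/`eot` values below are arbitrary stand-ins for the real BPE special tokens:

```python
def pad_to_context_length(bpe_tokens, context_length, sot=1, eot=2):
    # mirrors convert_txt_to_tokens_bpe: wrap with start/end tokens,
    # truncate to context_length, then zero-pad; the mask marks real tokens
    input_ids = [sot] + bpe_tokens + [eot]
    input_ids = input_ids[:context_length]
    input_mask = [1] * len(input_ids)
    pad = context_length - len(input_ids)
    input_ids += [0] * pad
    input_mask += [0] * pad
    segment_ids = [0] * context_length
    return input_ids, input_mask, segment_ids

ids, mask, segs = pad_to_context_length([7, 8, 9], context_length=8)
assert ids == [1, 7, 8, 9, 2, 0, 0, 0]
assert mask == [1, 1, 1, 1, 1, 0, 0, 0]
assert segs == [0] * 8
```

Note that, as in the original, truncation can cut off the end-of-text token when the caption is longer than the context window.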
-def convert_example_to_features_bpe(args, example, max_seq_length, tokenizer,
- img_feat_len, context_length=77):
- """
- Convert a raw sample (pair of sentences as tokenized strings) into a proper training sample with
- IDs, LM labels, input_mask, CLS and SEP tokens etc.
- :param args: parameter settings
- :param img_feat_len: lens of actual img features
- :param example: InputExample, containing sentence input as strings and is_next label
- :param max_seq_length: int, maximum length of sequence.
- :param tokenizer: Tokenizer
- :return: InputFeatures, containing all inputs and labels of one sample as IDs (as used for model training)
- """
- # we do not consider tokens_b for now in original CLIP
- text = example.tokens_a
- assert isinstance(text, str)
-
- sot_token = tokenizer.encoder["<|startoftext|>"]
- eot_token = tokenizer.encoder["<|endoftext|>"]
- input_ids = [sot_token] + tokenizer.encode(text) + [eot_token]
-
- if len(input_ids) > context_length:
- input_ids = input_ids[:context_length]
- segment_ids = [0] * len(input_ids)
- lm_label_ids = [-1] * len(input_ids)
-
- # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.
- input_mask = [1] * len(input_ids)
-
- # Zero-pad up to the sequence length.
- while len(input_ids) < context_length:
- input_ids.append(0)
- input_mask.append(0)
- segment_ids.append(0)
- lm_label_ids.append(-1)
-
- assert len(input_ids) == context_length
- assert len(input_mask) == context_length
- assert len(segment_ids) == context_length
- assert len(lm_label_ids) == context_length
-
- if example.guid < 1:
- logging.info("*** Example ***")
- logging.info("guid: %s" % example.guid)
- logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
- logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
- logging.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
- logging.info("LM label: %s " % lm_label_ids)
- logging.info("Is next sentence label: %s " % example.is_next)
-
- features = InputFeatures(input_ids=input_ids,
- input_mask=input_mask,
- segment_ids=segment_ids,
- lm_label_ids=lm_label_ids,
- is_next=example.is_next,
- img_feat_len=img_feat_len,
- is_img_match=example.is_img_match)
- return features
-
-class InputFeatures(object):
- """A single set of features of data."""
-
- def __init__(self, input_ids, input_mask, segment_ids, is_next,
- lm_label_ids, img_feat_len, is_img_match):
- self.input_ids = input_ids
- self.input_mask = input_mask
- self.segment_ids = segment_ids
- self.is_next = is_next
- self.lm_label_ids = lm_label_ids
-
- self.img_feat_len = img_feat_len
- self.is_img_match = is_img_match
\ No newline at end of file
diff --git a/spaces/CXD200/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat b/spaces/CXD200/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 4e44bab8aa65d16e35e935f1273de2e98ce80cf9..0000000000000000000000000000000000000000
--- a/spaces/CXD200/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.0.jar;%APP_HOME%\lib\unidbg-fix.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Fakeopen.py b/spaces/CofAI/chat/g4f/Provider/Providers/Fakeopen.py
deleted file mode 100644
index 5a82bf2cc0736384563332a279f5fbcbb120f676..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/Fakeopen.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import os
-import json
-import requests
-from typing import Dict, get_type_hints
-
-url = 'https://ai.fakeopen.com/v1/'
-model = [
- 'gpt-3.5-turbo',
- 'gpt-3.5-turbo-0613',
- 'gpt-3.5-turbo-16k',
- 'gpt-3.5-turbo-16k-0613',
-]
-
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-
- headers = {
- 'Content-Type': 'application/json',
- 'accept': 'text/event-stream',
- 'Cache-Control': 'no-cache',
- 'Proxy-Connection': 'keep-alive',
- 'Authorization': f"Bearer {os.environ.get('FAKE_OPEN_KEY', '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')}",
- }
-
- json_data = {
- 'messages': messages,
- 'temperature': 1.0,
- 'model': model,
- 'stream': stream,
- }
-
- response = requests.post(
- 'https://ai.fakeopen.com/v1/chat/completions', headers=headers, json=json_data, stream=True
- )
-
-    for token in response.iter_lines():
-        decoded = token.decode('utf-8')
-        if not decoded.startswith('data: '):
-            continue
-        data_str = decoded[len('data: '):]
-        if data_str == '[DONE]':
-            break
-        data = json.loads(data_str)
-        delta = data.get('choices', [{}])[0].get('delta', {})
-        if 'content' in delta:
-            yield delta['content']
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/data/__init__.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/voc.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/voc.py
deleted file mode 100644
index 459985bd12a47ffe5a246cbf8e00b7930b991a1c..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/voc.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import os
-
-import torch
-import torch.utils.data
-from PIL import Image
-import sys
-
-if sys.version_info[0] == 2:
- import xml.etree.cElementTree as ET
-else:
- import xml.etree.ElementTree as ET
-
-
-from maskrcnn_benchmark.structures.bounding_box import BoxList
-
-
-class PascalVOCDataset(torch.utils.data.Dataset):
-
- CLASSES = (
- "__background__ ",
- "aeroplane",
- "bicycle",
- "bird",
- "boat",
- "bottle",
- "bus",
- "car",
- "cat",
- "chair",
- "cow",
- "diningtable",
- "dog",
- "horse",
- "motorbike",
- "person",
- "pottedplant",
- "sheep",
- "sofa",
- "train",
- "tvmonitor",
- )
-
- def __init__(self, data_dir, split, use_difficult=False, transforms=None):
- self.root = data_dir
- self.image_set = split
- self.keep_difficult = use_difficult
- self.transforms = transforms
-
- self._annopath = os.path.join(self.root, "Annotations", "%s.xml")
- self._imgpath = os.path.join(self.root, "JPEGImages", "%s.jpg")
- self._imgsetpath = os.path.join(self.root, "ImageSets", "Main", "%s.txt")
-
- with open(self._imgsetpath % self.image_set) as f:
- self.ids = f.readlines()
- self.ids = [x.strip("\n") for x in self.ids]
- self.id_to_img_map = {k: v for k, v in enumerate(self.ids)}
-
- cls = PascalVOCDataset.CLASSES
- self.class_to_ind = dict(zip(cls, range(len(cls))))
-
- def __getitem__(self, index):
- img_id = self.ids[index]
- img = Image.open(self._imgpath % img_id).convert("RGB")
-
- target = self.get_groundtruth(index)
- target = target.clip_to_image(remove_empty=True)
-
- if self.transforms is not None:
- img, target = self.transforms(img, target)
-
- return img, target, index
-
- def __len__(self):
- return len(self.ids)
-
- def get_groundtruth(self, index):
- img_id = self.ids[index]
- anno = ET.parse(self._annopath % img_id).getroot()
- anno = self._preprocess_annotation(anno)
-
- height, width = anno["im_info"]
- target = BoxList(anno["boxes"], (width, height), mode="xyxy")
- target.add_field("labels", anno["labels"])
- target.add_field("difficult", anno["difficult"])
- return target
-
- def _preprocess_annotation(self, target):
- boxes = []
- gt_classes = []
- difficult_boxes = []
- TO_REMOVE = 1
-
- for obj in target.iter("object"):
- difficult = int(obj.find("difficult").text) == 1
- if not self.keep_difficult and difficult:
- continue
- name = obj.find("name").text.lower().strip()
- bb = obj.find("bndbox")
- # Make pixel indexes 0-based
- # Refer to "https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/pascal_voc.py#L208-L211"
- box = [
- bb.find("xmin").text,
- bb.find("ymin").text,
- bb.find("xmax").text,
- bb.find("ymax").text,
- ]
- bndbox = tuple(
- map(lambda x: x - TO_REMOVE, list(map(int, box)))
- )
-
- boxes.append(bndbox)
- gt_classes.append(self.class_to_ind[name])
- difficult_boxes.append(difficult)
-
- size = target.find("size")
- im_info = tuple(map(int, (size.find("height").text, size.find("width").text)))
-
- res = {
- "boxes": torch.tensor(boxes, dtype=torch.float32),
- "labels": torch.tensor(gt_classes),
- "difficult": torch.tensor(difficult_boxes),
- "im_info": im_info,
- }
- return res
-
- def get_img_info(self, index):
- img_id = self.ids[index]
- anno = ET.parse(self._annopath % img_id).getroot()
- size = anno.find("size")
- im_info = tuple(map(int, (size.find("height").text, size.find("width").text)))
- return {"height": im_info[0], "width": im_info[1]}
-
- def map_class_id_to_class_name(self, class_id):
- return PascalVOCDataset.CLASSES[class_id]
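`_preprocess_annotation` above boils down to XML traversal plus the 1-based to 0-based coordinate shift. A minimal stdlib check on a toy VOC-style annotation (values are hypothetical):

```python
import xml.etree.ElementTree as ET

XML = """
<annotation>
  <size><height>300</height><width>400</width></size>
  <object>
    <name>Dog</name>
    <difficult>0</difficult>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>
"""

TO_REMOVE = 1  # VOC boxes are 1-based; shift to 0-based pixel indexes
root = ET.fromstring(XML)
obj = next(root.iter("object"))
bb = obj.find("bndbox")
box = [int(bb.find(t).text) - TO_REMOVE for t in ("xmin", "ymin", "xmax", "ymax")]
size = root.find("size")
im_info = (int(size.find("height").text), int(size.find("width").text))
assert box == [9, 19, 109, 219]
assert im_info == (300, 400)
```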
diff --git a/spaces/Cyril666/ContourNet-ABI/modules/resnet.py b/spaces/Cyril666/ContourNet-ABI/modules/resnet.py
deleted file mode 100644
index 5ffb908ff8bf874a496c9f4fad2eb04f49cadf44..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/modules/resnet.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import math
-
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.model_zoo as model_zoo
-
-
-def conv1x1(in_planes, out_planes, stride=1):
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv1x1(inplanes, planes)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes, stride)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers):
- self.inplanes = 32
- super(ResNet, self).__init__()
- self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1,
- bias=False)
- self.bn1 = nn.BatchNorm2d(32)
- self.relu = nn.ReLU(inplace=True)
-
- self.layer1 = self._make_layer(block, 32, layers[0], stride=2)
- self.layer2 = self._make_layer(block, 64, layers[1], stride=1)
- self.layer3 = self._make_layer(block, 128, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 256, layers[3], stride=1)
- self.layer5 = self._make_layer(block, 512, layers[4], stride=1)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, nn.BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
-        for _ in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- x = self.layer5(x)
- return x
-
-
-def resnet45():
- return ResNet(BasicBlock, [3, 4, 6, 6, 3])
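A quick arithmetic check of the overall stride this backbone applies, reading the stem and `_make_layer` strides straight from `__init__` above:

```python
# per-stage strides: stem conv, then layer1..layer5
strides = [1, 2, 1, 2, 1, 1]

downsample = 1
for s in strides:
    downsample *= s

# overall spatial downsampling factor of the backbone
assert downsample == 4
# e.g. a 32x128 crop comes out as an 8x32 feature map
assert (32 // downsample, 128 // downsample) == (8, 32)
```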
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/__init__.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/__init__.py
deleted file mode 100644
index 169237f3dd45dba53cf77f40c8a69e835d0bcecc..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/__init__.py
+++ /dev/null
@@ -1,38 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-from video_llama.processors.base_processor import BaseProcessor
-from video_llama.processors.blip_processors import (
- Blip2ImageTrainProcessor,
- Blip2ImageEvalProcessor,
- BlipCaptionProcessor,
-)
-from video_llama.processors.video_processor import (
- AlproVideoTrainProcessor,
- AlproVideoEvalProcessor
-)
-from video_llama.common.registry import registry
-
-__all__ = [
- "BaseProcessor",
- "Blip2ImageTrainProcessor",
- "Blip2ImageEvalProcessor",
- "BlipCaptionProcessor",
- "AlproVideoTrainProcessor",
- "AlproVideoEvalProcessor",
-]
-
-
-def load_processor(name, cfg=None):
- """
- Example
-
- >>> processor = load_processor("alpro_video_train", cfg=None)
- """
- processor = registry.get_processor_class(name).from_config(cfg)
-
- return processor
diff --git a/spaces/Danielsun888/pocSearch/app.py b/spaces/Danielsun888/pocSearch/app.py
deleted file mode 100644
index 18fdc923fc134f907df84dc0e6689271df5c0c01..0000000000000000000000000000000000000000
--- a/spaces/Danielsun888/pocSearch/app.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import numpy as np
-from PIL import Image,ImageColor,ImageDraw,ImageFont
-import torch
-import torchvision
-from torch import nn
-
-import torchvision
-from torchvision import datasets, models, transforms
-
-import streamlit as st
-
-# visualization helper: draw predicted boxes, labels and scores on the image
-def plot_detection(image,prediction,idx2names,min_score = 0.8):
- image_result = image.copy()
- boxes,labels,scores = prediction['boxes'],prediction['labels'],prediction['scores']
- draw = ImageDraw.Draw(image_result)
- for idx in range(boxes.shape[0]):
- if scores[idx] >= min_score:
- x1, y1, x2, y2 = boxes[idx][0], boxes[idx][1], boxes[idx][2], boxes[idx][3]
- name = idx2names.get(str(labels[idx].item()))
- score = scores[idx]
- draw.rectangle((x1,y1,x2,y2), fill=None, outline ='lawngreen',width = 2)
- draw.text((x1,y1),name+":\n"+str(round(score.item(),2)),fill="red")
- return image_result
-
-
-# Load the model
-@st.cache()
-def load_model():
- num_classes = 91
- model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True,num_classes = num_classes)
- if torch.cuda.is_available():
- model.to("cuda:0")
- model.eval()
- model.idx2names = {'0': 'background', '1': 'person', '2': 'bicycle', '3': 'car',
- '4': 'motorcycle', '5': 'airplane', '6': 'bus', '7': 'train', '8': 'truck', '9': 'boat',
- '10': 'traffic light', '11': 'fire hydrant', '13': 'stop sign',
- '14': 'parking meter', '15': 'bench', '16': 'bird', '17': 'cat',
- '18': 'dog', '19': 'horse', '20': 'sheep', '21': 'cow', '22': 'elephant',
- '23': 'bear', '24': 'zebra', '25': 'giraffe', '27': 'backpack',
- '28': 'umbrella', '31': 'handbag', '32': 'tie', '33': 'suitcase',
- '34': 'frisbee', '35': 'skis', '36': 'snowboard', '37': 'sports ball',
- '38': 'kite','39': 'baseball bat', '40': 'baseball glove', '41': 'skateboard',
- '42': 'surfboard', '43': 'tennis racket', '44': 'bottle', '46': 'wine glass',
- '47': 'cup', '48': 'fork', '49': 'knife', '50': 'spoon', '51': 'bowl',
- '52': 'banana', '53': 'apple', '54': 'sandwich', '55': 'orange',
- '56': 'broccoli', '57': 'carrot', '58': 'hot dog', '59': 'pizza',
- '60': 'donut', '61': 'cake', '62': 'chair', '63': 'couch',
- '64': 'potted plant', '65': 'bed', '67': 'dining table',
- '70': 'toilet', '72': 'tv', '73': 'laptop', '74': 'mouse',
- '75': 'remote', '76': 'keyboard', '77': 'cell phone',
- '78': 'microwave', '79': 'oven', '80': 'toaster',
- '81': 'sink', '82': 'refrigerator', '84': 'book',
- '85': 'clock', '86': 'vase', '87': 'scissors',
- '88': 'teddy bear', '89': 'hair drier', '90': 'toothbrush'}
- return model
-
-def predict_detection(model,image_path,min_score=0.8):
- # Prepare the input data
- inputs = []
- img = Image.open(image_path).convert("RGB")
- img_tensor = torch.from_numpy(np.array(img)/255.).permute(2,0,1).float()
- if torch.cuda.is_available():
- img_tensor = img_tensor.cuda()
- inputs.append(img_tensor)
-
- # Run prediction
- with torch.no_grad():
- predictions = model(inputs)
-
- # Visualize the result
- img_result = plot_detection(img,predictions[0],
- model.idx2names,min_score = min_score)
- return img_result
-
-st.title("FasterRCNN Demo")
-
-st.header("FasterRCNN Input:")
-image_file = st.file_uploader("upload an image file (jpg/png) to predict:")
-if image_file is not None:
- try:
- st.image(image_file)
- except Exception as err:
- st.write(err)
-else:
- image_file = "horseman.png"
- st.image(image_file)
-
-min_score = st.slider(label="choose the min_score parameter:",min_value=0.1,max_value=0.98,value=0.8)
-
-st.header("FasterRCNN Prediction:")
-with st.spinner('waiting for prediction...'):
- model = load_model()
- img_result = predict_detection(model,image_file,min_score=min_score)
- st.image(img_result)
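The `min_score` slider in the app above controls a simple confidence threshold on the detector output. That filtering step can be sketched in plain Python, independent of torch (the `pred` dict below mimics the torchvision detection output format, with plain lists standing in for tensors):

```python
def filter_detections(prediction, min_score=0.8):
    """Keep only the boxes/labels/scores whose score meets the threshold."""
    kept = [
        (box, label, score)
        for box, label, score in zip(
            prediction["boxes"], prediction["labels"], prediction["scores"]
        )
        if score >= min_score
    ]
    return {
        "boxes": [b for b, _, _ in kept],
        "labels": [l for _, l, _ in kept],
        "scores": [s for _, _, s in kept],
    }


pred = {
    "boxes": [[0, 0, 10, 10], [5, 5, 20, 20]],
    "labels": [1, 18],  # e.g. person, dog in the idx2names mapping above
    "scores": [0.95, 0.40],
}
print(filter_detections(pred, min_score=0.8))
```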
diff --git a/spaces/Detomo/CuteRobot/style.css b/spaces/Detomo/CuteRobot/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/Detomo/CuteRobot/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Dinoking/Flower-Classification-v1/README.md b/spaces/Dinoking/Flower-Classification-v1/README.md
deleted file mode 100644
index 1caa8ad249f689d5b4ab0489c61d2678e622e6f8..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Flower-Classification-v1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Flower Classification V1
-emoji: 🌖
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/ChuanhuAgent.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/ChuanhuAgent.py
deleted file mode 100644
index c3cb944d3d4a5f60f1402445dc52a3501f466916..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/modules/models/ChuanhuAgent.py
+++ /dev/null
@@ -1,216 +0,0 @@
-from langchain.chains.summarize import load_summarize_chain
-from langchain import PromptTemplate, LLMChain
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts import PromptTemplate
-from langchain.text_splitter import TokenTextSplitter
-from langchain.embeddings import OpenAIEmbeddings
-from langchain.vectorstores import FAISS
-from langchain.chains import RetrievalQA
-from langchain.agents import load_tools
-from langchain.agents import initialize_agent
-from langchain.agents import AgentType
-from langchain.docstore.document import Document
-from langchain.tools import BaseTool, StructuredTool, Tool, tool
-from langchain.callbacks.stdout import StdOutCallbackHandler
-from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
-from langchain.callbacks.manager import BaseCallbackManager
-from duckduckgo_search import DDGS
-from itertools import islice
-
-from typing import Any, Dict, List, Optional, Union
-
-from langchain.callbacks.base import BaseCallbackHandler
-from langchain.input import print_text
-from langchain.schema import AgentAction, AgentFinish, LLMResult
-
-from pydantic import BaseModel, Field
-
-import requests
-from bs4 import BeautifulSoup
-from threading import Thread, Condition
-from collections import deque
-
-from .base_model import BaseLLMModel, CallbackToIterator, ChuanhuCallbackHandler
-from ..config import default_chuanhu_assistant_model
-from ..presets import SUMMARIZE_PROMPT, i18n
-from ..index_func import construct_index
-
-from langchain.callbacks import get_openai_callback
-import os
-import gradio as gr
-import logging
-
-class GoogleSearchInput(BaseModel):
- keywords: str = Field(description="keywords to search")
-
-class WebBrowsingInput(BaseModel):
- url: str = Field(description="URL of a webpage")
-
-class WebAskingInput(BaseModel):
- url: str = Field(description="URL of a webpage")
- question: str = Field(description="Question that you want to know the answer to, based on the webpage's content.")
-
-
-class ChuanhuAgent_Client(BaseLLMModel):
- def __init__(self, model_name, openai_api_key, user_name="") -> None:
- super().__init__(model_name=model_name, user=user_name)
- self.text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30)
- self.api_key = openai_api_key
- self.llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name=default_chuanhu_assistant_model, openai_api_base=os.environ.get("OPENAI_API_BASE", None))
- self.cheap_llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name="gpt-3.5-turbo", openai_api_base=os.environ.get("OPENAI_API_BASE", None))
- PROMPT = PromptTemplate(template=SUMMARIZE_PROMPT, input_variables=["text"])
- self.summarize_chain = load_summarize_chain(self.cheap_llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
- self.index_summary = None
- self.index = None
- if "Pro" in self.model_name:
- self.tools = load_tools(["serpapi", "google-search-results-json", "llm-math", "arxiv", "wikipedia", "wolfram-alpha"], llm=self.llm)
- else:
- self.tools = load_tools(["ddg-search", "llm-math", "arxiv", "wikipedia"], llm=self.llm)
- self.tools.append(
- Tool.from_function(
- func=self.google_search_simple,
- name="Google Search JSON",
- description="useful when you need to search the web.",
- args_schema=GoogleSearchInput
- )
- )
-
- self.tools.append(
- Tool.from_function(
- func=self.summary_url,
- name="Summary Webpage",
- description="useful when you need to know the overall content of a webpage.",
- args_schema=WebBrowsingInput
- )
- )
-
- self.tools.append(
- StructuredTool.from_function(
- func=self.ask_url,
- name="Ask Webpage",
- description="useful when you need to ask detailed questions about a webpage.",
- args_schema=WebAskingInput
- )
- )
-
- def google_search_simple(self, query):
- results = []
- with DDGS() as ddgs:
- ddgs_gen = ddgs.text(query, backend="lite")
- for r in islice(ddgs_gen, 10):
- results.append({
- "title": r["title"],
- "link": r["href"],
- "snippet": r["body"]
- })
- return str(results)
-
- def handle_file_upload(self, files, chatbot, language):
- """if the model accepts multi modal input, implement this function"""
- status = gr.Markdown.update()
- if files:
- index = construct_index(self.api_key, file_src=files)
- assert index is not None, "Failed to build index"
- self.index = index
- status = i18n("索引构建完成")
- # Summarize the document
- logging.info(i18n("生成内容总结中……"))
- with get_openai_callback() as cb:
- os.environ["OPENAI_API_KEY"] = self.api_key
- from langchain.chains.summarize import load_summarize_chain
- from langchain.prompts import PromptTemplate
- from langchain.chat_models import ChatOpenAI
- prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":"
- PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
- llm = ChatOpenAI()
- chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
- summary = chain({"input_documents": list(index.docstore.__dict__["_dict"].values())}, return_only_outputs=True)["output_text"]
- logging.info(f"Summary: {summary}")
- self.index_summary = summary
- chatbot.append((f"Uploaded {len(files)} files", summary))
- logging.info(cb)
- return gr.Files.update(), chatbot, status
-
- def query_index(self, query):
- if self.index is not None:
- retriever = self.index.as_retriever()
- qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=retriever)
- return qa.run(query)
- else:
- return "Error during query."
-
- def summary(self, text):
- texts = Document(page_content=text)
- texts = self.text_splitter.split_documents([texts])
- return self.summarize_chain({"input_documents": texts}, return_only_outputs=True)["output_text"]
-
- def fetch_url_content(self, url):
- response = requests.get(url)
- soup = BeautifulSoup(response.text, 'html.parser')
-
- # Extract all paragraph text
- text = ''.join(s.getText() for s in soup.find_all('p'))
- logging.info(f"Extracted text from {url}")
- return text
-
- def summary_url(self, url):
- text = self.fetch_url_content(url)
- if text == "":
- return "URL unavailable."
- text_summary = self.summary(text)
- url_content = "webpage content summary:\n" + text_summary
-
- return url_content
-
- def ask_url(self, url, question):
- text = self.fetch_url_content(url)
- if text == "":
- return "URL unavailable."
- texts = Document(page_content=text)
- texts = self.text_splitter.split_documents([texts])
- # use embedding
- embeddings = OpenAIEmbeddings(openai_api_key=self.api_key, openai_api_base=os.environ.get("OPENAI_API_BASE", None))
-
- # create vectorstore
- db = FAISS.from_documents(texts, embeddings)
- retriever = db.as_retriever()
- qa = RetrievalQA.from_chain_type(llm=self.cheap_llm, chain_type="stuff", retriever=retriever)
- return qa.run(f"{question} Reply in 中文")
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- # llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
- agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
- reply = agent.run(input=f"{question} Reply in 简体中文")
- return reply, -1
-
- def get_answer_stream_iter(self):
- question = self.history[-1]["content"]
- it = CallbackToIterator()
- manager = BaseCallbackManager(handlers=[ChuanhuCallbackHandler(it.callback)])
- def thread_func():
- tools = list(self.tools)  # copy so repeated calls do not keep appending tools
- if self.index is not None:
- tools.append(
- Tool.from_function(
- func=self.query_index,
- name="Query Knowledge Base",
- description=f"useful when you need to know about: {self.index_summary}",
- args_schema=WebBrowsingInput
- )
- )
- agent = initialize_agent(tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, callback_manager=manager)
- try:
- reply = agent.run(input=f"{question} Reply in 简体中文")
- except Exception as e:
- import traceback
- traceback.print_exc()
- reply = str(e)
- it.callback(reply)
- it.finish()
- t = Thread(target=thread_func)
- t.start()
- partial_text = ""
- for value in it:
- partial_text += value
- yield partial_text
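`get_answer_stream_iter` above relies on `CallbackToIterator` to turn push-style callbacks fired from a worker thread into a pull-style generator the UI can consume. A minimal sketch of that bridge using a thread-safe queue (an assumption about the pattern, not the actual `base_model.CallbackToIterator` implementation):

```python
import queue
import threading


class CallbackToIterator:
    """Turn push-style callbacks from a worker thread into a pull-style iterator."""

    _SENTINEL = object()

    def __init__(self):
        self._queue = queue.Queue()

    def callback(self, value):
        # Called from the worker thread for each produced chunk.
        self._queue.put(value)

    def finish(self):
        # Signals the consumer that no more chunks will arrive.
        self._queue.put(self._SENTINEL)

    def __iter__(self):
        while True:
            value = self._queue.get()  # blocks until the worker produces
            if value is self._SENTINEL:
                return
            yield value


it = CallbackToIterator()


def worker():
    for token in ["Hel", "lo"]:
        it.callback(token)
    it.finish()


threading.Thread(target=worker).start()
print("".join(it))  # -> Hello
```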
diff --git a/spaces/DragGan/DragGan/stylegan_human/torch_utils/ops/filtered_lrelu.cpp b/spaces/DragGan/DragGan/stylegan_human/torch_utils/ops/filtered_lrelu.cpp
deleted file mode 100644
index 4e253d1f3ffe84e54e667bf61a45dfe66264a73c..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/torch_utils/ops/filtered_lrelu.cpp
+++ /dev/null
@@ -1,300 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "filtered_lrelu.h"
-
-//------------------------------------------------------------------------
-
-static std::tuple<torch::Tensor, torch::Tensor, int> filtered_lrelu(
- torch::Tensor x, torch::Tensor fu, torch::Tensor fd, torch::Tensor b, torch::Tensor si,
- int up, int down, int px0, int px1, int py0, int py1, int sx, int sy, float gain, float slope, float clamp, bool flip_filters, bool writeSigns)
-{
- // Set CUDA device.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
-
- // Validate arguments.
- TORCH_CHECK(fu.device() == x.device() && fd.device() == x.device() && b.device() == x.device(), "all input tensors must reside on the same device");
- TORCH_CHECK(fu.dtype() == torch::kFloat && fd.dtype() == torch::kFloat, "fu and fd must be float32");
- TORCH_CHECK(b.dtype() == x.dtype(), "x and b must have the same dtype");
- TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat, "x and b must be float16 or float32");
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large");
- TORCH_CHECK(x.numel() > 0, "x is empty");
- TORCH_CHECK((fu.dim() == 1 || fu.dim() == 2) && (fd.dim() == 1 || fd.dim() == 2), "fu and fd must be rank 1 or 2");
- TORCH_CHECK(fu.size(0) <= INT_MAX && fu.size(-1) <= INT_MAX, "fu is too large");
- TORCH_CHECK(fd.size(0) <= INT_MAX && fd.size(-1) <= INT_MAX, "fd is too large");
- TORCH_CHECK(fu.numel() > 0, "fu is empty");
- TORCH_CHECK(fd.numel() > 0, "fd is empty");
- TORCH_CHECK(b.dim() == 1 && b.size(0) == x.size(1), "b must be a vector with the same number of channels as x");
- TORCH_CHECK(up >= 1 && down >= 1, "up and down must be at least 1");
-
- // Figure out how much shared memory is available on the device.
- int maxSharedBytes = 0;
- AT_CUDA_CHECK(cudaDeviceGetAttribute(&maxSharedBytes, cudaDevAttrMaxSharedMemoryPerBlockOptin, x.device().index()));
- int sharedKB = maxSharedBytes >> 10;
-
- // Populate enough launch parameters to check if a CUDA kernel exists.
- filtered_lrelu_kernel_params p;
- p.up = up;
- p.down = down;
- p.fuShape = make_int2((int)fu.size(-1), fu.dim() == 2 ? (int)fu.size(0) : 0); // shape [n, 0] indicates separable filter.
- p.fdShape = make_int2((int)fd.size(-1), fd.dim() == 2 ? (int)fd.size(0) : 0);
- filtered_lrelu_kernel_spec test_spec = choose_filtered_lrelu_kernel<float, int32_t, false, false>(p, sharedKB);
- if (!test_spec.exec)
- {
- // No kernel found - return empty tensors and indicate missing kernel with return code of -1.
- return std::make_tuple(torch::Tensor(), torch::Tensor(), -1);
- }
-
- // Input/output element size.
- int64_t sz = (x.dtype() == torch::kHalf) ? 2 : 4;
-
- // Input sizes.
- int64_t xw = (int)x.size(3);
- int64_t xh = (int)x.size(2);
- int64_t fut_w = (int)fu.size(-1) - 1;
- int64_t fut_h = (int)fu.size(0) - 1;
- int64_t fdt_w = (int)fd.size(-1) - 1;
- int64_t fdt_h = (int)fd.size(0) - 1;
-
- // Logical size of upsampled buffer.
- int64_t cw = xw * up + (px0 + px1) - fut_w;
- int64_t ch = xh * up + (py0 + py1) - fut_h;
- TORCH_CHECK(cw > fdt_w && ch > fdt_h, "upsampled buffer must be at least the size of downsampling filter");
- TORCH_CHECK(cw <= INT_MAX && ch <= INT_MAX, "upsampled buffer is too large");
-
- // Compute output size and allocate.
- int64_t yw = (cw - fdt_w + (down - 1)) / down;
- int64_t yh = (ch - fdt_h + (down - 1)) / down;
- TORCH_CHECK(yw > 0 && yh > 0, "output must be at least 1x1");
- TORCH_CHECK(yw <= INT_MAX && yh <= INT_MAX, "output is too large");
- torch::Tensor y = torch::empty({x.size(0), x.size(1), yh, yw}, x.options(), x.suggest_memory_format());
-
- // Allocate sign tensor.
- torch::Tensor so;
- torch::Tensor s = si;
- bool readSigns = !!s.numel();
- int64_t sw_active = 0; // Active width of sign tensor.
- if (writeSigns)
- {
- sw_active = yw * down - (down - 1) + fdt_w; // Active width in elements.
- int64_t sh = yh * down - (down - 1) + fdt_h; // Height = active height.
- int64_t sw = (sw_active + 15) & ~15; // Width = active width in elements, rounded up to multiple of 16.
- TORCH_CHECK(sh <= INT_MAX && (sw >> 2) <= INT_MAX, "signs is too large");
- s = so = torch::empty({x.size(0), x.size(1), sh, sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous);
- }
- else if (readSigns)
- sw_active = s.size(3) << 2;
-
- // Validate sign tensor if in use.
- if (readSigns || writeSigns)
- {
- TORCH_CHECK(s.is_contiguous(), "signs must be contiguous");
- TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8");
- TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x");
- TORCH_CHECK(s.dim() == 4, "signs must be rank 4");
- TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x");
- TORCH_CHECK(s.size(2) <= INT_MAX && s.size(3) <= INT_MAX, "signs is too large");
- }
-
- // Populate rest of CUDA kernel parameters.
- p.x = x.data_ptr();
- p.y = y.data_ptr();
- p.b = b.data_ptr();
- p.s = (readSigns || writeSigns) ? s.data_ptr<unsigned char>() : 0;
- p.fu = fu.data_ptr<float>();
- p.fd = fd.data_ptr<float>();
- p.pad0 = make_int2(px0, py0);
- p.gain = gain;
- p.slope = slope;
- p.clamp = clamp;
- p.flip = (flip_filters) ? 1 : 0;
- p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.yShape = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
- p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3), (int)s.size(2)) : make_int2(0, 0); // Width is in bytes. Contiguous.
- p.sOfs = make_int2(sx, sy);
- p.swLimit = (sw_active + 3) >> 2; // Rounded up to bytes.
-
- // x, y, b strides are in bytes.
- p.xStride = make_longlong4(sz * x.stride(3), sz * x.stride(2), sz * x.stride(1), sz * x.stride(0));
- p.yStride = make_longlong4(sz * y.stride(3), sz * y.stride(2), sz * y.stride(1), sz * y.stride(0));
- p.bStride = sz * b.stride(0);
-
- // fu, fd strides are in elements.
- p.fuStride = make_longlong3(fu.stride(-1), fu.dim() == 2 ? fu.stride(0) : 0, 0);
- p.fdStride = make_longlong3(fd.stride(-1), fd.dim() == 2 ? fd.stride(0) : 0, 0);
-
- // Determine if indices don't fit in int32. Support negative strides although Torch currently never produces those.
- bool index64b = false;
- if (std::abs(p.bStride * x.size(1)) > INT_MAX) index64b = true;
- if (std::min(x.size(0) * p.xStride.w, 0ll) + std::min(x.size(1) * p.xStride.z, 0ll) + std::min(x.size(2) * p.xStride.y, 0ll) + std::min(x.size(3) * p.xStride.x, 0ll) < -INT_MAX) index64b = true;
- if (std::max(x.size(0) * p.xStride.w, 0ll) + std::max(x.size(1) * p.xStride.z, 0ll) + std::max(x.size(2) * p.xStride.y, 0ll) + std::max(x.size(3) * p.xStride.x, 0ll) > INT_MAX) index64b = true;
- if (std::min(y.size(0) * p.yStride.w, 0ll) + std::min(y.size(1) * p.yStride.z, 0ll) + std::min(y.size(2) * p.yStride.y, 0ll) + std::min(y.size(3) * p.yStride.x, 0ll) < -INT_MAX) index64b = true;
- if (std::max(y.size(0) * p.yStride.w, 0ll) + std::max(y.size(1) * p.yStride.z, 0ll) + std::max(y.size(2) * p.yStride.y, 0ll) + std::max(y.size(3) * p.yStride.x, 0ll) > INT_MAX) index64b = true;
- if (s.numel() > INT_MAX) index64b = true;
-
- // Choose CUDA kernel.
- filtered_lrelu_kernel_spec spec = { 0 };
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_cuda", [&]
- {
- if constexpr (sizeof(scalar_t) <= 4) // Exclude doubles. constexpr prevents template instantiation.
- {
- // Choose kernel based on index type, datatype and sign read/write modes.
- if (!index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, true, false>(p, sharedKB);
- else if (!index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, false, true>(p, sharedKB);
- else if (!index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, false, false>(p, sharedKB);
- else if ( index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, true, false>(p, sharedKB);
- else if ( index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, false, true>(p, sharedKB);
- else if ( index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, false, false>(p, sharedKB);
- }
- });
- TORCH_CHECK(spec.exec, "internal error - CUDA kernel not found"); // This should not happen because we tested earlier that kernel exists.
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- int bx = spec.numWarps * 32;
- int gx = (p.yShape.x - 1) / spec.tileOut.x + 1;
- int gy = (p.yShape.y - 1) / spec.tileOut.y + 1;
- int gz = p.yShape.z * p.yShape.w;
-
- // Repeat multiple horizontal tiles in a CTA?
- if (spec.xrep)
- {
- p.tilesXrep = spec.xrep;
- p.tilesXdim = gx;
-
- gx = (gx + p.tilesXrep - 1) / p.tilesXrep;
- std::swap(gx, gy);
- }
- else
- {
- p.tilesXrep = 0;
- p.tilesXdim = 0;
- }
-
- // Launch filter setup kernel.
- AT_CUDA_CHECK(cudaLaunchKernel(spec.setup, 1, 1024, args, 0, at::cuda::getCurrentCUDAStream()));
-
- // Copy kernels to constant memory.
- if ( writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters<true, false>(at::cuda::getCurrentCUDAStream())));
- else if (!writeSigns && readSigns) AT_CUDA_CHECK((copy_filters<false, true>(at::cuda::getCurrentCUDAStream())));
- else if (!writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters<false, false>(at::cuda::getCurrentCUDAStream())));
-
- // Set cache and shared memory configurations for main kernel.
- AT_CUDA_CHECK(cudaFuncSetCacheConfig(spec.exec, cudaFuncCachePreferShared));
- if (spec.dynamicSharedKB) // Need dynamically allocated shared memory?
- AT_CUDA_CHECK(cudaFuncSetAttribute(spec.exec, cudaFuncAttributeMaxDynamicSharedMemorySize, spec.dynamicSharedKB << 10));
- AT_CUDA_CHECK(cudaFuncSetSharedMemConfig(spec.exec, cudaSharedMemBankSizeFourByte));
-
- // Launch main kernel.
- const int maxSubGz = 65535; // CUDA maximum for block z dimension.
- for (int zofs=0; zofs < gz; zofs += maxSubGz) // Do multiple launches if gz is too big.
- {
- p.blockZofs = zofs;
- int subGz = std::min(maxSubGz, gz - zofs);
- AT_CUDA_CHECK(cudaLaunchKernel(spec.exec, dim3(gx, gy, subGz), bx, args, spec.dynamicSharedKB << 10, at::cuda::getCurrentCUDAStream()));
- }
-
- // Done.
- return std::make_tuple(y, so, 0);
-}
-
-//------------------------------------------------------------------------
-
-static torch::Tensor filtered_lrelu_act(torch::Tensor x, torch::Tensor si, int sx, int sy, float gain, float slope, float clamp, bool writeSigns)
-{
- // Set CUDA device.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
-
- // Validate arguments.
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large");
- TORCH_CHECK(x.numel() > 0, "x is empty");
- TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat || x.dtype() == torch::kDouble, "x must be float16, float32 or float64");
-
- // Output signs if we don't have sign input.
- torch::Tensor so;
- torch::Tensor s = si;
- bool readSigns = !!s.numel();
- if (writeSigns)
- {
- int64_t sw = x.size(3);
- sw = (sw + 15) & ~15; // Round to a multiple of 16 for coalescing.
- s = so = torch::empty({x.size(0), x.size(1), x.size(2), sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous);
- }
-
- // Validate sign tensor if in use.
- if (readSigns || writeSigns)
- {
- TORCH_CHECK(s.is_contiguous(), "signs must be contiguous");
- TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8");
- TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x");
- TORCH_CHECK(s.dim() == 4, "signs must be rank 4");
- TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x");
- TORCH_CHECK(s.size(2) <= INT_MAX && (s.size(3) << 2) <= INT_MAX, "signs tensor is too large");
- }
-
- // Initialize CUDA kernel parameters.
- filtered_lrelu_act_kernel_params p;
- p.x = x.data_ptr();
- p.s = (readSigns || writeSigns) ? s.data_ptr<unsigned char>() : 0;
- p.gain = gain;
- p.slope = slope;
- p.clamp = clamp;
- p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.xStride = make_longlong4(x.stride(3), x.stride(2), x.stride(1), x.stride(0));
- p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3) << 2, (int)s.size(2)) : make_int2(0, 0); // Width is in elements. Contiguous.
- p.sOfs = make_int2(sx, sy);
-
- // Choose CUDA kernel.
- void* func = 0;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_act_cuda", [&]
- {
- if (writeSigns)
- func = choose_filtered_lrelu_act_kernel<scalar_t, true, false>();
- else if (readSigns)
- func = choose_filtered_lrelu_act_kernel<scalar_t, false, true>();
- else
- func = choose_filtered_lrelu_act_kernel<scalar_t, false, false>();
- });
- TORCH_CHECK(func, "internal error - CUDA kernel not found");
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- int bx = 128; // 4 warps per block.
-
- // Logical size of launch = writeSigns ? p.s : p.x
- uint32_t gx = writeSigns ? p.sShape.x : p.xShape.x;
- uint32_t gy = writeSigns ? p.sShape.y : p.xShape.y;
- uint32_t gz = p.xShape.z * p.xShape.w; // Same as in p.sShape if signs are in use.
- gx = (gx - 1) / bx + 1;
-
- // Make sure grid y and z dimensions are within CUDA launch limits. Kernel loops internally to do the rest.
- const uint32_t gmax = 65535;
- gy = std::min(gy, gmax);
- gz = std::min(gz, gmax);
-
- // Launch.
- AT_CUDA_CHECK(cudaLaunchKernel(func, dim3(gx, gy, gz), bx, args, 0, at::cuda::getCurrentCUDAStream()));
- return so;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("filtered_lrelu", &filtered_lrelu); // The whole thing.
- m.def("filtered_lrelu_act_", &filtered_lrelu_act); // Activation and sign tensor handling only. Modifies data tensor in-place.
-}
-
-//------------------------------------------------------------------------
\ No newline at end of file
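The fused kernel above performs upsample, bias + leaky ReLU (with `gain`, `slope`, `clamp`), and downsample in a single pass. The activation stage alone can be sketched as a 1-D pure-Python reference (an illustrative approximation of the kernel's per-element math, not the CUDA code verbatim):

```python
def lrelu_act(x, gain=1.0, slope=0.2, clamp=None):
    """Reference for the activation stage: leaky ReLU, then gain, then symmetric clamp."""
    out = []
    for v in x:
        v = v if v >= 0 else v * slope   # leaky ReLU with negative slope
        v = v * gain                     # post-activation gain
        if clamp is not None:            # symmetric clamp, as in the kernel's `clamp` param
            v = max(-clamp, min(clamp, v))
        out.append(v)
    return out


print(lrelu_act([-1.0, 0.5, 4.0], gain=2.0, slope=0.2, clamp=3.0))
# -> [-0.4, 1.0, 3.0]
```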
diff --git a/spaces/DragGan/DragGan/visualizer_drag_gradio.py b/spaces/DragGan/DragGan/visualizer_drag_gradio.py
deleted file mode 100644
index 1eace3a82f748672dc0e7ff4b73cc4f506a479d2..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/visualizer_drag_gradio.py
+++ /dev/null
@@ -1,940 +0,0 @@
-# https://huggingface.co/DragGan/DragGan-Models
-# https://arxiv.org/abs/2305.10973
-import os
-import os.path as osp
-from argparse import ArgumentParser
-from functools import partial
-from pathlib import Path
-import time
-
-import psutil
-
-import gradio as gr
-import numpy as np
-import torch
-from PIL import Image
-
-import dnnlib
-from gradio_utils import (ImageMask, draw_mask_on_image, draw_points_on_image,
- get_latest_points_pair, get_valid_mask,
- on_change_single_global_state)
-from viz.renderer import Renderer, add_watermark_np
-
-
-# download models from Hugging Face hub
-from huggingface_hub import snapshot_download
-
-model_dir = Path('./checkpoints')
-snapshot_download('DragGan/DragGan-Models',
- repo_type='model', local_dir=model_dir)
-
-parser = ArgumentParser()
-parser.add_argument('--share', action='store_true')
-parser.add_argument('--cache-dir', type=str, default='./checkpoints')
-args = parser.parse_args()
-
-cache_dir = args.cache_dir
-
-device = 'cuda'
-IS_SPACE = "DragGan/DragGan" in os.environ.get('SPACE_ID', '')
-TIMEOUT = 80
-
-
-def reverse_point_pairs(points):
- new_points = []
- for p in points:
- new_points.append([p[1], p[0]])
- return new_points
-
-
-def clear_state(global_state, target=None):
- """Clear target history state from global_state
- If target is not defined, points and mask will be both removed.
- 1. set global_state['points'] as empty dict
- 2. set global_state['mask'] as full-one mask.
- """
- if target is None:
- target = ['point', 'mask']
- if not isinstance(target, list):
- target = [target]
- if 'point' in target:
- global_state['points'] = dict()
- print('Clear Points State!')
- if 'mask' in target:
- image_raw = global_state["images"]["image_raw"]
- global_state['mask'] = np.ones((image_raw.size[1], image_raw.size[0]),
- dtype=np.uint8)
- print('Clear mask State!')
-
- return global_state
-
-
-def init_images(global_state):
- """This function is called only once, when the Gradio app is started.
- 0. pre-process global_state, unpacking the values that are needed
- 1. Re-init renderer
- 2. run `renderer._render_drag_impl` with `is_drag=False` to generate
- new image
- 3. Assign images to global state and re-generate mask
- """
-
- if isinstance(global_state, gr.State):
- state = global_state.value
- else:
- state = global_state
-
- state['renderer'].init_network(
- state['generator_params'], # res
- valid_checkpoints_dict[state['pretrained_weight']], # pkl
- state['params']['seed'], # w0_seed,
- None, # w_load
- state['params']['latent_space'] == 'w+', # w_plus
- 'const',
- state['params']['trunc_psi'], # trunc_psi,
- state['params']['trunc_cutoff'], # trunc_cutoff,
- None, # input_transform
- state['params']['lr'] # lr,
- )
-
- state['renderer']._render_drag_impl(state['generator_params'],
- is_drag=False,
- to_pil=True)
-
- init_image = state['generator_params'].image
- state['images']['image_orig'] = init_image
- state['images']['image_raw'] = init_image
- state['images']['image_show'] = Image.fromarray(
- add_watermark_np(np.array(init_image)))
- state['mask'] = np.ones((init_image.size[1], init_image.size[0]),
- dtype=np.uint8)
- return global_state
-
-
-def update_image_draw(image, points, mask, show_mask, global_state=None):
-
- image_draw = draw_points_on_image(image, points)
- if show_mask and mask is not None and not (mask == 0).all() and not (
- mask == 1).all():
- image_draw = draw_mask_on_image(image_draw, mask)
-
- image_draw = Image.fromarray(add_watermark_np(np.array(image_draw)))
- if global_state is not None:
- global_state['images']['image_show'] = image_draw
- return image_draw
-
-
-def preprocess_mask_info(global_state, image):
- """Function to handle mask information.
- 1. last_mask is None: Do not need to change mask, return mask
- 2. last_mask is not None:
- 2.1 global_state is remove_mask:
- 2.2 global_state is add_mask:
- """
- if isinstance(image, dict):
- last_mask = get_valid_mask(image['mask'])
- else:
- last_mask = None
- mask = global_state['mask']
-
- # mask in global state is a placeholder with all 1.
- if (mask == 1).all():
- mask = last_mask
-
- # last_mask = global_state['last_mask']
- editing_mode = global_state['editing_state']
-
- if last_mask is None:
- return global_state
-
- if editing_mode == 'remove_mask':
- updated_mask = np.clip(mask - last_mask, 0, 1)
- print(f'Last editing_state is {editing_mode}, do remove.')
- elif editing_mode == 'add_mask':
- updated_mask = np.clip(mask + last_mask, 0, 1)
- print(f'Last editing_state is {editing_mode}, do add.')
- else:
- updated_mask = mask
- print(f'Last editing_state is {editing_mode}, '
- 'do nothing to mask.')
-
- global_state['mask'] = updated_mask
- # global_state['last_mask'] = None # clear buffer
- return global_state
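The add/remove arithmetic in `preprocess_mask_info` reduces to clipped addition and subtraction on 0/1 arrays. A minimal standalone sketch (with a hypothetical helper name, not part of the demo; integer arrays are assumed so subtraction cannot wrap around as it would for uint8):

```python
import numpy as np

# Sketch of the mask arithmetic in preprocess_mask_info above.
# Masks are 0/1 integer arrays; the latest brush strokes arrive as last_mask.
# apply_mask_edit is a hypothetical name, not part of the demo.
def apply_mask_edit(mask, last_mask, mode):
    if last_mask is None:
        return mask  # nothing was drawn since the last update
    if mode == 'remove_mask':
        # erase brushed pixels, clamping at 0
        return np.clip(mask - last_mask, 0, 1)
    if mode == 'add_mask':
        # mark brushed pixels, clamping at 1
        return np.clip(mask + last_mask, 0, 1)
    return mask  # any other editing state leaves the mask untouched
```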
-
-
-def print_memory_usage():
- # Print system memory usage
- print(f"System memory usage: {psutil.virtual_memory().percent}%")
-
- # Print GPU memory usage
- if torch.cuda.is_available():
- device = torch.device("cuda")
- print(f"GPU memory usage: {torch.cuda.memory_allocated() / 1e9} GB")
- print(
- f"Max GPU memory usage: {torch.cuda.max_memory_allocated() / 1e9} GB")
- device_properties = torch.cuda.get_device_properties(device)
- available_memory = device_properties.total_memory - \
- torch.cuda.max_memory_allocated()
- print(f"Available GPU memory: {available_memory / 1e9} GB")
- else:
- print("No GPU available")
-
-
-# filter out large models when running on Spaces
-allowed_checkpoints = [] # all checkpoints
-if IS_SPACE:
- allowed_checkpoints = ["stylegan_human_v2_512.pkl",
- "stylegan2_dogs_1024_pytorch.pkl"]
-
-valid_checkpoints_dict = {
- f.name.split('.')[0]: str(f)
- for f in Path(cache_dir).glob('*.pkl')
- if f.name in allowed_checkpoints or not IS_SPACE
-}
-print('Valid checkpoint files:')
-print(valid_checkpoints_dict)
-
-init_pkl = 'stylegan_human_v2_512'
-
-with gr.Blocks() as app:
- gr.Markdown("""
-# DragGAN - Drag Your GAN
-## Interactive Point-based Manipulation on the Generative Image Manifold
-### Unofficial Gradio Demo
-
-**Due to high demand, only one model can be run at a time. Duplicate the space and run your own copy on your own hardware for no queue.**
-
-
-* Official Repo: [XingangPan](https://github.com/XingangPan/DragGAN)
-* Gradio Demo by: [LeoXing1996](https://github.com/LeoXing1996) © [OpenMMLab MMagic](https://github.com/open-mmlab/mmagic)
-""")
-
- # renderer = Renderer()
- global_state = gr.State({
- "images": {
-            # image_orig: the original image; changes when the seed/model is changed
-            # image_raw: image with mask and points; changes during optimization
- # image_show: image showed on screen
- },
- "temporal_params": {
- # stop
- },
- 'mask':
-        None,  # mask for visualization; 1 for editable and 0 for unchanged
- 'last_mask': None, # last edited mask
- 'show_mask': True, # add button
- "generator_params": dnnlib.EasyDict(),
- "params": {
- "seed": int(np.random.randint(0, 2**32 - 1)),
- "motion_lambda": 20,
- "r1_in_pixels": 3,
- "r2_in_pixels": 12,
- "magnitude_direction_in_pixels": 1.0,
- "latent_space": "w+",
- "trunc_psi": 0.7,
- "trunc_cutoff": None,
- "lr": 0.001,
- },
- "device": device,
- "draw_interval": 1,
- "renderer": Renderer(disable_timing=True),
- "points": {},
- "curr_point": None,
- "curr_type_point": "start",
- 'editing_state': 'add_points',
- 'pretrained_weight': init_pkl
- })
-
- # init image
- global_state = init_images(global_state)
- with gr.Row():
-
- with gr.Row():
-
- # Left --> tools
- with gr.Column(scale=3):
-
- # Pickle
- with gr.Row():
-
- with gr.Column(scale=1, min_width=10):
- gr.Markdown(value='Pickle', show_label=False)
-
- with gr.Column(scale=4, min_width=10):
- form_pretrained_dropdown = gr.Dropdown(
- choices=list(valid_checkpoints_dict.keys()),
- label="Pretrained Model",
- value=init_pkl,
- )
-
- # Latent
- with gr.Row():
- with gr.Column(scale=1, min_width=10):
- gr.Markdown(value='Latent', show_label=False)
-
- with gr.Column(scale=4, min_width=10):
- form_seed_number = gr.Slider(
-                            minimum=0,
- maximum=2**32-1,
- step=1,
- value=global_state.value['params']['seed'],
- interactive=True,
- # randomize=True,
- label="Seed",
- )
- form_lr_number = gr.Number(
- value=global_state.value["params"]["lr"],
- interactive=True,
- label="Step Size")
-
- with gr.Row():
- with gr.Column(scale=2, min_width=10):
- form_reset_image = gr.Button("Reset Image")
- with gr.Column(scale=3, min_width=10):
- form_latent_space = gr.Radio(
- ['w', 'w+'],
- value=global_state.value['params']
- ['latent_space'],
- interactive=True,
- label='Latent space to optimize',
- show_label=False,
- )
-
- # Drag
- with gr.Row():
- with gr.Column(scale=1, min_width=10):
- gr.Markdown(value='Drag', show_label=False)
- with gr.Column(scale=4, min_width=10):
- with gr.Row():
- with gr.Column(scale=1, min_width=10):
- enable_add_points = gr.Button('Add Points')
- with gr.Column(scale=1, min_width=10):
- undo_points = gr.Button('Reset Points')
- with gr.Row():
- with gr.Column(scale=1, min_width=10):
- form_start_btn = gr.Button("Start")
- with gr.Column(scale=1, min_width=10):
- form_stop_btn = gr.Button("Stop")
-
- form_steps_number = gr.Number(value=0,
- label="Steps",
- interactive=False)
-
- # Mask
- with gr.Row():
- with gr.Column(scale=1, min_width=10):
- gr.Markdown(value='Mask', show_label=False)
- with gr.Column(scale=4, min_width=10):
- enable_add_mask = gr.Button('Edit Flexible Area')
- with gr.Row():
- with gr.Column(scale=1, min_width=10):
- form_reset_mask_btn = gr.Button("Reset mask")
- with gr.Column(scale=1, min_width=10):
- show_mask = gr.Checkbox(
- label='Show Mask',
- value=global_state.value['show_mask'],
- show_label=False)
-
- with gr.Row():
- form_lambda_number = gr.Number(
- value=global_state.value["params"]
- ["motion_lambda"],
- interactive=True,
- label="Lambda",
- )
-
- form_draw_interval_number = gr.Number(
- value=global_state.value["draw_interval"],
- label="Draw Interval (steps)",
- interactive=True,
- visible=False)
-
- # Right --> Image
- with gr.Column(scale=8):
- form_image = ImageMask(
- value=global_state.value['images']['image_show'],
- brush_radius=20).style(
- width=768,
-                    height=768)  # NOTE: hard-coded image size here.
- gr.Markdown("""
- ## Quick Start
-
- 1. Select desired `Pretrained Model` and adjust `Seed` to generate an
- initial image.
- 2. Click on image to add control points.
- 3. Click `Start` and enjoy it!
-
-    ## Advanced Usage
-
-    1. Change `Step Size` to adjust the learning rate of the drag optimization.
-    2. Select `w` or `w+` to choose the latent space to optimize:
-    * Optimizing in `w` space may influence the image more strongly.
-    * Optimizing in `w+` space may be slower than `w`, but usually achieves
-    better results.
- * Note that changing the latent space will reset the image, points and
- mask (this has the same effect as `Reset Image` button).
- 3. Click `Edit Flexible Area` to create a mask and constrain the
- unmasked region to remain unchanged.
-
-
- """)
- gr.HTML("""
-
-
- """)
- # Network & latents tab listeners
-
- def on_change_pretrained_dropdown(pretrained_value, global_state):
- """Function to handle model change.
- 1. Set pretrained value to global_state
- 2. Re-init images and clear all states
- """
-
- global_state['pretrained_weight'] = pretrained_value
- init_images(global_state)
- clear_state(global_state)
-
- return global_state, global_state["images"]['image_show']
-
- form_pretrained_dropdown.change(
- on_change_pretrained_dropdown,
- inputs=[form_pretrained_dropdown, global_state],
- outputs=[global_state, form_image],
- queue=True,
- )
-
- def on_click_reset_image(global_state):
- """Reset image to the original one and clear all states
- 1. Re-init images
- 2. Clear all states
- """
-
- init_images(global_state)
- clear_state(global_state)
-
- return global_state, global_state['images']['image_show']
-
- form_reset_image.click(
- on_click_reset_image,
- inputs=[global_state],
- outputs=[global_state, form_image],
- queue=False,
- )
-
- # Update parameters
- def on_change_update_image_seed(seed, global_state):
- """Function to handle generation seed change.
- 1. Set seed to global_state
- 2. Re-init images and clear all states
- """
-
- global_state["params"]["seed"] = int(seed)
- init_images(global_state)
- clear_state(global_state)
-
- return global_state, global_state['images']['image_show']
-
- form_seed_number.change(
- on_change_update_image_seed,
- inputs=[form_seed_number, global_state],
- outputs=[global_state, form_image],
- )
-
- def on_click_latent_space(latent_space, global_state):
- """Function to reset latent space to optimize.
-        NOTE: this function resets the image and all controls.
- 1. Set latent-space to global_state
- 2. Re-init images and clear all state
- """
-
- global_state['params']['latent_space'] = latent_space
- init_images(global_state)
- clear_state(global_state)
-
- return global_state, global_state['images']['image_show']
-
- form_latent_space.change(on_click_latent_space,
- inputs=[form_latent_space, global_state],
- outputs=[global_state, form_image])
-
- # ==== Params
- form_lambda_number.change(
- partial(on_change_single_global_state, ["params", "motion_lambda"]),
- inputs=[form_lambda_number, global_state],
- outputs=[global_state],
- )
-
- def on_change_lr(lr, global_state):
- if lr == 0:
- print('lr is 0, do nothing.')
- return global_state
- else:
- global_state["params"]["lr"] = lr
- renderer = global_state['renderer']
- renderer.update_lr(lr)
- print('New optimizer: ')
- print(renderer.w_optim)
- return global_state
-
- form_lr_number.change(
- on_change_lr,
- inputs=[form_lr_number, global_state],
- outputs=[global_state],
- queue=False,
- )
-
- def on_click_start(global_state, image):
- p_in_pixels = []
- t_in_pixels = []
- valid_points = []
-
-        # handle starting a drag while in mask-editing mode
- global_state = preprocess_mask_info(global_state, image)
-
- # Prepare the points for the inference
- if len(global_state["points"]) == 0:
- # yield on_click_start_wo_points(global_state, image)
- image_raw = global_state['images']['image_raw']
- update_image_draw(
- image_raw,
- global_state['points'],
- global_state['mask'],
- global_state['show_mask'],
- global_state,
- )
-
- yield (
- global_state,
- 0,
- global_state['images']['image_show'],
- # gr.File.update(visible=False),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- # latent space
- gr.Radio.update(interactive=True),
- gr.Button.update(interactive=True),
- # NOTE: disable stop button
- gr.Button.update(interactive=False),
-
- # update other comps
- gr.Dropdown.update(interactive=True),
- gr.Number.update(interactive=True),
- gr.Number.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Checkbox.update(interactive=True),
- # gr.Number.update(interactive=True),
- gr.Number.update(interactive=True),
- )
- else:
-
- # Transform the points into torch tensors
- for key_point, point in global_state["points"].items():
- try:
- p_start = point.get("start_temp", point["start"])
- p_end = point["target"]
-
- if p_start is None or p_end is None:
- continue
-
- except KeyError:
- continue
-
- p_in_pixels.append(p_start)
- t_in_pixels.append(p_end)
- valid_points.append(key_point)
-
- mask = torch.tensor(global_state['mask']).float()
- drag_mask = 1 - mask
-
- renderer: Renderer = global_state["renderer"]
- global_state['temporal_params']['stop'] = False
- global_state['editing_state'] = 'running'
-
- # reverse points order
- p_to_opt = reverse_point_pairs(p_in_pixels)
- t_to_opt = reverse_point_pairs(t_in_pixels)
- print('Running with:')
- print(f' Source: {p_in_pixels}')
- print(f' Target: {t_in_pixels}')
- step_idx = 0
- last_time = time.time()
- while True:
- print_memory_usage()
- # add a TIMEOUT break
- print(f'Running time: {time.time() - last_time}')
- if IS_SPACE and time.time() - last_time > TIMEOUT:
- print('Timeout break!')
- break
- if global_state["temporal_params"]["stop"] or global_state['generator_params']["stop"]:
- break
-
-                # do the drag step here
- renderer._render_drag_impl(
- global_state['generator_params'],
- p_to_opt, # point
- t_to_opt, # target
- drag_mask, # mask,
- global_state['params']['motion_lambda'], # lambda_mask
- reg=0,
- feature_idx=5, # NOTE: do not support change for now
- r1=global_state['params']['r1_in_pixels'], # r1
- r2=global_state['params']['r2_in_pixels'], # r2
- # random_seed = 0,
- # noise_mode = 'const',
- trunc_psi=global_state['params']['trunc_psi'],
- # force_fp32 = False,
- # layer_name = None,
- # sel_channels = 3,
- # base_channel = 0,
- # img_scale_db = 0,
- # img_normalize = False,
- # untransform = False,
- is_drag=True,
- to_pil=True)
-
- if step_idx % global_state['draw_interval'] == 0:
- print('Current Source:')
- for key_point, p_i, t_i in zip(valid_points, p_to_opt,
- t_to_opt):
- global_state["points"][key_point]["start_temp"] = [
- p_i[1],
- p_i[0],
- ]
- global_state["points"][key_point]["target"] = [
- t_i[1],
- t_i[0],
- ]
- start_temp = global_state["points"][key_point][
- "start_temp"]
- print(f' {start_temp}')
-
- image_result = global_state['generator_params']['image']
- image_draw = update_image_draw(
- image_result,
- global_state['points'],
- global_state['mask'],
- global_state['show_mask'],
- global_state,
- )
- global_state['images']['image_raw'] = image_result
-
- yield (
- global_state,
- step_idx,
- global_state['images']['image_show'],
- # gr.File.update(visible=False),
- gr.Button.update(interactive=False),
- gr.Button.update(interactive=False),
- gr.Button.update(interactive=False),
- gr.Button.update(interactive=False),
- gr.Button.update(interactive=False),
- # latent space
- gr.Radio.update(interactive=False),
- gr.Button.update(interactive=False),
- # enable stop button in loop
- gr.Button.update(interactive=True),
-
- # update other comps
- gr.Dropdown.update(interactive=False),
- gr.Number.update(interactive=False),
- gr.Number.update(interactive=False),
- gr.Button.update(interactive=False),
- gr.Button.update(interactive=False),
- gr.Checkbox.update(interactive=False),
- # gr.Number.update(interactive=False),
- gr.Number.update(interactive=False),
- )
-
-                # increment step
- step_idx += 1
-
- image_result = global_state['generator_params']['image']
- global_state['images']['image_raw'] = image_result
- image_draw = update_image_draw(image_result,
- global_state['points'],
- global_state['mask'],
- global_state['show_mask'],
- global_state)
-
- # fp = NamedTemporaryFile(suffix=".png", delete=False)
- # image_result.save(fp, "PNG")
-
- global_state['editing_state'] = 'add_points'
-
- yield (
- global_state,
- 0, # reset step to 0 after stop.
- global_state['images']['image_show'],
- # gr.File.update(visible=True, value=fp.name),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- gr.Button.update(interactive=True),
- # latent space
- gr.Radio.update(interactive=True),
- gr.Button.update(interactive=True),
- # NOTE: disable stop button with loop finish
- gr.Button.update(interactive=False),
-
- # update other comps
- gr.Dropdown.update(interactive=True),
- gr.Number.update(interactive=True),
- gr.Number.update(interactive=True),
- gr.Checkbox.update(interactive=True),
- gr.Number.update(interactive=True),
- )
-
- form_start_btn.click(
- on_click_start,
- inputs=[global_state, form_image],
- outputs=[
- global_state,
- form_steps_number,
- form_image,
- # form_download_result_file,
- # >>> buttons
- form_reset_image,
- enable_add_points,
- enable_add_mask,
- undo_points,
- form_reset_mask_btn,
- form_latent_space,
- form_start_btn,
- form_stop_btn,
-            # <<< buttons
- # >>> inputs comps
- form_pretrained_dropdown,
- form_seed_number,
- form_lr_number,
- show_mask,
- form_lambda_number,
- ],
- )
-
- def on_click_stop(global_state):
-        """Function to handle the stop button being clicked.
-        1. Send a stop signal by setting global_state["temporal_params"]["stop"] to True
-        2. Disable the Stop button
- """
- global_state["temporal_params"]["stop"] = True
-
- return global_state, gr.Button.update(interactive=False)
-
- form_stop_btn.click(on_click_stop,
- inputs=[global_state],
- outputs=[global_state, form_stop_btn],
- queue=False)
-
- form_draw_interval_number.change(
- partial(
- on_change_single_global_state,
- "draw_interval",
- map_transform=lambda x: int(x),
- ),
- inputs=[form_draw_interval_number, global_state],
- outputs=[global_state],
- queue=False,
- )
-
- def on_click_remove_point(global_state):
- choice = global_state["curr_point"]
- del global_state["points"][choice]
-
- choices = list(global_state["points"].keys())
-
- if len(choices) > 0:
- global_state["curr_point"] = choices[0]
-
- return (
- gr.Dropdown.update(choices=choices, value=choices[0]),
- global_state,
- )
-
- # Mask
- def on_click_reset_mask(global_state):
- global_state['mask'] = np.ones(
- (
- global_state["images"]["image_raw"].size[1],
- global_state["images"]["image_raw"].size[0],
- ),
- dtype=np.uint8,
- )
- image_draw = update_image_draw(global_state['images']['image_raw'],
- global_state['points'],
- global_state['mask'],
- global_state['show_mask'], global_state)
- return global_state, image_draw
-
- form_reset_mask_btn.click(
- on_click_reset_mask,
- inputs=[global_state],
- outputs=[global_state, form_image],
- )
-
- # Image
- def on_click_enable_draw(global_state, image):
- """Function to start add mask mode.
- 1. Preprocess mask info from last state
- 2. Change editing state to add_mask
- 3. Set curr image with points and mask
- """
- global_state = preprocess_mask_info(global_state, image)
- global_state['editing_state'] = 'add_mask'
- image_raw = global_state['images']['image_raw']
- image_draw = update_image_draw(image_raw, global_state['points'],
- global_state['mask'], True,
- global_state)
- return (global_state,
- gr.Image.update(value=image_draw, interactive=True))
-
- def on_click_remove_draw(global_state, image):
- """Function to start remove mask mode.
- 1. Preprocess mask info from last state
- 2. Change editing state to remove_mask
- 3. Set curr image with points and mask
- """
- global_state = preprocess_mask_info(global_state, image)
-        global_state['editing_state'] = 'remove_mask'
- image_raw = global_state['images']['image_raw']
- image_draw = update_image_draw(image_raw, global_state['points'],
- global_state['mask'], True,
- global_state)
- return (global_state,
- gr.Image.update(value=image_draw, interactive=True))
-
- enable_add_mask.click(on_click_enable_draw,
- inputs=[global_state, form_image],
- outputs=[
- global_state,
- form_image,
- ],
- queue=False)
-
- def on_click_add_point(global_state, image: dict):
- """Function switch from add mask mode to add points mode.
-        1. Update the mask buffer if needed
- 2. Change global_state['editing_state'] to 'add_points'
- 3. Set current image with mask
- """
-
- global_state = preprocess_mask_info(global_state, image)
- global_state['editing_state'] = 'add_points'
- mask = global_state['mask']
- image_raw = global_state['images']['image_raw']
- image_draw = update_image_draw(image_raw, global_state['points'], mask,
- global_state['show_mask'], global_state)
-
- return (global_state,
- gr.Image.update(value=image_draw, interactive=False))
-
- enable_add_points.click(on_click_add_point,
- inputs=[global_state, form_image],
- outputs=[global_state, form_image],
- queue=False)
-
- def on_click_image(global_state, evt: gr.SelectData):
-        """This function only supports clicks for point selection.
- """
- xy = evt.index
- if global_state['editing_state'] != 'add_points':
- print(f'In {global_state["editing_state"]} state. '
- 'Do not add points.')
-
- return global_state, global_state['images']['image_show']
-
- points = global_state["points"]
-
- point_idx = get_latest_points_pair(points)
- if point_idx is None:
- points[0] = {'start': xy, 'target': None}
- print(f'Click Image - Start - {xy}')
- elif points[point_idx].get('target', None) is None:
- points[point_idx]['target'] = xy
- print(f'Click Image - Target - {xy}')
- else:
- points[point_idx + 1] = {'start': xy, 'target': None}
- print(f'Click Image - Start - {xy}')
-
- image_raw = global_state['images']['image_raw']
- image_draw = update_image_draw(
- image_raw,
- global_state['points'],
- global_state['mask'],
- global_state['show_mask'],
- global_state,
- )
-
- return global_state, image_draw
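The pairing logic in `on_click_image` alternates clicks between starting a new point pair and completing the previous one. Sketched in isolation (`record_click` is a hypothetical name; the demo's `get_latest_points_pair` is assumed to return the largest integer key):

```python
# Standalone sketch of the click-pairing state machine used by
# on_click_image above. Pairs are stored in a dict keyed by
# consecutive integers; record_click is a hypothetical helper name.
def record_click(points, xy):
    latest = max(points) if points else None
    if latest is None:
        # first click ever: open pair 0
        points[0] = {'start': xy, 'target': None}
    elif points[latest].get('target') is None:
        # second click of a pair: set its target
        points[latest]['target'] = xy
    else:
        # previous pair is complete: open a new one
        points[latest + 1] = {'start': xy, 'target': None}
    return points
```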
-
- form_image.select(
- on_click_image,
- inputs=[global_state],
- outputs=[global_state, form_image],
- queue=False,
- )
-
- def on_click_clear_points(global_state):
-        """Function to handle clearing all control points.
-        1. clear global_state['points'] (clear_state)
-        2. re-init the network
-        3. re-draw the image
- """
- clear_state(global_state, target='point')
-
- renderer: Renderer = global_state["renderer"]
- renderer.feat_refs = None
-
- image_raw = global_state['images']['image_raw']
- image_draw = update_image_draw(image_raw, {}, global_state['mask'],
- global_state['show_mask'], global_state)
- return global_state, image_draw
-
- undo_points.click(on_click_clear_points,
- inputs=[global_state],
- outputs=[global_state, form_image],
- queue=False)
-
- def on_click_show_mask(global_state, show_mask):
- """Function to control whether show mask on image."""
- global_state['show_mask'] = show_mask
-
- image_raw = global_state['images']['image_raw']
- image_draw = update_image_draw(
- image_raw,
- global_state['points'],
- global_state['mask'],
- global_state['show_mask'],
- global_state,
- )
- return global_state, image_draw
-
- show_mask.change(
- on_click_show_mask,
- inputs=[global_state, show_mask],
- outputs=[global_state, form_image],
- queue=False,
- )
-
-print("SHAReD: Start app", parser.parse_args())
-gr.close_all()
-app.queue(concurrency_count=1, max_size=200, api_open=False)
-app.launch(share=args.share, show_api=False)
diff --git a/spaces/Duskfallcrew/darkstorm2150-Protogen_x5.8_Official_Release/README.md b/spaces/Duskfallcrew/darkstorm2150-Protogen_x5.8_Official_Release/README.md
deleted file mode 100644
index d8031b7c92ecbc3293669a2e3d97441a3c894314..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/darkstorm2150-Protogen_x5.8_Official_Release/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Darkstorm2150-Protogen X5.8 Official Release
-emoji: 👀
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Duskfallcrew/newdreambooth-toclone/train_dreambooth.py b/spaces/Duskfallcrew/newdreambooth-toclone/train_dreambooth.py
deleted file mode 100644
index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/newdreambooth-toclone/train_dreambooth.py
+++ /dev/null
@@ -1,889 +0,0 @@
-import argparse
-import itertools
-import math
-import os
-from pathlib import Path
-from typing import Optional
-import subprocess
-import sys
-import gc
-import random
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-from torch.utils.data import Dataset
-
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.optimization import get_scheduler
-from huggingface_hub import HfFolder, Repository, whoami
-from PIL import Image
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import CLIPTextModel, CLIPTokenizer
-
-
-logger = get_logger(__name__)
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- #required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--instance_data_dir",
- type=str,
- default=None,
- #required=True,
- help="A folder containing the training data of instance images.",
- )
- parser.add_argument(
- "--class_data_dir",
- type=str,
- default=None,
- #required=False,
- help="A folder containing the training data of class images.",
- )
- parser.add_argument(
- "--instance_prompt",
- type=str,
- default=None,
- help="The prompt with identifier specifying the instance",
- )
- parser.add_argument(
- "--class_prompt",
- type=str,
- default="",
- help="The prompt to specify images in the same class as provided instance images.",
- )
- parser.add_argument(
- "--with_prior_preservation",
- default=False,
- action="store_true",
- help="Flag to add prior preservation loss.",
- )
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
- parser.add_argument(
- "--num_class_images",
- type=int,
- default=100,
- help=(
-            "Minimal number of class images for the prior preservation loss. If there are not enough images,"
-            " additional images will be sampled with class_prompt."
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=512,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
- )
- parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
- parser.add_argument(
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument(
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
- )
- parser.add_argument("--num_train_epochs", type=int, default=1)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-6,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default="no",
- choices=["no", "fp16", "bf16"],
- help=(
-            "Whether to use mixed precision. Choose "
-            "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10 "
-            "and an Nvidia Ampere GPU."
- ),
- )
-
- parser.add_argument(
- "--save_n_steps",
- type=int,
- default=1,
- help=("Save the model every n global_steps"),
- )
-
-
- parser.add_argument(
- "--save_starting_step",
- type=int,
- default=1,
- help=("The step from which it starts saving intermediary checkpoints"),
- )
-
- parser.add_argument(
- "--stop_text_encoder_training",
- type=int,
- default=1000000,
- help=("The step at which the text_encoder is no longer trained"),
- )
-
-
- parser.add_argument(
- "--image_captions_filename",
- action="store_true",
- help="Get captions from filename",
- )
-
-
- parser.add_argument(
- "--dump_only_text_encoder",
- action="store_true",
- default=False,
- help="Dump only text encoder",
- )
-
- parser.add_argument(
- "--train_only_unet",
- action="store_true",
- default=False,
- help="Train only the unet",
- )
-
- parser.add_argument(
- "--cache_latents",
- action="store_true",
- default=False,
-        help="Cache the VAE latents",
- )
-
- parser.add_argument(
- "--Session_dir",
- type=str,
- default="",
- help="Current session directory",
- )
-
-
-
-
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
-
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- #if args.instance_data_dir is None:
- # raise ValueError("You must specify a train data directory.")
-
- #if args.with_prior_preservation:
- # if args.class_data_dir is None:
- # raise ValueError("You must specify a data directory for class images.")
- # if args.class_prompt is None:
- # raise ValueError("You must specify prompt for class images.")
-
- return args
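The `--with_prior_preservation` and `--prior_loss_weight` flags above combine an instance loss with a class-image loss later in the training loop (not shown in this excerpt). A numpy sketch of that weighting, under the assumption that instance and class samples are concatenated along the batch dimension:

```python
import numpy as np

# Hedged sketch (not the script's exact code) of how --prior_loss_weight
# combines the instance and prior-preservation MSE terms. The batch is
# assumed to hold instance samples first, class samples second.
def dreambooth_loss(model_pred, target, prior_loss_weight):
    pred, pred_prior = np.split(model_pred, 2, axis=0)
    tgt, tgt_prior = np.split(target, 2, axis=0)
    instance_loss = np.mean((pred - tgt) ** 2)
    prior_loss = np.mean((pred_prior - tgt_prior) ** 2)
    return instance_loss + prior_loss_weight * prior_loss
```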
-
-
-class DreamBoothDataset(Dataset):
- """
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
-    It pre-processes the images and tokenizes the prompts.
- """
-
- def __init__(
- self,
- instance_data_root,
- instance_prompt,
- tokenizer,
- args,
- class_data_root=None,
- class_prompt=None,
- size=512,
- center_crop=False,
- ):
- self.size = size
- self.center_crop = center_crop
- self.tokenizer = tokenizer
- self.image_captions_filename = None
-
- self.instance_data_root = Path(instance_data_root)
- if not self.instance_data_root.exists():
-            raise ValueError("Instance images root doesn't exist.")
-
- self.instance_images_path = list(Path(instance_data_root).iterdir())
- self.num_instance_images = len(self.instance_images_path)
- self.instance_prompt = instance_prompt
- self._length = self.num_instance_images
-
- if args.image_captions_filename:
- self.image_captions_filename = True
-
- if class_data_root is not None:
- self.class_data_root = Path(class_data_root)
- self.class_data_root.mkdir(parents=True, exist_ok=True)
- self.class_images_path = list(self.class_data_root.iterdir())
- random.shuffle(self.class_images_path)
- self.num_class_images = len(self.class_images_path)
- self._length = max(self.num_class_images, self.num_instance_images)
- self.class_prompt = class_prompt
- else:
- self.class_data_root = None
-
- self.image_transforms = transforms.Compose(
- [
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, index):
- example = {}
- path = self.instance_images_path[index % self.num_instance_images]
- instance_image = Image.open(path)
- if not instance_image.mode == "RGB":
- instance_image = instance_image.convert("RGB")
-
- instance_prompt = self.instance_prompt
-
- if self.image_captions_filename:
- filename = Path(path).stem
-            pt = ''.join(ch for ch in filename if not ch.isdigit())
-            pt = pt.replace("_", " ").replace("(", "").replace(")", "").replace("-", "")
-            instance_prompt = pt
-            sys.stdout.write("\033[0;32m" + instance_prompt + "\033[0m")
-            sys.stdout.flush()
-
-
- example["instance_images"] = self.image_transforms(instance_image)
- example["instance_prompt_ids"] = self.tokenizer(
- instance_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- if self.class_data_root:
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
- if not class_image.mode == "RGB":
- class_image = class_image.convert("RGB")
- example["class_images"] = self.image_transforms(class_image)
- example["class_prompt_ids"] = self.tokenizer(
- self.class_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- return example
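When `args.image_captions_filename` is set, `__getitem__` above derives the prompt from each image's file name: digits are stripped, underscores become spaces, and parentheses and hyphens are removed. A minimal standalone sketch of that cleaning step (the `filename_to_prompt` name and the final `strip()` are mine, not part of the script):

```python
from pathlib import Path

def filename_to_prompt(path: str) -> str:
    """Turn an image file name into a caption: drop digits, then
    replace underscores with spaces and strip (), - characters."""
    stem = Path(path).stem
    prompt = ''.join(ch for ch in stem if not ch.isdigit())
    for old, new in (("_", " "), ("(", ""), (")", ""), ("-", "")):
        prompt = prompt.replace(old, new)
    return prompt.strip()

print(filename_to_prompt("cute_cat-01.png"))  # cute cat
print(filename_to_prompt("sks_dog_(3).jpg"))  # sks dog
```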
-
-
-
-class PromptDataset(Dataset):
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
-
- def __init__(self, prompt, num_samples):
- self.prompt = prompt
- self.num_samples = num_samples
-
- def __len__(self):
- return self.num_samples
-
- def __getitem__(self, index):
- example = {}
- example["prompt"] = self.prompt
- example["index"] = index
- return example
-
-class LatentsDataset(Dataset):
- def __init__(self, latents_cache, text_encoder_cache):
- self.latents_cache = latents_cache
- self.text_encoder_cache = text_encoder_cache
-
- def __len__(self):
- return len(self.latents_cache)
-
- def __getitem__(self, index):
- return self.latents_cache[index], self.text_encoder_cache[index]
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
- """
- Starts from base starting dict and then adds the remaining key values from updater replacing the values from
- the first starting/base dict with the second updater dict.
-
- For later: how does d = {**d1, **d2} replace collision?
-
- :param starting_dict:
- :param updater_dict:
- :return:
- """
- new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
- new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict
- return new_dict
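The open question about how `d = {**d1, **d2}` resolves collisions has a simple answer: the right-most mapping wins, exactly like `dict.update`, which is what `merge_two_dicts` relies on. A quick sketch (the key names are made up for illustration):

```python
d1 = {"lr": 1e-4, "steps": 1000}
d2 = {"lr": 5e-5, "seed": 42}

merged = {**d1, **d2}  # on collision, the right-most dict wins
print(merged)          # {'lr': 5e-05, 'steps': 1000, 'seed': 42}

# dict.update() behaves identically:
via_update = d1.copy()
via_update.update(d2)
assert via_update == merged
```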
-
-def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
- """
-
- ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
- :param args1:
- :param args2:
- :return:
- """
- # - the merged args
- # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}.
- merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
- args = argparse.Namespace(**merged_key_values_for_namespace)
- return args
-
-def run_training(args_imported):
- args_default = parse_args()
- args = merge_args(args_default, args_imported)
- print(args)
- logging_dir = Path(args.output_dir, args.logging_dir)
- i=args.save_starting_step
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with="tensorboard",
- logging_dir=logging_dir,
- )
-
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
- # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
- # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
- if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1:
- raise ValueError(
- "Gradient accumulation is not supported when training the text encoder in distributed training. "
- "Please set gradient_accumulation_steps to 1. This feature will be supported in the future."
- )
-
- if args.seed is not None:
- set_seed(args.seed)
-
- if args.with_prior_preservation:
- class_images_dir = Path(args.class_data_dir)
- if not class_images_dir.exists():
- class_images_dir.mkdir(parents=True)
- cur_class_images = len(list(class_images_dir.iterdir()))
-
- if cur_class_images < args.num_class_images:
- torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path, torch_dtype=torch_dtype
- )
- pipeline.set_progress_bar_config(disable=True)
-
- num_new_images = args.num_class_images - cur_class_images
- logger.info(f"Number of class images to sample: {num_new_images}.")
-
- sample_dataset = PromptDataset(args.class_prompt, num_new_images)
- sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
-
- sample_dataloader = accelerator.prepare(sample_dataloader)
- pipeline.to(accelerator.device)
-
- for example in tqdm(
- sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
- ):
- with torch.autocast("cuda"):
- images = pipeline(example["prompt"]).images
-
- for i, image in enumerate(images):
- image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg")
-
- del pipeline
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- repo = Repository(args.output_dir, clone_from=repo_name)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- # Load the tokenizer
- if args.tokenizer_name:
- tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
- elif args.pretrained_model_name_or_path:
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-
- # Load models and create wrapper for stable diffusion
- if args.train_only_unet:
- if os.path.exists(str(args.output_dir+"/text_encoder_trained")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained")
- elif os.path.exists(str(args.output_dir+"/text_encoder")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
- unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
- if is_xformers_available():
- try:
- print("Enabling memory efficient attention with xformers...")
- unet.enable_xformers_memory_efficient_attention()
- except Exception as e:
- logger.warning(
- f"Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: {e}"
- )
- vae.requires_grad_(False)
- if not args.train_text_encoder:
- text_encoder.requires_grad_(False)
-
- if args.gradient_checkpointing:
- unet.enable_gradient_checkpointing()
- if args.train_text_encoder:
- text_encoder.gradient_checkpointing_enable()
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
- if args.use_8bit_adam:
- try:
- import bitsandbytes as bnb
- except ImportError:
- raise ImportError(
- "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
- )
-
- optimizer_class = bnb.optim.AdamW8bit
- else:
- optimizer_class = torch.optim.AdamW
-
- params_to_optimize = (
- itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters()
- )
- optimizer = optimizer_class(
- params_to_optimize,
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler")
-
- train_dataset = DreamBoothDataset(
- instance_data_root=args.instance_data_dir,
- instance_prompt=args.instance_prompt,
- class_data_root=args.class_data_dir if args.with_prior_preservation else None,
- class_prompt=args.class_prompt,
- tokenizer=tokenizer,
- size=args.resolution,
- center_crop=args.center_crop,
- args=args,
- )
-
- def collate_fn(examples):
- input_ids = [example["instance_prompt_ids"] for example in examples]
- pixel_values = [example["instance_images"] for example in examples]
-
- # Concat class and instance examples for prior preservation.
- # We do this to avoid doing two forward passes.
- if args.with_prior_preservation:
- input_ids += [example["class_prompt_ids"] for example in examples]
- pixel_values += [example["class_images"] for example in examples]
-
- pixel_values = torch.stack(pixel_values)
- pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
-
- input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids
-
- batch = {
- "input_ids": input_ids,
- "pixel_values": pixel_values,
- }
- return batch
-
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- if args.train_text_encoder:
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler
- )
- else:
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, optimizer, train_dataloader, lr_scheduler
- )
-
- weight_dtype = torch.float32
- if args.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif args.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move text_encode and vae to gpu.
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- vae.to(accelerator.device, dtype=weight_dtype)
- if not args.train_text_encoder:
- text_encoder.to(accelerator.device, dtype=weight_dtype)
-
-
- if args.cache_latents:
- latents_cache = []
- text_encoder_cache = []
- for batch in tqdm(train_dataloader, desc="Caching latents"):
- with torch.no_grad():
- batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype)
- batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True)
- latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
- if args.train_text_encoder:
- text_encoder_cache.append(batch["input_ids"])
- else:
- text_encoder_cache.append(text_encoder(batch["input_ids"])[0])
- train_dataset = LatentsDataset(latents_cache, text_encoder_cache)
- train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True)
-
- del vae
- #if not args.train_text_encoder:
- # del text_encoder
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initializes automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("dreambooth", config=vars(args))
-
-    def bar(prg):
-        br = '|' + '█' * prg + ' ' * (25 - prg) + '|'
-        return br
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num batches each epoch = {len(train_dataloader)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- global_step = 0
-
- for epoch in range(args.num_train_epochs):
- unet.train()
- if args.train_text_encoder:
- text_encoder.train()
- for step, batch in enumerate(train_dataloader):
- with accelerator.accumulate(unet):
- # Convert images to latent space
- with torch.no_grad():
- if args.cache_latents:
- latents_dist = batch[0][0]
- else:
- latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist
- latents = latents_dist.sample() * 0.18215
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # Get the text embedding for conditioning
-                if args.cache_latents:
- if args.train_text_encoder:
- encoder_hidden_states = text_encoder(batch[0][1])[0]
- else:
- encoder_hidden_states = batch[0][1]
- else:
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
- # Predict the noise residual
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- if args.with_prior_preservation:
- # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
- target, target_prior = torch.chunk(target, 2, dim=0)
-
- # Compute instance loss
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
-
- # Compute prior loss
- prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
- # Add the prior loss to the instance loss.
- loss = loss + args.prior_loss_weight * prior_loss
- else:
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- accelerator.backward(loss)
- if accelerator.sync_gradients:
- params_to_clip = (
- itertools.chain(unet.parameters(), text_encoder.parameters())
- if args.train_text_encoder
- else unet.parameters()
- )
- accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- fll=round((global_step*100)/args.max_train_steps)
- fll=round(fll/4)
- pr=bar(fll)
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- progress_bar.set_description_str("Progress:"+pr)
- accelerator.log(logs, step=global_step)
-
- if global_step >= args.max_train_steps:
- break
-
- if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30:
- if accelerator.is_main_process:
- print(" [0;32m" +" Freezing the text_encoder ..."+" [0m")
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if os.path.exists(frz_dir):
- subprocess.call('rm -r '+ frz_dir, shell=True)
- os.mkdir(frz_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(frz_dir)
-
- if args.save_n_steps >= 200:
- if global_step < args.max_train_steps and global_step+1==i:
- ckpt_name = "_step_" + str(global_step+1)
- save_dir = Path(args.output_dir+ckpt_name)
- save_dir=str(save_dir)
- save_dir=save_dir.replace(" ", "_")
- if not os.path.exists(save_dir):
- os.mkdir(save_dir)
- inst=save_dir[16:]
- inst=inst.replace(" ", "_")
- print(" [1;32mSAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt")
- # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(save_dir)
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True)
- subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True)
- chkpth=args.Session_dir+"/"+inst+".ckpt"
- subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True)
- subprocess.call('rm -r '+ save_dir, shell=True)
- i=i+args.save_n_steps
-
- accelerator.wait_for_everyone()
-
-    # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- if args.dump_only_text_encoder:
- txt_dir=args.output_dir + "/text_encoder_trained"
- if not os.path.exists(txt_dir):
- os.mkdir(txt_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(txt_dir)
-
- elif args.train_only_unet:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(args.output_dir)
- txt_dir=args.output_dir + "/text_encoder_trained"
- subprocess.call('rm -r '+txt_dir, shell=True)
-
- else:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- frz_dir=args.output_dir + "/text_encoder_frozen"
- pipeline.save_pretrained(args.output_dir)
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True)
- subprocess.call('rm -r '+ frz_dir, shell=True)
-
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
-
- accelerator.end_training()
- del pipeline
- torch.cuda.empty_cache()
- gc.collect()
-if __name__ == "__main__":
- pass
- #main()
-
diff --git a/spaces/Ekimetrics/Biomap/biomap/utils.py b/spaces/Ekimetrics/Biomap/biomap/utils.py
deleted file mode 100644
index 1a68c46b2b4c4025b665938fbb89030ae3df1161..0000000000000000000000000000000000000000
--- a/spaces/Ekimetrics/Biomap/biomap/utils.py
+++ /dev/null
@@ -1,653 +0,0 @@
-import collections
-import os
-from os.path import join
-import io
-
-import datetime
-
-from dateutil.relativedelta import relativedelta
-import matplotlib.pyplot as plt
-import numpy as np
-import torch.multiprocessing
-import torch.nn as nn
-import torch.nn.functional as F
-import wget
-from PIL import Image
-from scipy.optimize import linear_sum_assignment
-from torch._six import string_classes
-from torch.utils.data._utils.collate import np_str_obj_array_pattern, default_collate_err_msg_format
-from torchmetrics import Metric
-from torchvision import models
-from torchvision import transforms as T
-from torch.utils.tensorboard.summary import hparams
-import matplotlib as mpl
-
-import plotly.graph_objects as go
-import plotly.express as px
-from plotly.subplots import make_subplots
-
-os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
-
-colors = ("red", "palegreen", "green", "steelblue", "blue", "yellow", "lightgrey")
-class_names = ('Buildings', 'Cultivation', 'Natural green', 'Wetland', 'Water', 'Infrastructure', 'Background')
-mapping_class = {
- "Buildings": 1,
- "Cultivation": 2,
- "Natural green": 3,
- "Wetland": 4,
- "Water": 5,
- "Infrastructure": 6,
- "Background": 0,
-}
-
-score_attribution = {
- "Buildings" : 0.,
- "Cultivation": 0.3,
- "Natural green": 1.,
- "Wetland": 0.9,
- "Water": 0.9,
- "Infrastructure": 0.,
- "Background": 0.
-}
-bounds = list(np.arange(len(mapping_class.keys()) + 1) + 1)
-cmap = mpl.colors.ListedColormap(colors)
-norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
-
-def compute_biodiv_score(class_image):
- """Compute the biodiversity score of an image
-
- Args:
- image (_type_): _description_
-
- Returns:
- biodiversity_score: the biodiversity score associated to the landscape of the image
- """
- score_matrice = class_image.copy().astype(int)
- for key in mapping_class.keys():
- score_matrice = np.where(score_matrice==mapping_class[key], score_attribution[key], score_matrice)
- number_of_pixel = np.prod(list(score_matrice.shape))
- score = np.sum(score_matrice)/number_of_pixel
- score_details = {
- key: np.sum(np.where(class_image == mapping_class[key], 1, 0))
- for key in mapping_class.keys()
- if key not in ["background"]
- }
- return score, score_details
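`compute_biodiv_score` is effectively a per-pixel table lookup followed by a mean. The sketch below reproduces that on a toy 2×2 class map with a cut-down mapping (the two-class tables and the `np.vectorize` shortcut are mine; the script chains `np.where` calls instead):

```python
import numpy as np

# toy versions of the script's mapping_class / score_attribution tables
class_id = {"Buildings": 1, "Water": 5}
weight = {"Buildings": 0.0, "Water": 0.9}
id_to_weight = {class_id[k]: weight[k] for k in class_id}

class_map = np.array([[5, 5],
                      [1, 1]])  # two Water pixels, two Buildings pixels

# mean per-pixel biodiversity weight: (0.9 + 0.9 + 0.0 + 0.0) / 4
score = np.vectorize(id_to_weight.get)(class_map).mean()
print(score)  # 0.45

# pixel counts per class, as in score_details
details = {k: int((class_map == class_id[k]).sum()) for k in class_id}
print(details)  # {'Buildings': 2, 'Water': 2}
```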
-
-def plot_image(months, imgs, imgs_label, nb_values, scores, title="Single Date"):
- fig2 = px.imshow(np.array(imgs), animation_frame=0, binary_string=True)
- fig3 = px.imshow(np.array(imgs_label), animation_frame=0, binary_string=True)
-
- # Scores
- fig = make_subplots(
- rows=1, cols=4,
- specs=[[{"type": "image"},{"type": "image"}, {"type": "pie"}, {"type": "indicator"}]],
- subplot_titles=("Localisation visualization", "Labeled visualisation", "Segments repartition", "Biodiversity scores")
- )
-
- fig.add_trace(fig2["frames"][0]["data"][0], row=1, col=1)
- fig.add_trace(fig3["frames"][0]["data"][0], row=1, col=2)
-
- fig.add_trace(go.Pie(labels = class_names,
- values = [nb_values[0][key] for key in mapping_class.keys()],
- marker_colors = colors,
- name="Segment repartition",
- textposition='inside',
- texttemplate = "%{percent:.0%}",
- textfont_size=14
- ),
- row=1, col=3)
-
-
- fig.add_trace(go.Indicator(value=scores[0]), row=1, col=4)
- fig.update_layout(
- legend=dict(
- xanchor = "center",
- yanchor="top",
- y=-0.1,
- x = 0.5,
- orientation="h")
- )
- fig.update(
- layout={
- "xaxis": {
- "range": [0,imgs[0].shape[1]+1/100000],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
-
- "yaxis": {
- "range": [imgs[0].shape[0]+1/100000,0],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at y=0
- 'visible': False,},
- "xaxis1": {
- "range": [0,imgs[0].shape[1]+1/100000],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
-
- "yaxis1": {
- "range": [imgs[0].shape[0]+1/100000,0],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at y=0
- 'visible': False,}
-
- },)
- fig.update_xaxes(row=1, col=2, visible=False)
- fig.update_yaxes(row=1, col=2, visible=False)
- fig.update_layout(title=title, title_x=0.5, title_xanchor="center")
-
- return fig
-
-def plot_imgs_labels(months, imgs, imgs_label, nb_values, scores, title="TimeLapse"):
- fig2 = px.imshow(np.array(imgs), animation_frame=0, binary_string=True)
- fig3 = px.imshow(np.array(imgs_label), animation_frame=0, binary_string=True)
-
- # Scores
- scatters = [
- go.Scatter(
- x=months[:i+1],
- y=scores[:i+1],
- mode="lines+markers+text",
- marker_color="black",
- text = [f"{score:.2f}" for score in scores[:i+1]],
- textposition="top center"
- ) for i in range(len(scores))
- ]
-
- # Scores
- fig = make_subplots(
- rows=1, cols=4,
- specs=[[{"type": "image"},{"type": "image"}, {"type": "pie"}, {"type": "scatter"}]],
- subplot_titles=("Localisation visualization", "Labeled visualisation", "Segments repartition", "Biodiversity scores")
- )
-
- fig.add_trace(fig2["frames"][0]["data"][0], row=1, col=1)
- fig.add_trace(fig3["frames"][0]["data"][0], row=1, col=2)
-
- fig.add_trace(go.Pie(labels = class_names,
- values = [nb_values[0][key] for key in mapping_class.keys()],
- marker_colors = colors,
- name="Segment repartition",
- textposition='inside',
- texttemplate = "%{percent:.0%}",
- textfont_size=14
- ),
- row=1, col=3)
-
-
- fig.add_trace(scatters[0], row=1, col=4)
- fig.update_traces(selector=dict(type='scatter'))
-
- number_frames = len(imgs)
- frames = [dict(
- name = k,
- data = [ fig2["frames"][k]["data"][0],
- fig3["frames"][k]["data"][0],
- go.Pie(labels = class_names,
- values = [nb_values[k][key] for key in mapping_class.keys()],
- marker_colors = colors,
- name="Segment repartition",
- textposition='inside',
- texttemplate = "%{percent:.0%}",
- textfont_size=14
- ),
- scatters[k]
- ],
- traces=[0, 1, 2, 3] # the elements of the list [0,1,2] give info on the traces in fig.data
- # that are updated by the above three go.Scatter instances
- ) for k in range(number_frames)]
-
- updatemenus = [dict(type='buttons',
- buttons=[dict(label='Play',
- method='animate',
- args=[[f'{k}' for k in range(number_frames)],
- dict(frame=dict(duration=500, redraw=False),
- transition=dict(duration=0),
- easing='linear',
- fromcurrent=True,
- mode='immediate'
- )])],
- direction= 'left',
- pad=dict(t=85),
- showactive =True, x= 0.1, y= 0.13, xanchor= 'right', yanchor= 'top')
- ]
-
- sliders = [{'yanchor': 'top',
- 'xanchor': 'left',
- 'currentvalue': {'font': {'size': 16}, 'prefix': 'Frame: ', 'visible': False, 'xanchor': 'right'},
- 'transition': {'duration': 500.0, 'easing': 'linear'},
- 'pad': {'b': 10, 't': 50},
- 'len': 0.9, 'x': 0.1, 'y': 0,
- 'steps': [{'args': [[k], {'frame': {'duration': 500.0, 'easing': 'linear', 'redraw': False},
- 'transition': {'duration': 0, 'easing': 'linear'}}],
- 'label': months[k], 'method': 'animate'} for k in range(number_frames)
- ]}]
-
-
- fig.update(frames=frames)
-
- for i,fr in enumerate(fig["frames"]):
- fr.update(
- layout={
- "xaxis": {
- "range": [0,imgs[0].shape[1]+i/100000],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
- "yaxis": {
- "range": [imgs[0].shape[0]+i/100000,0],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
- "xaxis1": {
- "range": [0,imgs[0].shape[1]+i/100000],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
- "yaxis1": {
- "range": [imgs[0].shape[0]+i/100000,0],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
- })
-
- start_date = datetime.datetime.strptime(months[0], "%Y-%m-%d") - relativedelta(months=1)
- end_date = datetime.datetime.strptime(months[-1], "%Y-%m-%d") + relativedelta(months=1)
- interval = [start_date.strftime("%Y-%m-%d"),end_date.strftime("%Y-%m-%d")]
- fig.update(
- layout={
- "xaxis": {
- "range": [0,imgs[0].shape[1]+i/100000],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
-
- "yaxis": {
- "range": [imgs[0].shape[0]+i/100000,0],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at y=0
- 'visible': False,},
-
- "xaxis2": {
- "range": [0,imgs[0].shape[1]+i/100000],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at x=0
- 'visible': False, # numbers below
- },
-
- "yaxis2": {
- "range": [imgs[0].shape[0]+i/100000,0],
- 'showgrid': False, # thin lines in the background
- 'zeroline': False, # thick line at y=0
- 'visible': False,},
-
-
- "xaxis3": {
- "dtick":"M3",
- "range":interval
- },
- "yaxis3": {
- 'range': [min(scores)*0.9, max(scores)* 1.1],
- 'showgrid': False,
- 'zeroline': False,
- 'visible': True
- }
- }
- )
-
-
- fig.update_layout(updatemenus=updatemenus,
- sliders=sliders,
- legend=dict(
- xanchor = "center",
- yanchor="top",
- y=-0.1,
- x = 0.5,
- orientation="h")
- )
-
-
- fig.update_layout(margin=dict(b=0, r=0))
- fig.update_layout(title=title, title_x=0.5, title_xanchor="center")
- return fig
-
-
-
-
-
-def transform_to_pil(output, alpha=0.3):
- # Transform img with torch
- img = torch.moveaxis(prep_for_plot(output['img']),-1,0)
- img=T.ToPILImage()(img)
-
- cmaplist = np.array([np.array(cmap(i)) for i in range(cmap.N)])
- labels = np.array(output['linear_preds'])-1
- label = T.ToPILImage()((cmaplist[labels]*255).astype(np.uint8))
-
- # Overlay labels with img wit alpha
- background = img.convert("RGBA")
- overlay = label.convert("RGBA")
-
- labeled_img = Image.blend(background, overlay, alpha)
-
- return img, label, labeled_img
-
-
-def prep_for_plot(img, rescale=True, resize=None):
- if resize is not None:
- img = F.interpolate(img.unsqueeze(0), resize, mode="bilinear")
- else:
- img = img.unsqueeze(0)
-
- plot_img = unnorm(img).squeeze(0).cpu().permute(1, 2, 0)
- if rescale:
- plot_img = (plot_img - plot_img.min()) / (plot_img.max() - plot_img.min())
- return plot_img
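The `rescale` branch above is a plain min-max normalisation to [0, 1]. The same operation in NumPy terms (a sketch; `rescale_01` is a hypothetical helper, and it assumes the array is not constant so the denominator is non-zero):

```python
import numpy as np

def rescale_01(x: np.ndarray) -> np.ndarray:
    """Min-max normalise an array to the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

img = np.array([[-1.0, 0.0], [1.0, 3.0]])
out = rescale_01(img)
print(out.min(), out.max())  # 0.0 1.0
```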
-
-
-def add_plot(writer, name, step):
- buf = io.BytesIO()
- plt.savefig(buf, format='jpeg', dpi=100)
- buf.seek(0)
- image = Image.open(buf)
- image = T.ToTensor()(image)
- writer.add_image(name, image, step)
- plt.clf()
- plt.close()
-
-
-@torch.jit.script
-def shuffle(x):
- return x[torch.randperm(x.shape[0])]
-
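The jitted `shuffle` permutes rows by indexing with `torch.randperm`. A torch-free sketch of the same idea using only the stdlib (function name is ours):

```python
import random

def shuffle_list(x, rng=random.Random(0)):
    # Draw a random permutation of the indices, then reindex,
    # mirroring x[torch.randperm(x.shape[0])] above.
    perm = rng.sample(range(len(x)), len(x))
    return [x[i] for i in perm]

print(shuffle_list([10, 20, 30, 40]))
```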
-
-def add_hparams_fixed(writer, hparam_dict, metric_dict, global_step):
- exp, ssi, sei = hparams(hparam_dict, metric_dict)
- writer.file_writer.add_summary(exp)
- writer.file_writer.add_summary(ssi)
- writer.file_writer.add_summary(sei)
- for k, v in metric_dict.items():
- writer.add_scalar(k, v, global_step)
-
-
-@torch.jit.script
-def resize(classes: torch.Tensor, size: int):
- return F.interpolate(classes, (size, size), mode="bilinear", align_corners=False)
-
-
-def one_hot_feats(labels, n_classes):
- return F.one_hot(labels, n_classes).permute(0, 3, 1, 2).to(torch.float32)
-
-
-def load_model(model_type, data_dir):
- if model_type == "robust_resnet50":
- model = models.resnet50(pretrained=False)
- model_file = join(data_dir, 'imagenet_l2_3_0.pt')
- if not os.path.exists(model_file):
- wget.download("http://6.869.csail.mit.edu/fa19/psets19/pset6/imagenet_l2_3_0.pt",
- model_file)
- model_weights = torch.load(model_file)
- model_weights_modified = {name.split('model.')[1]: value for name, value in model_weights['model'].items() if
- 'model' in name}
- model.load_state_dict(model_weights_modified)
- model = nn.Sequential(*list(model.children())[:-1])
- elif model_type == "densecl":
- model = models.resnet50(pretrained=False)
- model_file = join(data_dir, 'densecl_r50_coco_1600ep.pth')
- if not os.path.exists(model_file):
- wget.download("https://cloudstor.aarnet.edu.au/plus/s/3GapXiWuVAzdKwJ/download",
- model_file)
- model_weights = torch.load(model_file)
- # model_weights_modified = {name.split('model.')[1]: value for name, value in model_weights['model'].items() if
- # 'model' in name}
- model.load_state_dict(model_weights['state_dict'], strict=False)
- model = nn.Sequential(*list(model.children())[:-1])
- elif model_type == "resnet50":
- model = models.resnet50(pretrained=True)
- model = nn.Sequential(*list(model.children())[:-1])
- elif model_type == "mocov2":
- model = models.resnet50(pretrained=False)
- model_file = join(data_dir, 'moco_v2_800ep_pretrain.pth.tar')
- if not os.path.exists(model_file):
- wget.download("https://dl.fbaipublicfiles.com/moco/moco_checkpoints/"
- "moco_v2_800ep/moco_v2_800ep_pretrain.pth.tar", model_file)
- checkpoint = torch.load(model_file)
- # rename moco pre-trained keys
- state_dict = checkpoint['state_dict']
- for k in list(state_dict.keys()):
- # retain only encoder_q up to before the embedding layer
- if k.startswith('module.encoder_q') and not k.startswith('module.encoder_q.fc'):
- # remove prefix
- state_dict[k[len("module.encoder_q."):]] = state_dict[k]
- # delete renamed or unused k
- del state_dict[k]
- msg = model.load_state_dict(state_dict, strict=False)
- assert set(msg.missing_keys) == {"fc.weight", "fc.bias"}
- model = nn.Sequential(*list(model.children())[:-1])
- elif model_type == "densenet121":
- model = models.densenet121(pretrained=True)
- model = nn.Sequential(*list(model.children())[:-1] + [nn.AdaptiveAvgPool2d((1, 1))])
- elif model_type == "vgg11":
- model = models.vgg11(pretrained=True)
- model = nn.Sequential(*list(model.children())[:-1] + [nn.AdaptiveAvgPool2d((1, 1))])
- else:
- raise ValueError("No model: {} found".format(model_type))
-
- model.eval()
- model.cuda()
- return model
-
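The MoCo v2 branch of `load_model` renames checkpoint keys by stripping the `module.encoder_q.` prefix and dropping the projection head. That renaming in isolation, on a toy state dict:

```python
def strip_prefix(state_dict, prefix="module.encoder_q."):
    # Keep only encoder_q weights (minus the fc head), with the prefix removed,
    # as in the key-renaming loop above.
    out = {}
    for k, v in state_dict.items():
        if k.startswith(prefix) and not k.startswith(prefix + "fc"):
            out[k[len(prefix):]] = v
    return out

sd = {"module.encoder_q.conv1.weight": 1,
      "module.encoder_q.fc.weight": 2,
      "queue": 3}
print(strip_prefix(sd))  # {'conv1.weight': 1}
```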
-
-class UnNormalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, image):
- image2 = torch.clone(image)
- for t, m, s in zip(image2, self.mean, self.std):
- t.mul_(s).add_(m)
- return image2
-
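`UnNormalize` inverts `T.Normalize` channel-wise: `x * std + mean` undoes `(x - mean) / std`. A scalar round-trip check using the same ImageNet statistics as the module-level `normalize`/`unnorm` below:

```python
MEAN = [0.485, 0.456, 0.406]  # ImageNet channel means, as used above
STD = [0.229, 0.224, 0.225]   # ImageNet channel stds

def normalize_px(px):
    # T.Normalize: (x - mean) / std per channel
    return [(c - m) / s for c, m, s in zip(px, MEAN, STD)]

def unnormalize_px(px):
    # UnNormalize: x * std + mean per channel
    return [c * s + m for c, m, s in zip(px, MEAN, STD)]

print(unnormalize_px(normalize_px([0.5, 0.25, 0.75])))
```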
-
-normalize = T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-unnorm = UnNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-
-
-class ToTargetTensor(object):
- def __call__(self, target):
- return torch.as_tensor(np.array(target), dtype=torch.int64).unsqueeze(0)
-
-
-def prep_args():
- import sys
-
- old_args = sys.argv
- new_args = [old_args.pop(0)]
- while len(old_args) > 0:
- arg = old_args.pop(0)
- if len(arg.split("=")) == 2:
- new_args.append(arg)
- elif arg.startswith("--"):
- new_args.append(arg[2:] + "=" + old_args.pop(0))
- else:
- raise ValueError("Unexpected arg style {}".format(arg))
- sys.argv = new_args
-
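`prep_args` rewrites `--flag value` pairs in `sys.argv` into `flag=value` tokens. A side-effect-free sketch of the same rewrite (the function name is ours):

```python
def normalize_args(argv):
    # Turn ["prog", "--lr", "0.1", "bs=32"] into ["prog", "lr=0.1", "bs=32"],
    # mirroring prep_args above; assumes every --flag takes exactly one value.
    out = [argv[0]]
    rest = list(argv[1:])
    while rest:
        arg = rest.pop(0)
        if len(arg.split("=")) == 2:
            out.append(arg)
        elif arg.startswith("--"):
            out.append(arg[2:] + "=" + rest.pop(0))
        else:
            raise ValueError("Unexpected arg style {}".format(arg))
    return out

print(normalize_args(["train.py", "--lr", "0.1", "batch=32"]))
# ['train.py', 'lr=0.1', 'batch=32']
```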
-
-def get_transform(res, is_label, crop_type):
- if crop_type == "center":
- cropper = T.CenterCrop(res)
- elif crop_type == "random":
- cropper = T.RandomCrop(res)
- elif crop_type is None:
- cropper = T.Lambda(lambda x: x)
- res = (res, res)
- else:
- raise ValueError("Unknown Cropper {}".format(crop_type))
- if is_label:
- return T.Compose([T.Resize(res, Image.NEAREST),
- cropper,
- ToTargetTensor()])
- else:
- return T.Compose([T.Resize(res, Image.NEAREST),
- cropper,
- T.ToTensor(),
- normalize])
-
-
-def _remove_axes(ax):
- ax.xaxis.set_major_formatter(plt.NullFormatter())
- ax.yaxis.set_major_formatter(plt.NullFormatter())
- ax.set_xticks([])
- ax.set_yticks([])
-
-
-def remove_axes(axes):
- if len(axes.shape) == 2:
- for ax1 in axes:
- for ax in ax1:
- _remove_axes(ax)
- else:
- for ax in axes:
- _remove_axes(ax)
-
-
-class UnsupervisedMetrics(Metric):
- def __init__(self, prefix: str, n_classes: int, extra_clusters: int, compute_hungarian: bool,
- dist_sync_on_step=True):
- # call `self.add_state`for every internal state that is needed for the metrics computations
- # dist_reduce_fx indicates the function that should be used to reduce
- # state from multiple processes
- super().__init__(dist_sync_on_step=dist_sync_on_step)
-
- self.n_classes = n_classes
- self.extra_clusters = extra_clusters
- self.compute_hungarian = compute_hungarian
- self.prefix = prefix
- self.add_state("stats",
- default=torch.zeros(n_classes + self.extra_clusters, n_classes, dtype=torch.int64),
- dist_reduce_fx="sum")
-
- def update(self, preds: torch.Tensor, target: torch.Tensor):
- with torch.no_grad():
- actual = target.reshape(-1)
- preds = preds.reshape(-1)
- mask = (actual >= 0) & (actual < self.n_classes) & (preds >= 0) & (preds < self.n_classes)
- actual = actual[mask]
- preds = preds[mask]
- self.stats += torch.bincount(
- (self.n_classes + self.extra_clusters) * actual + preds,
- minlength=self.n_classes * (self.n_classes + self.extra_clusters)) \
- .reshape(self.n_classes, self.n_classes + self.extra_clusters).t().to(self.stats.device)
-
- def map_clusters(self, clusters):
- if self.extra_clusters == 0:
- return torch.tensor(self.assignments[1])[clusters]
- else:
- missing = sorted(list(set(range(self.n_classes + self.extra_clusters)) - set(self.assignments[0])))
- cluster_to_class = self.assignments[1]
- for missing_entry in missing:
- if missing_entry == cluster_to_class.shape[0]:
- cluster_to_class = np.append(cluster_to_class, -1)
- else:
- cluster_to_class = np.insert(cluster_to_class, missing_entry + 1, -1)
- cluster_to_class = torch.tensor(cluster_to_class)
- return cluster_to_class[clusters]
-
- def compute(self):
- if self.compute_hungarian:
- self.assignments = linear_sum_assignment(self.stats.detach().cpu(), maximize=True)
- # print(self.assignments)
- if self.extra_clusters == 0:
- self.histogram = self.stats[np.argsort(self.assignments[1]), :]
- if self.extra_clusters > 0:
- self.assignments_t = linear_sum_assignment(self.stats.detach().cpu().t(), maximize=True)
- histogram = self.stats[self.assignments_t[1], :]
- missing = list(set(range(self.n_classes + self.extra_clusters)) - set(self.assignments[0]))
- new_row = self.stats[missing, :].sum(0, keepdim=True)
- histogram = torch.cat([histogram, new_row], axis=0)
- new_col = torch.zeros(self.n_classes + 1, 1, device=histogram.device)
- self.histogram = torch.cat([histogram, new_col], axis=1)
- else:
- self.assignments = (torch.arange(self.n_classes).unsqueeze(1),
- torch.arange(self.n_classes).unsqueeze(1))
- self.histogram = self.stats
-
- tp = torch.diag(self.histogram)
- fp = torch.sum(self.histogram, dim=0) - tp
- fn = torch.sum(self.histogram, dim=1) - tp
-
- iou = tp / (tp + fp + fn)
- prc = tp / (tp + fn)
- opc = torch.sum(tp) / torch.sum(self.histogram)
-
- metric_dict = {self.prefix + "mIoU": iou[~torch.isnan(iou)].mean().item(),
- self.prefix + "Accuracy": opc.item()}
- return {k: 100 * v for k, v in metric_dict.items()}
-
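`compute` derives per-class IoU from the aligned histogram: `tp` on the diagonal, `fp` as column sums minus `tp`, `fn` as row sums minus `tp`, then averages the non-NaN IoUs and scales by 100. The same arithmetic in pure Python on a nested-list histogram (classes with an empty denominator are skipped, matching the NaN filter):

```python
def miou(hist):
    # hist[i][j]: count of pixels with aligned cluster i and true class j,
    # as in self.histogram above.
    n = len(hist)
    ious = []
    for c in range(n):
        tp = hist[c][c]
        fp = sum(hist[r][c] for r in range(n)) - tp  # column sum minus tp
        fn = sum(hist[c]) - tp                       # row sum minus tp
        denom = tp + fp + fn
        if denom > 0:
            ious.append(tp / denom)
    return 100 * sum(ious) / len(ious)

print(miou([[3, 1], [0, 4]]))
```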
-
-def flexible_collate(batch):
- r"""Puts each data field into a tensor with outer dimension batch size"""
-
- elem = batch[0]
- elem_type = type(elem)
- if isinstance(elem, torch.Tensor):
- out = None
- if torch.utils.data.get_worker_info() is not None:
- # If we're in a background process, concatenate directly into a
- # shared memory tensor to avoid an extra copy
- numel = sum([x.numel() for x in batch])
- storage = elem.storage()._new_shared(numel)
- out = elem.new(storage)
- try:
- return torch.stack(batch, 0, out=out)
- except RuntimeError:
- return batch
- elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
- and elem_type.__name__ != 'string_':
- if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap':
- # array of string classes and object
- if np_str_obj_array_pattern.search(elem.dtype.str) is not None:
- raise TypeError(default_collate_err_msg_format.format(elem.dtype))
-
- return flexible_collate([torch.as_tensor(b) for b in batch])
- elif elem.shape == (): # scalars
- return torch.as_tensor(batch)
- elif isinstance(elem, float):
- return torch.tensor(batch, dtype=torch.float64)
- elif isinstance(elem, int):
- return torch.tensor(batch)
- elif isinstance(elem, string_classes):
- return batch
- elif isinstance(elem, collections.abc.Mapping):
- return {key: flexible_collate([d[key] for d in batch]) for key in elem}
- elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple
- return elem_type(*(flexible_collate(samples) for samples in zip(*batch)))
- elif isinstance(elem, collections.abc.Sequence):
- # check to make sure that the elements in batch have consistent size
- it = iter(batch)
- elem_size = len(next(it))
- if not all(len(elem) == elem_size for elem in it):
- raise RuntimeError('each element in list of batch should be of equal size')
- transposed = zip(*batch)
- return [flexible_collate(samples) for samples in transposed]
-
- raise TypeError(default_collate_err_msg_format.format(elem_type))
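`flexible_collate` recurses over mappings and sequences and stacks tensors at the leaves, falling back to the raw batch when stacking fails. A torch-free sketch of just the recursive structure, with leaves kept as plain lists:

```python
def collate(batch):
    # Minimal recursive collate over dicts and equal-length sequences,
    # mirroring the structure of flexible_collate above.
    elem = batch[0]
    if isinstance(elem, dict):
        return {k: collate([d[k] for d in batch]) for k in elem}
    if isinstance(elem, (list, tuple)):
        if not all(len(e) == len(elem) for e in batch):
            raise RuntimeError("each element in list of batch should be of equal size")
        return [collate(list(samples)) for samples in zip(*batch)]
    return batch  # scalar leaf: keep as the batched list

print(collate([{"x": [1, 2]}, {"x": [3, 4]}]))  # {'x': [[1, 3], [2, 4]]}
```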
diff --git a/spaces/Ekimetrics/climate-question-answering/utils.py b/spaces/Ekimetrics/climate-question-answering/utils.py
deleted file mode 100644
index ddc12a212abbc735fe244784c2bfbb298c37b28d..0000000000000000000000000000000000000000
--- a/spaces/Ekimetrics/climate-question-answering/utils.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import numpy as np
-import random
-import string
-import uuid
-
-
-def create_user_id():
- """Create user_id
- str: String to id user
- """
- user_id = str(uuid.uuid4())
- return user_id
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_pipelines/maskrcnn_pipeline.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_pipelines/maskrcnn_pipeline.py
deleted file mode 100644
index fff3e071ea115843752f34de8141fa982b8ad14b..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_pipelines/maskrcnn_pipeline.py
+++ /dev/null
@@ -1,57 +0,0 @@
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-train_pipeline = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='ScaleAspectJitter',
- img_scale=None,
- keep_ratio=False,
- resize_type='indep_sample_in_range',
- scale_range=(640, 2560)),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(
- type='RandomCropInstances',
- target_size=(640, 640),
- mask_type='union_all',
- instance_key='gt_masks'),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-
-# for ctw1500
-img_scale_ctw1500 = (1600, 1600)
-test_pipeline_ctw1500 = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale_ctw1500, # used by Resize
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-
-# for icdar2015
-img_scale_icdar2015 = (1920, 1920)
-test_pipeline_icdar2015 = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale_icdar2015, # used by Resize
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
diff --git a/spaces/FineLong/stabilityai-stable-diffusion-2/app.py b/spaces/FineLong/stabilityai-stable-diffusion-2/app.py
deleted file mode 100644
index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000
--- a/spaces/FineLong/stabilityai-stable-diffusion-2/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2").launch()
\ No newline at end of file
diff --git a/spaces/FireFrame/werz/style.css b/spaces/FireFrame/werz/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/FireFrame/werz/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/how to export onnx.md b/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/how to export onnx.md
deleted file mode 100644
index 6d22719fd1a8e9d034e6224cc95f4b50d44a0320..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/how to export onnx.md
+++ /dev/null
@@ -1,4 +0,0 @@
-- Open [onnx_export](onnx_export.py)
-- Change `project_name = "dddsp"` to your own project name
-- Change `model_path = f'{project_name}/model_500000.pt'` to your model path
-- Run the script
\ No newline at end of file
diff --git a/spaces/GIZ/embedding_visualisation/app.py b/spaces/GIZ/embedding_visualisation/app.py
deleted file mode 100644
index 1da40fed774a1c93b6031f6872f6484e65980ad9..0000000000000000000000000000000000000000
--- a/spaces/GIZ/embedding_visualisation/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import apps.sdg_pd as sdg_tab
-import apps.intro as intro
-import apps.similarity as similarity
-
-# import appStore.check_site as check_site
-from apps.multiapp import MultiApp
-import streamlit as st
-
-st.set_page_config(f'Embedding Visualisator (Sentence Transformer)',
- layout="wide",
- initial_sidebar_state="expanded")
-
-app = MultiApp()
-app.add_app("Intro", intro.app)
-app.add_app("SDG", sdg_tab.app)
-app.add_app("Similarity", similarity.app)
-
-app.run()
\ No newline at end of file
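`MultiApp` is assumed here to be a simple registry of (title, render-function) pairs that `app.run()` dispatches on. A minimal sketch of that assumed interface, without Streamlit:

```python
class MultiApp:
    # Hypothetical reconstruction of the apps.multiapp.MultiApp interface
    # used above; the real class also renders a sidebar selector.
    def __init__(self):
        self.apps = []

    def add_app(self, title, func):
        self.apps.append({"title": title, "function": func})

    def titles(self):
        return [a["title"] for a in self.apps]

app = MultiApp()
app.add_app("Intro", lambda: None)
app.add_app("SDG", lambda: None)
print(app.titles())  # ['Intro', 'SDG']
```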
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_wheel.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_wheel.py
deleted file mode 100644
index fca85bf43650e70fc2959cc41f3b4515e914930e..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_wheel.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class BuildWheel(Task):
- """Construct a wheel using blocks and a sphere."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 10
- self.lang_template = "Construct a wheel using blocks and a sphere. First, position eight blocks in a circular layout on the tabletop. Each block should be touching its two neighbors and colored in alternating red and blue. Then place a green sphere in the center of the circular layout, completing the wheel."
- self.task_completed_desc = "done building wheel."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add blocks.
- block_size = (0.04, 0.04, 0.04)
- block_urdf = 'block/block.urdf'
- block_colors = [utils.COLORS['red'], utils.COLORS['blue']]
- blocks = []
- for i in range(8):
- block_pose = self.get_random_pose(env, block_size)
- block_id = env.add_object(block_urdf, block_pose, color=block_colors[i % 2])
- blocks.append(block_id)
-
- # Add sphere.
- sphere_size = (0.04, 0.04, 0.04)
- sphere_urdf = 'sphere/sphere.urdf'
- sphere_color = utils.COLORS['green']
- sphere_pose = ((0.5, 0.0, 0.0), (0,0,0,1)) # fixed pose
- sphere_id = env.add_object(sphere_urdf, sphere_pose, color=sphere_color)
-
- # Goal: blocks are arranged in a circle and sphere is in the center.
- circle_radius = 0.1
- circle_center = (0, 0, block_size[2] / 2)
- angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
- block_poses = [(circle_center[0] + circle_radius * np.cos(angle),
- circle_center[1] + circle_radius * np.sin(angle),
- circle_center[2]) for angle in angles]
- block_poses = [(utils.apply(sphere_pose, pos), sphere_pose[1]) for pos in block_poses]
- self.add_goal(objs=blocks, matches=np.ones((8, 8)), targ_poses=block_poses, replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=8 / 9, language_goal=self.lang_template)
-
- # Goal: sphere is in the center of the blocks.
- self.add_goal(objs=[sphere_id], matches=np.ones((1, 1)), targ_poses=[sphere_pose], replace=False,
- rotations=False, metric='pose', params=None, step_max_reward=1 / 9, language_goal=self.lang_template)
\ No newline at end of file
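The circular layout in `BuildWheel.reset` places block `i` at angle `2*pi*i/8` on a radius-0.1 circle around the sphere. The geometry in isolation:

```python
import math

def circle_layout(n=8, radius=0.1, z=0.02):
    # Evenly spaced positions on a circle, as computed with np.linspace above.
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n),
             z) for i in range(n)]

poses = circle_layout()
print(poses[0])  # (0.1, 0.0, 0.02)
```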
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/palletizing_boxes.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/palletizing_boxes.py
deleted file mode 100644
index a045826ea261b676ca2fdcf315808969b21fdda7..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/palletizing_boxes.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import os
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import pybullet as p
-
-
-class PalletizingBoxes(Task):
- """Pick up homogeneous fixed-sized boxes and stack them in transposed layers on the pallet."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 30
- self.lang_template = "stack all the boxes on the pallet"
- self.task_completed_desc = "done stacking boxes."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add pallet.
- zone_size = (0.3, 0.25, 0.25)
- zone_urdf = 'pallet/pallet.urdf'
- rotation = utils.eulerXYZ_to_quatXYZW((0, 0, 0))
- zone_pose = ((0.5, 0.25, 0.02), rotation)
- env.add_object(zone_urdf, zone_pose, 'fixed')
-
- # Add stack of boxes on pallet.
- margin = 0.01
- object_ids = []
-
- # x, y, z dimensions for the asset size
- stack_size = (0.19, 0.19, 0.19)
- box_template = 'box/box-template.urdf'
- stack_dim = np.int32([2, 3, 3])
-
- box_size = (stack_size - (stack_dim - 1) * margin) / stack_dim
- for z in range(stack_dim[2]):
-
- # Transpose every layer.
- stack_dim[0], stack_dim[1] = stack_dim[1], stack_dim[0]
- box_size[0], box_size[1] = box_size[1], box_size[0]
-
- # IMPORTANT: Compute object points and store as a dictionary for the `goal`
- for y in range(stack_dim[1]):
- for x in range(stack_dim[0]):
- position = list((x + 0.5, y + 0.5, z + 0.5) * box_size)
- position[0] += x * margin - stack_size[0] / 2
- position[1] += y * margin - stack_size[1] / 2
- position[2] += z * margin + 0.03
- pose = (position, (0, 0, 0, 1))
- pose = utils.multiply(zone_pose, pose)
-
- # IMPORTANT: REPLACE THE TEMPLATE URDF
- urdf = self.fill_template(box_template, {'DIM': box_size})
- box_id = env.add_object(urdf, pose)
- object_ids.append(box_id)
- self.color_random_brown(box_id)
-
- # Randomly select top box on pallet and save ground truth pose.
- targets = []
- self.steps = []
- boxes = object_ids[:] # make copy
- while boxes:
- _, height, object_mask = self.get_true_image(env)
- top = np.argwhere(height > (np.max(height) - 0.03))
- rpixel = top[int(np.floor(np.random.random() * len(top)))] # y, x
- box_id = int(object_mask[rpixel[0], rpixel[1]])
- if box_id in boxes:
- position, rotation = p.getBasePositionAndOrientation(box_id)
- rposition = np.float32(position) + np.float32([0, -10, 0])
- p.resetBasePositionAndOrientation(box_id, rposition, rotation)
- self.steps.append(box_id)
- targets.append((position, rotation))
- boxes.remove(box_id)
-
- self.steps.reverse() # Time-reversed depalletizing.
- self.add_goal(objs=object_ids, matches=np.eye(len(object_ids)), targ_poses=targets, replace=False,
- rotations=True, metric='zone', params=[(zone_pose, zone_size)], step_max_reward=1, language_goal=self.lang_template)
- self.spawn_box()
-
- def reward(self):
- reward, info = super().reward()
- self.spawn_box()
- return reward, info
\ No newline at end of file
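`PalletizingBoxes.reset` swaps the x/y box counts at the top of every layer so consecutive layers interlock. The transposition on its own:

```python
def layer_dims(stack_dim, num_layers=3):
    # Per-layer (x, y) box counts with the swap done at the top of
    # each z iteration in reset() above.
    dims = []
    dx, dy = stack_dim
    for _ in range(num_layers):
        dx, dy = dy, dx  # transpose every layer
        dims.append((dx, dy))
    return dims

print(layer_dims((2, 3)))  # [(3, 2), (2, 3), (3, 2)]
```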
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/__init__.py
deleted file mode 100644
index 95e34a848652f2ab3ca6d3489aa2934d24817888..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from .approx_max_iou_assigner import ApproxMaxIoUAssigner
-from .assign_result import AssignResult
-from .atss_assigner import ATSSAssigner
-from .base_assigner import BaseAssigner
-from .center_region_assigner import CenterRegionAssigner
-from .grid_assigner import GridAssigner
-from .hungarian_assigner import HungarianAssigner
-from .max_iou_assigner import MaxIoUAssigner
-from .point_assigner import PointAssigner
-from .region_assigner import RegionAssigner
-
-__all__ = [
- 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult',
- 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner',
- 'HungarianAssigner', 'RegionAssigner'
-]
diff --git a/spaces/GroNLP/divemt_explorer/app.py b/spaces/GroNLP/divemt_explorer/app.py
deleted file mode 100644
index ddb20e568be869f2a286127c0f05fda5ba36c866..0000000000000000000000000000000000000000
--- a/spaces/GroNLP/divemt_explorer/app.py
+++ /dev/null
@@ -1,123 +0,0 @@
-from datasets import load_dataset
-import streamlit as st
-import urllib
-import math
-from inseq import FeatureAttributionOutput
-
-st.set_page_config(layout="wide")
-
-dataset = load_dataset("GroNLP/divemt")
-attribution_path = "https://huggingface.co/datasets/inseq/divemt_attributions/resolve/main/divemt-attributions/{lang}/{idx}_{lang}_gradl2_{setting}_{sentence_type}.json.gz"
-df = dataset["train"].to_pandas()
-unique_src = df[["item_id", "src_text"]].drop_duplicates(subset="item_id").rename(columns={"item_id": "Item ID", "src_text": "Source text"})
-langs = list(df["lang_id"].unique())
-st.title("DivEMT Explorer 🔍 🌍")
-st.markdown("""
-##### The DivEMT Explorer is a tool to explore translations, edits and errors in the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).
-
-The table below shows the 430 source sentences taken from Flores-101 and translated into six typologically diverse languages to build the DivEMT corpus. When you find a sentence you would like to inspect closely, insert its numeric id (between 0 and 429) in the box below, and select all the available languages you want to use for visualizing the results.
-
-Inside every language section, you will find the translations for all the available settings, alongside aligned edits and all collected metadata. You can filter the settings to see only cases you are interested in. In the **Attributions** section, you can find attribution maps computed using the [Inseq library](https://github.com/inseq-team/inseq) and the mBART model.
-""")
-
-divemt_to_spacy_lang_map = {
- "ara": "ar",
- "nld": "nl",
- "ita": "it",
- "tur": "tr",
- "ukr": "uk",
- "vie": "vi",
-}
-
-divemt_to_labels_lang_map = {
- "ara": "Arabic",
- "nld": "Dutch",
- "ita": "Italian",
- "tur": "Turkish",
- "ukr": "Ukrainian",
- "vie": "Vietnamese",
-}
-
-st.dataframe(
- unique_src,
-)
-col1_main, col2_main, _ = st.columns([1,1,3])
-with col1_main:
- item_id = st.number_input(
- 'Select an item (0-429) to inspect',
- min_value=0,
- max_value=len(unique_src) - 1,
- )
-with col2_main:
- langs = st.multiselect(
- 'Select languages',
- options=langs,
- format_func=lambda x: divemt_to_labels_lang_map[x],
- )
-st.markdown("##### Source text")
-st.markdown("##### " + unique_src.iloc[int(item_id)]["Source text"] + " ", unsafe_allow_html=True)
-task_names = ["From Scratch (HT)", "Google PE (PE1)", "mBART PE (PE2)"]
-for lang in langs:
- st.markdown(f"## {divemt_to_labels_lang_map[lang]}")
- c1, _ = st.columns([1.5,1])
- with c1:
- tasks = st.multiselect(
- 'Select settings',
- options=task_names,
- default=task_names,
- key=f"{lang}_tasks"
- )
- #columns = st.columns(len(tasks))
- lang_data = df[(df["item_id"] == unique_src.iloc[int(item_id)]["Item ID"]) & (df["lang_id"] == lang)]
- lang_dicts = lang_data.to_dict("records")
- ht = [x for x in lang_dicts if x["task_type"] == "ht"][0]
- pe1 = [x for x in lang_dicts if x["task_type"] == "pe1"][0]
- pe2 = [x for x in lang_dicts if x["task_type"] == "pe2"][0]
- task_dict = {k:v for k,v in zip(task_names, [ht, pe1, pe2])}
- max_mt_length = max([len(x["mt_text"]) for x in lang_dicts if x["mt_text"] is not None])
- for task_name, dic in zip(tasks, [task_dict[name] for name in tasks]):
- with st.expander(f"{task_name}"):
- st.markdown(f"### {task_name}")
- st.markdown(f"Translator : {dic['subject_id']}", unsafe_allow_html=True)
- mt_text = dic["mt_text"]
- if mt_text is None:
- mt_text = "" + "".join(["O " for i in range(max_mt_length // 2)]) + " "
- st.markdown(f"MT : {'' if lang == 'ara' else ''}{mt_text if mt_text != 'nan' else 'N/A'}{' ' if lang == 'ara' else ''}", unsafe_allow_html=True)
- st.markdown(f"PE : {'' if lang == 'ara' else ''}{dic['tgt_text']}{' ' if lang == 'ara' else ''}", unsafe_allow_html=True)
- st.markdown(f"Aligned edits :", unsafe_allow_html=True)
- if dic["aligned_edit"] != "nan":
- aligned_edit = dic["aligned_edit"]
- if lang == 'ara' and len(dic["aligned_edit"].split("EVAL: ")) == 2:
- edits_reverse = aligned_edit.split("EVAL: ")[1]
-                    # the "- 10" offset is a hack that makes things aligned most of the time, grounded in empirical observation only
- edits_reverse = edits_reverse + " " * ((len(aligned_edit.split("\\n")[0]) - len(edits_reverse)) - 10)
- aligned_edit = aligned_edit.split("EVAL: ")[0] + "EVAL: " + edits_reverse[::-1]
- aligned_edit = aligned_edit.replace("\\n", "\n").replace("REF:", "MT :").replace("HYP:", "PE :")
- st.text(aligned_edit)
- else:
- st.text("MT : N/A\nPE : N/A\nEVAL: N/A\n")
- st.markdown(f"Metadata :", unsafe_allow_html=True)
- st.json({k:v for k,v in dic.items() if k not in ["src_text", "mt_text", "tgt_text", "aligned_edit"]}, expanded=False)
- st.markdown(f"Attributions :", unsafe_allow_html=True)
- if task_name != "From Scratch (HT)":
- setting = "pe1" if task_name == "Google PE (PE1)" else "pe2"
- st.markdown("Click on checkboxes to show/hide the respective attributions computed with mBART. ", unsafe_allow_html=True)
- for sentence_type in ["mt", "pe", "diff"]:
- url = attribution_path.format(idx=item_id, setting=setting, sentence_type=sentence_type, lang=divemt_to_spacy_lang_map[lang])
- try:
- g = urllib.request.urlopen(url)
- fpath = f"attr_{lang}_{sentence_type}.json.gz"
- with open(fpath, 'b+w') as f:
- f.write(g.read())
- attr = FeatureAttributionOutput.load(fpath, decompress=True)
- if st.checkbox(sentence_type.upper(), key=f"{lang}_{task_name}_{sentence_type}"):
- st.markdown(f"{attr.show(return_html=True, display=False, do_aggregation=False)}", unsafe_allow_html=True)
- except (urllib.error.HTTPError, urllib.error.URLError) as e:
- st.checkbox(sentence_type.upper() + " (NOT AVAILABLE)", key=f"{lang}_{task_name}_{sentence_type}", disabled=True)
- else:
- st.markdown("Attributions are available only for machine-translated outputs. ", unsafe_allow_html=True)
-st.markdown("", unsafe_allow_html=True)
-st.markdown("*Built by [Gabriele Sarti](https://gsarti.com)*")
-
-
-
\ No newline at end of file
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/WhisperPPGLarge.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/WhisperPPGLarge.py
deleted file mode 100644
index cab1ca646a1559c2a05b24ec38474408f27b3f08..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vencoder/WhisperPPGLarge.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import torch
-
-from vencoder.whisper.model import Whisper, ModelDimensions
-from vencoder.whisper.audio import pad_or_trim, log_mel_spectrogram
-
-
-class WhisperPPGLarge(SpeechEncoder):
-    def __init__(self, vec_path="pretrain/large-v2.pt", device=None):
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
-        checkpoint = torch.load(vec_path, map_location=self.dev)
- dims = ModelDimensions(**checkpoint["dims"])
- model = Whisper(dims)
- model.load_state_dict(checkpoint["model_state_dict"])
- self.hidden_dim = dims
- self.model = model.to(self.dev)
-
- def encoder(self, wav):
- audio = wav
- audln = audio.shape[0]
- ppgln = audln // 320
- audio = pad_or_trim(audio)
- mel = log_mel_spectrogram(audio).to(self.dev)
- with torch.no_grad():
- ppg = self.model.encoder(mel.unsqueeze(0)).squeeze().data.cpu().float().numpy()
- ppg = torch.FloatTensor(ppg[:ppgln,]).to(self.dev)
- return ppg[None,:,:].transpose(1, 2)
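`encoder` keeps one PPG frame per 320 waveform samples (`ppgln = audln // 320` — a 20 ms hop assuming 16 kHz audio). The frame-count arithmetic by itself:

```python
HOP = 320  # samples per PPG frame; 20 ms at 16 kHz (assumption)

def num_ppg_frames(num_samples):
    # Number of frames kept from the Whisper encoder output,
    # matching ppgln = audln // 320 above.
    return num_samples // HOP

print(num_ppg_frames(16000))  # 50
```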
diff --git a/spaces/HarlanHong/DaGAN/modules/discriminator.py b/spaces/HarlanHong/DaGAN/modules/discriminator.py
deleted file mode 100644
index 2f9fdfdba4a0c3ccb7206184bae8a8009e9fb621..0000000000000000000000000000000000000000
--- a/spaces/HarlanHong/DaGAN/modules/discriminator.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from torch import nn
-import torch.nn.functional as F
-from modules.util import kp2gaussian
-import torch
-import pdb
-
-class DownBlock2d(nn.Module):
- """
- Simple block for processing video (encoder).
- """
-
- def __init__(self, in_features, out_features, norm=False, kernel_size=4, pool=False, sn=False):
- super(DownBlock2d, self).__init__()
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size)
-
- if sn:
- self.conv = nn.utils.spectral_norm(self.conv)
-
- if norm:
- self.norm = nn.InstanceNorm2d(out_features, affine=True)
- else:
- self.norm = None
- self.pool = pool
-
- def forward(self, x):
- out = x
- out = self.conv(out)
- if self.norm:
- out = self.norm(out)
- out = F.leaky_relu(out, 0.2)
- if self.pool:
- out = F.avg_pool2d(out, (2, 2))
- return out
-
-
-class Discriminator(nn.Module):
- """
- Discriminator similar to Pix2Pix
- """
-
- def __init__(self, num_channels=3, block_expansion=64, num_blocks=4, max_features=512,
- sn=False, use_kp=False, num_kp=10, kp_variance=0.01, **kwargs):
- super(Discriminator, self).__init__()
-
- down_blocks = []
- for i in range(num_blocks):
- down_blocks.append(
- DownBlock2d(num_channels + num_kp * use_kp if i == 0 else min(max_features, block_expansion * (2 ** i)),
- min(max_features, block_expansion * (2 ** (i + 1))),
- norm=(i != 0), kernel_size=4, pool=(i != num_blocks - 1), sn=sn))
- self.down_blocks = nn.ModuleList(down_blocks)
- self.conv = nn.Conv2d(self.down_blocks[-1].conv.out_channels, out_channels=1, kernel_size=1)
- if sn:
- self.conv = nn.utils.spectral_norm(self.conv)
- self.use_kp = use_kp
- self.kp_variance = kp_variance
-
- def forward(self, x, kp=None):
- feature_maps = []
- out = x
- if self.use_kp:
- heatmap = kp2gaussian(kp, x.shape[2:], self.kp_variance)
- out = torch.cat([out, heatmap], dim=1)
- # print(out.shape)
- for down_block in self.down_blocks:
- feature_maps.append(down_block(out))
- out = feature_maps[-1]
- # print(out.shape)
- prediction_map = self.conv(out)
-
- return feature_maps, prediction_map
-
-
-class MultiScaleDiscriminator(nn.Module):
- """
- Multi-scale discriminator that applies one Discriminator per image scale
- """
-
- def __init__(self, scales=(), **kwargs):
- super(MultiScaleDiscriminator, self).__init__()
- self.scales = scales
- discs = {}
- for scale in scales:
- discs[str(scale).replace('.', '-')] = Discriminator(**kwargs)
- self.discs = nn.ModuleDict(discs)
-
- def forward(self, x, kp=None):
- out_dict = {}
- for scale, disc in self.discs.items():
- scale = str(scale).replace('-', '.')
- key = 'prediction_' + scale
- feature_maps, prediction_map = disc(x[key], kp)
- out_dict['feature_maps_' + scale] = feature_maps
- out_dict['prediction_map_' + scale] = prediction_map
- return out_dict
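A note on the key mangling above: `nn.ModuleDict` forbids `.` in its keys, so `MultiScaleDiscriminator` stores a scale such as `0.25` under the key `'0-25'` and maps it back when building output-dict keys. A minimal, dependency-free sketch of that round trip (names here are illustrative, not from the diff):

```python
# ModuleDict keys cannot contain '.', so numeric scales are encoded with '-'
# on the way in and decoded back to '.' when naming the outputs.

def scale_to_key(scale):
    """Encode a numeric scale as a ModuleDict-safe key, e.g. 0.25 -> '0-25'."""
    return str(scale).replace('.', '-')

def key_to_scale(key):
    """Decode a ModuleDict key back to the textual scale, e.g. '0-25' -> '0.25'."""
    return key.replace('-', '.')

keys = [scale_to_key(s) for s in (1, 0.5, 0.25)]
print(keys)                  # ['1', '0-5', '0-25']
print(key_to_scale('0-25'))  # '0.25'
```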
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh
deleted file mode 100644
index 9ecf1690c67f8a019009ef32d973fbd45b56c7ca..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/bin/bash
-
-split="dev_other"
-ref_data=""
-get_best_wer=true
-dec_name="decode"
-graph_name="graph"
-
-. ./cmd.sh
-. ./path.sh
-. parse_options.sh
-
-exp_root=$1
-
-set -eu
-
-echo "==== WER w.r.t. pseudo transcript"
-for x in $exp_root/*/${dec_name}_${split}*; do grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh; done
-
-
-if [ ! -z $ref_data ]; then
- echo "==== WER w.r.t. real transcript (select based on pseudo WER)"
- ref_txt=$ref_data/$split/text
- for x in $exp_root/*/${dec_name}_${split}*; do
- lang=$(dirname $x)/$graph_name
-
- lmwt=$(
- grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh |
- sed 's/.*wer_\(.*\)$/\1/g' | sed 's/_/./g'
- )
- tra=$x/scoring/$lmwt.tra
- cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:<UNK>::g' | sed 's:<SIL>::g' | \
- compute-wer --text --mode=present \
- ark:$ref_txt ark,p:- 2> /dev/null | grep WER | xargs -I{} echo {} $tra
- done
-fi
-
-if [ ! -z $ref_data ] && $get_best_wer; then
- echo "==== WER w.r.t. real transcript (select based on true WER)"
- ref_txt=$ref_data/$split/text
- for x in $exp_root/*/${dec_name}_${split}*; do
- lang=$(dirname $x)/$graph_name
-
- for tra in $x/scoring/*.tra; do
- cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:<UNK>::g' | sed 's:<SIL>::g' | \
- compute-wer --text --mode=present \
- ark:$ref_txt ark,p:- 2> /dev/null | grep WER | xargs -I{} echo {} $tra
- done | sort -k2n | head -n1
- done
-fi
-
-exit 0;
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/__init__.py
deleted file mode 100644
index be783be896396ff659c0bd173a7acebb8a2d165d..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/__init__.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-import importlib
-import os
-
-from fairseq import registry
-from fairseq.optim.bmuf import FairseqBMUF # noqa
-from fairseq.optim.fairseq_optimizer import ( # noqa
- FairseqOptimizer,
- LegacyFairseqOptimizer,
-)
-from fairseq.optim.amp_optimizer import AMPOptimizer
-from fairseq.optim.fp16_optimizer import FP16Optimizer, MemoryEfficientFP16Optimizer
-from fairseq.optim.shard import shard_
-from omegaconf import DictConfig
-
-__all__ = [
- "AMPOptimizer",
- "FairseqOptimizer",
- "FP16Optimizer",
- "MemoryEfficientFP16Optimizer",
- "shard_",
-]
-
-(
- _build_optimizer,
- register_optimizer,
- OPTIMIZER_REGISTRY,
- OPTIMIZER_DATACLASS_REGISTRY,
-) = registry.setup_registry("--optimizer", base_class=FairseqOptimizer, required=True)
-
-
-def build_optimizer(cfg: DictConfig, params, *extra_args, **extra_kwargs):
- if all(isinstance(p, dict) for p in params):
- params = [t for p in params for t in p.values()]
- params = list(filter(lambda p: p.requires_grad, params))
- return _build_optimizer(cfg, params, *extra_args, **extra_kwargs)
-
-
-# automatically import any Python files in the optim/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- file_name = file[: file.find(".py")]
- importlib.import_module("fairseq.optim." + file_name)
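The module above wires optimizers into fairseq's string-keyed registry: `registry.setup_registry("--optimizer", ...)` returns a `register_optimizer` decorator plus a `_build_optimizer` factory, and the `importlib` loop at the bottom imports every sibling module so their `@register_optimizer` decorators run. A self-contained sketch of that decorator-registry pattern (simplified, hypothetical names — the real fairseq version also handles dataclasses and config objects):

```python
# Minimal decorator-based registry: registering a class under a name so a
# factory can later construct it from a string, as fairseq's optim registry does.

REGISTRY = {}

def register(name):
    def wrapper(cls):
        if name in REGISTRY:
            raise ValueError(f"duplicate registration: {name}")
        REGISTRY[name] = cls
        return cls
    return wrapper

@register("sgd")
class SGD:
    def __init__(self, params):
        self.params = params

def build(name, params):
    """Look up the registered class and instantiate it."""
    return REGISTRY[name](params)

opt = build("sgd", [1, 2, 3])
print(type(opt).__name__)  # SGD
```

The import loop matters because registration is a side effect of importing: an optimizer module that is never imported never appears in the registry.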
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/models.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/models.py
deleted file mode 100644
index be51fa51407e6ce1daaee5e8d090f6acdbee0db9..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/models.py
+++ /dev/null
@@ -1,403 +0,0 @@
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from utils import init_weights, get_padding
-
-LRELU_SLOPE = 0.1
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Generator(torch.nn.Module):
- def __init__(self, h):
- super(Generator, self).__init__()
- self.h = h
- self.num_kernels = len(h.resblock_kernel_sizes)
- self.num_upsamples = len(h.upsample_rates)
- self.conv_pre = weight_norm(
- Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3)
- )
- resblock = ResBlock1 if h.resblock == "1" else ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- h.upsample_initial_channel // (2 ** i),
- h.upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h.upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
-
- def forward(self, x):
- x = self.conv_pre(x)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print("Removing weight norm...")
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(5, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(5, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(5, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(5, 1), 0),
- )
- ),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiPeriodDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList(
- [
- DiscriminatorP(2),
- DiscriminatorP(3),
- DiscriminatorP(5),
- DiscriminatorP(7),
- DiscriminatorP(11),
- ]
- )
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiScaleDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList(
- [
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ]
- )
- self.meanpools = nn.ModuleList(
- [AvgPool1d(4, 2, padding=2), AvgPool1d(4, 2, padding=2)]
- )
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i - 1](y)
- y_hat = self.meanpools[i - 1](y_hat)
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg ** 2)
- loss += r_loss + g_loss
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1 - dg) ** 2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
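The three loss functions closing this file are the least-squares GAN (LSGAN) objectives: the discriminator pushes its real outputs toward 1 and its fake outputs toward 0, while the generator pushes the discriminator's fake outputs toward 1. A dependency-free numeric sketch of the same math (plain lists instead of tensors):

```python
# LSGAN losses as used by HiFi-GAN, on plain floats:
#   discriminator: E[(1 - D(real))^2] + E[D(fake)^2]
#   generator:     E[(1 - D(fake))^2]

def mean(xs):
    return sum(xs) / len(xs)

def discriminator_loss(real_outputs, fake_outputs):
    r_loss = mean([(1 - r) ** 2 for r in real_outputs])  # real -> 1
    g_loss = mean([g ** 2 for g in fake_outputs])        # fake -> 0
    return r_loss + g_loss

def generator_loss(fake_outputs):
    return mean([(1 - g) ** 2 for g in fake_outputs])    # fool D: fake -> 1

print(discriminator_loss([0.8, 1.2], [0.1, -0.1]))  # ≈ 0.05 (0.04 + 0.01)
```

The real code additionally sums these per-discriminator losses across the multi-period and multi-scale discriminator banks, and `feature_loss` adds an L1 match on intermediate feature maps.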
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/README.md b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/README.md
deleted file mode 100644
index 5fa0e97245d3ba6db69d11222261b0644960183d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/README.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Flashlight Decoder
-
-This script runs decoding for pre-trained speech recognition models.
-
-## Usage
-
-Assuming a few variables:
-
-```bash
-checkpoint=
-data=
-lm_model=
-lexicon=
-```
-
-Example usage for decoding a fine-tuned Wav2Vec model:
-
-```bash
-python $FAIRSEQ_ROOT/examples/speech_recognition/new/infer.py --multirun \
- task=audio_pretraining \
- task.data=$data \
- task.labels=ltr \
- common_eval.path=$checkpoint \
- decoding.type=kenlm \
- decoding.lexicon=$lexicon \
- decoding.lmpath=$lm_model \
- dataset.gen_subset=dev_clean,dev_other,test_clean,test_other
-```
-
-Example usage for using Ax to sweep WER parameters (requires `pip install hydra-ax-sweeper`):
-
-```bash
-python $FAIRSEQ_ROOT/examples/speech_recognition/new/infer.py --multirun \
- hydra/sweeper=ax \
- task=audio_pretraining \
- task.data=$data \
- task.labels=ltr \
- common_eval.path=$checkpoint \
- decoding.type=kenlm \
- decoding.lexicon=$lexicon \
- decoding.lmpath=$lm_model \
- dataset.gen_subset=dev_other
-```
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/sentence_ranking.py b/spaces/ICML2022/OFA/fairseq/fairseq/criterions/sentence_ranking.py
deleted file mode 100644
index d4c76341d4d87e6d0da21ac89e833ce0bda13a0c..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/sentence_ranking.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-
-
-@register_criterion("sentence_ranking")
-class SentenceRankingCriterion(FairseqCriterion):
- def __init__(self, task, ranking_head_name, save_predictions, num_classes):
- super().__init__(task)
- self.ranking_head_name = ranking_head_name
- if save_predictions is not None:
- self.prediction_h = open(save_predictions, "w")
- else:
- self.prediction_h = None
- self.num_classes = num_classes
-
- def __del__(self):
- if self.prediction_h is not None:
- self.prediction_h.close()
-
- @staticmethod
- def add_args(parser):
- # fmt: off
- parser.add_argument('--save-predictions', metavar='FILE',
- help='file to save predictions to')
- parser.add_argument('--ranking-head-name',
- default='sentence_classification_head',
- help='name of the ranking head to use')
- # fmt: on
-
- def forward(self, model, sample, reduce=True):
- """Compute ranking loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- assert (
- hasattr(model, "classification_heads")
- and self.ranking_head_name in model.classification_heads
- ), "model must provide sentence ranking head for --criterion=sentence_ranking"
-
- scores = []
- for idx in range(self.num_classes):
- score, _ = model(
- **sample["net_input{idx}".format(idx=idx + 1)],
- classification_head_name=self.ranking_head_name,
- )
- scores.append(score)
-
- logits = torch.cat(scores, dim=1)
- sample_size = logits.size(0)
-
- if "target" in sample:
- targets = model.get_targets(sample, [logits]).view(-1)
- lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- loss = F.nll_loss(lprobs, targets, reduction="sum")
- else:
- targets = None
- loss = torch.tensor(0.0, requires_grad=True)
-
- if self.prediction_h is not None:
- preds = logits.argmax(dim=1)
- for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())):
- if targets is not None:
- label = targets[i].item()
- print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h)
- else:
- print("{}\t{}".format(id, pred), file=self.prediction_h)
-
- logging_output = {
- "loss": loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample_size,
- "sample_size": sample_size,
- }
- if targets is not None:
- logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum()
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
-
- if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]:
- ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs)
- metrics.log_scalar(
- "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- return True
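`SentenceRankingCriterion` scores each of the `num_classes` candidates with a separate forward pass, concatenates the scores into logits, and trains with cross-entropy so the correct candidate receives the highest score. A dependency-free sketch of that loss for a single sample (illustrative, not the fairseq implementation, which batches this with `F.log_softmax` and `F.nll_loss`):

```python
import math

def ranking_loss(scores, target):
    """Negative log-softmax probability of the target candidate."""
    m = max(scores)                           # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[target]

# The loss shrinks as the target candidate's score pulls ahead of the rest.
print(ranking_loss([2.0, 1.0, 0.5], target=0))
```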
diff --git a/spaces/IDEA-CCNL/Ziya-v1/README.md b/spaces/IDEA-CCNL/Ziya-v1/README.md
deleted file mode 100644
index 7760adf518303561ed408b3031ad30c84ac59f7f..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/Ziya-v1/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ziya V1
-emoji: 🐢
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/IISRFactCheck/claim_detection/code/app.py b/spaces/IISRFactCheck/claim_detection/code/app.py
deleted file mode 100644
index 274b103982587842ec46d5aef275751c3207647d..0000000000000000000000000000000000000000
--- a/spaces/IISRFactCheck/claim_detection/code/app.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from flask import Flask, request, jsonify, make_response, render_template
-from do_predict import predict_single
-
-from dotenv import load_dotenv
-load_dotenv()
-
-app = Flask(__name__, template_folder="static", static_url_path="", static_folder="static")
-app.config["JSON_AS_ASCII"] = False
-
-@app.route("/")
-def index():
- return render_template("index.html")
-
-@app.before_request
-def before():
- # handle preflight
- if request.method == "OPTIONS":
- resp = make_response()
- resp.headers["Access-Control-Allow-Origin"] = "*"
- resp.headers["Access-Control-Allow-Methods"] = "GET, POST"
- resp.headers["Access-Control-Allow-Headers"] = "Content-Type"
- return resp
-
-
-@app.post("/api/predict_single")
-def api_predict_single():
- text = request.json["text"]
- result = predict_single(text)
- resp = jsonify(result)
- resp.headers["Access-Control-Allow-Origin"] = "*"
- return resp
-
-
-if __name__ == "__main__":
- app.run(host="0.0.0.0", port=7860)
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/predictor.py b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/predictor.py
deleted file mode 100644
index 8a6e6d816955b4c6097e1de6ce6e4ed3bafe327c..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/predictor.py
+++ /dev/null
@@ -1,269 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from segment_anything.modeling import Sam
-
-from typing import Optional, Tuple
-
-from .utils.transforms import ResizeLongestSide
-
-
-class SamPredictor:
- def __init__(
- self,
- sam_model: Sam,
- ) -> None:
- """
- Uses SAM to calculate the image embedding for an image, and then
- allows repeated, efficient mask prediction given prompts.
-
- Arguments:
- sam_model (Sam): The model to use for mask prediction.
- """
- super().__init__()
- self.model = sam_model
- self.transform = ResizeLongestSide(sam_model.image_encoder.img_size)
- self.reset_image()
-
- def set_image(
- self,
- image: np.ndarray,
- image_format: str = "RGB",
- ) -> None:
- """
- Calculates the image embeddings for the provided image, allowing
- masks to be predicted with the 'predict' method.
-
- Arguments:
- image (np.ndarray): The image for calculating masks. Expects an
- image in HWC uint8 format, with pixel values in [0, 255].
- image_format (str): The color format of the image, in ['RGB', 'BGR'].
- """
- assert image_format in [
- "RGB",
- "BGR",
- ], f"image_format must be in ['RGB', 'BGR'], is {image_format}."
- if image_format != self.model.image_format:
- image = image[..., ::-1]
-
- # Transform the image to the form expected by the model
- input_image = self.transform.apply_image(image)
- input_image_torch = torch.as_tensor(input_image, device=self.device)
- input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :]
-
- self.set_torch_image(input_image_torch, image.shape[:2])
-
- @torch.no_grad()
- def set_torch_image(
- self,
- transformed_image: torch.Tensor,
- original_image_size: Tuple[int, ...],
- ) -> None:
- """
- Calculates the image embeddings for the provided image, allowing
- masks to be predicted with the 'predict' method. Expects the input
- image to be already transformed to the format expected by the model.
-
- Arguments:
- transformed_image (torch.Tensor): The input image, with shape
- 1x3xHxW, which has been transformed with ResizeLongestSide.
- original_image_size (tuple(int, int)): The size of the image
- before transformation, in (H, W) format.
- """
- assert (
- len(transformed_image.shape) == 4
- and transformed_image.shape[1] == 3
- and max(*transformed_image.shape[2:]) == self.model.image_encoder.img_size
- ), f"set_torch_image input must be BCHW with long side {self.model.image_encoder.img_size}."
- self.reset_image()
-
- self.original_size = original_image_size
- self.input_size = tuple(transformed_image.shape[-2:])
- input_image = self.model.preprocess(transformed_image)
- self.features = self.model.image_encoder(input_image)
- self.is_image_set = True
-
- def predict(
- self,
- point_coords: Optional[np.ndarray] = None,
- point_labels: Optional[np.ndarray] = None,
- box: Optional[np.ndarray] = None,
- mask_input: Optional[np.ndarray] = None,
- multimask_output: bool = True,
- return_logits: bool = False,
- ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
- """
- Predict masks for the given input prompts, using the currently set image.
-
- Arguments:
- point_coords (np.ndarray or None): A Nx2 array of point prompts to the
- model. Each point is in (X,Y) in pixels.
- point_labels (np.ndarray or None): A length N array of labels for the
- point prompts. 1 indicates a foreground point and 0 indicates a
- background point.
- box (np.ndarray or None): A length 4 array given a box prompt to the
- model, in XYXY format.
- mask_input (np.ndarray): A low resolution mask input to the model, typically
- coming from a previous prediction iteration. Has form 1xHxW, where
- for SAM, H=W=256.
- multimask_output (bool): If true, the model will return three masks.
- For ambiguous input prompts (such as a single click), this will often
- produce better masks than a single prediction. If only a single
- mask is needed, the model's predicted quality score can be used
- to select the best mask. For non-ambiguous prompts, such as multiple
- input prompts, multimask_output=False can give better results.
- return_logits (bool): If true, returns un-thresholded masks logits
- instead of a binary mask.
-
- Returns:
- (np.ndarray): The output masks in CxHxW format, where C is the
- number of masks, and (H, W) is the original image size.
- (np.ndarray): An array of length C containing the model's
- predictions for the quality of each mask.
- (np.ndarray): An array of shape CxHxW, where C is the number
- of masks and H=W=256. These low resolution logits can be passed to
- a subsequent iteration as mask input.
- """
- if not self.is_image_set:
- raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")
-
- # Transform input prompts
- coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None
- if point_coords is not None:
- assert (
- point_labels is not None
- ), "point_labels must be supplied if point_coords is supplied."
- point_coords = self.transform.apply_coords(point_coords, self.original_size)
- coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.device)
- labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.device)
- coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :]
- if box is not None:
- box = self.transform.apply_boxes(box, self.original_size)
- box_torch = torch.as_tensor(box, dtype=torch.float, device=self.device)
- box_torch = box_torch[None, :]
- if mask_input is not None:
- mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.device)
- mask_input_torch = mask_input_torch[None, :, :, :]
-
- masks, iou_predictions, low_res_masks = self.predict_torch(
- coords_torch,
- labels_torch,
- box_torch,
- mask_input_torch,
- multimask_output,
- return_logits=return_logits,
- )
-
- masks_np = masks[0].detach().cpu().numpy()
- iou_predictions_np = iou_predictions[0].detach().cpu().numpy()
- low_res_masks_np = low_res_masks[0].detach().cpu().numpy()
- return masks_np, iou_predictions_np, low_res_masks_np
-
- @torch.no_grad()
- def predict_torch(
- self,
- point_coords: Optional[torch.Tensor],
- point_labels: Optional[torch.Tensor],
- boxes: Optional[torch.Tensor] = None,
- mask_input: Optional[torch.Tensor] = None,
- multimask_output: bool = True,
- return_logits: bool = False,
- ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
- """
- Predict masks for the given input prompts, using the currently set image.
- Input prompts are batched torch tensors and are expected to already be
- transformed to the input frame using ResizeLongestSide.
-
- Arguments:
- point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the
- model. Each point is in (X,Y) in pixels.
- point_labels (torch.Tensor or None): A BxN array of labels for the
- point prompts. 1 indicates a foreground point and 0 indicates a
- background point.
- boxes (torch.Tensor or None): A Bx4 array given a box prompt to the
- model, in XYXY format.
- mask_input (torch.Tensor): A low resolution mask input to the model, typically
- coming from a previous prediction iteration. Has form Bx1xHxW, where
- for SAM, H=W=256. Masks returned by a previous iteration of the
- predict method do not need further transformation.
- multimask_output (bool): If true, the model will return three masks.
- For ambiguous input prompts (such as a single click), this will often
- produce better masks than a single prediction. If only a single
- mask is needed, the model's predicted quality score can be used
- to select the best mask. For non-ambiguous prompts, such as multiple
- input prompts, multimask_output=False can give better results.
- return_logits (bool): If true, returns un-thresholded masks logits
- instead of a binary mask.
-
- Returns:
- (torch.Tensor): The output masks in BxCxHxW format, where C is the
- number of masks, and (H, W) is the original image size.
- (torch.Tensor): An array of shape BxC containing the model's
- predictions for the quality of each mask.
- (torch.Tensor): An array of shape BxCxHxW, where C is the number
- of masks and H=W=256. These low res logits can be passed to
- a subsequent iteration as mask input.
- """
- if not self.is_image_set:
- raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")
-
- if point_coords is not None:
- points = (point_coords, point_labels)
- else:
- points = None
-
- # Embed prompts
- sparse_embeddings, dense_embeddings = self.model.prompt_encoder(
- points=points,
- boxes=boxes,
- masks=mask_input,
- )
-
- # Predict masks
- low_res_masks, iou_predictions = self.model.mask_decoder(
- image_embeddings=self.features,
- image_pe=self.model.prompt_encoder.get_dense_pe(),
- sparse_prompt_embeddings=sparse_embeddings,
- dense_prompt_embeddings=dense_embeddings,
- multimask_output=multimask_output,
- )
-
- # Upscale the masks to the original image resolution
- masks = self.model.postprocess_masks(low_res_masks, self.input_size, self.original_size)
-
- if not return_logits:
- masks = masks > self.model.mask_threshold
-
- return masks, iou_predictions, low_res_masks
-
- def get_image_embedding(self) -> torch.Tensor:
- """
- Returns the image embeddings for the currently set image, with
- shape 1xCxHxW, where C is the embedding dimension and (H,W) are
- the embedding spatial dimension of SAM (typically C=256, H=W=64).
- """
- if not self.is_image_set:
- raise RuntimeError(
- "An image must be set with .set_image(...) to generate an embedding."
- )
- assert self.features is not None, "Features must exist if an image has been set."
- return self.features
-
- @property
- def device(self) -> torch.device:
- return self.model.device
-
- def reset_image(self) -> None:
- """Resets the currently set image."""
- self.is_image_set = False
- self.features = None
- self.orig_h = None
- self.orig_w = None
- self.input_h = None
- self.input_w = None
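The `SamPredictor` docstrings above describe an embed-once, predict-many contract: `set_image()` runs the expensive image encoder a single time and caches the embedding, after which `predict()` can be called repeatedly with different prompts, and calling it before any image is set raises `RuntimeError`. A minimal stand-in (hypothetical encoder, no SAM or torch dependency) that demonstrates that contract:

```python
# Caching-predictor pattern: pay the encoder cost once, reuse it per prompt.

class CachingPredictor:
    def __init__(self, encoder):
        self.encoder = encoder
        self.reset_image()

    def reset_image(self):
        self.is_image_set = False
        self.features = None

    def set_image(self, image):
        self.features = self.encoder(image)  # expensive step, done once
        self.is_image_set = True

    def predict(self, prompt):
        if not self.is_image_set:
            raise RuntimeError("An image must be set before prediction.")
        return (self.features, prompt)       # cheap per-prompt step

calls = []
pred = CachingPredictor(lambda img: calls.append(img) or "embedding")
pred.set_image("img.png")
pred.predict("point A")
pred.predict("point B")
print(len(calls))  # 1 -- the encoder ran once for two predictions
```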
diff --git a/spaces/Intae/deepfake/training/pipelines/train_classifier.py b/spaces/Intae/deepfake/training/pipelines/train_classifier.py
deleted file mode 100644
index 633e41a1a6280287face2a083fd9357c78aeb54d..0000000000000000000000000000000000000000
--- a/spaces/Intae/deepfake/training/pipelines/train_classifier.py
+++ /dev/null
@@ -1,360 +0,0 @@
-import argparse
-import json
-import os
-from collections import defaultdict
-
-from sklearn.metrics import log_loss
-from torch import topk
-
-from training import losses
-from training.datasets.classifier_dataset import DeepFakeClassifierDataset
-from training.losses import WeightedLosses
-from training.tools.config import load_config
-from training.tools.utils import create_optimizer, AverageMeter
-from training.transforms.albu import IsotropicResize
-from training.zoo import classifiers
-
-os.environ["MKL_NUM_THREADS"] = "1"
-os.environ["NUMEXPR_NUM_THREADS"] = "1"
-os.environ["OMP_NUM_THREADS"] = "1"
-
-import cv2
-
-cv2.ocl.setUseOpenCL(False)
-cv2.setNumThreads(0)
-import numpy as np
-from albumentations import Compose, RandomBrightnessContrast, \
- HorizontalFlip, FancyPCA, HueSaturationValue, OneOf, ToGray, \
- ShiftScaleRotate, ImageCompression, PadIfNeeded, GaussNoise, GaussianBlur
-
-from apex.parallel import DistributedDataParallel, convert_syncbn_model
-from tensorboardX import SummaryWriter
-
-from apex import amp
-
-import torch
-from torch.backends import cudnn
-from torch.nn import DataParallel
-from torch.utils.data import DataLoader
-from tqdm import tqdm
-import torch.distributed as dist
-
-torch.backends.cudnn.benchmark = True
-
-
-def create_train_transforms(size=300):
- return Compose([
- ImageCompression(quality_lower=60, quality_upper=100, p=0.5),
- GaussNoise(p=0.1),
- GaussianBlur(blur_limit=3, p=0.05),
- HorizontalFlip(),
- OneOf([
- IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC),
- IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_LINEAR),
- IsotropicResize(max_side=size, interpolation_down=cv2.INTER_LINEAR, interpolation_up=cv2.INTER_LINEAR),
- ], p=1),
- PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT),
- OneOf([RandomBrightnessContrast(), FancyPCA(), HueSaturationValue()], p=0.7),
- ToGray(p=0.2),
- ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=10, border_mode=cv2.BORDER_CONSTANT, p=0.5),
- ]
- )
-
-
-def create_val_transforms(size=300):
- return Compose([
- IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC),
- PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT),
- ])
-
-
-def main():
- parser = argparse.ArgumentParser("PyTorch Xview Pipeline")
- arg = parser.add_argument
- arg('--config', metavar='CONFIG_FILE', help='path to configuration file')
- arg('--workers', type=int, default=6, help='number of cpu threads to use')
- arg('--gpu', type=str, default='0', help='List of GPUs for parallel training, e.g. 0,1,2,3')
- arg('--output-dir', type=str, default='weights/')
- arg('--resume', type=str, default='')
- arg('--fold', type=int, default=0)
- arg('--prefix', type=str, default='classifier_')
- arg('--data-dir', type=str, default="/mnt/sota/datasets/deepfake")
- arg('--folds-csv', type=str, default='folds.csv')
- arg('--crops-dir', type=str, default='crops')
- arg('--label-smoothing', type=float, default=0.01)
- arg('--logdir', type=str, default='logs')
- arg('--zero-score', action='store_true', default=False)
- arg('--from-zero', action='store_true', default=False)
- arg('--distributed', action='store_true', default=False)
- arg('--freeze-epochs', type=int, default=0)
- arg("--local_rank", default=0, type=int)
- arg("--seed", default=777, type=int)
- arg("--padding-part", default=3, type=int)
- arg("--opt-level", default='O1', type=str)
- arg("--test_every", type=int, default=1)
- arg("--no-oversample", action="store_true")
- arg("--no-hardcore", action="store_true")
- arg("--only-changed-frames", action="store_true")
-
- args = parser.parse_args()
- os.makedirs(args.output_dir, exist_ok=True)
- if args.distributed:
- torch.cuda.set_device(args.local_rank)
- torch.distributed.init_process_group(backend='nccl', init_method='env://')
- else:
- os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
- os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
- cudnn.benchmark = True
-
- conf = load_config(args.config)
- model = classifiers.__dict__[conf['network']](encoder=conf['encoder'])
-
-    model = model.cuda()
- if args.distributed:
- model = convert_syncbn_model(model)
- ohem = conf.get("ohem_samples", None)
- reduction = "mean"
- if ohem:
- reduction = "none"
- loss_fn = []
- weights = []
- for loss_name, weight in conf["losses"].items():
- loss_fn.append(losses.__dict__[loss_name](reduction=reduction))
- weights.append(weight)
- loss = WeightedLosses(loss_fn, weights)
- loss_functions = {"classifier_loss": loss}
- optimizer, scheduler = create_optimizer(conf['optimizer'], model)
- bce_best = 100
- start_epoch = 0
- batch_size = conf['optimizer']['batch_size']
-
- data_train = DeepFakeClassifierDataset(mode="train",
- oversample_real=not args.no_oversample,
- fold=args.fold,
- padding_part=args.padding_part,
- hardcore=not args.no_hardcore,
- crops_dir=args.crops_dir,
- data_path=args.data_dir,
- label_smoothing=args.label_smoothing,
- folds_csv=args.folds_csv,
- transforms=create_train_transforms(conf["size"]),
- normalize=conf.get("normalize", None))
- data_val = DeepFakeClassifierDataset(mode="val",
- fold=args.fold,
- padding_part=args.padding_part,
- crops_dir=args.crops_dir,
- data_path=args.data_dir,
- folds_csv=args.folds_csv,
- transforms=create_val_transforms(conf["size"]),
- normalize=conf.get("normalize", None))
- val_data_loader = DataLoader(data_val, batch_size=batch_size * 2, num_workers=args.workers, shuffle=False,
- pin_memory=False)
- os.makedirs(args.logdir, exist_ok=True)
- summary_writer = SummaryWriter(args.logdir + '/' + conf.get("prefix", args.prefix) + conf['encoder'] + "_" + str(args.fold))
- if args.resume:
- if os.path.isfile(args.resume):
- print("=> loading checkpoint '{}'".format(args.resume))
- checkpoint = torch.load(args.resume, map_location='cpu')
- state_dict = checkpoint['state_dict']
-            state_dict = {k[7:]: w for k, w in state_dict.items()}  # strip the "module." prefix added by DataParallel
- model.load_state_dict(state_dict, strict=False)
- if not args.from_zero:
- start_epoch = checkpoint['epoch']
- if not args.zero_score:
- bce_best = checkpoint.get('bce_best', 0)
- print("=> loaded checkpoint '{}' (epoch {}, bce_best {})"
- .format(args.resume, checkpoint['epoch'], checkpoint['bce_best']))
- else:
- print("=> no checkpoint found at '{}'".format(args.resume))
- if args.from_zero:
- start_epoch = 0
- current_epoch = start_epoch
-
- if conf['fp16']:
- model, optimizer = amp.initialize(model, optimizer,
- opt_level=args.opt_level,
- loss_scale='dynamic')
-
- snapshot_name = "{}{}_{}_{}".format(conf.get("prefix", args.prefix), conf['network'], conf['encoder'], args.fold)
-
- if args.distributed:
- model = DistributedDataParallel(model, delay_allreduce=True)
- else:
- model = DataParallel(model)
- data_val.reset(1, args.seed)
- max_epochs = conf['optimizer']['schedule']['epochs']
- for epoch in range(start_epoch, max_epochs):
- data_train.reset(epoch, args.seed)
- train_sampler = None
- if args.distributed:
- train_sampler = torch.utils.data.distributed.DistributedSampler(data_train)
- train_sampler.set_epoch(epoch)
- if epoch < args.freeze_epochs:
- print("Freezing encoder!!!")
- model.module.encoder.eval()
- for p in model.module.encoder.parameters():
- p.requires_grad = False
- else:
- model.module.encoder.train()
- for p in model.module.encoder.parameters():
- p.requires_grad = True
-
- train_data_loader = DataLoader(data_train, batch_size=batch_size, num_workers=args.workers,
- shuffle=train_sampler is None, sampler=train_sampler, pin_memory=False,
- drop_last=True)
-
- train_epoch(current_epoch, loss_functions, model, optimizer, scheduler, train_data_loader, summary_writer, conf,
- args.local_rank, args.only_changed_frames)
- model = model.eval()
-
- if args.local_rank == 0:
- torch.save({
- 'epoch': current_epoch + 1,
- 'state_dict': model.state_dict(),
- 'bce_best': bce_best,
- }, args.output_dir + '/' + snapshot_name + "_last")
- torch.save({
- 'epoch': current_epoch + 1,
- 'state_dict': model.state_dict(),
- 'bce_best': bce_best,
- }, args.output_dir + snapshot_name + "_{}".format(current_epoch))
- if (epoch + 1) % args.test_every == 0:
- bce_best = evaluate_val(args, val_data_loader, bce_best, model,
- snapshot_name=snapshot_name,
- current_epoch=current_epoch,
- summary_writer=summary_writer)
- current_epoch += 1
-
-
-def evaluate_val(args, data_val, bce_best, model, snapshot_name, current_epoch, summary_writer):
- print("Test phase")
- model = model.eval()
-
- bce, probs, targets = validate(model, data_loader=data_val)
- if args.local_rank == 0:
- summary_writer.add_scalar('val/bce', float(bce), global_step=current_epoch)
- if bce < bce_best:
- print("Epoch {} improved from {} to {}".format(current_epoch, bce_best, bce))
- if args.output_dir is not None:
- torch.save({
- 'epoch': current_epoch + 1,
- 'state_dict': model.state_dict(),
- 'bce_best': bce,
- }, args.output_dir + snapshot_name + "_best_dice")
- bce_best = bce
- with open("predictions_{}.json".format(args.fold), "w") as f:
- json.dump({"probs": probs, "targets": targets}, f)
- torch.save({
- 'epoch': current_epoch + 1,
- 'state_dict': model.state_dict(),
- 'bce_best': bce_best,
- }, args.output_dir + snapshot_name + "_last")
- print("Epoch: {} bce: {}, bce_best: {}".format(current_epoch, bce, bce_best))
- return bce_best
-
-
-def validate(net, data_loader, prefix=""):
- probs = defaultdict(list)
- targets = defaultdict(list)
-
- with torch.no_grad():
- for sample in tqdm(data_loader):
- imgs = sample["image"]
- img_names = sample["img_name"]
- labels = sample["labels"].float()
- out = net(imgs)
- labels = labels.cpu().numpy()
- preds = torch.sigmoid(out).cpu().numpy()
- for i in range(out.shape[0]):
- video, img_id = img_names[i].split("/")
- probs[video].append(preds[i].tolist())
- targets[video].append(labels[i].tolist())
- data_x = []
- data_y = []
- for vid, score in probs.items():
- score = np.array(score)
- lbl = targets[vid]
-
- score = np.mean(score)
- lbl = np.mean(lbl)
- data_x.append(score)
- data_y.append(lbl)
- y = np.array(data_y)
- x = np.array(data_x)
- fake_idx = y > 0.1
- real_idx = y < 0.1
- fake_loss = log_loss(y[fake_idx], x[fake_idx], labels=[0, 1])
- real_loss = log_loss(y[real_idx], x[real_idx], labels=[0, 1])
- print("{}fake_loss".format(prefix), fake_loss)
- print("{}real_loss".format(prefix), real_loss)
-
- return (fake_loss + real_loss) / 2, probs, targets
-
-
-def train_epoch(current_epoch, loss_functions, model, optimizer, scheduler, train_data_loader, summary_writer, conf,
- local_rank, only_valid):
- losses = AverageMeter()
- fake_losses = AverageMeter()
- real_losses = AverageMeter()
- max_iters = conf["batches_per_epoch"]
- print("training epoch {}".format(current_epoch))
- model.train()
- pbar = tqdm(enumerate(train_data_loader), total=max_iters, desc="Epoch {}".format(current_epoch), ncols=0)
- if conf["optimizer"]["schedule"]["mode"] == "epoch":
- scheduler.step(current_epoch)
- for i, sample in pbar:
- imgs = sample["image"]
- labels = sample["labels"].float()
- out_labels = model(imgs)
- if only_valid:
- valid_idx = sample["valid"].float() > 0
- out_labels = out_labels[valid_idx]
- labels = labels[valid_idx]
- if labels.size(0) == 0:
- continue
-
- fake_loss = 0
- real_loss = 0
- fake_idx = labels > 0.5
- real_idx = labels <= 0.5
-
- ohem = conf.get("ohem_samples", None)
-            if fake_idx.any():
- fake_loss = loss_functions["classifier_loss"](out_labels[fake_idx], labels[fake_idx])
-            if real_idx.any():
- real_loss = loss_functions["classifier_loss"](out_labels[real_idx], labels[real_idx])
- if ohem:
- fake_loss = topk(fake_loss, k=min(ohem, fake_loss.size(0)), sorted=False)[0].mean()
- real_loss = topk(real_loss, k=min(ohem, real_loss.size(0)), sorted=False)[0].mean()
-
- loss = (fake_loss + real_loss) / 2
- losses.update(loss.item(), imgs.size(0))
- fake_losses.update(0 if fake_loss == 0 else fake_loss.item(), imgs.size(0))
- real_losses.update(0 if real_loss == 0 else real_loss.item(), imgs.size(0))
-
- optimizer.zero_grad()
- pbar.set_postfix({"lr": float(scheduler.get_lr()[-1]), "epoch": current_epoch, "loss": losses.avg,
- "fake_loss": fake_losses.avg, "real_loss": real_losses.avg})
-
- if conf['fp16']:
- with amp.scale_loss(loss, optimizer) as scaled_loss:
- scaled_loss.backward()
- else:
- loss.backward()
- torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), 1)
- optimizer.step()
- if conf["optimizer"]["schedule"]["mode"] in ("step", "poly"):
- scheduler.step(i + current_epoch * max_iters)
- if i == max_iters - 1:
- break
- pbar.close()
- if local_rank == 0:
- for idx, param_group in enumerate(optimizer.param_groups):
- lr = param_group['lr']
- summary_writer.add_scalar('group{}/lr'.format(idx), float(lr), global_step=current_epoch)
- summary_writer.add_scalar('train/loss', float(losses.avg), global_step=current_epoch)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/JKLUCY99/voice-cloning/Makefile b/spaces/JKLUCY99/voice-cloning/Makefile
deleted file mode 100644
index ad23323414bd2175956f6aef92f223a02f7258be..0000000000000000000000000000000000000000
--- a/spaces/JKLUCY99/voice-cloning/Makefile
+++ /dev/null
@@ -1,11 +0,0 @@
-.PHONY: quality style
-
-# Check that source code meets quality standards
-quality:
- black --check --diff .
- ruff .
-
-# Format source code automatically
-style:
- black .
- ruff . --fix
diff --git a/spaces/Jaehan/Translation-Korean2English-1/README.md b/spaces/Jaehan/Translation-Korean2English-1/README.md
deleted file mode 100644
index 7532330d1233a1c53134c29ddcb9ab32dd6e8f11..0000000000000000000000000000000000000000
--- a/spaces/Jaehan/Translation-Korean2English-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Translation Korean To English
-emoji: 👀
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/commands/analyze_code.py b/spaces/Jamkonams/AutoGPT/autogpt/commands/analyze_code.py
deleted file mode 100644
index e02ea4c5b4ba53530e559d1cab7a07b8e3c7c638..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/commands/analyze_code.py
+++ /dev/null
@@ -1,25 +0,0 @@
-"""Code evaluation module."""
-from __future__ import annotations
-
-from autogpt.llm_utils import call_ai_function
-
-
-def analyze_code(code: str) -> list[str]:
- """
- A function that takes in a string and returns a response from create chat
- completion api call.
-
- Parameters:
- code (str): Code to be evaluated.
- Returns:
- A result string from create chat completion. A list of suggestions to
- improve the code.
- """
-
- function_string = "def analyze_code(code: str) -> List[str]:"
- args = [code]
-    description_string = "Analyzes the given code and returns a list of suggestions for improvements."
-
- return call_ai_function(function_string, args, description_string)
diff --git a/spaces/JenkinsGage/WritingHelper/app.py b/spaces/JenkinsGage/WritingHelper/app.py
deleted file mode 100644
index 2cb9bf3981622d622774fc1301aa3de04953b48c..0000000000000000000000000000000000000000
--- a/spaces/JenkinsGage/WritingHelper/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import torch
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-
-tokenizer = AutoTokenizer.from_pretrained('humarin/chatgpt_paraphraser_on_T5_base')
-model = AutoModelForSeq2SeqLM.from_pretrained('humarin/chatgpt_paraphraser_on_T5_base')
-
-def paraphrase(
- text,
- num_beams=5,
- num_beam_groups=5,
- num_return_sequences=5,
- repetition_penalty=10.0,
- diversity_penalty=3.0,
- no_repeat_ngram_size=2,
- temperature=0.7,
- max_length=128
-):
- input_ids = tokenizer(
- f'paraphrase: {text}',
- return_tensors="pt", padding="longest",
- max_length=max_length,
- truncation=True,
- ).input_ids
-
- outputs = model.generate(
- input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
- num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
- num_beams=num_beams, num_beam_groups=num_beam_groups,
- max_length=max_length, diversity_penalty=diversity_penalty
- )
-
- res = tokenizer.batch_decode(outputs, skip_special_tokens=True)
- return res
-
-def fn(
- text,
- num_beams=5,
- num_beam_groups=5,
- num_return_sequences=5,
- repetition_penalty=10.0,
- diversity_penalty=3.0,
- no_repeat_ngram_size=2,
- temperature=0.7,
- max_length=128
-):
- res = paraphrase(text, num_beams, num_beam_groups, num_return_sequences, repetition_penalty, diversity_penalty, no_repeat_ngram_size, temperature, max_length)
- result = ''
- for i, item in enumerate(res):
- result += f'{i+1}. {item}\n'
- return result
-
-demo = gr.Interface(
- fn=fn,
- inputs=[
- gr.Textbox(lines=3, placeholder='Enter Text To Paraphrase'),
- gr.Slider(minimum=1, maximum=10, step=1, value=5, label='Num Beams', info='This parameter controls the number of possible next tokens that are considered at each step in the beam search algorithm. A higher value will result in more diverse paraphrases, but may also take longer to generate.'),
- gr.Slider(minimum=1, maximum=10, step=1, value=5, label='Num Beam Groups', info='This parameter controls the number of beams that are run in parallel. A higher value will result in faster generation, but may also result in less diversity.'),
- gr.Slider(minimum=1, maximum=10, step=1, value=5, label='Num Return Sequences', info='This parameter controls the number of sequences that are generated at each step in the beam search algorithm. A higher value will produce more results, but may also take longer to generate.'),
- gr.Slider(minimum=0.6, maximum=20.1, step=0.5, value=10.1, label='Repetition Penalty', info='This parameter controls how much the model penalizes itself for generating repeated words or phrases. A higher value will result in more unique paraphrases, but may also result in less accurate paraphrases.'),
- gr.Slider(minimum=0.6, maximum=20.1, step=0.5, value=3.1, label='Diversity Penalty', info='This parameter controls how much the model penalizes itself for generating paraphrases that are similar to each other. A higher value will result in more diverse paraphrases, but may also result in less accurate paraphrases.'),
- gr.Slider(minimum=1, maximum=10, step=1, value=2, label='No Repeat Ngram Size', info='This parameter controls the size of the n-grams that the model is not allowed to repeat. A higher value will result in more unique paraphrases, but may also result in less accurate paraphrases.'),
- gr.Slider(minimum=0.0, maximum=1, step=0.1, value=0.7, label='Temperature', info='This parameter controls how much the model is allowed to deviate from the original text. A higher value will result in more creative paraphrases, but may also result in less accurate paraphrases.'),
- gr.Slider(minimum=32, maximum=512, step=1, value=128, label='Max Length', info='This parameter controls the maximum length of the generated paraphrase. A higher value will result in more detailed paraphrases, but may also take longer to generate.'),
- ],
- outputs=['text'],
-)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/JohnC26/StreamlitWikipediaChat/README.md b/spaces/JohnC26/StreamlitWikipediaChat/README.md
deleted file mode 100644
index 42c76f9e674f722ae54d8d741c9f4eb204fdab89..0000000000000000000000000000000000000000
--- a/spaces/JohnC26/StreamlitWikipediaChat/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: 🌎📚👋Streamlit-Wikipedia-Chat
-emoji: 🌐👨🏫👩🏫
-colorFrom: red
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: awacke1/StreamlitWikipediaChat
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py
deleted file mode 100644
index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py
+++ /dev/null
@@ -1,16 +0,0 @@
-class F0Predictor(object):
- def compute_f0(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length]
- """
- pass
-
- def compute_f0_uv(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length],uv:[signal_length//hop_length]
- """
- pass
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/julius/resample.py b/spaces/Kangarroar/ApplioRVC-Inference/julius/resample.py
deleted file mode 100644
index fd3b9b547d4c33ec7136d32e5f086420d0a72e14..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/julius/resample.py
+++ /dev/null
@@ -1,216 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-"""
-Differentiable, PyTorch-based resampling.
-Implementation of Julius O. Smith's algorithm for resampling.
-See https://ccrma.stanford.edu/~jos/resample/ for details.
-This implementation is specially optimized for the case where `new_sr / old_sr`, after
-removing the GCD, is a fraction with a small numerator and denominator (e.g. new_sr = 700, old_sr = 500).
-
-Very similar to [bmcfee/resampy](https://github.com/bmcfee/resampy) except this implementation
-is optimized for the case mentioned before, while resampy is slower but more general.
-
-"""
-
-import math
-from typing import Optional
-
-import torch
-from torch.nn import functional as F
-
-from .core import sinc
-from .utils import simple_repr
-
-
-class ResampleFrac(torch.nn.Module):
- """
- Resampling from the sample rate `old_sr` to `new_sr`.
- """
- def __init__(self, old_sr: int, new_sr: int, zeros: int = 24, rolloff: float = 0.945):
- """
- Args:
- old_sr (int): sample rate of the input signal x.
- new_sr (int): sample rate of the output.
- zeros (int): number of zero crossing to keep in the sinc filter.
- rolloff (float): use a lowpass filter that is `rolloff * new_sr / 2`,
- to ensure sufficient margin due to the imperfection of the FIR filter used.
- Lowering this value will reduce anti-aliasing, but will reduce some of the
- highest frequencies.
-
- Shape:
-
- - Input: `[*, T]`
-        - Output: `[*, T']` with `T' = int(new_sr * T / old_sr)`
-
-
- .. caution::
- After dividing `old_sr` and `new_sr` by their GCD, both should be small
- for this implementation to be fast.
-
- >>> import torch
- >>> resample = ResampleFrac(4, 5)
- >>> x = torch.randn(1000)
- >>> print(len(resample(x)))
- 1250
- """
- super().__init__()
- if not isinstance(old_sr, int) or not isinstance(new_sr, int):
- raise ValueError("old_sr and new_sr should be integers")
- gcd = math.gcd(old_sr, new_sr)
- self.old_sr = old_sr // gcd
- self.new_sr = new_sr // gcd
- self.zeros = zeros
- self.rolloff = rolloff
-
- self._init_kernels()
-
- def _init_kernels(self):
- if self.old_sr == self.new_sr:
- return
-
- kernels = []
- sr = min(self.new_sr, self.old_sr)
- # rolloff will perform antialiasing filtering by removing the highest frequencies.
- # At first I thought I only needed this when downsampling, but when upsampling
- # you will get edge artifacts without this, the edge is equivalent to zero padding,
- # which will add high freq artifacts.
- sr *= self.rolloff
-
- # The key idea of the algorithm is that x(t) can be exactly reconstructed from x[i] (tensor)
- # using the sinc interpolation formula:
- # x(t) = sum_i x[i] sinc(pi * old_sr * (i / old_sr - t))
- # We can then sample the function x(t) with a different sample rate:
- # y[j] = x(j / new_sr)
- # or,
- # y[j] = sum_i x[i] sinc(pi * old_sr * (i / old_sr - j / new_sr))
-
- # We see here that y[j] is the convolution of x[i] with a specific filter, for which
- # we take an FIR approximation, stopping when we see at least `zeros` zeros crossing.
- # But y[j+1] is going to have a different set of weights and so on, until y[j + new_sr].
- # Indeed:
- # y[j + new_sr] = sum_i x[i] sinc(pi * old_sr * ((i / old_sr - (j + new_sr) / new_sr))
- # = sum_i x[i] sinc(pi * old_sr * ((i - old_sr) / old_sr - j / new_sr))
- # = sum_i x[i + old_sr] sinc(pi * old_sr * (i / old_sr - j / new_sr))
- # so y[j+new_sr] uses the same filter as y[j], but on a shifted version of x by `old_sr`.
- # This will explain the F.conv1d after, with a stride of old_sr.
- self._width = math.ceil(self.zeros * self.old_sr / sr)
- # If old_sr is still big after GCD reduction, most filters will be very unbalanced, i.e.,
- # they will have a lot of almost zero values to the left or to the right...
- # There is probably a way to evaluate those filters more efficiently, but this is kept for
- # future work.
- idx = torch.arange(-self._width, self._width + self.old_sr).float()
- for i in range(self.new_sr):
- t = (-i/self.new_sr + idx/self.old_sr) * sr
- t = t.clamp_(-self.zeros, self.zeros)
- t *= math.pi
- window = torch.cos(t/self.zeros/2)**2
- kernel = sinc(t) * window
- # Renormalize kernel to ensure a constant signal is preserved.
- kernel.div_(kernel.sum())
- kernels.append(kernel)
-
- self.register_buffer("kernel", torch.stack(kernels).view(self.new_sr, 1, -1))
-
- def forward(self, x: torch.Tensor, output_length: Optional[int] = None, full: bool = False):
- """
- Resample x.
- Args:
- x (Tensor): signal to resample, time should be the last dimension
- output_length (None or int): This can be set to the desired output length
- (last dimension). Allowed values are between 0 and
- ceil(length * new_sr / old_sr). When None (default) is specified, the
- floored output length will be used. In order to select the largest possible
- size, use the `full` argument.
- full (bool): return the longest possible output from the input. This can be useful
- if you chain resampling operations, and want to give the `output_length` only
- for the last one, while passing `full=True` to all the other ones.
- """
- if self.old_sr == self.new_sr:
- return x
- shape = x.shape
- length = x.shape[-1]
- x = x.reshape(-1, length)
- x = F.pad(x[:, None], (self._width, self._width + self.old_sr), mode='replicate')
- ys = F.conv1d(x, self.kernel, stride=self.old_sr) # type: ignore
- y = ys.transpose(1, 2).reshape(list(shape[:-1]) + [-1])
-
- float_output_length = self.new_sr * length / self.old_sr
- max_output_length = int(math.ceil(float_output_length))
- default_output_length = int(float_output_length)
- if output_length is None:
- output_length = max_output_length if full else default_output_length
- elif output_length < 0 or output_length > max_output_length:
- raise ValueError(f"output_length must be between 0 and {max_output_length}")
- else:
- if full:
- raise ValueError("You cannot pass both full=True and output_length")
- return y[..., :output_length]
-
- def __repr__(self):
- return simple_repr(self)
-
-
-def resample_frac(x: torch.Tensor, old_sr: int, new_sr: int,
- zeros: int = 24, rolloff: float = 0.945,
- output_length: Optional[int] = None, full: bool = False):
- """
- Functional version of `ResampleFrac`, refer to its documentation for more information.
-
-    .. warning::
-        If you call this function repeatedly with the same sample rates, the
-        resampling kernel will be recomputed every time. For best performance, you should
-        create and cache an instance of `ResampleFrac`.
- """
- return ResampleFrac(old_sr, new_sr, zeros, rolloff).to(x)(x, output_length, full)
-
-
-# Easier implementations for downsampling and upsampling by a factor of 2
-# Kept for testing and reference
-
-def _kernel_upsample2_downsample2(zeros):
- # Kernel for upsampling and downsampling by a factor of 2. Interestingly,
- # it is the same kernel used for both.
- win = torch.hann_window(4 * zeros + 1, periodic=False)
- winodd = win[1::2]
- t = torch.linspace(-zeros + 0.5, zeros - 0.5, 2 * zeros)
- t *= math.pi
- kernel = (sinc(t) * winodd).view(1, 1, -1)
- return kernel
-
-
-def _upsample2(x, zeros=24):
- """
- Upsample x by a factor of two. The output will be exactly twice as long as the input.
- Args:
- x (Tensor): signal to upsample, time should be the last dimension
- zeros (int): number of zero crossing to keep in the sinc filter.
-
- This function is kept only for reference, you should use the more generic `resample_frac`
- one. This function does not perform anti-aliasing filtering.
- """
- *other, time = x.shape
- kernel = _kernel_upsample2_downsample2(zeros).to(x)
- out = F.conv1d(x.view(-1, 1, time), kernel, padding=zeros)[..., 1:].view(*other, time)
- y = torch.stack([x, out], dim=-1)
- return y.view(*other, -1)
-
-
-def _downsample2(x, zeros=24):
- """
- Downsample x by a factor of two. The output length is half of the input, ceiled.
- Args:
- x (Tensor): signal to downsample, time should be the last dimension
- zeros (int): number of zero crossing to keep in the sinc filter.
-
- This function is kept only for reference, you should use the more generic `resample_frac`
- one. This function does not perform anti-aliasing filtering.
- """
- if x.shape[-1] % 2 != 0:
- x = F.pad(x, (0, 1))
- xeven = x[..., ::2]
- xodd = x[..., 1::2]
- *other, time = xodd.shape
- kernel = _kernel_upsample2_downsample2(zeros).to(x)
- out = xeven + F.conv1d(xodd.view(-1, 1, time), kernel, padding=zeros)[..., :-1].view(
- *other, time)
- return out.view(*other, -1).mul(0.5)
diff --git a/spaces/Kaori1707/Depth-estimation/dpt/models.py b/spaces/Kaori1707/Depth-estimation/dpt/models.py
deleted file mode 100644
index f0c142fd3d8a29f9588b964250225d77f7b56fc8..0000000000000000000000000000000000000000
--- a/spaces/Kaori1707/Depth-estimation/dpt/models.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
- FeatureFusionBlock,
- FeatureFusionBlock_custom,
- Interpolate,
- _make_encoder,
- forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
- return FeatureFusionBlock_custom(
- features,
- nn.ReLU(False),
- deconv=False,
- bn=use_bn,
- expand=False,
- align_corners=True,
- )
-
-
-class DPT(BaseModel):
- def __init__(
- self,
- head,
- features=256,
- backbone="vitb_rn50_384",
- readout="project",
- channels_last=False,
- use_bn=False,
- enable_attention_hooks=False,
- ):
-
- super(DPT, self).__init__()
-
- self.channels_last = channels_last
-
- hooks = {
- "vitb_rn50_384": [0, 1, 8, 11],
- "vitb16_384": [2, 5, 8, 11],
- "vitl16_384": [5, 11, 17, 23],
- }
-
- # Instantiate backbone and reassemble blocks
- self.pretrained, self.scratch = _make_encoder(
- backbone,
- features,
-            False,  # Set to true if you want to train from scratch, uses ImageNet weights
- groups=1,
- expand=False,
- exportable=False,
- hooks=hooks[backbone],
- use_readout=readout,
- enable_attention_hooks=enable_attention_hooks,
- )
-
- self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
- self.scratch.output_conv = head
-
- def forward(self, x):
-        if self.channels_last:
-            x = x.contiguous(memory_format=torch.channels_last)
-
- layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return out
-
-
-class DPTDepthModel(DPT):
- def __init__(
- self, path=None, non_negative=True, scale=1.0, shift=0.0, invert=False, **kwargs
- ):
- features = kwargs["features"] if "features" in kwargs else 256
-
- self.scale = scale
- self.shift = shift
- self.invert = invert
-
- head = nn.Sequential(
- nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- super().__init__(head, **kwargs)
-
- if path is not None:
- self.load(path)
-
- def forward(self, x):
- inv_depth = super().forward(x).squeeze(dim=1)
-
- if self.invert:
- depth = self.scale * inv_depth + self.shift
- depth[depth < 1e-8] = 1e-8
- depth = 1.0 / depth
- return depth
- else:
- return inv_depth
-
-
-class DPTSegmentationModel(DPT):
- def __init__(self, num_classes, path=None, **kwargs):
-
- features = kwargs["features"] if "features" in kwargs else 256
-
- kwargs["use_bn"] = True
-
- head = nn.Sequential(
- nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False),
- nn.BatchNorm2d(features),
- nn.ReLU(True),
- nn.Dropout(0.1, False),
- nn.Conv2d(features, num_classes, kernel_size=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- )
-
- super().__init__(head, **kwargs)
-
- self.auxlayer = nn.Sequential(
- nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False),
- nn.BatchNorm2d(features),
- nn.ReLU(True),
- nn.Dropout(0.1, False),
- nn.Conv2d(features, num_classes, kernel_size=1),
- )
-
- if path is not None:
- self.load(path)
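`DPTDepthModel.forward` above converts the network's inverse-depth output to depth via an affine rescale, a small-value floor, and a reciprocal. A minimal standalone sketch of that conversion (the `inv_depth_to_depth` helper name is mine, not part of the module):

```python
import torch

def inv_depth_to_depth(inv_depth: torch.Tensor, scale: float = 1.0,
                       shift: float = 0.0, eps: float = 1e-8) -> torch.Tensor:
    """Affine rescale, floor, and invert, mirroring DPTDepthModel.forward."""
    depth = scale * inv_depth + shift
    depth = depth.clamp(min=eps)  # same effect as depth[depth < eps] = eps
    return 1.0 / depth

inv = torch.tensor([0.0, 0.5, 2.0])
depth = inv_depth_to_depth(inv)
# depth[1] == 2.0 and depth[2] == 0.5; depth[0] is capped at 1/eps
```

The floor keeps the reciprocal finite where the network predicts (near-)zero inverse depth.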
diff --git a/spaces/Kedreamix/YoloGesture/README.md b/spaces/Kedreamix/YoloGesture/README.md
deleted file mode 100644
index 58e51d26561e13d160636de0bb379bab20e5bea0..0000000000000000000000000000000000000000
--- a/spaces/Kedreamix/YoloGesture/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: YoloGesture
-emoji: 👀
-colorFrom: pink
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/places205.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/places205.py
deleted file mode 100644
index f3ba1ff631a7a4840b66cf63ec53585ec064560d..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/places205.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional, Union
-
-from mmpretrain.registry import DATASETS
-from .categories import PLACES205_CATEGORIES
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class Places205(CustomDataset):
- """`Places205 `_ Dataset.
-
- Args:
- data_root (str): The root directory for ``data_prefix`` and
- ``ann_file``. Defaults to ''.
- data_prefix (str | dict): Prefix for training data. Defaults
- to ''.
- ann_file (str): Annotation file path. Defaults to ''.
- metainfo (dict, optional): Meta information for dataset, such as class
- information. Defaults to None.
- **kwargs: Other keyword arguments in :class:`CustomDataset` and
- :class:`BaseDataset`.
- """
-
- IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif')
- METAINFO = {'classes': PLACES205_CATEGORIES}
-
- def __init__(self,
- data_root: str = '',
- data_prefix: Union[str, dict] = '',
- ann_file: str = '',
- metainfo: Optional[dict] = None,
- **kwargs):
- kwargs = {'extensions': self.IMG_EXTENSIONS, **kwargs}
- super().__init__(
- data_root=data_root,
- data_prefix=data_prefix,
- ann_file=ann_file,
- metainfo=metainfo,
- **kwargs)
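The `kwargs = {'extensions': self.IMG_EXTENSIONS, **kwargs}` line above implements a default-with-override: the dataset supplies `extensions` unless the caller passes their own. The pattern in isolation (names here are illustrative, not from the module):

```python
DEFAULTS = {'extensions': ('.jpg', '.png')}

def merge_kwargs(**kwargs):
    # Later ** entries win, so a caller-supplied 'extensions' overrides the default
    return {**DEFAULTS, **kwargs}

merge_kwargs()                          # {'extensions': ('.jpg', '.png')}
merge_kwargs(extensions=('.tif',))      # {'extensions': ('.tif',)}
```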
diff --git a/spaces/LanguageBind/LanguageBind/languagebind/depth/modeling_depth.py b/spaces/LanguageBind/LanguageBind/languagebind/depth/modeling_depth.py
deleted file mode 100644
index 849eade79b0f4bff345b73bcf6a71115a28d0a09..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/languagebind/depth/modeling_depth.py
+++ /dev/null
@@ -1,1030 +0,0 @@
-import math
-from typing import Optional, Tuple, Union
-
-import torch
-from einops import rearrange
-from peft import LoraConfig, get_peft_model
-from torch import nn
-from torch.nn import functional as F
-from transformers import PreTrainedModel, add_start_docstrings
-from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling
-from transformers.models.clip.modeling_clip import CLIPMLP, CLIPAttention, CLIPTextEmbeddings, CLIPVisionEmbeddings, \
- CLIPVisionModelWithProjection, CLIPTextModelWithProjection, _expand_mask, CLIPOutput, clip_loss
-from transformers.utils import add_start_docstrings_to_model_forward, replace_return_docstrings
-
-from .configuration_depth import LanguageBindDepthConfig, CLIPVisionConfig, CLIPTextConfig
-
-
-
-class PatchDropout(nn.Module):
- """
- https://arxiv.org/abs/2212.00794
- """
-
- def __init__(self, prob, exclude_first_token=True):
- super().__init__()
- assert 0 <= prob < 1.
- self.prob = prob
- self.exclude_first_token = exclude_first_token # exclude CLS token
-
- def forward(self, x, B, T):
- if not self.training or self.prob == 0.:
- return x
-
- if self.exclude_first_token:
- cls_tokens, x = x[:, :1], x[:, 1:]
- else:
- cls_tokens = torch.jit.annotate(torch.Tensor, x[:, :1])
-
- batch = x.size()[0]
- num_tokens = x.size()[1]
-
- batch_indices = torch.arange(batch)
- batch_indices = batch_indices[..., None]
-
- keep_prob = 1 - self.prob
- num_patches_keep = max(1, int(num_tokens * keep_prob))
-
- if T == 1:
- rand = torch.randn(batch, num_tokens)
- patch_indices_keep = rand.topk(num_patches_keep, dim=-1).indices
- else:
- rand = torch.randn(B, num_tokens)
- patch_indices_keep = rand.topk(num_patches_keep, dim=-1).indices
- patch_indices_keep = patch_indices_keep.unsqueeze(1).repeat(1, T, 1)
- patch_indices_keep = rearrange(patch_indices_keep, 'b t n -> (b t) n')
-
-
- x = x[batch_indices, patch_indices_keep]
-
- if self.exclude_first_token:
- x = torch.cat((cls_tokens, x), dim=1)
-
- return x
-
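`PatchDropout` keeps a random subset of patch tokens per sample by ranking random scores with `topk` and gathering with advanced indexing. A self-contained sketch of the single-frame path (the `drop_patches` helper name is mine; the CLS-token handling is left to the caller):

```python
import torch

def drop_patches(x: torch.Tensor, prob: float) -> torch.Tensor:
    """x: (batch, num_tokens, dim). Keep a random subset of tokens per sample."""
    batch, num_tokens, _ = x.shape
    num_keep = max(1, int(num_tokens * (1 - prob)))  # always keep at least one token
    rand = torch.randn(batch, num_tokens)            # random scores per token
    keep = rand.topk(num_keep, dim=-1).indices       # indices of tokens to keep
    batch_idx = torch.arange(batch)[:, None]         # broadcast over kept indices
    return x[batch_idx, keep]

x = torch.arange(24, dtype=torch.float32).reshape(2, 4, 3)
y = drop_patches(x, prob=0.5)
# y.shape == (2, 2, 3): half the tokens survive per sample
```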
-class CLIPEncoderLayer(nn.Module):
- def __init__(self, config: LanguageBindDepthConfig):
- super().__init__()
- self.embed_dim = config.hidden_size
- self.self_attn = CLIPAttention(config)
- self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
- self.mlp = CLIPMLP(config)
- self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
-
- self.add_time_attn = config.add_time_attn
- if self.add_time_attn:
- self.t = config.num_frames
- self.temporal_embedding = nn.Parameter(torch.zeros(1, config.num_frames, config.hidden_size))
- nn.init.normal_(self.temporal_embedding, std=config.hidden_size ** -0.5)
-
- self.embed_dim = config.hidden_size
- self.temporal_attn = CLIPAttention(config)
- self.temporal_layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
- self.temporal_mlp = CLIPMLP(config)
- self.temporal_layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: torch.Tensor,
- causal_attention_mask: torch.Tensor,
- output_attentions: Optional[bool] = False,
- ) -> Tuple[torch.FloatTensor]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
- attention_mask (`torch.FloatTensor`): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- """
-
-
- if self.add_time_attn:
- bt, n, d = hidden_states.shape
- t = self.t
-
- # time embed
- if t != 1:
- n = hidden_states.shape[1]
- hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t)
- hidden_states = hidden_states + self.temporal_embedding[:, :t, :]
- hidden_states = rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n)
-
- # time attn
- residual = hidden_states
- hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t)
- # hidden_states = self.layer_norm1(hidden_states) # share layernorm
- hidden_states = self.temporal_layer_norm1(hidden_states)
- hidden_states, attn_weights = self.temporal_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- )
- hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n)
-
- residual = hidden_states
- hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t)
- # hidden_states = self.layer_norm2(hidden_states) # share layernorm
- hidden_states = self.temporal_layer_norm2(hidden_states)
- hidden_states = self.temporal_mlp(hidden_states)
- hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n)
-
- # spatial attn
- residual = hidden_states
-
- hidden_states = self.layer_norm1(hidden_states)
- hidden_states, attn_weights = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- )
- hidden_states = residual + hidden_states
-
- residual = hidden_states
- hidden_states = self.layer_norm2(hidden_states)
- hidden_states = self.mlp(hidden_states)
- hidden_states = residual + hidden_states
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (attn_weights,)
-
- return outputs
-
-
-class CLIPPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = LanguageBindDepthConfig
- base_model_prefix = "clip"
- supports_gradient_checkpointing = True
- _keys_to_ignore_on_load_missing = [r"position_ids"]
-
- def _init_weights(self, module):
- """Initialize the weights"""
- factor = self.config.initializer_factor
- if isinstance(module, CLIPTextEmbeddings):
- module.token_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02)
- module.position_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02)
- elif isinstance(module, CLIPVisionEmbeddings):
- factor = self.config.initializer_factor
- nn.init.normal_(module.class_embedding, mean=0.0, std=module.embed_dim**-0.5 * factor)
- nn.init.normal_(module.patch_embedding.weight, std=module.config.initializer_range * factor)
- nn.init.normal_(module.position_embedding.weight, std=module.config.initializer_range * factor)
- elif isinstance(module, CLIPAttention):
- factor = self.config.initializer_factor
- in_proj_std = (module.embed_dim**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor
- out_proj_std = (module.embed_dim**-0.5) * factor
- nn.init.normal_(module.q_proj.weight, std=in_proj_std)
- nn.init.normal_(module.k_proj.weight, std=in_proj_std)
- nn.init.normal_(module.v_proj.weight, std=in_proj_std)
- nn.init.normal_(module.out_proj.weight, std=out_proj_std)
- elif isinstance(module, CLIPMLP):
- factor = self.config.initializer_factor
- in_proj_std = (
- (module.config.hidden_size**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor
- )
- fc_std = (2 * module.config.hidden_size) ** -0.5 * factor
- nn.init.normal_(module.fc1.weight, std=fc_std)
- nn.init.normal_(module.fc2.weight, std=in_proj_std)
- elif isinstance(module, LanguageBindDepth):
- nn.init.normal_(
- module.text_projection.weight,
- std=module.text_embed_dim**-0.5 * self.config.initializer_factor,
- )
- nn.init.normal_(
- module.visual_projection.weight,
- std=module.vision_embed_dim**-0.5 * self.config.initializer_factor,
- )
- elif isinstance(module, CLIPVisionModelWithProjection):
- nn.init.normal_(
- module.visual_projection.weight,
- std=self.config.hidden_size**-0.5 * self.config.initializer_factor,
- )
- elif isinstance(module, CLIPTextModelWithProjection):
- nn.init.normal_(
- module.text_projection.weight,
- std=self.config.hidden_size**-0.5 * self.config.initializer_factor,
- )
-
- if isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, CLIPEncoder):
- module.gradient_checkpointing = value
-
-
-CLIP_START_DOCSTRING = r"""
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
-
- Parameters:
- config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-CLIP_TEXT_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.max_position_embeddings - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-CLIP_VISION_INPUTS_DOCSTRING = r"""
- Args:
- pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
- Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
- [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-CLIP_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.max_position_embeddings - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
- Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
- [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details.
- return_loss (`bool`, *optional*):
- Whether or not to return the contrastive loss.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-class CLIPEncoder(nn.Module):
- """
- Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
- [`CLIPEncoderLayer`].
-
- Args:
- config: CLIPConfig
- """
-
- def __init__(self, config: LanguageBindDepthConfig):
- super().__init__()
- self.config = config
- self.layers = nn.ModuleList([CLIPEncoderLayer(config) for _ in range(config.num_hidden_layers)])
- self.gradient_checkpointing = False
-
- def forward(
- self,
- inputs_embeds,
- attention_mask: Optional[torch.Tensor] = None,
- causal_attention_mask: Optional[torch.Tensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutput]:
- r"""
- Args:
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
- This is useful if you want more control over how to convert `input_ids` indices into associated vectors
- than the model's internal embedding lookup matrix.
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- causal_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Causal mask for the text model. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- encoder_states = () if output_hidden_states else None
- all_attentions = () if output_attentions else None
-
- hidden_states = inputs_embeds
- for idx, encoder_layer in enumerate(self.layers):
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
- if self.gradient_checkpointing and self.training:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs, output_attentions)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(encoder_layer),
- hidden_states,
- attention_mask,
- causal_attention_mask,
- )
- else:
- layer_outputs = encoder_layer(
- hidden_states,
- attention_mask,
- causal_attention_mask,
- output_attentions=output_attentions,
- )
-
- hidden_states = layer_outputs[0]
-
- if output_attentions:
- all_attentions = all_attentions + (layer_outputs[1],)
-
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
-
- if not return_dict:
- return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
- return BaseModelOutput(
- last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
- )
-
-
-# Copied from transformers.models.bart.modeling_bart._make_causal_mask
-def _make_causal_mask(
- input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
-):
- """
- Make causal mask used for bi-directional self-attention.
- """
- bsz, tgt_len = input_ids_shape
- mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
- mask_cond = torch.arange(mask.size(-1), device=device)
- mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
- mask = mask.to(dtype)
-
- if past_key_values_length > 0:
- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
- return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
-
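`_make_causal_mask` builds an additive mask that is 0 on and below the diagonal and a very large negative number above it, so the subsequent softmax assigns (effectively) zero attention to future tokens. A minimal sketch of the same construction, without the batch expansion or past-key-values handling:

```python
import torch

def make_causal_mask(tgt_len: int, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    """(tgt_len, tgt_len) additive mask: 0 where attention is allowed."""
    mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min)
    cond = torch.arange(tgt_len)
    # cond < (cond + 1).view(-1, 1) is True for positions j <= i (the lower triangle)
    mask.masked_fill_(cond < (cond + 1).view(tgt_len, 1), 0)
    return mask

m = make_causal_mask(3)
# row i attends to columns 0..i: m[1, 0] == 0, m[0, 1] is a huge negative value
```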
-
-class CLIPTextTransformer(nn.Module):
- def __init__(self, config: CLIPTextConfig):
- super().__init__()
- self.config = config
- embed_dim = config.hidden_size
- self.embeddings = CLIPTextEmbeddings(config)
- self.encoder = CLIPEncoder(config)
- self.final_layer_norm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
-
- @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPTextConfig)
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPooling]:
- r"""
- Returns:
-
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is None:
- raise ValueError("You have to specify input_ids")
-
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
-
- hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
-
- # CLIP's text model uses causal mask, prepare it here.
- # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324
- causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device)
- # expand attention_mask
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- attention_mask = _expand_mask(attention_mask, hidden_states.dtype)
-
- encoder_outputs = self.encoder(
- inputs_embeds=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- last_hidden_state = encoder_outputs[0]
- last_hidden_state = self.final_layer_norm(last_hidden_state)
-
- # text_embeds.shape = [batch_size, sequence_length, transformer.width]
- # take features from the eot embedding (eot_token is the highest number in each sequence)
- # casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
- pooled_output = last_hidden_state[
- torch.arange(last_hidden_state.shape[0], device=last_hidden_state.device),
- input_ids.to(dtype=torch.int, device=last_hidden_state.device).argmax(dim=-1),
- ]
-
- if not return_dict:
- return (last_hidden_state, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPooling(
- last_hidden_state=last_hidden_state,
- pooler_output=pooled_output,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """The text model from CLIP without any head or projection on top.""",
- CLIP_START_DOCSTRING,
-)
-class CLIPTextModel(CLIPPreTrainedModel):
- config_class = CLIPTextConfig
-
- _no_split_modules = ["CLIPEncoderLayer"]
-
- def __init__(self, config: CLIPTextConfig):
- super().__init__(config)
- self.text_model = CLIPTextTransformer(config)
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self) -> nn.Module:
- return self.text_model.embeddings.token_embedding
-
- def set_input_embeddings(self, value):
- self.text_model.embeddings.token_embedding = value
-
- @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPTextConfig)
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPooling]:
- r"""
- Returns:
-
- Examples:
-
- ```python
- >>> from transformers import AutoTokenizer, CLIPTextModel
-
- >>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
- >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
-
- >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
-
- >>> outputs = model(**inputs)
- >>> last_hidden_state = outputs.last_hidden_state
- >>> pooled_output = outputs.pooler_output # pooled (EOS token) states
- ```"""
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- return self.text_model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
-
-class CLIPVisionTransformer(nn.Module):
- def __init__(self, config: CLIPVisionConfig):
- super().__init__()
- self.config = config
- embed_dim = config.hidden_size
-
- self.embeddings = CLIPVisionEmbeddings(config)
- self.patch_dropout = PatchDropout(config.force_patch_dropout)
- self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
- self.encoder = CLIPEncoder(config)
- self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
-
- @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPVisionConfig)
- def forward(
- self,
- pixel_values: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPooling]:
- r"""
- Returns:
-
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if pixel_values is None:
- raise ValueError("You have to specify pixel_values")
- # Normalize pixel_values to a batch of frames of shape (B*T, C, H, W)
- if len(pixel_values.shape) == 7:
- b_new, pair_new, T, bs_new, channel_new, h_new, w_new = pixel_values.shape
- B = b_new * pair_new * bs_new
- pixel_values = pixel_values.reshape(B * T, channel_new, h_new, w_new)
- elif len(pixel_values.shape) == 5:
- B, _, T, _, _ = pixel_values.shape
- pixel_values = rearrange(pixel_values, 'b c t h w -> (b t) c h w')
- else:
- B, _, _, _ = pixel_values.shape
- T = 1
- hidden_states = self.embeddings(pixel_values)
-
- hidden_states = self.patch_dropout(hidden_states, B, T) # randomly drop patch tokens per frame during training
-
- hidden_states = self.pre_layrnorm(hidden_states)
-
- encoder_outputs = self.encoder(
- inputs_embeds=hidden_states,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- last_hidden_state = encoder_outputs[0]
- pooled_output = last_hidden_state[:, 0, :]
- pooled_output = self.post_layernorm(pooled_output)
-
- pooled_output = pooled_output.reshape(B, T, -1).mean(1) # average per-frame embeddings over time
-
- if not return_dict:
- return (last_hidden_state, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPooling(
- last_hidden_state=last_hidden_state,
- pooler_output=pooled_output,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- )
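`CLIPVisionTransformer.forward` folds the time axis into the batch before the encoder and averages the per-frame pooled embeddings afterwards. A torch-only sketch of those two reshapes (einops-free; the helper names are mine):

```python
import torch

def flatten_time(pixel_values: torch.Tensor):
    """(B, C, T, H, W) video -> (B*T, C, H, W) frames, as in the forward above."""
    b, c, t, h, w = pixel_values.shape
    frames = pixel_values.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    return frames, b, t

def temporal_mean_pool(pooled: torch.Tensor, b: int, t: int) -> torch.Tensor:
    """(B*T, D) frame embeddings -> (B, D) by averaging over the time axis."""
    return pooled.reshape(b, t, -1).mean(1)

video = torch.randn(2, 3, 4, 8, 8)
frames, b, t = flatten_time(video)      # frames.shape == (8, 3, 8, 8)
emb = temporal_mean_pool(torch.randn(b * t, 16), b, t)  # emb.shape == (2, 16)
```

This keeps the spatial encoder strictly per-frame; all cross-frame interaction happens either in the temporal attention layers or in this final mean pool.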
-
-
-@add_start_docstrings(
- """The vision model from CLIP without any head or projection on top.""",
- CLIP_START_DOCSTRING,
-)
-class CLIPVisionModel(CLIPPreTrainedModel):
- config_class = CLIPVisionConfig
- main_input_name = "pixel_values"
-
- def __init__(self, config: CLIPVisionConfig):
- super().__init__(config)
- self.vision_model = CLIPVisionTransformer(config)
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self) -> nn.Module:
- return self.vision_model.embeddings.patch_embedding
-
- @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPVisionConfig)
- def forward(
- self,
- pixel_values: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPooling]:
- r"""
- Returns:
-
- Examples:
-
- ```python
- >>> from PIL import Image
- >>> import requests
- >>> from transformers import AutoProcessor, CLIPVisionModel
-
- >>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
- >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
-
- >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
- >>> image = Image.open(requests.get(url, stream=True).raw)
-
- >>> inputs = processor(images=image, return_tensors="pt")
-
- >>> outputs = model(**inputs)
- >>> last_hidden_state = outputs.last_hidden_state
- >>> pooled_output = outputs.pooler_output # pooled CLS states
- ```"""
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- return self.vision_model(
- pixel_values=pixel_values,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
-
-@add_start_docstrings(CLIP_START_DOCSTRING)
-class LanguageBindDepth(CLIPPreTrainedModel):
- config_class = LanguageBindDepthConfig
-
- def __init__(self, config: LanguageBindDepthConfig):
- super().__init__(config)
-
- if not isinstance(config.text_config, CLIPTextConfig):
- raise ValueError(
- "config.text_config is expected to be of type CLIPTextConfig but is of type"
- f" {type(config.text_config)}."
- )
-
- if not isinstance(config.vision_config, CLIPVisionConfig):
- raise ValueError(
- "config.vision_config is expected to be of type CLIPVisionConfig but is of type"
- f" {type(config.vision_config)}."
- )
-
- text_config = config.text_config
- vision_config = config.vision_config
- self.add_time_attn = vision_config.add_time_attn
- self.lora_r = vision_config.lora_r
- self.lora_alpha = vision_config.lora_alpha
- self.lora_dropout = vision_config.lora_dropout
-
- self.projection_dim = config.projection_dim
- self.text_embed_dim = text_config.hidden_size
- self.vision_embed_dim = vision_config.hidden_size
-
- self.text_model = CLIPTextTransformer(text_config)
- self.vision_model = CLIPVisionTransformer(vision_config)
-
- self.visual_projection = nn.Linear(self.vision_embed_dim, self.projection_dim, bias=False)
- self.text_projection = nn.Linear(self.text_embed_dim, self.projection_dim, bias=False)
- self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))
-
- # Initialize weights and apply final processing
- self.post_init()
- self.convert_to_lora()
- self.resize_pos(self.vision_model.embeddings, vision_config)
-
- def convert_to_lora(self):
- if self.lora_r == 0:
- return
- if self.add_time_attn:
- target_modules = ["temporal_attn.k_proj", "temporal_attn.v_proj",
- "temporal_attn.q_proj", "temporal_attn.out_proj",
- "temporal_mlp.fc1", "temporal_mlp.fc2"]
- else:
- target_modules = ["k_proj", "v_proj", "q_proj", "out_proj"]
- config = LoraConfig(
- r=self.lora_r, # 16
- lora_alpha=self.lora_alpha, # 16
- target_modules=target_modules, # self_attn.out_proj
- lora_dropout=self.lora_dropout, # 0.1
- bias="none",
- modules_to_save=[],
- )
- self.vision_model.encoder.is_gradient_checkpointing = False
- self.vision_model.encoder = get_peft_model(self.vision_model.encoder, config)
-
- def resize_pos(self, m, vision_config):
- # convert embedding
- if vision_config.num_mel_bins != 0 and vision_config.target_length != 0:
- m.image_size = [vision_config.num_mel_bins, vision_config.target_length]
- m.config.image_size = [m.image_size, m.image_size] if isinstance(m.image_size, int) else m.image_size
- # pos resize
- old_pos_embed_state_dict = m.position_embedding.state_dict()
- old_pos_embed = old_pos_embed_state_dict['weight']
- dtype = old_pos_embed.dtype
- grid_size = [m.config.image_size[0] // m.patch_size, m.config.image_size[1] // m.patch_size]
- extra_tokens = 1 # FIXME: detect different token configs (i.e. no class token, or more)
- new_seq_len = grid_size[0] * grid_size[1] + extra_tokens
- if new_seq_len == old_pos_embed.shape[0]:
- # m.to(args.device)
- return
-
- m.num_patches = grid_size[0] * grid_size[1]
- m.num_positions = m.num_patches + 1
- m.register_buffer("position_ids", torch.arange(m.num_positions).expand((1, -1)))
- new_position_embedding = nn.Embedding(m.num_positions, m.embed_dim)
-
- if extra_tokens:
- pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:]
- else:
- pos_emb_tok, pos_emb_img = None, old_pos_embed
- old_grid_size = [int(math.sqrt(len(pos_emb_img)))] * 2
-
- # if is_master(args):
- # logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size)
- pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2)
- pos_emb_img = F.interpolate(
- pos_emb_img,
- size=grid_size,
- mode='bicubic',
- antialias=True,
- align_corners=False,
- )
- pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0]
- if pos_emb_tok is not None:
- new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0)
- else:
- new_pos_embed = pos_emb_img
- old_pos_embed_state_dict['weight'] = new_pos_embed.to(dtype)
- m.position_embedding = new_position_embedding
- m.position_embedding.load_state_dict(old_pos_embed_state_dict)
-
- # m.to(args.device)
-
- @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING)
- def get_text_features(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> torch.FloatTensor:
- r"""
- Returns:
- text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by
- applying the projection layer to the pooled output of [`CLIPTextModel`].
-
- Examples:
-
- ```python
- >>> from transformers import AutoTokenizer, CLIPModel
-
- >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
- >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
-
- >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
- >>> text_features = model.get_text_features(**inputs)
- ```"""
- # Use CLIP model's config for some fields (if specified) instead of those of vision & text components.
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- text_outputs = self.text_model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooled_output = text_outputs[1]
- text_features = self.text_projection(pooled_output)
-
- return text_features
-
- @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING)
- def get_image_features(
- self,
- pixel_values: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> torch.FloatTensor:
- r"""
- Returns:
- image_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by
- applying the projection layer to the pooled output of [`CLIPVisionModel`].
-
- Examples:
-
- ```python
- >>> from PIL import Image
- >>> import requests
- >>> from transformers import AutoProcessor, CLIPModel
-
- >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
- >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
-
- >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
- >>> image = Image.open(requests.get(url, stream=True).raw)
-
- >>> inputs = processor(images=image, return_tensors="pt")
-
- >>> image_features = model.get_image_features(**inputs)
- ```"""
- # Use CLIP model's config for some fields (if specified) instead of those of vision & text components.
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- vision_outputs = self.vision_model(
- pixel_values=pixel_values,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooled_output = vision_outputs[1] # pooled_output
- image_features = self.visual_projection(pooled_output)
-
- return image_features
-
- @add_start_docstrings_to_model_forward(CLIP_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=CLIPOutput, config_class=LanguageBindDepthConfig)
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- pixel_values: Optional[torch.FloatTensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- return_loss: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CLIPOutput]:
- r"""
- Returns:
-
- Examples:
-
- ```python
- >>> from PIL import Image
- >>> import requests
- >>> from transformers import AutoProcessor, CLIPModel
-
- >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
- >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
-
- >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
- >>> image = Image.open(requests.get(url, stream=True).raw)
-
- >>> inputs = processor(
- ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
- ... )
-
- >>> outputs = model(**inputs)
- >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
- >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
- ```"""
- # Use CLIP model's config for some fields (if specified) instead of those of vision & text components.
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- vision_outputs = self.vision_model(
- pixel_values=pixel_values,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- text_outputs = self.text_model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- image_embeds = vision_outputs[1]
- image_embeds = self.visual_projection(image_embeds)
-
- text_embeds = text_outputs[1]
- text_embeds = self.text_projection(text_embeds)
-
- # normalized features
- image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)
- text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
-
- # cosine similarity as logits
- logit_scale = self.logit_scale.exp()
- logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale
- logits_per_image = logits_per_text.t()
-
- loss = None
- if return_loss:
- loss = clip_loss(logits_per_text)
-
- if not return_dict:
- output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs)
- return ((loss,) + output) if loss is not None else output
-
- return CLIPOutput(
- loss=loss,
- logits_per_image=logits_per_image,
- logits_per_text=logits_per_text,
- text_embeds=text_embeds,
- image_embeds=image_embeds,
- text_model_output=text_outputs,
- vision_model_output=vision_outputs,
- )
\ No newline at end of file
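The contrastive head in the deleted `LanguageBindDepth.forward` above reduces to L2-normalizing both embedding matrices and taking a temperature-scaled dot product. A minimal NumPy sketch of just that computation (NumPy stands in for the deleted PyTorch code; the function name and default scale are illustrative, not part of the original):

```python
import numpy as np

def clip_logits(text_embeds, image_embeds, logit_scale=100.0):
    """Cosine-similarity logits, mirroring the deleted forward() above."""
    # normalized features (L2 norm along the embedding dimension)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    v = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)
    # cosine similarity scaled by the (already exponentiated) temperature
    logits_per_text = logit_scale * (t @ v.T)   # shape (n_text, n_image)
    logits_per_image = logits_per_text.T        # shape (n_image, n_text)
    return logits_per_text, logits_per_image
```

In the real model `logit_scale` is `self.logit_scale.exp()`, a learned parameter; the transpose relationship between the two logit matrices is exactly the `logits_per_image = logits_per_text.t()` line in the diff.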
diff --git a/spaces/LightChen2333/OpenSLU/model/__init__.py b/spaces/LightChen2333/OpenSLU/model/__init__.py
deleted file mode 100644
index 33d939509e11296578019006a5d2e1ea07bf1c1c..0000000000000000000000000000000000000000
--- a/spaces/LightChen2333/OpenSLU/model/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from model.open_slu_model import OpenSLUModel
-
-__all__ = ["OpenSLUModel"]
\ No newline at end of file
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/satrn/satrn_academic.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/satrn/satrn_academic.py
deleted file mode 100644
index 00a664e2093f4b4c5cbf77708813c66761428814..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/satrn/satrn_academic.py
+++ /dev/null
@@ -1,68 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/recog_pipelines/satrn_pipeline.py',
- '../../_base_/recog_datasets/ST_MJ_train.py',
- '../../_base_/recog_datasets/academic_test.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-label_convertor = dict(
- type='AttnConvertor', dict_type='DICT90', with_unknown=True)
-
-model = dict(
- type='SATRN',
- backbone=dict(type='ShallowCNN', input_channels=3, hidden_dim=512),
- encoder=dict(
- type='SatrnEncoder',
- n_layers=12,
- n_head=8,
- d_k=512 // 8,
- d_v=512 // 8,
- d_model=512,
- n_position=100,
- d_inner=512 * 4,
- dropout=0.1),
- decoder=dict(
- type='NRTRDecoder',
- n_layers=6,
- d_embedding=512,
- n_head=8,
- d_model=512,
- d_inner=512 * 4,
- d_k=512 // 8,
- d_v=512 // 8),
- loss=dict(type='TFLoss'),
- label_convertor=label_convertor,
- max_seq_len=25)
-
-# optimizer
-optimizer = dict(type='Adam', lr=3e-4)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(policy='step', step=[3, 4])
-total_epochs = 6
-
-data = dict(
- samples_per_gpu=64,
- workers_per_gpu=4,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/data_utils.py b/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/data_utils.py
deleted file mode 100644
index 4855699d23d5dee36d4a12e875c7465265caac0f..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/data_utils.py
+++ /dev/null
@@ -1,392 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-
-import commons
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-from text import text_to_sequence, cleaned_text_to_sequence
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_and_text)
- self._filter()
-
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- audiopath, text = audiopath_and_text[0], audiopath_and_text[1]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- return (text, spec, wav)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collates a training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths
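The zero-padding loop shared by both collators can be sketched in isolation. A NumPy stand-in for the per-field padding (illustrative only; the real collators also sort items by spectrogram length and pad 2-D spectrogram and waveform tensors the same way):

```python
import numpy as np

def pad_batch(seqs):
    """Right-zero-pad variable-length integer sequences to the batch max length."""
    lengths = np.array([len(s) for s in seqs], dtype=np.int64)
    padded = np.zeros((len(seqs), int(lengths.max())), dtype=np.int64)
    for i, s in enumerate(seqs):
        padded[i, : len(s)] = s  # copy the sequence, leaving trailing zeros as padding
    return padded, lengths
```

Returning the true lengths alongside the padded tensor is what lets downstream masking ignore the padded positions, just as `text_lengths`, `spec_lengths`, and `wav_lengths` do above.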
-
-
-"""Multi speaker version"""
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_sid_text, hparams):
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_sid_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_sid_text_new = []
- lengths = []
- for audiopath, sid, text in self.audiopaths_sid_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_sid_text_new.append([audiopath, sid, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_sid_text = audiopaths_sid_text_new
- self.lengths = lengths
-
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
- # separate filename, speaker_id and text
- audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- sid = self.get_sid(sid)
- return (text, spec, wav, sid)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def __getitem__(self, index):
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
- def __len__(self):
- return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collates a training batch from normalized text, audio and speaker identities
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
- sid = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- sid[i] = row[3]
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> every sample in a batch falls into one group, either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, 0, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i+1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
-
- # subsample
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid+1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/functional/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/functional/__init__.py
deleted file mode 100644
index a8eb83a9d88b25cb8f1faebc9236da929a7722c7..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/functional/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .syncbn import batchnorm2d_sync
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/style.css b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/style.css
deleted file mode 100644
index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
diff --git a/spaces/MarcusSu1216/XingTong/wav_upload.py b/spaces/MarcusSu1216/XingTong/wav_upload.py
deleted file mode 100644
index cac679de78634e638e9a998615406b1c36374fb5..0000000000000000000000000000000000000000
--- a/spaces/MarcusSu1216/XingTong/wav_upload.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from google.colab import files
-import shutil
-import os
-import argparse
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--type", type=str, required=True, help="type of file to upload")
- args = parser.parse_args()
- file_type = args.type
-
- basepath = os.getcwd()
- uploaded = files.upload() # upload files via the Colab picker
- assert file_type in ['zip', 'audio']
- if file_type == "zip":
- upload_path = "./upload/"
- for filename in uploaded.keys():
- # move the uploaded file to the target location
- shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, "userzip.zip"))
- elif file_type == "audio":
- upload_path = "./raw/"
- for filename in uploaded.keys():
- # move the uploaded file to the target location
- shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, filename))
\ No newline at end of file
diff --git a/spaces/MrSinan/LFW-MaskedRecogntion/README.md b/spaces/MrSinan/LFW-MaskedRecogntion/README.md
deleted file mode 100644
index 5f740dbf90ba2439c86c88faef26635a876784b9..0000000000000000000000000000000000000000
--- a/spaces/MrSinan/LFW-MaskedRecogntion/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: LFW Masked Recognition
-emoji: 👥
-python_version: 3.7
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
-license: afl-3.0
-
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/autoencoder_models/__init__.py b/spaces/NCTCMumbai/NCTC/models/research/autoencoder/autoencoder_models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/NewBing520997/bingo/README.md b/spaces/NewBing520997/bingo/README.md
deleted file mode 100644
index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000
--- a/spaces/NewBing520997/bingo/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: bingo
-emoji: 😊
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main features of the New Bing web UI; usable from mainland China, compatible with most Microsoft Bing AI features, and self-hostable.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-Please report issues at https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/speech_recognition/test_data_utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/speech_recognition/test_data_utils.py
deleted file mode 100644
index a72e0b66948da1349d87eafdef4c4004dd535c96..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/speech_recognition/test_data_utils.py
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import unittest
-
-import torch
-from examples.speech_recognition.data import data_utils
-
-
-class DataUtilsTest(unittest.TestCase):
- def test_normalization(self):
- sample_len1 = torch.tensor(
- [
- [
- -0.7661,
- -1.3889,
- -2.0972,
- -0.9134,
- -0.7071,
- -0.9765,
- -0.8700,
- -0.8283,
- 0.7512,
- 1.3211,
- 2.1532,
- 2.1174,
- 1.2800,
- 1.2633,
- 1.6147,
- 1.6322,
- 2.0723,
- 3.1522,
- 3.2852,
- 2.2309,
- 2.5569,
- 2.2183,
- 2.2862,
- 1.5886,
- 0.8773,
- 0.8725,
- 1.2662,
- 0.9899,
- 1.1069,
- 1.3926,
- 1.2795,
- 1.1199,
- 1.1477,
- 1.2687,
- 1.3843,
- 1.1903,
- 0.8355,
- 1.1367,
- 1.2639,
- 1.4707,
- ]
- ]
- )
- out = data_utils.apply_mv_norm(sample_len1)
- assert not torch.isnan(out).any()
- assert (out == sample_len1).all()
diff --git a/spaces/OIUGLK/bingo/src/components/ui/dialog.tsx b/spaces/OIUGLK/bingo/src/components/ui/dialog.tsx
deleted file mode 100644
index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/src/components/ui/dialog.tsx
+++ /dev/null
@@ -1,128 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as DialogPrimitive from '@radix-ui/react-dialog'
-
-import { cn } from '@/lib/utils'
-import { IconClose } from '@/components/ui/icons'
-
-const Dialog = DialogPrimitive.Root
-
-const DialogTrigger = DialogPrimitive.Trigger
-
-const DialogPortal = ({
- className,
- children,
- ...props
-}: DialogPrimitive.DialogPortalProps) => (
-
-
- {children}
-
-
-)
-DialogPortal.displayName = DialogPrimitive.Portal.displayName
-
-const DialogOverlay = React.forwardRef<
-  React.ElementRef<typeof DialogPrimitive.Overlay>,
-  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay>
->(({ className, ...props }, ref) => (
-
-))
-DialogOverlay.displayName = DialogPrimitive.Overlay.displayName
-
-const DialogContent = React.forwardRef<
-  React.ElementRef<typeof DialogPrimitive.Content>,
-  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content>
->(({ className, children, ...props }, ref) => (
-
-
-
- {children}
-
-
- Close
-
-
-
-))
-DialogContent.displayName = DialogPrimitive.Content.displayName
-
-const DialogHeader = ({
- className,
- ...props
-}: React.HTMLAttributes<HTMLDivElement>) => (
-
-)
-DialogHeader.displayName = 'DialogHeader'
-
-const DialogFooter = ({
- className,
- ...props
-}: React.HTMLAttributes<HTMLDivElement>) => (
-
-)
-DialogFooter.displayName = 'DialogFooter'
-
-const DialogTitle = React.forwardRef<
-  React.ElementRef<typeof DialogPrimitive.Title>,
-  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Title>
->(({ className, ...props }, ref) => (
-
-))
-DialogTitle.displayName = DialogPrimitive.Title.displayName
-
-const DialogDescription = React.forwardRef<
-  React.ElementRef<typeof DialogPrimitive.Description>,
-  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Description>
->(({ className, ...props }, ref) => (
-
-))
-DialogDescription.displayName = DialogPrimitive.Description.displayName
-
-export {
- Dialog,
- DialogTrigger,
- DialogContent,
- DialogHeader,
- DialogFooter,
- DialogTitle,
- DialogDescription
-}
diff --git a/spaces/OllieWallie/Openai/README.md b/spaces/OllieWallie/Openai/README.md
deleted file mode 100644
index 3d35af466dda5c6d4973a62b130d542b504995a7..0000000000000000000000000000000000000000
--- a/spaces/OllieWallie/Openai/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Openai
-emoji: 📚
-colorFrom: indigo
-colorTo: red
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/auto_pipeline.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/auto_pipeline.py
deleted file mode 100644
index 0a1c3fcdb332d5fe9b44aed8a800c0e01d144471..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/auto_pipeline.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-"""Return a pipeline automatically based on its name.
-"""
-
-from lmflow.pipeline.evaluator import Evaluator
-from lmflow.pipeline.finetuner import Finetuner
-from lmflow.pipeline.inferencer import Inferencer
-from lmflow.pipeline.raft_aligner import RaftAligner
-
-
-PIPELINE_MAPPING = {
- "evaluator": Evaluator,
- "finetuner": Finetuner,
- "inferencer": Inferencer,
- "raft_aligner": RaftAligner,
-}
-
-
-class AutoPipeline:
- """
- The class designed to return a pipeline automatically based on its name.
- """
- @classmethod
-    def get_pipeline(cls,
- pipeline_name,
- model_args,
- data_args,
- pipeline_args,
- *args,
- **kwargs
- ):
- if pipeline_name not in PIPELINE_MAPPING:
- raise NotImplementedError(
- f'Pipeline "{pipeline_name}" is not supported'
- )
-
- pipeline = PIPELINE_MAPPING[pipeline_name](
- model_args,
- data_args,
- pipeline_args,
- *args,
- **kwargs
- )
- return pipeline
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/utils/__init__.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OverSky/mio-amadeus/README.md b/spaces/OverSky/mio-amadeus/README.md
deleted file mode 100644
index 1ca4e24788d895c6591091ee5a6e3322ad174136..0000000000000000000000000000000000000000
--- a/spaces/OverSky/mio-amadeus/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mio Amadeus
-emoji: 😻
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.16.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Palplatine/artefact_memes/app.py b/spaces/Palplatine/artefact_memes/app.py
deleted file mode 100644
index 7157bed4d5d39cbc9a7ed289599eeffac1485bbf..0000000000000000000000000000000000000000
--- a/spaces/Palplatine/artefact_memes/app.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import pandas as pd
-import numpy as np
-import matplotlib.pyplot as plt
-import seaborn as sns
-from sentence_transformers import SentenceTransformer
-from tensorflow.keras.models import model_from_json
-from tensorflow.keras.optimizers import Adam
-import os
-import cv2
-from PIL import Image
-import streamlit as st
-
-
-#####################################################################################################################################
-st.set_page_config(layout='wide')
-
-# Sidebar: Facebook logo + main info on text
-with st.sidebar:
- col1, col2, col3 = st.columns(3)
- with col2:
- logo_facebook = Image.open('static/logo_facebook.png')
- st.image(logo_facebook)
-
- # Checkboxes
- hateful = st.checkbox('Check to see top hateful words used')
-
- if hateful:
- # Loading some hateful text data
- df_hate = pd.read_csv('static/data_hate.csv')
-
- number_chosen_hate = st.number_input('How many top hateful words do you want to see?', value=5)
- df_chosen_hate = df_hate.iloc[:number_chosen_hate, :]
-
- st.write(f'{number_chosen_hate} most used words in the hateful vocabulary:')
- st.dataframe(df_chosen_hate)
-
- non_hateful = st.checkbox('Check to see top non-hateful words used')
-
- if non_hateful:
- # Loading some non-hateful text data
- df_no_hate = pd.read_csv('static/data_no_hate.csv')
-
- number_chosen = st.number_input('How many top non-hateful words do you want to see?', value=5)
- df_chosen = df_no_hate.iloc[:number_chosen, :]
-
-        st.write(f'{number_chosen} most used words in the non-hateful vocabulary:')
- st.dataframe(df_chosen)
-
-
-#####################################################################################################################################
-st.title('Facebook: Hateful Memes recognition')
-st.write("---")
-
-# Image selection
-img_filepath = 'static/images_streamlit'
-list_images = sorted([img for img in os.listdir(img_filepath)])
-
-st.subheader('Some examples of hateful and non-hateful memes:')
-with st.expander('Want to see some memes?'):
-
- selected_image = st.select_slider('Select a meme to show it', options = [list_images[i] for i in range(10)], value=(list_images[0]))
-
- col1, col2, col3 = st.columns(3)
-
- with col2:
- st.image(f'{img_filepath}/{selected_image}')
-
-st.write("---")
-#####################################################################################################################################
-
-# Hateful test
-st.subheader('Is a word in our hateful vocabulary or not?')
-with st.expander('Hateful? Non-hateful?'):
-
- word = st.text_input('Write a word to test it', 'like')
- word_lower = word.lower()
-
- # Need to reload them in case it was not done in the sidebar
- df_hate = pd.read_csv('static/data_hate.csv')
- df_no_hate = pd.read_csv('static/data_no_hate.csv')
-
- try:
- if word_lower not in df_hate['word'].values:
- st.write(f'"{word}" is not in our hateful vocabulary.')
- else:
- appeared_hate = df_hate[df_hate['word'] == word_lower]['count'].values[0]
- st.write(f'"{word}" is in our hateful vocabulary, it appears {appeared_hate} times.')
-
- if word_lower not in df_no_hate['word'].values:
-            st.write(f'"{word}" is not in our non-hateful vocabulary.')
- else:
- appeared_no_hate = df_no_hate[df_no_hate['word'] == word_lower]['count'].values[0]
- st.write(f'"{word}" is in our non-hateful vocabulary, it appears {appeared_no_hate} times.')
-
- st.write(f'Ratio hateful vs non-hateful: {round(appeared_hate/appeared_no_hate, 2)}.')
-
-    except NameError:
-        # Reached when the word is missing from one of the vocabularies,
-        # so one of the counts above was never assigned.
-        st.write(f'"{word}" is not in either our hateful or non-hateful vocabulary.')
-
-st.write("---")
-
-#####################################################################################################################################
-
-# Slider to choose how many words we want to see and plot
-st.subheader('Barplot of top selected words:')
-with st.expander('Select to choose how many top words you want to see and their count'):
-
- option = st.selectbox('Which vocabulary to select?', ('Hateful vocabulary', 'Non-hateful vocabulary', 'Both vocabularies'))
- st.write('You selected', option)
-
- if option == 'Hateful vocabulary':
-
- df_hate_subset = df_hate[df_hate.iloc[:, 1] >= 20]
-
- start_word, end_word = st.select_slider(
- 'Select a range of top words',
- options=[x for x in range(1, df_hate_subset.shape[0]+1)],
- value=(1, 10))
-
- df_slider_hate = df_hate_subset.iloc[start_word-1:end_word, :]
-
- fig, ax = plt.subplots()
- bars = plt.barh(y=df_slider_hate['word'], width=df_slider_hate['count'], color=['darkmagenta', 'darkblue', 'darkgreen', 'darkred', 'darkgrey', 'darkorange'])
-
- ax.bar_label(bars)
- ax = plt.gca().invert_yaxis()
-
- st.subheader('Selected words hateful vocabulary:')
- st.pyplot(fig)
-
- elif option == 'Non-hateful vocabulary':
-
- df_no_hate_subset = df_no_hate[df_no_hate.iloc[:, 1] >= 30]
-
- start_word, end_word = st.select_slider(
- 'Select a range of top words',
- options=[x for x in range(1, df_no_hate_subset.shape[0]+1)],
- value=(1, 10))
-
- df_slider_no_hate = df_no_hate_subset.iloc[start_word-1:end_word, :]
-
- fig, ax = plt.subplots()
- bars = plt.barh(y=df_slider_no_hate['word'], width=df_slider_no_hate['count'], color=['darkmagenta', 'darkblue', 'darkgreen', 'darkred', 'darkgrey', 'darkorange'])
-
- ax.bar_label(bars)
- ax = plt.gca().invert_yaxis()
-
- st.subheader('Selected words non-hateful vocabulary:')
- st.pyplot(fig)
-
- else:
-
- df_top = pd.read_csv('./static/data_top.csv')
-
- start_word, end_word = st.select_slider(
- 'Select a range of top words',
- options=[x for x in range(1, df_top.shape[0]+1)],
- value=(1, 10))
-
- df_slider = df_top.iloc[start_word-1:end_word, :]
-
- fig, ax = plt.subplots()
- bars = plt.barh(y=df_slider['word'], width=df_slider['count'], color=['darkmagenta', 'darkblue', 'darkgreen', 'darkred', 'darkgrey', 'darkorange'])
-
- ax.bar_label(bars)
- ax = plt.gca().invert_yaxis()
-
- st.subheader('Selected words (hateful & non-hateful vocabularies):')
- st.pyplot(fig)
-
-
-st.write("---")
-
-#####################################################################################################################################
-
-# Grad Cam?
-st.write('Grad Cam if it works')
-
-
-
-
-st.write("---")
-#####################################################################################################################################
-
-# Testing some sentences
-st.subheader('Testing some sentences if you dare:')
-with st.expander('Input a sentence and check the probability of it being hateful:'):
-
- # Some input
- model_nlp = SentenceTransformer('all-mpnet-base-v2')
- sentence = st.text_input('Write a sentence to test it.', "Hopefully I don't write some hateful content.")
-
- # Encoding
- preprocessed_sentence = model_nlp.encode(sentence)
- preprocessed_sentence = preprocessed_sentence.reshape(1, -1)
-
- # load json and create model
- json_file = open('static/model_nlp/model_nlp.json', 'r')
- loaded_model_json = json_file.read()
- json_file.close()
- loaded_model = model_from_json(loaded_model_json)
- # load weights into new model
- loaded_model.load_weights("static/model_nlp/model_nlp.h5")
-
- # loaded_model.compile(optimizer=Adam(learning_rate=0.005), loss='binary_crossentropy', metrics=['AUC', 'accuracy'])
- y_pred = loaded_model.predict(preprocessed_sentence)
- percentage = y_pred[0][0] * 100
-
-    st.write(f'Probability of being hateful: {round(percentage, 2)}%')
- if y_pred[0][0] < 0.5:
- st.write(f"Congrats, it's not hateful!!!")
- else:
- st.write(f"Shame on you, it's hateful!!!")
-
-st.write("---")
-#####################################################################################################################################
-col1, col2, col3, col4, col5 = st.columns(5)
-with col5:
- logo_artefact = Image.open('static/logo_artefact.png')
- st.image(logo_artefact)
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/music-functions.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/music-functions.go
deleted file mode 100644
index 1fdf7eab321367d018fbf73ba939acf52088ab16..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/music-functions.go and /dev/null differ
diff --git a/spaces/Pentameric/DalleClone/html2canvas.js b/spaces/Pentameric/DalleClone/html2canvas.js
deleted file mode 100644
index dd1606d8698aae0ed4877058d6a218fda3a515cd..0000000000000000000000000000000000000000
--- a/spaces/Pentameric/DalleClone/html2canvas.js
+++ /dev/null
@@ -1,7756 +0,0 @@
-/*!
- * html2canvas 1.4.1
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
-(function (global, factory) {
- typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
- typeof define === 'function' && define.amd ? define(factory) :
- (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory());
-}(this, (function () { 'use strict';
-
- /*! *****************************************************************************
- Copyright (c) Microsoft Corporation.
-
- Permission to use, copy, modify, and/or distribute this software for any
- purpose with or without fee is hereby granted.
-
- THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
- REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
- AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
- INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
- LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
- OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
- PERFORMANCE OF THIS SOFTWARE.
- ***************************************************************************** */
- /* global Reflect, Promise */
-
- var extendStatics = function(d, b) {
- extendStatics = Object.setPrototypeOf ||
- ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||
- function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };
- return extendStatics(d, b);
- };
-
- function __extends(d, b) {
- if (typeof b !== "function" && b !== null)
- throw new TypeError("Class extends value " + String(b) + " is not a constructor or null");
- extendStatics(d, b);
- function __() { this.constructor = d; }
- d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());
- }
-
- var __assign = function() {
- __assign = Object.assign || function __assign(t) {
- for (var s, i = 1, n = arguments.length; i < n; i++) {
- s = arguments[i];
- for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];
- }
- return t;
- };
- return __assign.apply(this, arguments);
- };
-
- function __awaiter(thisArg, _arguments, P, generator) {
- function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
- return new (P || (P = Promise))(function (resolve, reject) {
- function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
- function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
- function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
- step((generator = generator.apply(thisArg, _arguments || [])).next());
- });
- }
-
- function __generator(thisArg, body) {
- var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;
- return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g;
- function verb(n) { return function (v) { return step([n, v]); }; }
- function step(op) {
- if (f) throw new TypeError("Generator is already executing.");
- while (_) try {
- if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;
- if (y = 0, t) op = [op[0] & 2, t.value];
- switch (op[0]) {
- case 0: case 1: t = op; break;
- case 4: _.label++; return { value: op[1], done: false };
- case 5: _.label++; y = op[1]; op = [0]; continue;
- case 7: op = _.ops.pop(); _.trys.pop(); continue;
- default:
- if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }
- if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }
- if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }
- if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }
- if (t[2]) _.ops.pop();
- _.trys.pop(); continue;
- }
- op = body.call(thisArg, _);
- } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }
- if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };
- }
- }
-
- function __spreadArray(to, from, pack) {
- if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {
- if (ar || !(i in from)) {
- if (!ar) ar = Array.prototype.slice.call(from, 0, i);
- ar[i] = from[i];
- }
- }
- return to.concat(ar || from);
- }
-
- var Bounds = /** @class */ (function () {
- function Bounds(left, top, width, height) {
- this.left = left;
- this.top = top;
- this.width = width;
- this.height = height;
- }
- Bounds.prototype.add = function (x, y, w, h) {
- return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h);
- };
- Bounds.fromClientRect = function (context, clientRect) {
- return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height);
- };
- Bounds.fromDOMRectList = function (context, domRectList) {
- var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; });
- return domRect
- ? new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height)
- : Bounds.EMPTY;
- };
- Bounds.EMPTY = new Bounds(0, 0, 0, 0);
- return Bounds;
- }());
- var parseBounds = function (context, node) {
- return Bounds.fromClientRect(context, node.getBoundingClientRect());
- };
- var parseDocumentSize = function (document) {
- var body = document.body;
- var documentElement = document.documentElement;
- if (!body || !documentElement) {
- throw new Error("Unable to get document size");
- }
- var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth));
- var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight));
- return new Bounds(0, 0, width, height);
- };
-
- /*
- * css-line-break 2.1.0
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var toCodePoints$1 = function (str) {
- var codePoints = [];
- var i = 0;
- var length = str.length;
- while (i < length) {
- var value = str.charCodeAt(i++);
- if (value >= 0xd800 && value <= 0xdbff && i < length) {
- var extra = str.charCodeAt(i++);
- if ((extra & 0xfc00) === 0xdc00) {
- codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000);
- }
- else {
- codePoints.push(value);
- i--;
- }
- }
- else {
- codePoints.push(value);
- }
- }
- return codePoints;
- };
- var fromCodePoint$1 = function () {
- var codePoints = [];
- for (var _i = 0; _i < arguments.length; _i++) {
- codePoints[_i] = arguments[_i];
- }
- if (String.fromCodePoint) {
- return String.fromCodePoint.apply(String, codePoints);
- }
- var length = codePoints.length;
- if (!length) {
- return '';
- }
- var codeUnits = [];
- var index = -1;
- var result = '';
- while (++index < length) {
- var codePoint = codePoints[index];
- if (codePoint <= 0xffff) {
- codeUnits.push(codePoint);
- }
- else {
- codePoint -= 0x10000;
- codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00);
- }
- if (index + 1 === length || codeUnits.length > 0x4000) {
- result += String.fromCharCode.apply(String, codeUnits);
- codeUnits.length = 0;
- }
- }
- return result;
- };
- var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$2 = 0; i$2 < chars$2.length; i$2++) {
- lookup$2[chars$2.charCodeAt(i$2)] = i$2;
- }
-
- /*
- * utrie 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) {
- lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1;
- }
- var decode$1 = function (base64) {
- var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4;
- if (base64[base64.length - 1] === '=') {
- bufferLength--;
- if (base64[base64.length - 2] === '=') {
- bufferLength--;
- }
- }
- var buffer = typeof ArrayBuffer !== 'undefined' &&
- typeof Uint8Array !== 'undefined' &&
- typeof Uint8Array.prototype.slice !== 'undefined'
- ? new ArrayBuffer(bufferLength)
- : new Array(bufferLength);
- var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer);
- for (i = 0; i < len; i += 4) {
- encoded1 = lookup$1$1[base64.charCodeAt(i)];
- encoded2 = lookup$1$1[base64.charCodeAt(i + 1)];
- encoded3 = lookup$1$1[base64.charCodeAt(i + 2)];
- encoded4 = lookup$1$1[base64.charCodeAt(i + 3)];
- bytes[p++] = (encoded1 << 2) | (encoded2 >> 4);
- bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2);
- bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63);
- }
- return buffer;
- };
- var polyUint16Array$1 = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 2) {
- bytes.push((buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
- var polyUint32Array$1 = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 4) {
- bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
-
- /** Shift size for getting the index-2 table offset. */
- var UTRIE2_SHIFT_2$1 = 5;
- /** Shift size for getting the index-1 table offset. */
- var UTRIE2_SHIFT_1$1 = 6 + 5;
- /**
- * Shift size for shifting left the index array values.
- * Increases possible data size with 16-bit index values at the cost
- * of compactability.
- * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY.
- */
- var UTRIE2_INDEX_SHIFT$1 = 2;
- /**
- * Difference between the two shift sizes,
- * for getting an index-1 offset from an index-2 offset. 6=11-5
- */
- var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1;
- /**
- * The part of the index-2 table for U+D800..U+DBFF stores values for
- * lead surrogate code _units_ not code _points_.
- * Values for lead surrogate code _points_ are indexed with this portion of the table.
- * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.)
- */
- var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1;
- /** Number of entries in a data block. 32=0x20 */
- var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1;
- /** Mask for getting the lower bits for the in-data-block offset. */
- var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1;
- var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1;
- /** Count the lengths of both BMP pieces. 2080=0x820 */
- var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1;
- /**
- * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820.
- * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2.
- */
- var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1;
- var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */
- /**
- * The index-1 table, only used for supplementary code points, at offset 2112=0x840.
- * Variable length, for code points up to highStart, where the last single-value range starts.
- * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1.
- * (For 0x100000 supplementary code points U+10000..U+10ffff.)
- *
- * The part of the index-2 table for supplementary code points starts
- * after this index-1 table.
- *
- * Both the index-1 table and the following part of the index-2 table
- * are omitted completely if there is only BMP data.
- */
- var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1;
- /**
- * Number of index-1 entries for the BMP. 32=0x20
- * This part of the index-1 table is omitted from the serialized form.
- */
- var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1;
- /** Number of entries in an index-2 block. 64=0x40 */
- var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1;
- /** Mask for getting the lower bits for the in-index-2-block offset. */
- var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1;
- var slice16$1 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint16Array(Array.prototype.slice.call(view, start, end));
- };
- var slice32$1 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint32Array(Array.prototype.slice.call(view, start, end));
- };
- var createTrieFromBase64$1 = function (base64, _byteLength) {
- var buffer = decode$1(base64);
- var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer);
- var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer);
- var headerLength = 24;
- var index = slice16$1(view16, headerLength / 2, view32[4] / 2);
- var data = view32[5] === 2
- ? slice16$1(view16, (headerLength + view32[4]) / 2)
- : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4));
- return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data);
- };
- var Trie$1 = /** @class */ (function () {
- function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) {
- this.initialValue = initialValue;
- this.errorValue = errorValue;
- this.highStart = highStart;
- this.highValueIndex = highValueIndex;
- this.index = index;
- this.data = data;
- }
- /**
- * Get the value for a code point as stored in the Trie.
- *
- * @param codePoint the code point
- * @return the value
- */
- Trie.prototype.get = function (codePoint) {
- var ix;
- if (codePoint >= 0) {
- if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) {
- // Ordinary BMP code point, excluding leading surrogates.
- // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index.
- // 16 bit data is stored in the index array itself.
- ix = this.index[codePoint >> UTRIE2_SHIFT_2$1];
- ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1);
- return this.data[ix];
- }
- if (codePoint <= 0xffff) {
- // Lead Surrogate Code Point. A Separate index section is stored for
- // lead surrogate code units and code points.
- // The main index has the code unit data.
- // For this function, we need the code point data.
- // Note: this expression could be refactored for slightly improved efficiency, but
- // surrogate code points will be so rare in practice that it's not worth it.
- ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)];
- ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1);
- return this.data[ix];
- }
- if (codePoint < this.highStart) {
- // Supplemental code point, use two-level lookup.
- ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1);
- ix = this.index[ix];
- ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1;
- ix = this.index[ix];
- ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1);
- return this.data[ix];
- }
- if (codePoint <= 0x10ffff) {
- return this.data[this.highValueIndex];
- }
- }
- // Fall through. The code point is outside of the legal range of 0..0x10ffff.
- return this.errorValue;
- };
- return Trie;
- }());
-
- /*
- * base64-arraybuffer 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$3 = 0; i$3 < chars$3.length; i$3++) {
- lookup$3[chars$3.charCodeAt(i$3)] = i$3;
- }
-
- var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBy
AAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAA0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAF
AAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAF
wAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAC
sAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAF
cAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAA
QABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAFAAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlAC
UAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlAC
AAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA==';
-
- var LETTER_NUMBER_MODIFIER = 50;
- // Non-tailorable Line Breaking Classes
- var BK = 1; // Cause a line break (after)
- var CR$1 = 2; // Cause a line break (after), except between CR and LF
- var LF$1 = 3; // Cause a line break (after)
- var CM = 4; // Prohibit a line break between the character and the preceding character
- var NL = 5; // Cause a line break (after)
- var WJ = 7; // Prohibit line breaks before and after
- var ZW = 8; // Provide a break opportunity
- var GL = 9; // Prohibit line breaks before and after
- var SP = 10; // Enable indirect line breaks
- var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences
- // Break Opportunities
- var B2 = 12; // Provide a line break opportunity before and after the character
- var BA = 13; // Generally provide a line break opportunity after the character
- var BB = 14; // Generally provide a line break opportunity before the character
- var HY = 15; // Provide a line break opportunity after the character, except in numeric context
- var CB = 16; // Provide a line break opportunity contingent on additional information
- // Characters Prohibiting Certain Breaks
- var CL = 17; // Prohibit line breaks before
- var CP = 18; // Prohibit line breaks before
- var EX = 19; // Prohibit line breaks before
- var IN = 20; // Allow only indirect line breaks between pairs
- var NS = 21; // Allow only indirect line breaks before
- var OP = 22; // Prohibit line breaks after
- var QU = 23; // Act like they are both opening and closing
- // Numeric Context
- var IS = 24; // Prevent breaks after any and before numeric
- var NU = 25; // Form numeric expressions for line breaking purposes
- var PO = 26; // Do not break following a numeric expression
- var PR = 27; // Do not break in front of a numeric expression
- var SY = 28; // Prevent a break before; and allow a break after
- // Other Characters
- var AI = 29; // Act like AL when the resolved EAW is N; otherwise act as ID
- var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters
- var CJ = 31; // Treat as NS or ID for strict or normal breaking.
- var EB = 32; // Do not break from following Emoji Modifier
- var EM = 33; // Do not break from preceding Emoji Base
- var H2 = 34; // Form Korean syllable blocks
- var H3 = 35; // Form Korean syllable blocks
- var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic
- var ID = 37; // Break before or after; except in some numeric context
- var JL = 38; // Form Korean syllable blocks
- var JV = 39; // Form Korean syllable blocks
- var JT = 40; // Form Korean syllable blocks
- var RI$1 = 41; // Keep pairs together; for pairs, break before and after other classes
- var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis
- var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions
- var ea_OP = [0x2329, 0xff08];
- var BREAK_MANDATORY = '!';
- var BREAK_NOT_ALLOWED$1 = '×';
- var BREAK_ALLOWED$1 = '÷';
- var UnicodeTrie$1 = createTrieFromBase64$1(base64$1);
- var ALPHABETICS = [AL, HL];
- var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL];
- var SPACE$1 = [SP, ZW];
- var PREFIX_POSTFIX = [PR, PO];
- var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1);
- var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3];
- var HYPHEN = [HY, BA];
- var codePointsToCharacterClasses = function (codePoints, lineBreak) {
- if (lineBreak === void 0) { lineBreak = 'strict'; }
- var types = [];
- var indices = [];
- var categories = [];
- codePoints.forEach(function (codePoint, index) {
- var classType = UnicodeTrie$1.get(codePoint);
- if (classType > LETTER_NUMBER_MODIFIER) {
- categories.push(true);
- classType -= LETTER_NUMBER_MODIFIER;
- }
- else {
- categories.push(false);
- }
- if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) {
- // ‐ U+2010, – U+2013, 〜 U+301C, ゠ U+30A0
- if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) {
- indices.push(index);
- return types.push(CB);
- }
- }
- if (classType === CM || classType === ZWJ$1) {
- // LB10 Treat any remaining combining mark or ZWJ as AL.
- if (index === 0) {
- indices.push(index);
- return types.push(AL);
- }
- // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of
- // the base character in all of the following rules. Treat ZWJ as if it were CM.
- var prev = types[index - 1];
- if (LINE_BREAKS.indexOf(prev) === -1) {
- indices.push(indices[index - 1]);
- return types.push(prev);
- }
- indices.push(index);
- return types.push(AL);
- }
- indices.push(index);
- if (classType === CJ) {
- return types.push(lineBreak === 'strict' ? NS : ID);
- }
- if (classType === SA) {
- return types.push(AL);
- }
- if (classType === AI) {
- return types.push(AL);
- }
- // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL
- // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised
- // to take into account the actual line breaking properties for these characters.
- if (classType === XX) {
- if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) {
- return types.push(ID);
- }
- else {
- return types.push(AL);
- }
- }
- types.push(classType);
- });
- return [indices, types, categories];
- };
- var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) {
- var current = classTypes[currentIndex];
- if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) {
- var i = currentIndex;
- while (i <= classTypes.length) {
- i++;
- var next = classTypes[i];
- if (next === b) {
- return true;
- }
- if (next !== SP) {
- break;
- }
- }
- }
- if (current === SP) {
- var i = currentIndex;
- while (i > 0) {
- i--;
- var prev = classTypes[i];
- if (Array.isArray(a) ? a.indexOf(prev) !== -1 : a === prev) {
- var n = currentIndex;
- while (n <= classTypes.length) {
- n++;
- var next = classTypes[n];
- if (next === b) {
- return true;
- }
- if (next !== SP) {
- break;
- }
- }
- }
- if (prev !== SP) {
- break;
- }
- }
- }
- return false;
- };
- var previousNonSpaceClassType = function (currentIndex, classTypes) {
- var i = currentIndex;
- while (i >= 0) {
- var type = classTypes[i];
- if (type === SP) {
- i--;
- }
- else {
- return type;
- }
- }
- return 0;
- };
- var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) {
- if (indicies[index] === 0) {
- return BREAK_NOT_ALLOWED$1;
- }
- var currentIndex = index - 1;
- if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) {
- return BREAK_NOT_ALLOWED$1;
- }
- var beforeIndex = currentIndex - 1;
- var afterIndex = currentIndex + 1;
- var current = classTypes[currentIndex];
- // LB4 Always break after hard line breaks.
- // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks.
- var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0;
- var next = classTypes[afterIndex];
- if (current === CR$1 && next === LF$1) {
- return BREAK_NOT_ALLOWED$1;
- }
- if (HARD_LINE_BREAKS.indexOf(current) !== -1) {
- return BREAK_MANDATORY;
- }
- // LB6 Do not break before hard line breaks.
- if (HARD_LINE_BREAKS.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB7 Do not break before spaces or zero width space.
- if (SPACE$1.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB8 Break before any character following a zero-width space, even if one or more spaces intervene.
- if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) {
- return BREAK_ALLOWED$1;
- }
- // LB8a Do not break after a zero width joiner.
- if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // zwj emojis
- if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB11 Do not break before or after Word joiner and related characters.
- if (current === WJ || next === WJ) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB12 Do not break after NBSP and related characters.
- if (current === GL) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB12a Do not break before NBSP and related characters, except after spaces and hyphens.
- if ([SP, BA, HY].indexOf(current) === -1 && next === GL) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces.
- if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB14 Do not break after ‘[’, even after spaces.
- if (previousNonSpaceClassType(currentIndex, classTypes) === OP) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB15 Do not break within ‘”[’, even with intervening spaces.
- if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces.
- if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB17 Do not break within ‘——’, even with intervening spaces.
- if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB18 Break after spaces.
- if (current === SP) {
- return BREAK_ALLOWED$1;
- }
- // LB19 Do not break before or after quotation marks, such as ‘ ” ’.
- if (current === QU || next === QU) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB20 Break before and after unresolved CB.
- if (next === CB || current === CB) {
- return BREAK_ALLOWED$1;
- }
- // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents.
- if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB21a Don't break after Hebrew + Hyphen.
- if (before === HL && HYPHEN.indexOf(current) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB21b Don’t break between Solidus and Hebrew letters.
- if (current === SY && next === HL) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB22 Do not break before ellipsis.
- if (next === IN) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB23 Do not break between digits and letters.
- if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes.
- if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) ||
- ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix.
- if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) ||
- (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB25 Do not break between the following pairs of classes relevant to numbers:
- if (
- // (PR | PO) × ( OP | HY )? NU
- ([PR, PO].indexOf(current) !== -1 &&
- (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) ||
- // ( OP | HY ) × NU
- ([OP, HY].indexOf(current) !== -1 && next === NU) ||
- // NU × (NU | SY | IS)
- (current === NU && [NU, SY, IS].indexOf(next) !== -1)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP)
- if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) {
- var prevIndex = currentIndex;
- while (prevIndex >= 0) {
- var type = classTypes[prevIndex];
- if (type === NU) {
- return BREAK_NOT_ALLOWED$1;
- }
- else if ([SY, IS].indexOf(type) !== -1) {
- prevIndex--;
- }
- else {
- break;
- }
- }
- }
- // NU (NU | SY | IS)* (CL | CP)? × (PO | PR))
- if ([PR, PO].indexOf(next) !== -1) {
- var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex;
- while (prevIndex >= 0) {
- var type = classTypes[prevIndex];
- if (type === NU) {
- return BREAK_NOT_ALLOWED$1;
- }
- else if ([SY, IS].indexOf(type) !== -1) {
- prevIndex--;
- }
- else {
- break;
- }
- }
- }
- // LB26 Do not break a Korean syllable.
- if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) ||
- ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) ||
- ([JT, H3].indexOf(current) !== -1 && next === JT)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB27 Treat a Korean Syllable Block the same as ID.
- if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) ||
- (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB28 Do not break between alphabetics (“at”).
- if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”).
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses.
- if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 &&
- next === OP &&
- ea_OP.indexOf(codePoints[afterIndex]) === -1) ||
- (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB30a Break between two regional indicator symbols if and only if there are an even number of regional
- // indicators preceding the position of the break.
- if (current === RI$1 && next === RI$1) {
- var i = indicies[currentIndex];
- var count = 1;
- while (i > 0) {
- i--;
- if (classTypes[i] === RI$1) {
- count++;
- }
- else {
- break;
- }
- }
- if (count % 2 !== 0) {
- return BREAK_NOT_ALLOWED$1;
- }
- }
- // LB30b Do not break between an emoji base and an emoji modifier.
- if (current === EB && next === EM) {
- return BREAK_NOT_ALLOWED$1;
- }
- return BREAK_ALLOWED$1;
- };
- var cssFormattedClasses = function (codePoints, options) {
- if (!options) {
- options = { lineBreak: 'normal', wordBreak: 'normal' };
- }
- var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2];
- if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') {
- classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); });
- }
- var forbiddenBreakpoints = options.wordBreak === 'keep-all'
- ? isLetterNumber.map(function (letterNumber, i) {
- return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff;
- })
- : undefined;
- return [indicies, classTypes, forbiddenBreakpoints];
- };
- var Break = /** @class */ (function () {
- function Break(codePoints, lineBreak, start, end) {
- this.codePoints = codePoints;
- this.required = lineBreak === BREAK_MANDATORY;
- this.start = start;
- this.end = end;
- }
- Break.prototype.slice = function () {
- return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end));
- };
- return Break;
- }());
- var LineBreaker = function (str, options) {
- var codePoints = toCodePoints$1(str);
- var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2];
- var length = codePoints.length;
- var lastEnd = 0;
- var nextIndex = 0;
- return {
- next: function () {
- if (nextIndex >= length) {
- return { done: true, value: null };
- }
- var lineBreak = BREAK_NOT_ALLOWED$1;
- while (nextIndex < length &&
- (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) ===
- BREAK_NOT_ALLOWED$1) { }
- if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) {
- var value = new Break(codePoints, lineBreak, lastEnd, nextIndex);
- lastEnd = nextIndex;
- return { value: value, done: false };
- }
- return { done: true, value: null };
- },
- };
- };
-
- // https://www.w3.org/TR/css-syntax-3
- var FLAG_UNRESTRICTED = 1 << 0;
- var FLAG_ID = 1 << 1;
- var FLAG_INTEGER = 1 << 2;
- var FLAG_NUMBER = 1 << 3;
- var LINE_FEED = 0x000a;
- var SOLIDUS = 0x002f;
- var REVERSE_SOLIDUS = 0x005c;
- var CHARACTER_TABULATION = 0x0009;
- var SPACE = 0x0020;
- var QUOTATION_MARK = 0x0022;
- var EQUALS_SIGN = 0x003d;
- var NUMBER_SIGN = 0x0023;
- var DOLLAR_SIGN = 0x0024;
- var PERCENTAGE_SIGN = 0x0025;
- var APOSTROPHE = 0x0027;
- var LEFT_PARENTHESIS = 0x0028;
- var RIGHT_PARENTHESIS = 0x0029;
- var LOW_LINE = 0x005f;
- var HYPHEN_MINUS = 0x002d;
- var EXCLAMATION_MARK = 0x0021;
- var LESS_THAN_SIGN = 0x003c;
- var GREATER_THAN_SIGN = 0x003e;
- var COMMERCIAL_AT = 0x0040;
- var LEFT_SQUARE_BRACKET = 0x005b;
- var RIGHT_SQUARE_BRACKET = 0x005d;
- var CIRCUMFLEX_ACCENT = 0x005e;
- var LEFT_CURLY_BRACKET = 0x007b;
- var QUESTION_MARK = 0x003f;
- var RIGHT_CURLY_BRACKET = 0x007d;
- var VERTICAL_LINE = 0x007c;
- var TILDE = 0x007e;
- var CONTROL = 0x0080;
- var REPLACEMENT_CHARACTER = 0xfffd;
- var ASTERISK = 0x002a;
- var PLUS_SIGN = 0x002b;
- var COMMA = 0x002c;
- var COLON = 0x003a;
- var SEMICOLON = 0x003b;
- var FULL_STOP = 0x002e;
- var NULL = 0x0000;
- var BACKSPACE = 0x0008;
- var LINE_TABULATION = 0x000b;
- var SHIFT_OUT = 0x000e;
- var INFORMATION_SEPARATOR_ONE = 0x001f;
- var DELETE = 0x007f;
- var EOF = -1;
- var ZERO = 0x0030;
- var a = 0x0061;
- var e = 0x0065;
- var f = 0x0066;
- var u = 0x0075;
- var z = 0x007a;
- var A = 0x0041;
- var E = 0x0045;
- var F = 0x0046;
- var U = 0x0055;
- var Z = 0x005a;
- var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; };
- var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; };
- var isHex = function (codePoint) {
- return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f);
- };
- var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; };
- var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; };
- var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); };
- var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; };
- var isWhiteSpace = function (codePoint) {
- return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE;
- };
- var isNameStartCodePoint = function (codePoint) {
- return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE;
- };
- var isNameCodePoint = function (codePoint) {
- return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS;
- };
- var isNonPrintableCodePoint = function (codePoint) {
- return ((codePoint >= NULL && codePoint <= BACKSPACE) ||
- codePoint === LINE_TABULATION ||
- (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) ||
- codePoint === DELETE);
- };
- var isValidEscape = function (c1, c2) {
- if (c1 !== REVERSE_SOLIDUS) {
- return false;
- }
- return c2 !== LINE_FEED;
- };
- var isIdentifierStart = function (c1, c2, c3) {
- if (c1 === HYPHEN_MINUS) {
- return isNameStartCodePoint(c2) || isValidEscape(c2, c3);
- }
- else if (isNameStartCodePoint(c1)) {
- return true;
- }
- else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) {
- return true;
- }
- return false;
- };
- var isNumberStart = function (c1, c2, c3) {
- if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) {
- if (isDigit(c2)) {
- return true;
- }
- return c2 === FULL_STOP && isDigit(c3);
- }
- if (c1 === FULL_STOP) {
- return isDigit(c2);
- }
- return isDigit(c1);
- };
- var stringToNumber = function (codePoints) {
- var c = 0;
- var sign = 1;
- if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) {
- if (codePoints[c] === HYPHEN_MINUS) {
- sign = -1;
- }
- c++;
- }
- var integers = [];
- while (isDigit(codePoints[c])) {
- integers.push(codePoints[c++]);
- }
- var int = integers.length ? parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0;
- if (codePoints[c] === FULL_STOP) {
- c++;
- }
- var fraction = [];
- while (isDigit(codePoints[c])) {
- fraction.push(codePoints[c++]);
- }
- var fracd = fraction.length;
- var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0;
- if (codePoints[c] === E || codePoints[c] === e) {
- c++;
- }
- var expsign = 1;
- if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) {
- if (codePoints[c] === HYPHEN_MINUS) {
- expsign = -1;
- }
- c++;
- }
- var exponent = [];
- while (isDigit(codePoints[c])) {
- exponent.push(codePoints[c++]);
- }
- var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0;
- return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp);
- };
- var LEFT_PARENTHESIS_TOKEN = {
- type: 2 /* LEFT_PARENTHESIS_TOKEN */
- };
- var RIGHT_PARENTHESIS_TOKEN = {
- type: 3 /* RIGHT_PARENTHESIS_TOKEN */
- };
- var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ };
- var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ };
- var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ };
- var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ };
- var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ };
- var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ };
- var LEFT_CURLY_BRACKET_TOKEN = {
- type: 11 /* LEFT_CURLY_BRACKET_TOKEN */
- };
- var RIGHT_CURLY_BRACKET_TOKEN = {
- type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */
- };
- var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ };
- var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ };
- var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ };
- var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ };
- var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ };
- var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ };
- var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ };
- var LEFT_SQUARE_BRACKET_TOKEN = {
- type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */
- };
- var RIGHT_SQUARE_BRACKET_TOKEN = {
- type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */
- };
- var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ };
- var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ };
- var Tokenizer = /** @class */ (function () {
- function Tokenizer() {
- this._value = [];
- }
- Tokenizer.prototype.write = function (chunk) {
- this._value = this._value.concat(toCodePoints$1(chunk));
- };
- Tokenizer.prototype.read = function () {
- var tokens = [];
- var token = this.consumeToken();
- while (token !== EOF_TOKEN) {
- tokens.push(token);
- token = this.consumeToken();
- }
- return tokens;
- };
- Tokenizer.prototype.consumeToken = function () {
- var codePoint = this.consumeCodePoint();
- switch (codePoint) {
- case QUOTATION_MARK:
- return this.consumeStringToken(QUOTATION_MARK);
- case NUMBER_SIGN:
- var c1 = this.peekCodePoint(0);
- var c2 = this.peekCodePoint(1);
- var c3 = this.peekCodePoint(2);
- if (isNameCodePoint(c1) || isValidEscape(c2, c3)) {
- var flags = isIdentifierStart(c1, c2, c3) ? FLAG_ID : FLAG_UNRESTRICTED;
- var value = this.consumeName();
- return { type: 5 /* HASH_TOKEN */, value: value, flags: flags };
- }
- break;
- case DOLLAR_SIGN:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return SUFFIX_MATCH_TOKEN;
- }
- break;
- case APOSTROPHE:
- return this.consumeStringToken(APOSTROPHE);
- case LEFT_PARENTHESIS:
- return LEFT_PARENTHESIS_TOKEN;
- case RIGHT_PARENTHESIS:
- return RIGHT_PARENTHESIS_TOKEN;
- case ASTERISK:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return SUBSTRING_MATCH_TOKEN;
- }
- break;
- case PLUS_SIGN:
- if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- break;
- case COMMA:
- return COMMA_TOKEN;
- case HYPHEN_MINUS:
- var e1 = codePoint;
- var e2 = this.peekCodePoint(0);
- var e3 = this.peekCodePoint(1);
- if (isNumberStart(e1, e2, e3)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- if (isIdentifierStart(e1, e2, e3)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- }
- if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) {
- this.consumeCodePoint();
- this.consumeCodePoint();
- return CDC_TOKEN;
- }
- break;
- case FULL_STOP:
- if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- break;
- case SOLIDUS:
- if (this.peekCodePoint(0) === ASTERISK) {
- this.consumeCodePoint();
- while (true) {
- var c = this.consumeCodePoint();
- if (c === ASTERISK) {
- c = this.consumeCodePoint();
- if (c === SOLIDUS) {
- return this.consumeToken();
- }
- }
- if (c === EOF) {
- return this.consumeToken();
- }
- }
- }
- break;
- case COLON:
- return COLON_TOKEN;
- case SEMICOLON:
- return SEMICOLON_TOKEN;
- case LESS_THAN_SIGN:
- if (this.peekCodePoint(0) === EXCLAMATION_MARK &&
- this.peekCodePoint(1) === HYPHEN_MINUS &&
- this.peekCodePoint(2) === HYPHEN_MINUS) {
- this.consumeCodePoint();
- this.consumeCodePoint();
- this.consumeCodePoint();
- return CDO_TOKEN;
- }
- break;
- case COMMERCIAL_AT:
- var a1 = this.peekCodePoint(0);
- var a2 = this.peekCodePoint(1);
- var a3 = this.peekCodePoint(2);
- if (isIdentifierStart(a1, a2, a3)) {
- var value = this.consumeName();
- return { type: 7 /* AT_KEYWORD_TOKEN */, value: value };
- }
- break;
- case LEFT_SQUARE_BRACKET:
- return LEFT_SQUARE_BRACKET_TOKEN;
- case REVERSE_SOLIDUS:
- if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- }
- break;
- case RIGHT_SQUARE_BRACKET:
- return RIGHT_SQUARE_BRACKET_TOKEN;
- case CIRCUMFLEX_ACCENT:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return PREFIX_MATCH_TOKEN;
- }
- break;
- case LEFT_CURLY_BRACKET:
- return LEFT_CURLY_BRACKET_TOKEN;
- case RIGHT_CURLY_BRACKET:
- return RIGHT_CURLY_BRACKET_TOKEN;
- case u:
- case U:
- var u1 = this.peekCodePoint(0);
- var u2 = this.peekCodePoint(1);
- if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) {
- this.consumeCodePoint();
- return this.consumeUnicodeRangeToken();
- }
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- case VERTICAL_LINE:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return DASH_MATCH_TOKEN;
- }
- if (this.peekCodePoint(0) === VERTICAL_LINE) {
- this.consumeCodePoint();
- return COLUMN_TOKEN;
- }
- break;
- case TILDE:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return INCLUDE_MATCH_TOKEN;
- }
- break;
- case EOF:
- return EOF_TOKEN;
- }
- if (isWhiteSpace(codePoint)) {
- this.consumeWhiteSpace();
- return WHITESPACE_TOKEN;
- }
- if (isDigit(codePoint)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- if (isNameStartCodePoint(codePoint)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- }
- return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) };
- };
- Tokenizer.prototype.consumeCodePoint = function () {
- var value = this._value.shift();
- return typeof value === 'undefined' ? -1 : value;
- };
- Tokenizer.prototype.reconsumeCodePoint = function (codePoint) {
- this._value.unshift(codePoint);
- };
- Tokenizer.prototype.peekCodePoint = function (delta) {
- if (delta >= this._value.length) {
- return -1;
- }
- return this._value[delta];
- };
- Tokenizer.prototype.consumeUnicodeRangeToken = function () {
- var digits = [];
- var codePoint = this.consumeCodePoint();
- while (isHex(codePoint) && digits.length < 6) {
- digits.push(codePoint);
- codePoint = this.consumeCodePoint();
- }
- var questionMarks = false;
- while (codePoint === QUESTION_MARK && digits.length < 6) {
- digits.push(codePoint);
- codePoint = this.consumeCodePoint();
- questionMarks = true;
- }
- if (questionMarks) {
- var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16);
- var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16);
- return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end };
- }
- var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16);
- if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) {
- this.consumeCodePoint();
- codePoint = this.consumeCodePoint();
- var endDigits = [];
- while (isHex(codePoint) && endDigits.length < 6) {
- endDigits.push(codePoint);
- codePoint = this.consumeCodePoint();
- }
- var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16);
- return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end };
- }
- else {
- return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start };
- }
- };
- Tokenizer.prototype.consumeIdentLikeToken = function () {
- var value = this.consumeName();
- if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) {
- this.consumeCodePoint();
- return this.consumeUrlToken();
- }
- else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) {
- this.consumeCodePoint();
- return { type: 19 /* FUNCTION_TOKEN */, value: value };
- }
- return { type: 20 /* IDENT_TOKEN */, value: value };
- };
- Tokenizer.prototype.consumeUrlToken = function () {
- var value = [];
- this.consumeWhiteSpace();
- if (this.peekCodePoint(0) === EOF) {
- return { type: 22 /* URL_TOKEN */, value: '' };
- }
- var next = this.peekCodePoint(0);
- if (next === APOSTROPHE || next === QUOTATION_MARK) {
- var stringToken = this.consumeStringToken(this.consumeCodePoint());
- if (stringToken.type === 0 /* STRING_TOKEN */) {
- this.consumeWhiteSpace();
- if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) {
- this.consumeCodePoint();
- return { type: 22 /* URL_TOKEN */, value: stringToken.value };
- }
- }
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- while (true) {
- var codePoint = this.consumeCodePoint();
- if (codePoint === EOF || codePoint === RIGHT_PARENTHESIS) {
- return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) };
- }
- else if (isWhiteSpace(codePoint)) {
- this.consumeWhiteSpace();
- if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) {
- this.consumeCodePoint();
- return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) };
- }
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- else if (codePoint === QUOTATION_MARK ||
- codePoint === APOSTROPHE ||
- codePoint === LEFT_PARENTHESIS ||
- isNonPrintableCodePoint(codePoint)) {
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- else if (codePoint === REVERSE_SOLIDUS) {
- if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- value.push(this.consumeEscapedCodePoint());
- }
- else {
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- }
- else {
- value.push(codePoint);
- }
- }
- };
- Tokenizer.prototype.consumeWhiteSpace = function () {
- while (isWhiteSpace(this.peekCodePoint(0))) {
- this.consumeCodePoint();
- }
- };
- Tokenizer.prototype.consumeBadUrlRemnants = function () {
- while (true) {
- var codePoint = this.consumeCodePoint();
- if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) {
- return;
- }
- if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- this.consumeEscapedCodePoint();
- }
- }
- };
- Tokenizer.prototype.consumeStringSlice = function (count) {
- var SLICE_STACK_SIZE = 50000;
- var value = '';
- while (count > 0) {
- var amount = Math.min(SLICE_STACK_SIZE, count);
- value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount));
- count -= amount;
- }
- this._value.shift();
- return value;
- };
- Tokenizer.prototype.consumeStringToken = function (endingCodePoint) {
- var value = '';
- var i = 0;
- do {
- var codePoint = this._value[i];
- if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) {
- value += this.consumeStringSlice(i);
- return { type: 0 /* STRING_TOKEN */, value: value };
- }
- if (codePoint === LINE_FEED) {
- this._value.splice(0, i);
- return BAD_STRING_TOKEN;
- }
- if (codePoint === REVERSE_SOLIDUS) {
- var next = this._value[i + 1];
- if (next !== EOF && next !== undefined) {
- if (next === LINE_FEED) {
- value += this.consumeStringSlice(i);
- i = -1;
- this._value.shift();
- }
- else if (isValidEscape(codePoint, next)) {
- value += this.consumeStringSlice(i);
- value += fromCodePoint$1(this.consumeEscapedCodePoint());
- i = -1;
- }
- }
- }
- i++;
- } while (true);
- };
- Tokenizer.prototype.consumeNumber = function () {
- var repr = [];
- var type = FLAG_INTEGER;
- var c1 = this.peekCodePoint(0);
- if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) {
- repr.push(this.consumeCodePoint());
- }
- while (isDigit(this.peekCodePoint(0))) {
- repr.push(this.consumeCodePoint());
- }
- c1 = this.peekCodePoint(0);
- var c2 = this.peekCodePoint(1);
- if (c1 === FULL_STOP && isDigit(c2)) {
- repr.push(this.consumeCodePoint(), this.consumeCodePoint());
- type = FLAG_NUMBER;
- while (isDigit(this.peekCodePoint(0))) {
- repr.push(this.consumeCodePoint());
- }
- }
- c1 = this.peekCodePoint(0);
- c2 = this.peekCodePoint(1);
- var c3 = this.peekCodePoint(2);
- if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) {
- repr.push(this.consumeCodePoint(), this.consumeCodePoint());
- type = FLAG_NUMBER;
- while (isDigit(this.peekCodePoint(0))) {
- repr.push(this.consumeCodePoint());
- }
- }
- return [stringToNumber(repr), type];
- };
- Tokenizer.prototype.consumeNumericToken = function () {
- var _a = this.consumeNumber(), number = _a[0], flags = _a[1];
- var c1 = this.peekCodePoint(0);
- var c2 = this.peekCodePoint(1);
- var c3 = this.peekCodePoint(2);
- if (isIdentifierStart(c1, c2, c3)) {
- var unit = this.consumeName();
- return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit };
- }
- if (c1 === PERCENTAGE_SIGN) {
- this.consumeCodePoint();
- return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags };
- }
- return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags };
- };
- Tokenizer.prototype.consumeEscapedCodePoint = function () {
- var codePoint = this.consumeCodePoint();
- if (isHex(codePoint)) {
- var hex = fromCodePoint$1(codePoint);
- while (isHex(this.peekCodePoint(0)) && hex.length < 6) {
- hex += fromCodePoint$1(this.consumeCodePoint());
- }
- if (isWhiteSpace(this.peekCodePoint(0))) {
- this.consumeCodePoint();
- }
- var hexCodePoint = parseInt(hex, 16);
- if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) {
- return REPLACEMENT_CHARACTER;
- }
- return hexCodePoint;
- }
- if (codePoint === EOF) {
- return REPLACEMENT_CHARACTER;
- }
- return codePoint;
- };
- Tokenizer.prototype.consumeName = function () {
- var result = '';
- while (true) {
- var codePoint = this.consumeCodePoint();
- if (isNameCodePoint(codePoint)) {
- result += fromCodePoint$1(codePoint);
- }
- else if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- result += fromCodePoint$1(this.consumeEscapedCodePoint());
- }
- else {
- this.reconsumeCodePoint(codePoint);
- return result;
- }
- }
- };
- return Tokenizer;
- }());
-
- var Parser = /** @class */ (function () {
- function Parser(tokens) {
- this._tokens = tokens;
- }
- Parser.create = function (value) {
- var tokenizer = new Tokenizer();
- tokenizer.write(value);
- return new Parser(tokenizer.read());
- };
- Parser.parseValue = function (value) {
- return Parser.create(value).parseComponentValue();
- };
- Parser.parseValues = function (value) {
- return Parser.create(value).parseComponentValues();
- };
- Parser.prototype.parseComponentValue = function () {
- var token = this.consumeToken();
- while (token.type === 31 /* WHITESPACE_TOKEN */) {
- token = this.consumeToken();
- }
- if (token.type === 32 /* EOF_TOKEN */) {
- throw new SyntaxError("Error parsing CSS component value, unexpected EOF");
- }
- this.reconsumeToken(token);
- var value = this.consumeComponentValue();
- do {
- token = this.consumeToken();
- } while (token.type === 31 /* WHITESPACE_TOKEN */);
- if (token.type === 32 /* EOF_TOKEN */) {
- return value;
- }
- throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one");
- };
- Parser.prototype.parseComponentValues = function () {
- var values = [];
- while (true) {
- var value = this.consumeComponentValue();
- if (value.type === 32 /* EOF_TOKEN */) {
- return values;
- }
- values.push(value);
- }
- };
- Parser.prototype.consumeComponentValue = function () {
- var token = this.consumeToken();
- switch (token.type) {
- case 11 /* LEFT_CURLY_BRACKET_TOKEN */:
- case 28 /* LEFT_SQUARE_BRACKET_TOKEN */:
- case 2 /* LEFT_PARENTHESIS_TOKEN */:
- return this.consumeSimpleBlock(token.type);
- case 19 /* FUNCTION_TOKEN */:
- return this.consumeFunction(token);
- }
- return token;
- };
- Parser.prototype.consumeSimpleBlock = function (type) {
- var block = { type: type, values: [] };
- var token = this.consumeToken();
- while (true) {
- if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) {
- return block;
- }
- this.reconsumeToken(token);
- block.values.push(this.consumeComponentValue());
- token = this.consumeToken();
- }
- };
- Parser.prototype.consumeFunction = function (functionToken) {
- var cssFunction = {
- name: functionToken.value,
- values: [],
- type: 18 /* FUNCTION */
- };
- while (true) {
- var token = this.consumeToken();
- if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) {
- return cssFunction;
- }
- this.reconsumeToken(token);
- cssFunction.values.push(this.consumeComponentValue());
- }
- };
- Parser.prototype.consumeToken = function () {
- var token = this._tokens.shift();
- return typeof token === 'undefined' ? EOF_TOKEN : token;
- };
- Parser.prototype.reconsumeToken = function (token) {
- this._tokens.unshift(token);
- };
- return Parser;
- }());
- var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; };
- var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; };
- var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; };
- var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; };
- var isIdentWithValue = function (token, value) {
- return isIdentToken(token) && token.value === value;
- };
- var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; };
- var nonFunctionArgSeparator = function (token) {
- return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */;
- };
- var parseFunctionArgs = function (tokens) {
- var args = [];
- var arg = [];
- tokens.forEach(function (token) {
- if (token.type === 4 /* COMMA_TOKEN */) {
- if (arg.length === 0) {
- throw new Error("Error parsing function args, zero tokens for arg");
- }
- args.push(arg);
- arg = [];
- return;
- }
- if (token.type !== 31 /* WHITESPACE_TOKEN */) {
- arg.push(token);
- }
- });
- if (arg.length) {
- args.push(arg);
- }
- return args;
- };
- var isEndingTokenFor = function (token, type) {
- if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) {
- return true;
- }
- if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) {
- return true;
- }
- return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */;
- };
-
- var isLength = function (token) {
- return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */;
- };
-
- var isLengthPercentage = function (token) {
- return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token);
- };
- var parseLengthPercentageTuple = function (tokens) {
- return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]];
- };
- var ZERO_LENGTH = {
- type: 17 /* NUMBER_TOKEN */,
- number: 0,
- flags: FLAG_INTEGER
- };
- var FIFTY_PERCENT = {
- type: 16 /* PERCENTAGE_TOKEN */,
- number: 50,
- flags: FLAG_INTEGER
- };
- var HUNDRED_PERCENT = {
- type: 16 /* PERCENTAGE_TOKEN */,
- number: 100,
- flags: FLAG_INTEGER
- };
- var getAbsoluteValueForTuple = function (tuple, width, height) {
- var x = tuple[0], y = tuple[1];
- return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? y : x, height)];
- };
- var getAbsoluteValue = function (token, parent) {
- if (token.type === 16 /* PERCENTAGE_TOKEN */) {
- return (token.number / 100) * parent;
- }
- if (isDimensionToken(token)) {
- switch (token.unit) {
- case 'rem':
- case 'em':
- return 16 * token.number; // TODO use correct font-size
- case 'px':
- default:
- return token.number;
- }
- }
- return token.number;
- };
-
- var DEG = 'deg';
- var GRAD = 'grad';
- var RAD = 'rad';
- var TURN = 'turn';
- var angle = {
- name: 'angle',
- parse: function (_context, value) {
- if (value.type === 15 /* DIMENSION_TOKEN */) {
- switch (value.unit) {
- case DEG:
- return (Math.PI * value.number) / 180;
- case GRAD:
- return (Math.PI / 200) * value.number;
- case RAD:
- return value.number;
- case TURN:
- return Math.PI * 2 * value.number;
- }
- }
- throw new Error("Unsupported angle type");
- }
- };
- var isAngle = function (value) {
- if (value.type === 15 /* DIMENSION_TOKEN */) {
- if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) {
- return true;
- }
- }
- return false;
- };
- var parseNamedSide = function (tokens) {
- var sideOrCorner = tokens
- .filter(isIdentToken)
- .map(function (ident) { return ident.value; })
- .join(' ');
- switch (sideOrCorner) {
- case 'to bottom right':
- case 'to right bottom':
- case 'left top':
- case 'top left':
- return [ZERO_LENGTH, ZERO_LENGTH];
- case 'to top':
- case 'bottom':
- return deg(0);
- case 'to bottom left':
- case 'to left bottom':
- case 'right top':
- case 'top right':
- return [ZERO_LENGTH, HUNDRED_PERCENT];
- case 'to right':
- case 'left':
- return deg(90);
- case 'to top left':
- case 'to left top':
- case 'right bottom':
- case 'bottom right':
- return [HUNDRED_PERCENT, HUNDRED_PERCENT];
- case 'to bottom':
- case 'top':
- return deg(180);
- case 'to top right':
- case 'to right top':
- case 'left bottom':
- case 'bottom left':
- return [HUNDRED_PERCENT, ZERO_LENGTH];
- case 'to left':
- case 'right':
- return deg(270);
- }
- return 0;
- };
- var deg = function (deg) { return (Math.PI * deg) / 180; };
-
- var color$1 = {
- name: 'color',
- parse: function (context, value) {
- if (value.type === 18 /* FUNCTION */) {
- var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name];
- if (typeof colorFunction === 'undefined') {
- throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\"");
- }
- return colorFunction(context, value.values);
- }
- if (value.type === 5 /* HASH_TOKEN */) {
- if (value.value.length === 3) {
- var r = value.value.substring(0, 1);
- var g = value.value.substring(1, 2);
- var b = value.value.substring(2, 3);
- return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1);
- }
- if (value.value.length === 4) {
- var r = value.value.substring(0, 1);
- var g = value.value.substring(1, 2);
- var b = value.value.substring(2, 3);
- var a = value.value.substring(3, 4);
- return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255);
- }
- if (value.value.length === 6) {
- var r = value.value.substring(0, 2);
- var g = value.value.substring(2, 4);
- var b = value.value.substring(4, 6);
- return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1);
- }
- if (value.value.length === 8) {
- var r = value.value.substring(0, 2);
- var g = value.value.substring(2, 4);
- var b = value.value.substring(4, 6);
- var a = value.value.substring(6, 8);
- return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255);
- }
- }
- if (value.type === 20 /* IDENT_TOKEN */) {
- var namedColor = COLORS[value.value.toUpperCase()];
- if (typeof namedColor !== 'undefined') {
- return namedColor;
- }
- }
- return COLORS.TRANSPARENT;
- }
- };
- var isTransparent = function (color) { return (0xff & color) === 0; };
- var asString = function (color) {
- var alpha = 0xff & color;
- var blue = 0xff & (color >> 8);
- var green = 0xff & (color >> 16);
- var red = 0xff & (color >> 24);
- return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")";
- };
- var pack = function (r, g, b, a) {
- return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0;
- };
- var getTokenColorValue = function (token, i) {
- if (token.type === 17 /* NUMBER_TOKEN */) {
- return token.number;
- }
- if (token.type === 16 /* PERCENTAGE_TOKEN */) {
- var max = i === 3 ? 1 : 255;
- return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max);
- }
- return 0;
- };
- var rgb = function (_context, args) {
- var tokens = args.filter(nonFunctionArgSeparator);
- if (tokens.length === 3) {
- var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2];
- return pack(r, g, b, 1);
- }
- if (tokens.length === 4) {
- var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3];
- return pack(r, g, b, a);
- }
- return 0;
- };
- function hue2rgb(t1, t2, hue) {
- if (hue < 0) {
- hue += 1;
- }
- if (hue >= 1) {
- hue -= 1;
- }
- if (hue < 1 / 6) {
- return (t2 - t1) * hue * 6 + t1;
- }
- else if (hue < 1 / 2) {
- return t2;
- }
- else if (hue < 2 / 3) {
- return (t2 - t1) * 6 * (2 / 3 - hue) + t1;
- }
- else {
- return t1;
- }
- }
- var hsl = function (context, args) {
- var tokens = args.filter(nonFunctionArgSeparator);
- var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3];
- var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2);
- var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0;
- var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0;
- var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1;
- if (s === 0) {
- return pack(l * 255, l * 255, l * 255, 1);
- }
- var t2 = l <= 0.5 ? l * (s + 1) : l + s - l * s;
- var t1 = l * 2 - t2;
- var r = hue2rgb(t1, t2, h + 1 / 3);
- var g = hue2rgb(t1, t2, h);
- var b = hue2rgb(t1, t2, h - 1 / 3);
- return pack(r * 255, g * 255, b * 255, a);
- };
- var SUPPORTED_COLOR_FUNCTIONS = {
- hsl: hsl,
- hsla: hsl,
- rgb: rgb,
- rgba: rgb
- };
- var parseColor = function (context, value) {
- return color$1.parse(context, Parser.create(value).parseComponentValue());
- };
- var COLORS = {
- ALICEBLUE: 0xf0f8ffff,
- ANTIQUEWHITE: 0xfaebd7ff,
- AQUA: 0x00ffffff,
- AQUAMARINE: 0x7fffd4ff,
- AZURE: 0xf0ffffff,
- BEIGE: 0xf5f5dcff,
- BISQUE: 0xffe4c4ff,
- BLACK: 0x000000ff,
- BLANCHEDALMOND: 0xffebcdff,
- BLUE: 0x0000ffff,
- BLUEVIOLET: 0x8a2be2ff,
- BROWN: 0xa52a2aff,
- BURLYWOOD: 0xdeb887ff,
- CADETBLUE: 0x5f9ea0ff,
- CHARTREUSE: 0x7fff00ff,
- CHOCOLATE: 0xd2691eff,
- CORAL: 0xff7f50ff,
- CORNFLOWERBLUE: 0x6495edff,
- CORNSILK: 0xfff8dcff,
- CRIMSON: 0xdc143cff,
- CYAN: 0x00ffffff,
- DARKBLUE: 0x00008bff,
- DARKCYAN: 0x008b8bff,
- DARKGOLDENROD: 0xb8860bff,
- DARKGRAY: 0xa9a9a9ff,
- DARKGREEN: 0x006400ff,
- DARKGREY: 0xa9a9a9ff,
- DARKKHAKI: 0xbdb76bff,
- DARKMAGENTA: 0x8b008bff,
- DARKOLIVEGREEN: 0x556b2fff,
- DARKORANGE: 0xff8c00ff,
- DARKORCHID: 0x9932ccff,
- DARKRED: 0x8b0000ff,
- DARKSALMON: 0xe9967aff,
- DARKSEAGREEN: 0x8fbc8fff,
- DARKSLATEBLUE: 0x483d8bff,
- DARKSLATEGRAY: 0x2f4f4fff,
- DARKSLATEGREY: 0x2f4f4fff,
- DARKTURQUOISE: 0x00ced1ff,
- DARKVIOLET: 0x9400d3ff,
- DEEPPINK: 0xff1493ff,
- DEEPSKYBLUE: 0x00bfffff,
- DIMGRAY: 0x696969ff,
- DIMGREY: 0x696969ff,
- DODGERBLUE: 0x1e90ffff,
- FIREBRICK: 0xb22222ff,
- FLORALWHITE: 0xfffaf0ff,
- FORESTGREEN: 0x228b22ff,
- FUCHSIA: 0xff00ffff,
- GAINSBORO: 0xdcdcdcff,
- GHOSTWHITE: 0xf8f8ffff,
- GOLD: 0xffd700ff,
- GOLDENROD: 0xdaa520ff,
- GRAY: 0x808080ff,
- GREEN: 0x008000ff,
- GREENYELLOW: 0xadff2fff,
- GREY: 0x808080ff,
- HONEYDEW: 0xf0fff0ff,
- HOTPINK: 0xff69b4ff,
- INDIANRED: 0xcd5c5cff,
- INDIGO: 0x4b0082ff,
- IVORY: 0xfffff0ff,
- KHAKI: 0xf0e68cff,
- LAVENDER: 0xe6e6faff,
- LAVENDERBLUSH: 0xfff0f5ff,
- LAWNGREEN: 0x7cfc00ff,
- LEMONCHIFFON: 0xfffacdff,
- LIGHTBLUE: 0xadd8e6ff,
- LIGHTCORAL: 0xf08080ff,
- LIGHTCYAN: 0xe0ffffff,
- LIGHTGOLDENRODYELLOW: 0xfafad2ff,
- LIGHTGRAY: 0xd3d3d3ff,
- LIGHTGREEN: 0x90ee90ff,
- LIGHTGREY: 0xd3d3d3ff,
- LIGHTPINK: 0xffb6c1ff,
- LIGHTSALMON: 0xffa07aff,
- LIGHTSEAGREEN: 0x20b2aaff,
- LIGHTSKYBLUE: 0x87cefaff,
- LIGHTSLATEGRAY: 0x778899ff,
- LIGHTSLATEGREY: 0x778899ff,
- LIGHTSTEELBLUE: 0xb0c4deff,
- LIGHTYELLOW: 0xffffe0ff,
- LIME: 0x00ff00ff,
- LIMEGREEN: 0x32cd32ff,
- LINEN: 0xfaf0e6ff,
- MAGENTA: 0xff00ffff,
- MAROON: 0x800000ff,
- MEDIUMAQUAMARINE: 0x66cdaaff,
- MEDIUMBLUE: 0x0000cdff,
- MEDIUMORCHID: 0xba55d3ff,
- MEDIUMPURPLE: 0x9370dbff,
- MEDIUMSEAGREEN: 0x3cb371ff,
- MEDIUMSLATEBLUE: 0x7b68eeff,
- MEDIUMSPRINGGREEN: 0x00fa9aff,
- MEDIUMTURQUOISE: 0x48d1ccff,
- MEDIUMVIOLETRED: 0xc71585ff,
- MIDNIGHTBLUE: 0x191970ff,
- MINTCREAM: 0xf5fffaff,
- MISTYROSE: 0xffe4e1ff,
- MOCCASIN: 0xffe4b5ff,
- NAVAJOWHITE: 0xffdeadff,
- NAVY: 0x000080ff,
- OLDLACE: 0xfdf5e6ff,
- OLIVE: 0x808000ff,
- OLIVEDRAB: 0x6b8e23ff,
- ORANGE: 0xffa500ff,
- ORANGERED: 0xff4500ff,
- ORCHID: 0xda70d6ff,
- PALEGOLDENROD: 0xeee8aaff,
- PALEGREEN: 0x98fb98ff,
- PALETURQUOISE: 0xafeeeeff,
- PALEVIOLETRED: 0xdb7093ff,
- PAPAYAWHIP: 0xffefd5ff,
- PEACHPUFF: 0xffdab9ff,
- PERU: 0xcd853fff,
- PINK: 0xffc0cbff,
- PLUM: 0xdda0ddff,
- POWDERBLUE: 0xb0e0e6ff,
- PURPLE: 0x800080ff,
- REBECCAPURPLE: 0x663399ff,
- RED: 0xff0000ff,
- ROSYBROWN: 0xbc8f8fff,
- ROYALBLUE: 0x4169e1ff,
- SADDLEBROWN: 0x8b4513ff,
- SALMON: 0xfa8072ff,
- SANDYBROWN: 0xf4a460ff,
- SEAGREEN: 0x2e8b57ff,
- SEASHELL: 0xfff5eeff,
- SIENNA: 0xa0522dff,
- SILVER: 0xc0c0c0ff,
- SKYBLUE: 0x87ceebff,
- SLATEBLUE: 0x6a5acdff,
- SLATEGRAY: 0x708090ff,
- SLATEGREY: 0x708090ff,
- SNOW: 0xfffafaff,
- SPRINGGREEN: 0x00ff7fff,
- STEELBLUE: 0x4682b4ff,
- TAN: 0xd2b48cff,
- TEAL: 0x008080ff,
- THISTLE: 0xd8bfd8ff,
- TOMATO: 0xff6347ff,
- TRANSPARENT: 0x00000000,
- TURQUOISE: 0x40e0d0ff,
- VIOLET: 0xee82eeff,
- WHEAT: 0xf5deb3ff,
- WHITE: 0xffffffff,
- WHITESMOKE: 0xf5f5f5ff,
- YELLOW: 0xffff00ff,
- YELLOWGREEN: 0x9acd32ff
- };
-
- var backgroundClip = {
- name: 'background-clip',
- initialValue: 'border-box',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.map(function (token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'padding-box':
- return 1 /* PADDING_BOX */;
- case 'content-box':
- return 2 /* CONTENT_BOX */;
- }
- }
- return 0 /* BORDER_BOX */;
- });
- }
- };
-
- var backgroundColor = {
- name: "background-color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var parseColorStop = function (context, args) {
- var color = color$1.parse(context, args[0]);
- var stop = args[1];
- return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null };
- };
- var processColorStops = function (stops, lineLength) {
- var first = stops[0];
- var last = stops[stops.length - 1];
- if (first.stop === null) {
- first.stop = ZERO_LENGTH;
- }
- if (last.stop === null) {
- last.stop = HUNDRED_PERCENT;
- }
- var processStops = [];
- var previous = 0;
- for (var i = 0; i < stops.length; i++) {
- var stop_1 = stops[i].stop;
- if (stop_1 !== null) {
- var absoluteValue = getAbsoluteValue(stop_1, lineLength);
- if (absoluteValue > previous) {
- processStops.push(absoluteValue);
- }
- else {
- processStops.push(previous);
- }
- previous = absoluteValue;
- }
- else {
- processStops.push(null);
- }
- }
- var gapBegin = null;
- for (var i = 0; i < processStops.length; i++) {
- var stop_2 = processStops[i];
- if (stop_2 === null) {
- if (gapBegin === null) {
- gapBegin = i;
- }
- }
- else if (gapBegin !== null) {
- var gapLength = i - gapBegin;
- var beforeGap = processStops[gapBegin - 1];
- var gapValue = (stop_2 - beforeGap) / (gapLength + 1);
- for (var g = 1; g <= gapLength; g++) {
- processStops[gapBegin + g - 1] = gapValue * g;
- }
- gapBegin = null;
- }
- }
- return stops.map(function (_a, i) {
- var color = _a.color;
- return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) };
- });
- };
- var getAngleFromCorner = function (corner, width, height) {
- var centerX = width / 2;
- var centerY = height / 2;
- var x = getAbsoluteValue(corner[0], width) - centerX;
- var y = centerY - getAbsoluteValue(corner[1], height);
- return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2);
- };
- var calculateGradientDirection = function (angle, width, height) {
- var radian = typeof angle === 'number' ? angle : getAngleFromCorner(angle, width, height);
- var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian));
- var halfWidth = width / 2;
- var halfHeight = height / 2;
- var halfLineLength = lineLength / 2;
- var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength;
- var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength;
- return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff];
- };
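`calculateGradientDirection` above derives the gradient line used to render a linear gradient: the line length is `|w·sin θ| + |h·cos θ|`, and the two endpoints are offset from the box center along the angle. A standalone sketch of the same geometry (angle already in radians; `gradientLine` is our name):

```javascript
// Gradient line for a linear gradient, mirroring calculateGradientDirection above.
// angle is in radians (0 = "to top"); returns [lineLength, x0, x1, y0, y1].
function gradientLine(angle, width, height) {
    var lineLength = Math.abs(width * Math.sin(angle)) + Math.abs(height * Math.cos(angle));
    var halfLine = lineLength / 2;
    // Offsets of the endpoints from the box center.
    var yDiff = Math.sin(angle - Math.PI / 2) * halfLine;
    var xDiff = Math.cos(angle - Math.PI / 2) * halfLine;
    return [lineLength, width / 2 - xDiff, width / 2 + xDiff, height / 2 - yDiff, height / 2 + yDiff];
}
```

For angle 0 on a 100×200 box the line runs bottom-to-top: length 200, from (50, 200) to (50, 0), up to floating-point error.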
- var distance = function (a, b) { return Math.sqrt(a * a + b * b); };
- var findCorner = function (width, height, x, y, closest) {
- var corners = [
- [0, 0],
- [0, height],
- [width, 0],
- [width, height]
- ];
- return corners.reduce(function (stat, corner) {
- var cx = corner[0], cy = corner[1];
- var d = distance(x - cx, y - cy);
- if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) {
- return {
- optimumCorner: corner,
- optimumDistance: d
- };
- }
- return stat;
- }, {
- optimumDistance: closest ? Infinity : -Infinity,
- optimumCorner: null
- }).optimumCorner;
- };
- var calculateRadius = function (gradient, x, y, width, height) {
- var rx = 0;
- var ry = 0;
- switch (gradient.size) {
- case 0 /* CLOSEST_SIDE */:
- // The ending shape is sized so that it exactly meets the side of the gradient box closest to the gradient’s center.
- // If the shape is an ellipse, it exactly meets the closest side in each dimension.
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- rx = Math.min(Math.abs(x), Math.abs(x - width));
- ry = Math.min(Math.abs(y), Math.abs(y - height));
- }
- break;
- case 2 /* CLOSEST_CORNER */:
- // The ending shape is sized so that it passes through the corner of the gradient box closest to the gradient’s center.
- // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified.
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- // Compute the ratio ry/rx (which is to be the same as for "closest-side")
- var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width));
- var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1];
- rx = distance(cx - x, (cy - y) / c);
- ry = c * rx;
- }
- break;
- case 1 /* FARTHEST_SIDE */:
- // Same as closest-side, except the ending shape is sized based on the farthest side(s)
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- rx = Math.max(Math.abs(x), Math.abs(x - width));
- ry = Math.max(Math.abs(y), Math.abs(y - height));
- }
- break;
- case 3 /* FARTHEST_CORNER */:
- // Same as closest-corner, except the ending shape is sized based on the farthest corner.
- // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified.
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- // Compute the ratio ry/rx (which is to be the same as for "farthest-side")
- var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width));
- var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1];
- rx = distance(cx - x, (cy - y) / c);
- ry = c * rx;
- }
- break;
- }
- if (Array.isArray(gradient.size)) {
- rx = getAbsoluteValue(gradient.size[0], width);
- ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx;
- }
- return [rx, ry];
- };
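For the circular branches of `calculateRadius` above, the closest-side and farthest-side radii reduce to the min/max distance from the gradient center to the four box edges. A sketch under that reading (`circleRadius` is our helper name):

```javascript
// Radius of a circular radial gradient, mirroring the CIRCLE branches above.
// (x, y) is the gradient center inside a width x height box;
// farthest=false gives closest-side, farthest=true gives farthest-side.
function circleRadius(x, y, width, height, farthest) {
    var sides = [Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)];
    return farthest ? Math.max.apply(null, sides) : Math.min.apply(null, sides);
}
```

For a center at (30, 20) in a 100×50 box, the edge distances are 30, 70, 20, and 30, so the closest-side radius is 20 and the farthest-side radius is 70.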
-
- var linearGradient = function (context, tokens) {
- var angle$1 = deg(180);
- var stops = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- if (i === 0) {
- var firstToken = arg[0];
- if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') {
- angle$1 = parseNamedSide(arg);
- return;
- }
- else if (isAngle(firstToken)) {
- angle$1 = angle.parse(context, firstToken);
- return;
- }
- }
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- });
- return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ };
- };
-
- var prefixLinearGradient = function (context, tokens) {
- var angle$1 = deg(180);
- var stops = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- if (i === 0) {
- var firstToken = arg[0];
- if (firstToken.type === 20 /* IDENT_TOKEN */ &&
- ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) {
- angle$1 = parseNamedSide(arg);
- return;
- }
- else if (isAngle(firstToken)) {
- angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360);
- return;
- }
- }
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- });
- return {
- angle: angle$1,
- stops: stops,
- type: 1 /* LINEAR_GRADIENT */
- };
- };
-
- var webkitGradient = function (context, tokens) {
- var angle = deg(180);
- var stops = [];
- var type = 1 /* LINEAR_GRADIENT */;
- var shape = 0 /* CIRCLE */;
- var size = 3 /* FARTHEST_CORNER */;
- var position = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- var firstToken = arg[0];
- if (i === 0) {
- if (isIdentToken(firstToken) && firstToken.value === 'linear') {
- type = 1 /* LINEAR_GRADIENT */;
- return;
- }
- else if (isIdentToken(firstToken) && firstToken.value === 'radial') {
- type = 2 /* RADIAL_GRADIENT */;
- return;
- }
- }
- if (firstToken.type === 18 /* FUNCTION */) {
- if (firstToken.name === 'from') {
- var color = color$1.parse(context, firstToken.values[0]);
- stops.push({ stop: ZERO_LENGTH, color: color });
- }
- else if (firstToken.name === 'to') {
- var color = color$1.parse(context, firstToken.values[0]);
- stops.push({ stop: HUNDRED_PERCENT, color: color });
- }
- else if (firstToken.name === 'color-stop') {
- var values = firstToken.values.filter(nonFunctionArgSeparator);
- if (values.length === 2) {
- var color = color$1.parse(context, values[1]);
- var stop_1 = values[0];
- if (isNumberToken(stop_1)) {
- stops.push({
- stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags },
- color: color
- });
- }
- }
- }
- }
- });
- return type === 1 /* LINEAR_GRADIENT */
- ? {
- angle: (angle + deg(180)) % deg(360),
- stops: stops,
- type: type
- }
- : { size: size, shape: shape, stops: stops, position: position, type: type };
- };
-
- var CLOSEST_SIDE = 'closest-side';
- var FARTHEST_SIDE = 'farthest-side';
- var CLOSEST_CORNER = 'closest-corner';
- var FARTHEST_CORNER = 'farthest-corner';
- var CIRCLE = 'circle';
- var ELLIPSE = 'ellipse';
- var COVER = 'cover';
- var CONTAIN = 'contain';
- var radialGradient = function (context, tokens) {
- var shape = 0 /* CIRCLE */;
- var size = 3 /* FARTHEST_CORNER */;
- var stops = [];
- var position = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- var isColorStop = true;
- if (i === 0) {
- var isAtPosition_1 = false;
- isColorStop = arg.reduce(function (acc, token) {
- if (isAtPosition_1) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'center':
- position.push(FIFTY_PERCENT);
- return acc;
- case 'top':
- case 'left':
- position.push(ZERO_LENGTH);
- return acc;
- case 'right':
- case 'bottom':
- position.push(HUNDRED_PERCENT);
- return acc;
- }
- }
- else if (isLengthPercentage(token) || isLength(token)) {
- position.push(token);
- }
- }
- else if (isIdentToken(token)) {
- switch (token.value) {
- case CIRCLE:
- shape = 0 /* CIRCLE */;
- return false;
- case ELLIPSE:
- shape = 1 /* ELLIPSE */;
- return false;
- case 'at':
- isAtPosition_1 = true;
- return false;
- case CLOSEST_SIDE:
- size = 0 /* CLOSEST_SIDE */;
- return false;
- case COVER:
- case FARTHEST_SIDE:
- size = 1 /* FARTHEST_SIDE */;
- return false;
- case CONTAIN:
- case CLOSEST_CORNER:
- size = 2 /* CLOSEST_CORNER */;
- return false;
- case FARTHEST_CORNER:
- size = 3 /* FARTHEST_CORNER */;
- return false;
- }
- }
- else if (isLength(token) || isLengthPercentage(token)) {
- if (!Array.isArray(size)) {
- size = [];
- }
- size.push(token);
- return false;
- }
- return acc;
- }, isColorStop);
- }
- if (isColorStop) {
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- }
- });
- return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ };
- };
-
- var prefixRadialGradient = function (context, tokens) {
- var shape = 0 /* CIRCLE */;
- var size = 3 /* FARTHEST_CORNER */;
- var stops = [];
- var position = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- var isColorStop = true;
- if (i === 0) {
- isColorStop = arg.reduce(function (acc, token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'center':
- position.push(FIFTY_PERCENT);
- return false;
- case 'top':
- case 'left':
- position.push(ZERO_LENGTH);
- return false;
- case 'right':
- case 'bottom':
- position.push(HUNDRED_PERCENT);
- return false;
- }
- }
- else if (isLengthPercentage(token) || isLength(token)) {
- position.push(token);
- return false;
- }
- return acc;
- }, isColorStop);
- }
- else if (i === 1) {
- isColorStop = arg.reduce(function (acc, token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case CIRCLE:
- shape = 0 /* CIRCLE */;
- return false;
- case ELLIPSE:
- shape = 1 /* ELLIPSE */;
- return false;
- case CONTAIN:
- case CLOSEST_SIDE:
- size = 0 /* CLOSEST_SIDE */;
- return false;
- case FARTHEST_SIDE:
- size = 1 /* FARTHEST_SIDE */;
- return false;
- case CLOSEST_CORNER:
- size = 2 /* CLOSEST_CORNER */;
- return false;
- case COVER:
- case FARTHEST_CORNER:
- size = 3 /* FARTHEST_CORNER */;
- return false;
- }
- }
- else if (isLength(token) || isLengthPercentage(token)) {
- if (!Array.isArray(size)) {
- size = [];
- }
- size.push(token);
- return false;
- }
- return acc;
- }, isColorStop);
- }
- if (isColorStop) {
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- }
- });
- return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ };
- };
-
- var isLinearGradient = function (background) {
- return background.type === 1 /* LINEAR_GRADIENT */;
- };
- var isRadialGradient = function (background) {
- return background.type === 2 /* RADIAL_GRADIENT */;
- };
- var image = {
- name: 'image',
- parse: function (context, value) {
- if (value.type === 22 /* URL_TOKEN */) {
- var image_1 = { url: value.value, type: 0 /* URL */ };
- context.cache.addImage(value.value);
- return image_1;
- }
- if (value.type === 18 /* FUNCTION */) {
- var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name];
- if (typeof imageFunction === 'undefined') {
- throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\"");
- }
- return imageFunction(context, value.values);
- }
- throw new Error("Unsupported image type " + value.type);
- }
- };
- function isSupportedImage(value) {
- return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') &&
- (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name]));
- }
- var SUPPORTED_IMAGE_FUNCTIONS = {
- 'linear-gradient': linearGradient,
- '-moz-linear-gradient': prefixLinearGradient,
- '-ms-linear-gradient': prefixLinearGradient,
- '-o-linear-gradient': prefixLinearGradient,
- '-webkit-linear-gradient': prefixLinearGradient,
- 'radial-gradient': radialGradient,
- '-moz-radial-gradient': prefixRadialGradient,
- '-ms-radial-gradient': prefixRadialGradient,
- '-o-radial-gradient': prefixRadialGradient,
- '-webkit-radial-gradient': prefixRadialGradient,
- '-webkit-gradient': webkitGradient
- };
-
- var backgroundImage = {
- name: 'background-image',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (context, tokens) {
- if (tokens.length === 0) {
- return [];
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return [];
- }
- return tokens
- .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); })
- .map(function (value) { return image.parse(context, value); });
- }
- };
-
- var backgroundOrigin = {
- name: 'background-origin',
- initialValue: 'border-box',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.map(function (token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'padding-box':
- return 1 /* PADDING_BOX */;
- case 'content-box':
- return 2 /* CONTENT_BOX */;
- }
- }
- return 0 /* BORDER_BOX */;
- });
- }
- };
-
- var backgroundPosition = {
- name: 'background-position',
- initialValue: '0% 0%',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (_context, tokens) {
- return parseFunctionArgs(tokens)
- .map(function (values) { return values.filter(isLengthPercentage); })
- .map(parseLengthPercentageTuple);
- }
- };
-
- var backgroundRepeat = {
- name: 'background-repeat',
- initialValue: 'repeat',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return parseFunctionArgs(tokens)
- .map(function (values) {
- return values
- .filter(isIdentToken)
- .map(function (token) { return token.value; })
- .join(' ');
- })
- .map(parseBackgroundRepeat);
- }
- };
- var parseBackgroundRepeat = function (value) {
- switch (value) {
- case 'no-repeat':
- return 1 /* NO_REPEAT */;
- case 'repeat-x':
- case 'repeat no-repeat':
- return 2 /* REPEAT_X */;
- case 'repeat-y':
- case 'no-repeat repeat':
- return 3 /* REPEAT_Y */;
- case 'repeat':
- default:
- return 0 /* REPEAT */;
- }
- };
-
- var BACKGROUND_SIZE;
- (function (BACKGROUND_SIZE) {
- BACKGROUND_SIZE["AUTO"] = "auto";
- BACKGROUND_SIZE["CONTAIN"] = "contain";
- BACKGROUND_SIZE["COVER"] = "cover";
- })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {}));
- var backgroundSize = {
- name: 'background-size',
- initialValue: '0',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); });
- }
- };
- var isBackgroundSizeInfoToken = function (value) {
- return isIdentToken(value) || isLengthPercentage(value);
- };
-
- var borderColorForSide = function (side) { return ({
- name: "border-" + side + "-color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- }); };
- var borderTopColor = borderColorForSide('top');
- var borderRightColor = borderColorForSide('right');
- var borderBottomColor = borderColorForSide('bottom');
- var borderLeftColor = borderColorForSide('left');
-
- var borderRadiusForSide = function (side) { return ({
- name: "border-radius-" + side,
- initialValue: '0 0',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return parseLengthPercentageTuple(tokens.filter(isLengthPercentage));
- }
- }); };
- var borderTopLeftRadius = borderRadiusForSide('top-left');
- var borderTopRightRadius = borderRadiusForSide('top-right');
- var borderBottomRightRadius = borderRadiusForSide('bottom-right');
- var borderBottomLeftRadius = borderRadiusForSide('bottom-left');
-
- var borderStyleForSide = function (side) { return ({
- name: "border-" + side + "-style",
- initialValue: 'solid',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, style) {
- switch (style) {
- case 'none':
- return 0 /* NONE */;
- case 'dashed':
- return 2 /* DASHED */;
- case 'dotted':
- return 3 /* DOTTED */;
- case 'double':
- return 4 /* DOUBLE */;
- }
- return 1 /* SOLID */;
- }
- }); };
- var borderTopStyle = borderStyleForSide('top');
- var borderRightStyle = borderStyleForSide('right');
- var borderBottomStyle = borderStyleForSide('bottom');
- var borderLeftStyle = borderStyleForSide('left');
-
- var borderWidthForSide = function (side) { return ({
- name: "border-" + side + "-width",
- initialValue: '0',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isDimensionToken(token)) {
- return token.number;
- }
- return 0;
- }
- }); };
- var borderTopWidth = borderWidthForSide('top');
- var borderRightWidth = borderWidthForSide('right');
- var borderBottomWidth = borderWidthForSide('bottom');
- var borderLeftWidth = borderWidthForSide('left');
-
- var color = {
- name: "color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var direction = {
- name: 'direction',
- initialValue: 'ltr',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, direction) {
- switch (direction) {
- case 'rtl':
- return 1 /* RTL */;
- case 'ltr':
- default:
- return 0 /* LTR */;
- }
- }
- };
-
- var display = {
- name: 'display',
- initialValue: 'inline-block',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.filter(isIdentToken).reduce(function (bit, token) {
- return bit | parseDisplayValue(token.value);
- }, 0 /* NONE */);
- }
- };
- var parseDisplayValue = function (display) {
- switch (display) {
- case 'block':
- case '-webkit-box':
- return 2 /* BLOCK */;
- case 'inline':
- return 4 /* INLINE */;
- case 'run-in':
- return 8 /* RUN_IN */;
- case 'flow':
- return 16 /* FLOW */;
- case 'flow-root':
- return 32 /* FLOW_ROOT */;
- case 'table':
- return 64 /* TABLE */;
- case 'flex':
- case '-webkit-flex':
- return 128 /* FLEX */;
- case 'grid':
- case '-ms-grid':
- return 256 /* GRID */;
- case 'ruby':
- return 512 /* RUBY */;
- case 'subgrid':
- return 1024 /* SUBGRID */;
- case 'list-item':
- return 2048 /* LIST_ITEM */;
- case 'table-row-group':
- return 4096 /* TABLE_ROW_GROUP */;
- case 'table-header-group':
- return 8192 /* TABLE_HEADER_GROUP */;
- case 'table-footer-group':
- return 16384 /* TABLE_FOOTER_GROUP */;
- case 'table-row':
- return 32768 /* TABLE_ROW */;
- case 'table-cell':
- return 65536 /* TABLE_CELL */;
- case 'table-column-group':
- return 131072 /* TABLE_COLUMN_GROUP */;
- case 'table-column':
- return 262144 /* TABLE_COLUMN */;
- case 'table-caption':
- return 524288 /* TABLE_CAPTION */;
- case 'ruby-base':
- return 1048576 /* RUBY_BASE */;
- case 'ruby-text':
- return 2097152 /* RUBY_TEXT */;
- case 'ruby-base-container':
- return 4194304 /* RUBY_BASE_CONTAINER */;
- case 'ruby-text-container':
- return 8388608 /* RUBY_TEXT_CONTAINER */;
- case 'contents':
- return 16777216 /* CONTENTS */;
- case 'inline-block':
- return 33554432 /* INLINE_BLOCK */;
- case 'inline-list-item':
- return 67108864 /* INLINE_LIST_ITEM */;
- case 'inline-table':
- return 134217728 /* INLINE_TABLE */;
- case 'inline-flex':
- return 268435456 /* INLINE_FLEX */;
- case 'inline-grid':
- return 536870912 /* INLINE_GRID */;
- }
- return 0 /* NONE */;
- };
-
- var float = {
- name: 'float',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, float) {
- switch (float) {
- case 'left':
- return 1 /* LEFT */;
- case 'right':
- return 2 /* RIGHT */;
- case 'inline-start':
- return 3 /* INLINE_START */;
- case 'inline-end':
- return 4 /* INLINE_END */;
- }
- return 0 /* NONE */;
- }
- };
-
- var letterSpacing = {
- name: 'letter-spacing',
- initialValue: '0',
- prefix: false,
- type: 0 /* VALUE */,
- parse: function (_context, token) {
- if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') {
- return 0;
- }
- if (token.type === 17 /* NUMBER_TOKEN */) {
- return token.number;
- }
- if (token.type === 15 /* DIMENSION_TOKEN */) {
- return token.number;
- }
- return 0;
- }
- };
-
- var LINE_BREAK;
- (function (LINE_BREAK) {
- LINE_BREAK["NORMAL"] = "normal";
- LINE_BREAK["STRICT"] = "strict";
- })(LINE_BREAK || (LINE_BREAK = {}));
- var lineBreak = {
- name: 'line-break',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, lineBreak) {
- switch (lineBreak) {
- case 'strict':
- return LINE_BREAK.STRICT;
- case 'normal':
- default:
- return LINE_BREAK.NORMAL;
- }
- }
- };
-
- var lineHeight = {
- name: 'line-height',
- initialValue: 'normal',
- prefix: false,
- type: 4 /* TOKEN_VALUE */
- };
- var computeLineHeight = function (token, fontSize) {
- if (isIdentToken(token) && token.value === 'normal') {
- return 1.2 * fontSize;
- }
- else if (token.type === 17 /* NUMBER_TOKEN */) {
- return fontSize * token.number;
- }
- else if (isLengthPercentage(token)) {
- return getAbsoluteValue(token, fontSize);
- }
- return fontSize;
- };
-
- var listStyleImage = {
- name: 'list-style-image',
- initialValue: 'none',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (context, token) {
- if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') {
- return null;
- }
- return image.parse(context, token);
- }
- };
-
- var listStylePosition = {
- name: 'list-style-position',
- initialValue: 'outside',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, position) {
- switch (position) {
- case 'inside':
- return 0 /* INSIDE */;
- case 'outside':
- default:
- return 1 /* OUTSIDE */;
- }
- }
- };
-
- var listStyleType = {
- name: 'list-style-type',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, type) {
- switch (type) {
- case 'disc':
- return 0 /* DISC */;
- case 'circle':
- return 1 /* CIRCLE */;
- case 'square':
- return 2 /* SQUARE */;
- case 'decimal':
- return 3 /* DECIMAL */;
- case 'cjk-decimal':
- return 4 /* CJK_DECIMAL */;
- case 'decimal-leading-zero':
- return 5 /* DECIMAL_LEADING_ZERO */;
- case 'lower-roman':
- return 6 /* LOWER_ROMAN */;
- case 'upper-roman':
- return 7 /* UPPER_ROMAN */;
- case 'lower-greek':
- return 8 /* LOWER_GREEK */;
- case 'lower-alpha':
- return 9 /* LOWER_ALPHA */;
- case 'upper-alpha':
- return 10 /* UPPER_ALPHA */;
- case 'arabic-indic':
- return 11 /* ARABIC_INDIC */;
- case 'armenian':
- return 12 /* ARMENIAN */;
- case 'bengali':
- return 13 /* BENGALI */;
- case 'cambodian':
- return 14 /* CAMBODIAN */;
- case 'cjk-earthly-branch':
- return 15 /* CJK_EARTHLY_BRANCH */;
- case 'cjk-heavenly-stem':
- return 16 /* CJK_HEAVENLY_STEM */;
- case 'cjk-ideographic':
- return 17 /* CJK_IDEOGRAPHIC */;
- case 'devanagari':
- return 18 /* DEVANAGARI */;
- case 'ethiopic-numeric':
- return 19 /* ETHIOPIC_NUMERIC */;
- case 'georgian':
- return 20 /* GEORGIAN */;
- case 'gujarati':
- return 21 /* GUJARATI */;
- case 'gurmukhi':
- return 22 /* GURMUKHI */;
- case 'hebrew':
- return 22 /* HEBREW */; // note: same numeric value as GURMUKHI above
- case 'hiragana':
- return 23 /* HIRAGANA */;
- case 'hiragana-iroha':
- return 24 /* HIRAGANA_IROHA */;
- case 'japanese-formal':
- return 25 /* JAPANESE_FORMAL */;
- case 'japanese-informal':
- return 26 /* JAPANESE_INFORMAL */;
- case 'kannada':
- return 27 /* KANNADA */;
- case 'katakana':
- return 28 /* KATAKANA */;
- case 'katakana-iroha':
- return 29 /* KATAKANA_IROHA */;
- case 'khmer':
- return 30 /* KHMER */;
- case 'korean-hangul-formal':
- return 31 /* KOREAN_HANGUL_FORMAL */;
- case 'korean-hanja-formal':
- return 32 /* KOREAN_HANJA_FORMAL */;
- case 'korean-hanja-informal':
- return 33 /* KOREAN_HANJA_INFORMAL */;
- case 'lao':
- return 34 /* LAO */;
- case 'lower-armenian':
- return 35 /* LOWER_ARMENIAN */;
- case 'malayalam':
- return 36 /* MALAYALAM */;
- case 'mongolian':
- return 37 /* MONGOLIAN */;
- case 'myanmar':
- return 38 /* MYANMAR */;
- case 'oriya':
- return 39 /* ORIYA */;
- case 'persian':
- return 40 /* PERSIAN */;
- case 'simp-chinese-formal':
- return 41 /* SIMP_CHINESE_FORMAL */;
- case 'simp-chinese-informal':
- return 42 /* SIMP_CHINESE_INFORMAL */;
- case 'tamil':
- return 43 /* TAMIL */;
- case 'telugu':
- return 44 /* TELUGU */;
- case 'thai':
- return 45 /* THAI */;
- case 'tibetan':
- return 46 /* TIBETAN */;
- case 'trad-chinese-formal':
- return 47 /* TRAD_CHINESE_FORMAL */;
- case 'trad-chinese-informal':
- return 48 /* TRAD_CHINESE_INFORMAL */;
- case 'upper-armenian':
- return 49 /* UPPER_ARMENIAN */;
- case 'disclosure-open':
- return 50 /* DISCLOSURE_OPEN */;
- case 'disclosure-closed':
- return 51 /* DISCLOSURE_CLOSED */;
- case 'none':
- default:
- return -1 /* NONE */;
- }
- }
- };
-
- var marginForSide = function (side) { return ({
- name: "margin-" + side,
- initialValue: '0',
- prefix: false,
- type: 4 /* TOKEN_VALUE */
- }); };
- var marginTop = marginForSide('top');
- var marginRight = marginForSide('right');
- var marginBottom = marginForSide('bottom');
- var marginLeft = marginForSide('left');
-
- var overflow = {
- name: 'overflow',
- initialValue: 'visible',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.filter(isIdentToken).map(function (overflow) {
- switch (overflow.value) {
- case 'hidden':
- return 1 /* HIDDEN */;
- case 'scroll':
- return 2 /* SCROLL */;
- case 'clip':
- return 3 /* CLIP */;
- case 'auto':
- return 4 /* AUTO */;
- case 'visible':
- default:
- return 0 /* VISIBLE */;
- }
- });
- }
- };
-
- var overflowWrap = {
- name: 'overflow-wrap',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, overflow) {
- switch (overflow) {
- case 'break-word':
- return "break-word" /* BREAK_WORD */;
- case 'normal':
- default:
- return "normal" /* NORMAL */;
- }
- }
- };
-
- var paddingForSide = function (side) { return ({
- name: "padding-" + side,
- initialValue: '0',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'length-percentage'
- }); };
- var paddingTop = paddingForSide('top');
- var paddingRight = paddingForSide('right');
- var paddingBottom = paddingForSide('bottom');
- var paddingLeft = paddingForSide('left');
-
- var textAlign = {
- name: 'text-align',
- initialValue: 'left',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, textAlign) {
- switch (textAlign) {
- case 'right':
- return 2 /* RIGHT */;
- case 'center':
- case 'justify':
- return 1 /* CENTER */;
- case 'left':
- default:
- return 0 /* LEFT */;
- }
- }
- };
-
- var position = {
- name: 'position',
- initialValue: 'static',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, position) {
- switch (position) {
- case 'relative':
- return 1 /* RELATIVE */;
- case 'absolute':
- return 2 /* ABSOLUTE */;
- case 'fixed':
- return 3 /* FIXED */;
- case 'sticky':
- return 4 /* STICKY */;
- }
- return 0 /* STATIC */;
- }
- };
-
- var textShadow = {
- name: 'text-shadow',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (context, tokens) {
- if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) {
- return [];
- }
- return parseFunctionArgs(tokens).map(function (values) {
- var shadow = {
- color: COLORS.TRANSPARENT,
- offsetX: ZERO_LENGTH,
- offsetY: ZERO_LENGTH,
- blur: ZERO_LENGTH
- };
- var c = 0;
- for (var i = 0; i < values.length; i++) {
- var token = values[i];
- if (isLength(token)) {
- if (c === 0) {
- shadow.offsetX = token;
- }
- else if (c === 1) {
- shadow.offsetY = token;
- }
- else {
- shadow.blur = token;
- }
- c++;
- }
- else {
- shadow.color = color$1.parse(context, token);
- }
- }
- return shadow;
- });
- }
- };
-
- var textTransform = {
- name: 'text-transform',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, textTransform) {
- switch (textTransform) {
- case 'uppercase':
- return 2 /* UPPERCASE */;
- case 'lowercase':
- return 1 /* LOWERCASE */;
- case 'capitalize':
- return 3 /* CAPITALIZE */;
- }
- return 0 /* NONE */;
- }
- };
-
- var transform$1 = {
- name: 'transform',
- initialValue: 'none',
- prefix: true,
- type: 0 /* VALUE */,
- parse: function (_context, token) {
- if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') {
- return null;
- }
- if (token.type === 18 /* FUNCTION */) {
- var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name];
- if (typeof transformFunction === 'undefined') {
- throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\"");
- }
- return transformFunction(token.values);
- }
- return null;
- }
- };
- var matrix = function (args) {
- var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; });
- return values.length === 6 ? values : null;
- };
- // doesn't support 3D transforms at the moment
- var matrix3d = function (args) {
- var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; });
- var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15];
- return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null;
- };
- var SUPPORTED_TRANSFORM_FUNCTIONS = {
- matrix: matrix,
- matrix3d: matrix3d
- };
-
- var DEFAULT_VALUE = {
- type: 16 /* PERCENTAGE_TOKEN */,
- number: 50,
- flags: FLAG_INTEGER
- };
- var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE];
- var transformOrigin = {
- name: 'transform-origin',
- initialValue: '50% 50%',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- var origins = tokens.filter(isLengthPercentage);
- if (origins.length !== 2) {
- return DEFAULT;
- }
- return [origins[0], origins[1]];
- }
- };
-
- var visibility = {
- name: 'visible',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, visibility) {
- switch (visibility) {
- case 'hidden':
- return 1 /* HIDDEN */;
- case 'collapse':
- return 2 /* COLLAPSE */;
- case 'visible':
- default:
- return 0 /* VISIBLE */;
- }
- }
- };
-
- var WORD_BREAK;
- (function (WORD_BREAK) {
- WORD_BREAK["NORMAL"] = "normal";
- WORD_BREAK["BREAK_ALL"] = "break-all";
- WORD_BREAK["KEEP_ALL"] = "keep-all";
- })(WORD_BREAK || (WORD_BREAK = {}));
- var wordBreak = {
- name: 'word-break',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, wordBreak) {
- switch (wordBreak) {
- case 'break-all':
- return WORD_BREAK.BREAK_ALL;
- case 'keep-all':
- return WORD_BREAK.KEEP_ALL;
- case 'normal':
- default:
- return WORD_BREAK.NORMAL;
- }
- }
- };
-
- var zIndex = {
- name: 'z-index',
- initialValue: 'auto',
- prefix: false,
- type: 0 /* VALUE */,
- parse: function (_context, token) {
- if (token.type === 20 /* IDENT_TOKEN */) {
- return { auto: true, order: 0 };
- }
- if (isNumberToken(token)) {
- return { auto: false, order: token.number };
- }
- throw new Error("Invalid z-index number parsed");
- }
- };
-
- var time = {
- name: 'time',
- parse: function (_context, value) {
- if (value.type === 15 /* DIMENSION_TOKEN */) {
- switch (value.unit.toLowerCase()) {
- case 's':
- return 1000 * value.number;
- case 'ms':
- return value.number;
- }
- }
- throw new Error("Unsupported time type");
- }
- };
-
- var opacity = {
- name: 'opacity',
- initialValue: '1',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isNumberToken(token)) {
- return token.number;
- }
- return 1;
- }
- };
-
- var textDecorationColor = {
- name: "text-decoration-color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var textDecorationLine = {
- name: 'text-decoration-line',
- initialValue: 'none',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens
- .filter(isIdentToken)
- .map(function (token) {
- switch (token.value) {
- case 'underline':
- return 1 /* UNDERLINE */;
- case 'overline':
- return 2 /* OVERLINE */;
- case 'line-through':
- return 3 /* LINE_THROUGH */;
- case 'none':
- return 4 /* BLINK */;
- }
- return 0 /* NONE */;
- })
- .filter(function (line) { return line !== 0 /* NONE */; });
- }
- };
-
- var fontFamily = {
- name: "font-family",
- initialValue: '',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- var accumulator = [];
- var results = [];
- tokens.forEach(function (token) {
- switch (token.type) {
- case 20 /* IDENT_TOKEN */:
- case 0 /* STRING_TOKEN */:
- accumulator.push(token.value);
- break;
- case 17 /* NUMBER_TOKEN */:
- accumulator.push(token.number.toString());
- break;
- case 4 /* COMMA_TOKEN */:
- results.push(accumulator.join(' '));
- accumulator.length = 0;
- break;
- }
- });
- if (accumulator.length) {
- results.push(accumulator.join(' '));
- }
- return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); });
- }
- };
-
- var fontSize = {
- name: "font-size",
- initialValue: '0',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'length'
- };
-
- var fontWeight = {
- name: 'font-weight',
- initialValue: 'normal',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isNumberToken(token)) {
- return token.number;
- }
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'bold':
- return 700;
- case 'normal':
- default:
- return 400;
- }
- }
- return 400;
- }
- };
-
- var fontVariant = {
- name: 'font-variant',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (_context, tokens) {
- return tokens.filter(isIdentToken).map(function (token) { return token.value; });
- }
- };
-
- var fontStyle = {
- name: 'font-style',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, overflow) {
- switch (overflow) {
- case 'oblique':
- return "oblique" /* OBLIQUE */;
- case 'italic':
- return "italic" /* ITALIC */;
- case 'normal':
- default:
- return "normal" /* NORMAL */;
- }
- }
- };
-
- var contains = function (bit, value) { return (bit & value) !== 0; };
-
- var content = {
- name: 'content',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return [];
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return [];
- }
- return tokens;
- }
- };
-
- var counterIncrement = {
- name: 'counter-increment',
- initialValue: 'none',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return null;
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return null;
- }
- var increments = [];
- var filtered = tokens.filter(nonWhiteSpace);
- for (var i = 0; i < filtered.length; i++) {
- var counter = filtered[i];
- var next = filtered[i + 1];
- if (counter.type === 20 /* IDENT_TOKEN */) {
- var increment = next && isNumberToken(next) ? next.number : 1;
- increments.push({ counter: counter.value, increment: increment });
- }
- }
- return increments;
- }
- };
-
- var counterReset = {
- name: 'counter-reset',
- initialValue: 'none',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return [];
- }
- var resets = [];
- var filtered = tokens.filter(nonWhiteSpace);
- for (var i = 0; i < filtered.length; i++) {
- var counter = filtered[i];
- var next = filtered[i + 1];
- if (isIdentToken(counter) && counter.value !== 'none') {
- var reset = next && isNumberToken(next) ? next.number : 0;
- resets.push({ counter: counter.value, reset: reset });
- }
- }
- return resets;
- }
- };
-
- var duration = {
- name: 'duration',
- initialValue: '0s',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (context, tokens) {
- return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); });
- }
- };
-
- var quotes = {
- name: 'quotes',
- initialValue: 'none',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return null;
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return null;
- }
- var quotes = [];
- var filtered = tokens.filter(isStringToken);
- if (filtered.length % 2 !== 0) {
- return null;
- }
- for (var i = 0; i < filtered.length; i += 2) {
- var open_1 = filtered[i].value;
- var close_1 = filtered[i + 1].value;
- quotes.push({ open: open_1, close: close_1 });
- }
- return quotes;
- }
- };
- var getQuote = function (quotes, depth, open) {
- if (!quotes) {
- return '';
- }
- var quote = quotes[Math.min(depth, quotes.length - 1)];
- if (!quote) {
- return '';
- }
- return open ? quote.open : quote.close;
- };
-
- var paintOrder = {
- name: 'paint-order',
- initialValue: 'normal',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */];
- var layers = [];
- tokens.filter(isIdentToken).forEach(function (token) {
- switch (token.value) {
- case 'stroke':
- layers.push(1 /* STROKE */);
- break;
- case 'fill':
- layers.push(0 /* FILL */);
- break;
- case 'markers':
- layers.push(2 /* MARKERS */);
- break;
- }
- });
- DEFAULT_VALUE.forEach(function (value) {
- if (layers.indexOf(value) === -1) {
- layers.push(value);
- }
- });
- return layers;
- }
- };
-
- var webkitTextStrokeColor = {
- name: "-webkit-text-stroke-color",
- initialValue: 'currentcolor',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var webkitTextStrokeWidth = {
- name: "-webkit-text-stroke-width",
- initialValue: '0',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isDimensionToken(token)) {
- return token.number;
- }
- return 0;
- }
- };
-
- var CSSParsedDeclaration = /** @class */ (function () {
- function CSSParsedDeclaration(context, declaration) {
- var _a, _b;
- this.animationDuration = parse(context, duration, declaration.animationDuration);
- this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip);
- this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor);
- this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage);
- this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin);
- this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition);
- this.backgroundRepeat = parse(context, backgroundRepeat, declaration.backgroundRepeat);
- this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize);
- this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor);
- this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor);
- this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor);
- this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor);
- this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius);
- this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius);
- this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius);
- this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius);
- this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle);
- this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle);
- this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle);
- this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle);
- this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth);
- this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth);
- this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth);
- this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth);
- this.color = parse(context, color, declaration.color);
- this.direction = parse(context, direction, declaration.direction);
- this.display = parse(context, display, declaration.display);
- this.float = parse(context, float, declaration.cssFloat);
- this.fontFamily = parse(context, fontFamily, declaration.fontFamily);
- this.fontSize = parse(context, fontSize, declaration.fontSize);
- this.fontStyle = parse(context, fontStyle, declaration.fontStyle);
- this.fontVariant = parse(context, fontVariant, declaration.fontVariant);
- this.fontWeight = parse(context, fontWeight, declaration.fontWeight);
- this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing);
- this.lineBreak = parse(context, lineBreak, declaration.lineBreak);
- this.lineHeight = parse(context, lineHeight, declaration.lineHeight);
- this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage);
- this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition);
- this.listStyleType = parse(context, listStyleType, declaration.listStyleType);
- this.marginTop = parse(context, marginTop, declaration.marginTop);
- this.marginRight = parse(context, marginRight, declaration.marginRight);
- this.marginBottom = parse(context, marginBottom, declaration.marginBottom);
- this.marginLeft = parse(context, marginLeft, declaration.marginLeft);
- this.opacity = parse(context, opacity, declaration.opacity);
- var overflowTuple = parse(context, overflow, declaration.overflow);
- this.overflowX = overflowTuple[0];
- this.overflowY = overflowTuple[overflowTuple.length > 1 ? 1 : 0];
- this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap);
- this.paddingTop = parse(context, paddingTop, declaration.paddingTop);
- this.paddingRight = parse(context, paddingRight, declaration.paddingRight);
- this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom);
- this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft);
- this.paintOrder = parse(context, paintOrder, declaration.paintOrder);
- this.position = parse(context, position, declaration.position);
- this.textAlign = parse(context, textAlign, declaration.textAlign);
- this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color);
- this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration);
- this.textShadow = parse(context, textShadow, declaration.textShadow);
- this.textTransform = parse(context, textTransform, declaration.textTransform);
- this.transform = parse(context, transform$1, declaration.transform);
- this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin);
- this.visibility = parse(context, visibility, declaration.visibility);
- this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor);
- this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth);
- this.wordBreak = parse(context, wordBreak, declaration.wordBreak);
- this.zIndex = parse(context, zIndex, declaration.zIndex);
- }
- CSSParsedDeclaration.prototype.isVisible = function () {
- return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */;
- };
- CSSParsedDeclaration.prototype.isTransparent = function () {
- return isTransparent(this.backgroundColor);
- };
- CSSParsedDeclaration.prototype.isTransformed = function () {
- return this.transform !== null;
- };
- CSSParsedDeclaration.prototype.isPositioned = function () {
- return this.position !== 0 /* STATIC */;
- };
- CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () {
- return this.isPositioned() && !this.zIndex.auto;
- };
- CSSParsedDeclaration.prototype.isFloating = function () {
- return this.float !== 0 /* NONE */;
- };
- CSSParsedDeclaration.prototype.isInlineLevel = function () {
- return (contains(this.display, 4 /* INLINE */) ||
- contains(this.display, 33554432 /* INLINE_BLOCK */) ||
- contains(this.display, 268435456 /* INLINE_FLEX */) ||
- contains(this.display, 536870912 /* INLINE_GRID */) ||
- contains(this.display, 67108864 /* INLINE_LIST_ITEM */) ||
- contains(this.display, 134217728 /* INLINE_TABLE */));
- };
- return CSSParsedDeclaration;
- }());
- var CSSParsedPseudoDeclaration = /** @class */ (function () {
- function CSSParsedPseudoDeclaration(context, declaration) {
- this.content = parse(context, content, declaration.content);
- this.quotes = parse(context, quotes, declaration.quotes);
- }
- return CSSParsedPseudoDeclaration;
- }());
- var CSSParsedCounterDeclaration = /** @class */ (function () {
- function CSSParsedCounterDeclaration(context, declaration) {
- this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement);
- this.counterReset = parse(context, counterReset, declaration.counterReset);
- }
- return CSSParsedCounterDeclaration;
- }());
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var parse = function (context, descriptor, style) {
- var tokenizer = new Tokenizer();
- var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue;
- tokenizer.write(value);
- var parser = new Parser(tokenizer.read());
- switch (descriptor.type) {
- case 2 /* IDENT_VALUE */:
- var token = parser.parseComponentValue();
- return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue);
- case 0 /* VALUE */:
- return descriptor.parse(context, parser.parseComponentValue());
- case 1 /* LIST */:
- return descriptor.parse(context, parser.parseComponentValues());
- case 4 /* TOKEN_VALUE */:
- return parser.parseComponentValue();
- case 3 /* TYPE_VALUE */:
- switch (descriptor.format) {
- case 'angle':
- return angle.parse(context, parser.parseComponentValue());
- case 'color':
- return color$1.parse(context, parser.parseComponentValue());
- case 'image':
- return image.parse(context, parser.parseComponentValue());
- case 'length':
- var length_1 = parser.parseComponentValue();
- return isLength(length_1) ? length_1 : ZERO_LENGTH;
- case 'length-percentage':
- var value_1 = parser.parseComponentValue();
- return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH;
- case 'time':
- return time.parse(context, parser.parseComponentValue());
- }
- break;
- }
- };
-
- var elementDebuggerAttribute = 'data-html2canvas-debug';
- var getElementDebugType = function (element) {
- var attribute = element.getAttribute(elementDebuggerAttribute);
- switch (attribute) {
- case 'all':
- return 1 /* ALL */;
- case 'clone':
- return 2 /* CLONE */;
- case 'parse':
- return 3 /* PARSE */;
- case 'render':
- return 4 /* RENDER */;
- default:
- return 0 /* NONE */;
- }
- };
- var isDebugging = function (element, type) {
- var elementType = getElementDebugType(element);
- return elementType === 1 /* ALL */ || type === elementType;
- };
-
- var ElementContainer = /** @class */ (function () {
- function ElementContainer(context, element) {
- this.context = context;
- this.textNodes = [];
- this.elements = [];
- this.flags = 0;
- if (isDebugging(element, 3 /* PARSE */)) {
- debugger;
- }
- this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null));
- if (isHTMLElementNode(element)) {
- if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) {
- element.style.animationDuration = '0s';
- }
- if (this.styles.transform !== null) {
- // getBoundingClientRect takes transforms into account
- element.style.transform = 'none';
- }
- }
- this.bounds = parseBounds(this.context, element);
- if (isDebugging(element, 4 /* RENDER */)) {
- this.flags |= 16 /* DEBUG_RENDER */;
- }
- }
- return ElementContainer;
- }());
-
- /*
- * text-segmentation 1.0.3
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var base64 = 'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgA
CAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQF
NQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUB
lQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJED
CAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUA
AAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcA
BwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUA
BQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUA
BQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4A
DgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA=';
-
- /*
- * utrie 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$1 = 0; i$1 < chars$1.length; i$1++) {
- lookup$1[chars$1.charCodeAt(i$1)] = i$1;
- }
- var decode = function (base64) {
- var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4;
- if (base64[base64.length - 1] === '=') {
- bufferLength--;
- if (base64[base64.length - 2] === '=') {
- bufferLength--;
- }
- }
- var buffer = typeof ArrayBuffer !== 'undefined' &&
- typeof Uint8Array !== 'undefined' &&
- typeof Uint8Array.prototype.slice !== 'undefined'
- ? new ArrayBuffer(bufferLength)
- : new Array(bufferLength);
- var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer);
- for (i = 0; i < len; i += 4) {
- encoded1 = lookup$1[base64.charCodeAt(i)];
- encoded2 = lookup$1[base64.charCodeAt(i + 1)];
- encoded3 = lookup$1[base64.charCodeAt(i + 2)];
- encoded4 = lookup$1[base64.charCodeAt(i + 3)];
- bytes[p++] = (encoded1 << 2) | (encoded2 >> 4);
- bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2);
- bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63);
- }
- return buffer;
- };
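The `decode` function above unpacks standard base64: each group of four characters yields three bytes via the shift/mask combination in the loop body. A minimal standalone sketch of that quad-to-bytes step (the `decodeQuad` helper is illustrative, not part of the bundle):

```javascript
// Hedged sketch of decode()'s inner loop: one base64 quad -> three bytes.
// "TWFu" is the base64 encoding of the ASCII string "Man".
var ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
var table = {};
for (var i = 0; i < ALPHABET.length; i++) {
    table[ALPHABET[i]] = i;
}
function decodeQuad(quad) {
    var e1 = table[quad[0]], e2 = table[quad[1]], e3 = table[quad[2]], e4 = table[quad[3]];
    return [
        (e1 << 2) | (e2 >> 4),          // top 6 bits of e1 + top 2 of e2
        ((e2 & 15) << 4) | (e3 >> 2),   // low 4 bits of e2 + top 4 of e3
        ((e3 & 3) << 6) | (e4 & 63)     // low 2 bits of e3 + all of e4
    ];
}
var man = decodeQuad('TWFu');
// man → [77, 97, 110], i.e. 'M', 'a', 'n'
```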
- var polyUint16Array = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 2) {
- bytes.push((buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
- var polyUint32Array = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 4) {
- bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
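The `polyUint16Array`/`polyUint32Array` fallbacks above reassemble little-endian byte sequences into wider integers for environments without typed arrays. A worked example of the 16-bit packing (the `packUint16` name is illustrative):

```javascript
// Hedged sketch mirroring polyUint16Array's (hi << 8) | lo combination.
function packUint16(bytes) {
    var out = [];
    for (var i = 0; i < bytes.length; i += 2) {
        out.push((bytes[i + 1] << 8) | bytes[i]);
    }
    return out;
}
// 0x34 0x12 in memory is the little-endian encoding of 0x1234.
var packed = packUint16([0x34, 0x12, 0x01, 0x00]);
// packed → [0x1234, 1]
```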
-
- /** Shift size for getting the index-2 table offset. */
- var UTRIE2_SHIFT_2 = 5;
- /** Shift size for getting the index-1 table offset. */
- var UTRIE2_SHIFT_1 = 6 + 5;
- /**
- * Shift size for shifting left the index array values.
- * Increases possible data size with 16-bit index values at the cost
- * of compactability.
- * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY.
- */
- var UTRIE2_INDEX_SHIFT = 2;
- /**
- * Difference between the two shift sizes,
- * for getting an index-1 offset from an index-2 offset. 6=11-5
- */
- var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2;
- /**
- * The part of the index-2 table for U+D800..U+DBFF stores values for
- * lead surrogate code _units_ not code _points_.
- * Values for lead surrogate code _points_ are indexed with this portion of the table.
- * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.)
- */
- var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2;
- /** Number of entries in a data block. 32=0x20 */
- var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2;
- /** Mask for getting the lower bits for the in-data-block offset. */
- var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1;
- var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2;
- /** Count the lengths of both BMP pieces. 2080=0x820 */
- var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH;
- /**
- * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820.
- * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2.
- */
- var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH;
- var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */
- /**
- * The index-1 table, only used for supplementary code points, at offset 2112=0x840.
- * Variable length, for code points up to highStart, where the last single-value range starts.
- * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1.
- * (For 0x100000 supplementary code points U+10000..U+10ffff.)
- *
- * The part of the index-2 table for supplementary code points starts
- * after this index-1 table.
- *
- * Both the index-1 table and the following part of the index-2 table
- * are omitted completely if there is only BMP data.
- */
- var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH;
- /**
- * Number of index-1 entries for the BMP. 32=0x20
- * This part of the index-1 table is omitted from the serialized form.
- */
- var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1;
- /** Number of entries in an index-2 block. 64=0x40 */
- var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2;
- /** Mask for getting the lower bits for the in-index-2-block offset. */
- var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1;
- var slice16 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint16Array(Array.prototype.slice.call(view, start, end));
- };
- var slice32 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint32Array(Array.prototype.slice.call(view, start, end));
- };
- var createTrieFromBase64 = function (base64, _byteLength) {
- var buffer = decode(base64);
- var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer);
- var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer);
- var headerLength = 24;
- var index = slice16(view16, headerLength / 2, view32[4] / 2);
- var data = view32[5] === 2
- ? slice16(view16, (headerLength + view32[4]) / 2)
- : slice32(view32, Math.ceil((headerLength + view32[4]) / 4));
- return new Trie(view32[0], view32[1], view32[2], view32[3], index, data);
- };
- var Trie = /** @class */ (function () {
- function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) {
- this.initialValue = initialValue;
- this.errorValue = errorValue;
- this.highStart = highStart;
- this.highValueIndex = highValueIndex;
- this.index = index;
- this.data = data;
- }
- /**
- * Get the value for a code point as stored in the Trie.
- *
- * @param codePoint the code point
- * @return the value
- */
- Trie.prototype.get = function (codePoint) {
- var ix;
- if (codePoint >= 0) {
- if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) {
- // Ordinary BMP code point, excluding leading surrogates.
- // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index.
- // 16 bit data is stored in the index array itself.
- ix = this.index[codePoint >> UTRIE2_SHIFT_2];
- ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK);
- return this.data[ix];
- }
- if (codePoint <= 0xffff) {
- // Lead surrogate code point. A separate index section is stored for
- // lead surrogate code units and code points.
- // The main index has the code unit data.
- // For this function, we need the code point data.
- // Note: this expression could be refactored for slightly improved efficiency, but
- // surrogate code points will be so rare in practice that it's not worth it.
- ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)];
- ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK);
- return this.data[ix];
- }
- if (codePoint < this.highStart) {
- // Supplemental code point, use two-level lookup.
- ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1);
- ix = this.index[ix];
- ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK;
- ix = this.index[ix];
- ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK);
- return this.data[ix];
- }
- if (codePoint <= 0x10ffff) {
- return this.data[this.highValueIndex];
- }
- }
- // Fall through. The code point is outside of the legal range of 0..0x10ffff.
- return this.errorValue;
- };
- return Trie;
- }());
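For the common BMP path in `Trie.prototype.get` above, the lookup is: shift the code point right by `UTRIE2_SHIFT_2` to pick an index slot, then mask the low bits for the in-block data offset. A worked example of that arithmetic with the constants defined earlier (standalone, no trie data needed):

```javascript
// Hedged worked example of the BMP index arithmetic in Trie.get.
var UTRIE2_SHIFT_2 = 5;
var UTRIE2_DATA_MASK = (1 << UTRIE2_SHIFT_2) - 1; // 31

var cp = 0x41; // 'A'
var indexSlot = cp >> UTRIE2_SHIFT_2;   // which index entry: 0x41 >> 5 = 2
var dataOffset = cp & UTRIE2_DATA_MASK; // offset within the 32-entry block: 1
```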
-
- /*
- * base64-arraybuffer 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i = 0; i < chars.length; i++) {
- lookup[chars.charCodeAt(i)] = i;
- }
-
- var Prepend = 1;
- var CR = 2;
- var LF = 3;
- var Control = 4;
- var Extend = 5;
- var SpacingMark = 7;
- var L = 8;
- var V = 9;
- var T = 10;
- var LV = 11;
- var LVT = 12;
- var ZWJ = 13;
- var Extended_Pictographic = 14;
- var RI = 15;
- var toCodePoints = function (str) {
- var codePoints = [];
- var i = 0;
- var length = str.length;
- while (i < length) {
- var value = str.charCodeAt(i++);
- if (value >= 0xd800 && value <= 0xdbff && i < length) {
- var extra = str.charCodeAt(i++);
- if ((extra & 0xfc00) === 0xdc00) {
- codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000);
- }
- else {
- codePoints.push(value);
- i--;
- }
- }
- else {
- codePoints.push(value);
- }
- }
- return codePoints;
- };
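The surrogate-pair branch of `toCodePoints` combines a lead unit (0xD800–0xDBFF) and a trail unit (0xDC00–0xDFFF) into one supplementary code point. A minimal sketch of just that combination (the `pairToCodePoint` helper is illustrative):

```javascript
// Hedged sketch of the surrogate combination used in toCodePoints.
function pairToCodePoint(lead, trail) {
    return ((lead & 0x3ff) << 10) + (trail & 0x3ff) + 0x10000;
}
var s = '\uD83D\uDCA9'; // U+1F4A9 PILE OF POO as a UTF-16 pair
var cp = pairToCodePoint(s.charCodeAt(0), s.charCodeAt(1));
// cp → 0x1F4A9 (128169)
```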
- var fromCodePoint = function () {
- var codePoints = [];
- for (var _i = 0; _i < arguments.length; _i++) {
- codePoints[_i] = arguments[_i];
- }
- if (String.fromCodePoint) {
- return String.fromCodePoint.apply(String, codePoints);
- }
- var length = codePoints.length;
- if (!length) {
- return '';
- }
- var codeUnits = [];
- var index = -1;
- var result = '';
- while (++index < length) {
- var codePoint = codePoints[index];
- if (codePoint <= 0xffff) {
- codeUnits.push(codePoint);
- }
- else {
- codePoint -= 0x10000;
- codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00);
- }
- if (index + 1 === length || codeUnits.length > 0x4000) {
- result += String.fromCharCode.apply(String, codeUnits);
- codeUnits.length = 0;
- }
- }
- return result;
- };
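The fallback path of `fromCodePoint` performs the inverse conversion: a supplementary code point is split back into its lead/trail surrogate pair. A standalone sketch of that split (the `codePointToUnits` name is illustrative):

```javascript
// Hedged sketch of fromCodePoint's code-point-to-UTF-16 split.
function codePointToUnits(cp) {
    if (cp <= 0xffff) {
        return [cp];
    }
    cp -= 0x10000;
    return [(cp >> 10) + 0xd800, (cp % 0x400) + 0xdc00];
}
var units = codePointToUnits(0x1F4A9);
// units → [0xD83D, 0xDCA9]
```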
- var UnicodeTrie = createTrieFromBase64(base64);
- var BREAK_NOT_ALLOWED = '×';
- var BREAK_ALLOWED = '÷';
- var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); };
- var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) {
- var prevIndex = index - 2;
- var prev = classTypes[prevIndex];
- var current = classTypes[index - 1];
- var next = classTypes[index];
- // GB3 Do not break between a CR and LF
- if (current === CR && next === LF) {
- return BREAK_NOT_ALLOWED;
- }
- // GB4 Otherwise, break before and after controls.
- if (current === CR || current === LF || current === Control) {
- return BREAK_ALLOWED;
- }
- // GB5
- if (next === CR || next === LF || next === Control) {
- return BREAK_ALLOWED;
- }
- // Do not break Hangul syllable sequences.
- // GB6
- if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED;
- }
- // GB7
- if ((current === LV || current === V) && (next === V || next === T)) {
- return BREAK_NOT_ALLOWED;
- }
- // GB8
- if ((current === LVT || current === T) && next === T) {
- return BREAK_NOT_ALLOWED;
- }
- // GB9 Do not break before extending characters or ZWJ.
- if (next === ZWJ || next === Extend) {
- return BREAK_NOT_ALLOWED;
- }
- // Do not break before SpacingMarks, or after Prepend characters.
- // GB9a
- if (next === SpacingMark) {
- return BREAK_NOT_ALLOWED;
- }
- // GB9b
- if (current === Prepend) {
- return BREAK_NOT_ALLOWED;
- }
- // GB11 Do not break within emoji modifier sequences or emoji zwj sequences.
- if (current === ZWJ && next === Extended_Pictographic) {
- while (prev === Extend) {
- prev = classTypes[--prevIndex];
- }
- if (prev === Extended_Pictographic) {
- return BREAK_NOT_ALLOWED;
- }
- }
- // GB12 Do not break within emoji flag sequences.
- // That is, do not break between regional indicator (RI) symbols
- // if there is an odd number of RI characters before the break point.
- if (current === RI && next === RI) {
- var countRI = 0;
- while (prev === RI) {
- countRI++;
- prev = classTypes[--prevIndex];
- }
- if (countRI % 2 === 0) {
- return BREAK_NOT_ALLOWED;
- }
- }
- return BREAK_ALLOWED;
- };
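The GB12 branch above uses a parity check: counting consecutive regional indicators before the candidate break, an even count means the current RI pair is still incomplete, so the break is forbidden. A small sketch of just the parity rule (the `riBreak` helper is illustrative):

```javascript
// Hedged sketch of the GB12 parity decision in _graphemeBreakAtIndex:
// × = break not allowed, ÷ = break allowed, as in the constants above.
function riBreak(precedingRICount) {
    return precedingRICount % 2 === 0 ? '\u00d7' : '\u00f7';
}
// For a sequence of regional indicators (flag emoji are RI pairs):
// between the 1st and 2nd RI, 0 precede → no break (keeps the pair whole);
// between the 2nd and 3rd RI, 1 precedes → break (a new flag starts).
var inPair = riBreak(0);
var betweenFlags = riBreak(1);
```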
- var GraphemeBreaker = function (str) {
- var codePoints = toCodePoints(str);
- var length = codePoints.length;
- var index = 0;
- var lastEnd = 0;
- var classTypes = codePoints.map(codePointToClass);
- return {
- next: function () {
- if (index >= length) {
- return { done: true, value: null };
- }
- var graphemeBreak = BREAK_NOT_ALLOWED;
- while (index < length &&
- (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { }
- if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) {
- var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index));
- lastEnd = index;
- return { value: value, done: false };
- }
- return { done: true, value: null };
- },
- };
- };
- var splitGraphemes = function (str) {
- var breaker = GraphemeBreaker(str);
- var graphemes = [];
- var bk;
- while (!(bk = breaker.next()).done) {
- if (bk.value) {
- graphemes.push(bk.value.slice());
- }
- }
- return graphemes;
- };
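In runtimes that expose `Intl.Segmenter` (the capability probed by `SUPPORT_NATIVE_TEXT_SEGMENTATION` later in this file), grapheme splitting equivalent to `splitGraphemes` can be done natively instead of via the embedded trie tables. A sketch, assuming `Intl.Segmenter` is available (the `splitGraphemesNative` name is illustrative):

```javascript
// Hedged native alternative to splitGraphemes, for engines with
// Intl.Segmenter (Node 16+, modern browsers).
function splitGraphemesNative(str) {
    var segmenter = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
    return Array.from(segmenter.segment(str), function (s) { return s.segment; });
}
var parts = splitGraphemesNative('a\uD83D\uDC4Db');
// parts → ['a', '👍', 'b'] — the surrogate pair stays one grapheme
```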
-
- var testRangeBounds = function (document) {
- var TEST_HEIGHT = 123;
- if (document.createRange) {
- var range = document.createRange();
- if (range.getBoundingClientRect) {
- var testElement = document.createElement('boundtest');
- testElement.style.height = TEST_HEIGHT + "px";
- testElement.style.display = 'block';
- document.body.appendChild(testElement);
- range.selectNode(testElement);
- var rangeBounds = range.getBoundingClientRect();
- var rangeHeight = Math.round(rangeBounds.height);
- document.body.removeChild(testElement);
- if (rangeHeight === TEST_HEIGHT) {
- return true;
- }
- }
- }
- return false;
- };
- var testIOSLineBreak = function (document) {
- var testElement = document.createElement('boundtest');
- testElement.style.width = '50px';
- testElement.style.display = 'block';
- testElement.style.fontSize = '12px';
- testElement.style.letterSpacing = '0px';
- testElement.style.wordSpacing = '0px';
- document.body.appendChild(testElement);
- var range = document.createRange();
- testElement.innerHTML = typeof ''.repeat === 'function' ? '👨'.repeat(10) : '';
- var node = testElement.firstChild;
- var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); });
- var offset = 0;
- var prev = {};
- // ios 13 does not handle range getBoundingClientRect line changes correctly #2177
- var supports = textList.every(function (text, i) {
- range.setStart(node, offset);
- range.setEnd(node, offset + text.length);
- var rect = range.getBoundingClientRect();
- offset += text.length;
- var boundAhead = rect.x > prev.x || rect.y > prev.y;
- prev = rect;
- if (i === 0) {
- return true;
- }
- return boundAhead;
- });
- document.body.removeChild(testElement);
- return supports;
- };
- var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; };
- var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; };
- var testSVG = function (document) {
- var img = new Image();
- var canvas = document.createElement('canvas');
- var ctx = canvas.getContext('2d');
- if (!ctx) {
- return false;
- }
- img.src = "data:image/svg+xml, ";
- try {
- ctx.drawImage(img, 0, 0);
- canvas.toDataURL();
- }
- catch (e) {
- return false;
- }
- return true;
- };
- var isGreenPixel = function (data) {
- return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255;
- };
- var testForeignObject = function (document) {
- var canvas = document.createElement('canvas');
- var size = 100;
- canvas.width = size;
- canvas.height = size;
- var ctx = canvas.getContext('2d');
- if (!ctx) {
- return Promise.reject(false);
- }
- ctx.fillStyle = 'rgb(0, 255, 0)';
- ctx.fillRect(0, 0, size, size);
- var img = new Image();
- var greenImageSrc = canvas.toDataURL();
- img.src = greenImageSrc;
- var svg = createForeignObjectSVG(size, size, 0, 0, img);
- ctx.fillStyle = 'red';
- ctx.fillRect(0, 0, size, size);
- return loadSerializedSVG$1(svg)
- .then(function (img) {
- ctx.drawImage(img, 0, 0);
- var data = ctx.getImageData(0, 0, size, size).data;
- ctx.fillStyle = 'red';
- ctx.fillRect(0, 0, size, size);
- var node = document.createElement('div');
- node.style.backgroundImage = "url(" + greenImageSrc + ")";
- node.style.height = size + "px";
- // Firefox 55 does not render inline tags
- return isGreenPixel(data)
- ? loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node))
- : Promise.reject(false);
- })
- .then(function (img) {
- ctx.drawImage(img, 0, 0);
- // Edge does not render background-images
- return isGreenPixel(ctx.getImageData(0, 0, size, size).data);
- })
- .catch(function () { return false; });
- };
- var createForeignObjectSVG = function (width, height, x, y, node) {
- var xmlns = 'http://www.w3.org/2000/svg';
- var svg = document.createElementNS(xmlns, 'svg');
- var foreignObject = document.createElementNS(xmlns, 'foreignObject');
- svg.setAttributeNS(null, 'width', width.toString());
- svg.setAttributeNS(null, 'height', height.toString());
- foreignObject.setAttributeNS(null, 'width', '100%');
- foreignObject.setAttributeNS(null, 'height', '100%');
- foreignObject.setAttributeNS(null, 'x', x.toString());
- foreignObject.setAttributeNS(null, 'y', y.toString());
- foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true');
- svg.appendChild(foreignObject);
- foreignObject.appendChild(node);
- return svg;
- };
- var loadSerializedSVG$1 = function (svg) {
- return new Promise(function (resolve, reject) {
- var img = new Image();
- img.onload = function () { return resolve(img); };
- img.onerror = reject;
- img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg));
- });
- };
- var FEATURES = {
- get SUPPORT_RANGE_BOUNDS() {
- var value = testRangeBounds(document);
- Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value });
- return value;
- },
- get SUPPORT_WORD_BREAKING() {
- var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document);
- Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value });
- return value;
- },
- get SUPPORT_SVG_DRAWING() {
- var value = testSVG(document);
- Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value });
- return value;
- },
- get SUPPORT_FOREIGNOBJECT_DRAWING() {
- var value = typeof Array.from === 'function' && typeof window.fetch === 'function'
- ? testForeignObject(document)
- : Promise.resolve(false);
- Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value });
- return value;
- },
- get SUPPORT_CORS_IMAGES() {
- var value = testCORS();
- Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value });
- return value;
- },
- get SUPPORT_RESPONSE_TYPE() {
- var value = testResponseType();
- Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value });
- return value;
- },
- get SUPPORT_CORS_XHR() {
- var value = 'withCredentials' in new XMLHttpRequest();
- Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value });
- return value;
- },
- get SUPPORT_NATIVE_TEXT_SEGMENTATION() {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter);
- Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value });
- return value;
- }
- };
-
- var TextBounds = /** @class */ (function () {
- function TextBounds(text, bounds) {
- this.text = text;
- this.bounds = bounds;
- }
- return TextBounds;
- }());
- var parseTextBounds = function (context, value, styles, node) {
- var textList = breakText(value, styles);
- var textBounds = [];
- var offset = 0;
- textList.forEach(function (text) {
- if (styles.textDecorationLine.length || text.trim().length > 0) {
- if (FEATURES.SUPPORT_RANGE_BOUNDS) {
- var clientRects = createRange(node, offset, text.length).getClientRects();
- if (clientRects.length > 1) {
- var subSegments = segmentGraphemes(text);
- var subOffset_1 = 0;
- subSegments.forEach(function (subSegment) {
- textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects())));
- subOffset_1 += subSegment.length;
- });
- }
- else {
- textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects)));
- }
- }
- else {
- var replacementNode = node.splitText(text.length);
- textBounds.push(new TextBounds(text, getWrapperBounds(context, node)));
- node = replacementNode;
- }
- }
- else if (!FEATURES.SUPPORT_RANGE_BOUNDS) {
- node = node.splitText(text.length);
- }
- offset += text.length;
- });
- return textBounds;
- };
- var getWrapperBounds = function (context, node) {
- var ownerDocument = node.ownerDocument;
- if (ownerDocument) {
- var wrapper = ownerDocument.createElement('html2canvaswrapper');
- wrapper.appendChild(node.cloneNode(true));
- var parentNode = node.parentNode;
- if (parentNode) {
- parentNode.replaceChild(wrapper, node);
- var bounds = parseBounds(context, wrapper);
- if (wrapper.firstChild) {
- parentNode.replaceChild(wrapper.firstChild, wrapper);
- }
- return bounds;
- }
- }
- return Bounds.EMPTY;
- };
- var createRange = function (node, offset, length) {
- var ownerDocument = node.ownerDocument;
- if (!ownerDocument) {
- throw new Error('Node has no owner document');
- }
- var range = ownerDocument.createRange();
- range.setStart(node, offset);
- range.setEnd(node, offset + length);
- return range;
- };
- var segmentGraphemes = function (value) {
- if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' });
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
- }
- return splitGraphemes(value);
- };
- var segmentWords = function (value, styles) {
- if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var segmenter = new Intl.Segmenter(void 0, {
- granularity: 'word'
- });
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
- }
- return breakWords(value, styles);
- };
- var breakText = function (value, styles) {
- return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles);
- };
- // https://drafts.csswg.org/css-text/#word-separator
- var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091];
- var breakWords = function (str, styles) {
- var breaker = LineBreaker(str, {
- lineBreak: styles.lineBreak,
- wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak
- });
- var words = [];
- var bk;
- var _loop_1 = function () {
- if (bk.value) {
- var value = bk.value.slice();
- var codePoints = toCodePoints$1(value);
- var word_1 = '';
- codePoints.forEach(function (codePoint) {
- if (wordSeparators.indexOf(codePoint) === -1) {
- word_1 += fromCodePoint$1(codePoint);
- }
- else {
- if (word_1.length) {
- words.push(word_1);
- }
- words.push(fromCodePoint$1(codePoint));
- word_1 = '';
- }
- });
- if (word_1.length) {
- words.push(word_1);
- }
- }
- };
- while (!(bk = breaker.next()).done) {
- _loop_1();
- }
- return words;
- };
-
- var TextContainer = /** @class */ (function () {
- function TextContainer(context, node, styles) {
- this.text = transform(node.data, styles.textTransform);
- this.textBounds = parseTextBounds(context, this.text, styles, node);
- }
- return TextContainer;
- }());
- var transform = function (text, transform) {
- switch (transform) {
- case 1 /* LOWERCASE */:
- return text.toLowerCase();
- case 3 /* CAPITALIZE */:
- return text.replace(CAPITALIZE, capitalize);
- case 2 /* UPPERCASE */:
- return text.toUpperCase();
- default:
- return text;
- }
- };
- var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g;
- var capitalize = function (m, p1, p2) {
- if (m.length > 0) {
- return p1 + p2.toUpperCase();
- }
- return m;
- };
-
- var ImageElementContainer = /** @class */ (function (_super) {
- __extends(ImageElementContainer, _super);
- function ImageElementContainer(context, img) {
- var _this = _super.call(this, context, img) || this;
- _this.src = img.currentSrc || img.src;
- _this.intrinsicWidth = img.naturalWidth;
- _this.intrinsicHeight = img.naturalHeight;
- _this.context.cache.addImage(_this.src);
- return _this;
- }
- return ImageElementContainer;
- }(ElementContainer));
-
- var CanvasElementContainer = /** @class */ (function (_super) {
- __extends(CanvasElementContainer, _super);
- function CanvasElementContainer(context, canvas) {
- var _this = _super.call(this, context, canvas) || this;
- _this.canvas = canvas;
- _this.intrinsicWidth = canvas.width;
- _this.intrinsicHeight = canvas.height;
- return _this;
- }
- return CanvasElementContainer;
- }(ElementContainer));
-
- var SVGElementContainer = /** @class */ (function (_super) {
- __extends(SVGElementContainer, _super);
- function SVGElementContainer(context, img) {
- var _this = _super.call(this, context, img) || this;
- var s = new XMLSerializer();
- var bounds = parseBounds(context, img);
- img.setAttribute('width', bounds.width + "px");
- img.setAttribute('height', bounds.height + "px");
- _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img));
- _this.intrinsicWidth = img.width.baseVal.value;
- _this.intrinsicHeight = img.height.baseVal.value;
- _this.context.cache.addImage(_this.svg);
- return _this;
- }
- return SVGElementContainer;
- }(ElementContainer));
-
- var LIElementContainer = /** @class */ (function (_super) {
- __extends(LIElementContainer, _super);
- function LIElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.value = element.value;
- return _this;
- }
- return LIElementContainer;
- }(ElementContainer));
-
- var OLElementContainer = /** @class */ (function (_super) {
- __extends(OLElementContainer, _super);
- function OLElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.start = element.start;
- _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true;
- return _this;
- }
- return OLElementContainer;
- }(ElementContainer));
-
- var CHECKBOX_BORDER_RADIUS = [
- {
- type: 15 /* DIMENSION_TOKEN */,
- flags: 0,
- unit: 'px',
- number: 3
- }
- ];
- var RADIO_BORDER_RADIUS = [
- {
- type: 16 /* PERCENTAGE_TOKEN */,
- flags: 0,
- number: 50
- }
- ];
- var reformatInputBounds = function (bounds) {
- if (bounds.width > bounds.height) {
- return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height);
- }
- else if (bounds.width < bounds.height) {
- return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width);
- }
- return bounds;
- };
- var getInputValue = function (node) {
- var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value;
- return value.length === 0 ? node.placeholder || '' : value;
- };
- var CHECKBOX = 'checkbox';
- var RADIO = 'radio';
- var PASSWORD = 'password';
- var INPUT_COLOR = 0x2a2a2aff;
- var InputElementContainer = /** @class */ (function (_super) {
- __extends(InputElementContainer, _super);
- function InputElementContainer(context, input) {
- var _this = _super.call(this, context, input) || this;
- _this.type = input.type.toLowerCase();
- _this.checked = input.checked;
- _this.value = getInputValue(input);
- if (_this.type === CHECKBOX || _this.type === RADIO) {
- _this.styles.backgroundColor = 0xdededeff;
- _this.styles.borderTopColor =
- _this.styles.borderRightColor =
- _this.styles.borderBottomColor =
- _this.styles.borderLeftColor =
- 0xa5a5a5ff;
- _this.styles.borderTopWidth =
- _this.styles.borderRightWidth =
- _this.styles.borderBottomWidth =
- _this.styles.borderLeftWidth =
- 1;
- _this.styles.borderTopStyle =
- _this.styles.borderRightStyle =
- _this.styles.borderBottomStyle =
- _this.styles.borderLeftStyle =
- 1 /* SOLID */;
- _this.styles.backgroundClip = [0 /* BORDER_BOX */];
- _this.styles.backgroundOrigin = [0 /* BORDER_BOX */];
- _this.bounds = reformatInputBounds(_this.bounds);
- }
- switch (_this.type) {
- case CHECKBOX:
- _this.styles.borderTopRightRadius =
- _this.styles.borderTopLeftRadius =
- _this.styles.borderBottomRightRadius =
- _this.styles.borderBottomLeftRadius =
- CHECKBOX_BORDER_RADIUS;
- break;
- case RADIO:
- _this.styles.borderTopRightRadius =
- _this.styles.borderTopLeftRadius =
- _this.styles.borderBottomRightRadius =
- _this.styles.borderBottomLeftRadius =
- RADIO_BORDER_RADIUS;
- break;
- }
- return _this;
- }
- return InputElementContainer;
- }(ElementContainer));
-
- var SelectElementContainer = /** @class */ (function (_super) {
- __extends(SelectElementContainer, _super);
- function SelectElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- var option = element.options[element.selectedIndex || 0];
- _this.value = option ? option.text || '' : '';
- return _this;
- }
- return SelectElementContainer;
- }(ElementContainer));
-
- var TextareaElementContainer = /** @class */ (function (_super) {
- __extends(TextareaElementContainer, _super);
- function TextareaElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.value = element.value;
- return _this;
- }
- return TextareaElementContainer;
- }(ElementContainer));
-
- var IFrameElementContainer = /** @class */ (function (_super) {
- __extends(IFrameElementContainer, _super);
- function IFrameElementContainer(context, iframe) {
- var _this = _super.call(this, context, iframe) || this;
- _this.src = iframe.src;
- _this.width = parseInt(iframe.width, 10) || 0;
- _this.height = parseInt(iframe.height, 10) || 0;
- _this.backgroundColor = _this.styles.backgroundColor;
- try {
- if (iframe.contentWindow &&
- iframe.contentWindow.document &&
- iframe.contentWindow.document.documentElement) {
- _this.tree = parseTree(context, iframe.contentWindow.document.documentElement);
- // http://www.w3.org/TR/css3-background/#special-backgrounds
- var documentBackgroundColor = iframe.contentWindow.document.documentElement
- ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor)
- : COLORS.TRANSPARENT;
- var bodyBackgroundColor = iframe.contentWindow.document.body
- ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor)
- : COLORS.TRANSPARENT;
- _this.backgroundColor = isTransparent(documentBackgroundColor)
- ? isTransparent(bodyBackgroundColor)
- ? _this.styles.backgroundColor
- : bodyBackgroundColor
- : documentBackgroundColor;
- }
- }
- catch (e) { }
- return _this;
- }
- return IFrameElementContainer;
- }(ElementContainer));
-
- var LIST_OWNERS = ['OL', 'UL', 'MENU'];
- var parseNodeTree = function (context, node, parent, root) {
- for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) {
- nextNode = childNode.nextSibling;
- if (isTextNode(childNode) && childNode.data.trim().length > 0) {
- parent.textNodes.push(new TextContainer(context, childNode, parent.styles));
- }
- else if (isElementNode(childNode)) {
- if (isSlotElement(childNode) && childNode.assignedNodes) {
- childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); });
- }
- else {
- var container = createContainer(context, childNode);
- if (container.styles.isVisible()) {
- if (createsRealStackingContext(childNode, container, root)) {
- container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
- }
- else if (createsStackingContext(container.styles)) {
- container.flags |= 2 /* CREATES_STACKING_CONTEXT */;
- }
- if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) {
- container.flags |= 8 /* IS_LIST_OWNER */;
- }
- parent.elements.push(container);
- childNode.slot;
- if (childNode.shadowRoot) {
- parseNodeTree(context, childNode.shadowRoot, container, root);
- }
- else if (!isTextareaElement(childNode) &&
- !isSVGElement(childNode) &&
- !isSelectElement(childNode)) {
- parseNodeTree(context, childNode, container, root);
- }
- }
- }
- }
- }
- };
- var createContainer = function (context, element) {
- if (isImageElement(element)) {
- return new ImageElementContainer(context, element);
- }
- if (isCanvasElement(element)) {
- return new CanvasElementContainer(context, element);
- }
- if (isSVGElement(element)) {
- return new SVGElementContainer(context, element);
- }
- if (isLIElement(element)) {
- return new LIElementContainer(context, element);
- }
- if (isOLElement(element)) {
- return new OLElementContainer(context, element);
- }
- if (isInputElement(element)) {
- return new InputElementContainer(context, element);
- }
- if (isSelectElement(element)) {
- return new SelectElementContainer(context, element);
- }
- if (isTextareaElement(element)) {
- return new TextareaElementContainer(context, element);
- }
- if (isIFrameElement(element)) {
- return new IFrameElementContainer(context, element);
- }
- return new ElementContainer(context, element);
- };
- var parseTree = function (context, element) {
- var container = createContainer(context, element);
- container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
- parseNodeTree(context, element, container, container);
- return container;
- };
- var createsRealStackingContext = function (node, container, root) {
- return (container.styles.isPositionedWithZIndex() ||
- container.styles.opacity < 1 ||
- container.styles.isTransformed() ||
- (isBodyElement(node) && root.styles.isTransparent()));
- };
- var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); };
- var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; };
- var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; };
- var isHTMLElementNode = function (node) {
- return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node);
- };
- var isSVGElementNode = function (element) {
- return typeof element.className === 'object';
- };
- var isLIElement = function (node) { return node.tagName === 'LI'; };
- var isOLElement = function (node) { return node.tagName === 'OL'; };
- var isInputElement = function (node) { return node.tagName === 'INPUT'; };
- var isHTMLElement = function (node) { return node.tagName === 'HTML'; };
- var isSVGElement = function (node) { return node.tagName === 'svg'; };
- var isBodyElement = function (node) { return node.tagName === 'BODY'; };
- var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; };
- var isVideoElement = function (node) { return node.tagName === 'VIDEO'; };
- var isImageElement = function (node) { return node.tagName === 'IMG'; };
- var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; };
- var isStyleElement = function (node) { return node.tagName === 'STYLE'; };
- var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; };
- var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; };
- var isSelectElement = function (node) { return node.tagName === 'SELECT'; };
- var isSlotElement = function (node) { return node.tagName === 'SLOT'; };
- // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name
- var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; };
-
- var CounterState = /** @class */ (function () {
- function CounterState() {
- this.counters = {};
- }
- CounterState.prototype.getCounterValue = function (name) {
- var counter = this.counters[name];
- if (counter && counter.length) {
- return counter[counter.length - 1];
- }
- return 1;
- };
- CounterState.prototype.getCounterValues = function (name) {
- var counter = this.counters[name];
- return counter ? counter : [];
- };
- CounterState.prototype.pop = function (counters) {
- var _this = this;
- counters.forEach(function (counter) { return _this.counters[counter].pop(); });
- };
- CounterState.prototype.parse = function (style) {
- var _this = this;
- var counterIncrement = style.counterIncrement;
- var counterReset = style.counterReset;
- var canReset = true;
- if (counterIncrement !== null) {
- counterIncrement.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- if (counter && entry.increment !== 0) {
- canReset = false;
- if (!counter.length) {
- counter.push(1);
- }
- counter[Math.max(0, counter.length - 1)] += entry.increment;
- }
- });
- }
- var counterNames = [];
- if (canReset) {
- counterReset.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- counterNames.push(entry.counter);
- if (!counter) {
- counter = _this.counters[entry.counter] = [];
- }
- counter.push(entry.reset);
- });
- }
- return counterNames;
- };
- return CounterState;
- }());
- var ROMAN_UPPER = {
- integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1],
- values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I']
- };
- var ARMENIAN = {
- integers: [
- 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70,
- 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'Ք',
- 'Փ',
- 'Ւ',
- 'Ց',
- 'Ր',
- 'Տ',
- 'Վ',
- 'Ս',
- 'Ռ',
- 'Ջ',
- 'Պ',
- 'Չ',
- 'Ո',
- 'Շ',
- 'Ն',
- 'Յ',
- 'Մ',
- 'Ճ',
- 'Ղ',
- 'Ձ',
- 'Հ',
- 'Կ',
- 'Ծ',
- 'Խ',
- 'Լ',
- 'Ի',
- 'Ժ',
- 'Թ',
- 'Ը',
- 'Է',
- 'Զ',
- 'Ե',
- 'Դ',
- 'Գ',
- 'Բ',
- 'Ա'
- ]
- };
- var HEBREW = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20,
- 19, 18, 17, 16, 15, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'י׳',
- 'ט׳',
- 'ח׳',
- 'ז׳',
- 'ו׳',
- 'ה׳',
- 'ד׳',
- 'ג׳',
- 'ב׳',
- 'א׳',
- 'ת',
- 'ש',
- 'ר',
- 'ק',
- 'צ',
- 'פ',
- 'ע',
- 'ס',
- 'נ',
- 'מ',
- 'ל',
- 'כ',
- 'יט',
- 'יח',
- 'יז',
- 'טז',
- 'טו',
- 'י',
- 'ט',
- 'ח',
- 'ז',
- 'ו',
- 'ה',
- 'ד',
- 'ג',
- 'ב',
- 'א'
- ]
- };
- var GEORGIAN = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90,
- 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'ჵ',
- 'ჰ',
- 'ჯ',
- 'ჴ',
- 'ხ',
- 'ჭ',
- 'წ',
- 'ძ',
- 'ც',
- 'ჩ',
- 'შ',
- 'ყ',
- 'ღ',
- 'ქ',
- 'ფ',
- 'ჳ',
- 'ტ',
- 'ს',
- 'რ',
- 'ჟ',
- 'პ',
- 'ო',
- 'ჲ',
- 'ნ',
- 'მ',
- 'ლ',
- 'კ',
- 'ი',
- 'თ',
- 'ჱ',
- 'ზ',
- 'ვ',
- 'ე',
- 'დ',
- 'გ',
- 'ბ',
- 'ა'
- ]
- };
- var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) {
- if (value < min || value > max) {
- return createCounterText(value, fallback, suffix.length > 0);
- }
- return (symbols.integers.reduce(function (string, integer, index) {
- while (value >= integer) {
- value -= integer;
- string += symbols.values[index];
- }
- return string;
- }, '') + suffix);
- };
- var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) {
- var string = '';
- do {
- if (!isNumeric) {
- value--;
- }
- string = resolver(value) + string;
- value /= codePointRangeLength;
- } while (value * codePointRangeLength >= codePointRangeLength);
- return string;
- };
- var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) {
- var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1;
- return ((value < 0 ? '-' : '') +
- (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) {
- return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart);
- }) +
- suffix));
- };
- var createCounterStyleFromSymbols = function (value, symbols, suffix) {
- if (suffix === void 0) { suffix = '. '; }
- var codePointRangeLength = symbols.length;
- return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix);
- };
- var CJK_ZEROS = 1 << 0;
- var CJK_TEN_COEFFICIENTS = 1 << 1;
- var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2;
- var CJK_HUNDRED_COEFFICIENTS = 1 << 3;
- var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) {
- if (value < -9999 || value > 9999) {
- return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0);
- }
- var tmp = Math.abs(value);
- var string = suffix;
- if (tmp === 0) {
- return numbers[0] + string;
- }
- for (var digit = 0; tmp > 0 && digit <= 4; digit++) {
- var coefficient = tmp % 10;
- if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') {
- string = numbers[coefficient] + string;
- }
- else if (coefficient > 1 ||
- (coefficient === 1 && digit === 0) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) ||
- (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) {
- string = numbers[coefficient] + (digit > 0 ? multipliers[digit - 1] : '') + string;
- }
- else if (coefficient === 1 && digit > 0) {
- string = multipliers[digit - 1] + string;
- }
- tmp = Math.floor(tmp / 10);
- }
- return (value < 0 ? negativeSign : '') + string;
- };
- var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬';
- var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬';
- var JAPANESE_NEGATIVE = 'マイナス';
- var KOREAN_NEGATIVE = '마이너스';
- var createCounterText = function (value, type, appendSuffix) {
- var defaultSuffix = appendSuffix ? '. ' : '';
- var cjkSuffix = appendSuffix ? '、' : '';
- var koreanSuffix = appendSuffix ? ', ' : '';
- var spaceSuffix = appendSuffix ? ' ' : '';
- switch (type) {
- case 0 /* DISC */:
- return '•' + spaceSuffix;
- case 1 /* CIRCLE */:
- return '◦' + spaceSuffix;
- case 2 /* SQUARE */:
- return '◾' + spaceSuffix;
- case 5 /* DECIMAL_LEADING_ZERO */:
- var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- return string.length < 4 ? "0" + string : string;
- case 4 /* CJK_DECIMAL */:
- return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix);
- case 6 /* LOWER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 7 /* UPPER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix);
- case 8 /* LOWER_GREEK */:
- return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix);
- case 9 /* LOWER_ALPHA */:
- return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix);
- case 10 /* UPPER_ALPHA */:
- return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix);
- case 11 /* ARABIC_INDIC */:
- return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix);
- case 12 /* ARMENIAN */:
- case 49 /* UPPER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix);
- case 35 /* LOWER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 13 /* BENGALI */:
- return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix);
- case 14 /* CAMBODIAN */:
- case 30 /* KHMER */:
- return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix);
- case 15 /* CJK_EARTHLY_BRANCH */:
- return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix);
- case 16 /* CJK_HEAVENLY_STEM */:
- return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix);
- case 17 /* CJK_IDEOGRAPHIC */:
- case 48 /* TRAD_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 47 /* TRAD_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 42 /* SIMP_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 41 /* SIMP_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 26 /* JAPANESE_INFORMAL */:
- return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0);
- case 25 /* JAPANESE_FORMAL */:
- return createCJKCounter(value, '零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 31 /* KOREAN_HANGUL_FORMAL */:
- return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 33 /* KOREAN_HANJA_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0);
- case 32 /* KOREAN_HANJA_FORMAL */:
- return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 18 /* DEVANAGARI */:
- return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix);
- case 20 /* GEORGIAN */:
- return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix);
- case 21 /* GUJARATI */:
- return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix);
- case 22 /* GURMUKHI */:
- return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix);
- case 22 /* HEBREW */:
- return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix);
- case 23 /* HIRAGANA */:
- return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん');
- case 24 /* HIRAGANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす');
- case 27 /* KANNADA */:
- return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix);
- case 28 /* KATAKANA */:
- return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix);
- case 29 /* KATAKANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix);
- case 34 /* LAO */:
- return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix);
- case 37 /* MONGOLIAN */:
- return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix);
- case 38 /* MYANMAR */:
- return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix);
- case 39 /* ORIYA */:
- return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix);
- case 40 /* PERSIAN */:
- return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix);
- case 43 /* TAMIL */:
- return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix);
- case 44 /* TELUGU */:
- return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix);
- case 45 /* THAI */:
- return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix);
- case 46 /* TIBETAN */:
- return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix);
- case 3 /* DECIMAL */:
- default:
- return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- }
- };
-
- var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore';
- var DocumentCloner = /** @class */ (function () {
- function DocumentCloner(context, element, options) {
- this.context = context;
- this.options = options;
- this.scrolledElements = [];
- this.referenceElement = element;
- this.counters = new CounterState();
- this.quoteDepth = 0;
- if (!element.ownerDocument) {
- throw new Error('Cloned element does not have an owner document');
- }
- this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false);
- }
- DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) {
- var _this = this;
- var iframe = createIFrameContainer(ownerDocument, windowSize);
- if (!iframe.contentWindow) {
- return Promise.reject("Unable to find iframe window");
- }
- var scrollX = ownerDocument.defaultView.pageXOffset;
- var scrollY = ownerDocument.defaultView.pageYOffset;
- var cloneWindow = iframe.contentWindow;
- var documentClone = cloneWindow.document;
- /* Chrome doesn't detect relative background-images assigned in inline
-
-
-
-
-