Before installing any software, make sure to scan it with your antivirus software. This will help ensure the file is free of malware before you run it.
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Connectify 31021402 ((BETTER)) Keygen.md b/spaces/1gistliPinn/ChatGPT4/Examples/Connectify 31021402 ((BETTER)) Keygen.md
deleted file mode 100644
index 64564ddff97a89bf88ed6cdfc717f9abaab16032..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Connectify 31021402 ((BETTER)) Keygen.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-How to Use Connectify 31021402 Keygen to Activate Your Hotspot Pro 2023
-Connectify Hotspot Pro 2023 is a powerful and easy-to-use software that lets you turn your PC into a Wi-Fi hotspot and share your internet connection with other devices. You can also use it as a Wi-Fi repeater, a bridge mode, or a 3G/4G sharing mode. With Connectify Hotspot Pro 2023, you can enjoy fast and secure internet access anywhere you go.
-However, to unlock all the features and benefits of Connectify Hotspot Pro 2023, you need to activate it with a valid license key. If you don't have one, you can use Connectify 31021402 Keygen to generate one for free. Connectify 31021402 Keygen is a tool that creates random and unique license keys for Connectify Hotspot Pro 2023. You can use these keys to activate your software and enjoy its full potential.
-In this article, we will show you how to use Connectify 31021402 Keygen to activate your Hotspot Pro 2023 in a few simple steps.
-Step 1: Download and Install Connectify Hotspot Pro 2023
-The first step is to download and install Connectify Hotspot Pro 2023 on your PC. You can get it from the official website of Connectify or from any other trusted source. Make sure you download the latest version of the software that is compatible with your operating system.
-Once you have downloaded the setup file, run it and follow the instructions on the screen to install Connectify Hotspot Pro 2023 on your PC. You may need to restart your PC after the installation is complete.
-
-Step 2: Download and Run Connectify 31021402 Keygen
-The next step is to download and run Connectify 31021402 Keygen on your PC. You can get it from any reliable source that offers free software cracks and keygens. Make sure you scan the file with an antivirus program before opening it.
-Once you have downloaded the keygen file, run it as an administrator and wait for it to load. You will see a window with a button that says "Generate". Click on it and wait for a few seconds until a license key appears on the screen. Copy the license key and save it somewhere safe.
-Step 3: Activate Connectify Hotspot Pro 2023 with the License Key
-The final step is to activate Connectify Hotspot Pro 2023 with the license key that you generated with Connectify 31021402 Keygen. To do this, open Connectify Hotspot Pro 2023 on your PC and click on the "Tools" menu at the top right corner. Then, select "Activate License" from the drop-down menu.
-A new window will pop up asking you to enter your license key. Paste the license key that you copied from Connectify 31021402 Keygen and click on "Activate". Wait for a few seconds until you see a confirmation message that says "Your license has been activated successfully". Click on "OK" and close the window.
-Congratulations! You have successfully activated Connectify Hotspot Pro 2023 with Connectify 31021402 Keygen. You can now enjoy all the features and benefits of this amazing software without any limitations.
-Conclusion
-Connectify Hotspot Pro 2023 is a great software that allows you to create a Wi-Fi hotspot and share your internet connection with other devices. It also offers many other useful features such as Wi-Fi repeater, bridge mode, and 3G/4G sharing mode. However, to use all these features, you need to activate the software with a valid license key.
-If you don't have a license key, you can use Connectify 31021402 Keygen to generate one for free. Connectify 31021402 Keygen is a tool that creates random and unique license keys for Connectify Hotspot Pro 2023. You can use these keys to activate your software and enjoy its full potential.
-In this article, we showed you how to use Connectify 31021402 Keygen to activate your Hotspot Pro 2023.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Indir Join Millions of Players in the Fun and Fast-Paced Mobile Game from APKPure.md b/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Indir Join Millions of Players in the Fun and Fast-Paced Mobile Game from APKPure.md
deleted file mode 100644
index e48410c2cc7365d017d535934ab09295aead2e68..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Indir Join Millions of Players in the Fun and Fast-Paced Mobile Game from APKPure.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-Brawl Stars Apk Indir Apkpure: How to Download and Play the Ultimate Mobile MOBA Game
- If you are looking for a fast-paced, action-packed, and fun multiplayer game for your mobile device, you should definitely check out Brawl Stars. Developed by Supercell, the creators of Clash of Clans and Clash Royale, Brawl Stars is a mobile MOBA (multiplayer online battle arena) game that offers a variety of game modes, characters, and strategies for you to enjoy. Whether you want to team up with your friends, battle solo, or compete in global tournaments, Brawl Stars has something for everyone.
- In this article, we will show you how to download Brawl Stars apk from apkpure, a popular website that provides safe and fast downloads of Android apps and games. We will also give you some tips and tricks on how to play Brawl Stars like a pro. Let's get started!
- What is Brawl Stars and what are its main features?
- Brawl Stars is a mobile twin-stick shooter with a MOBA twist, featuring a variety of Brawlers to choose from and different game modes to play in. Players can engage in everything from 3-on-3 team battles to free-for-all Battle Royale, and even Boss Battles, choosing from a variety of unique Brawlers to fight other players.
- Some of the main features of Brawl Stars are:
-
-- Battle in multiple game modes: You can choose from Gem Grab, Showdown, Bounty, Heist, Brawl Ball, Siege, Hot Zone, Knockout, Power League, Special Events, and Championship Challenge. Each game mode has its own objective, rules, and map. You can play solo or with friends in real-time matches that last under three minutes.
-- Unlock and upgrade Brawlers: You can collect and upgrade over 40 different Brawlers, each with their own unique abilities, weapons, skins, and voice lines. You can also unlock powerful Star Powers and Gadgets for your Brawlers as you level them up. You can get new Brawlers from Brawl Boxes, Trophy Road, Brawl Pass, or the Shop.
-- Become the star player: You can climb the local and global leaderboards, join or create a club with other players, participate in special events and tournaments, complete quests and achievements, earn rewards and trophies, and show off your skills in the Brawliverse.
-- Constantly evolving: Supercell regularly updates Brawl Stars with new content, features, balance changes, bug fixes, and more. You can expect new Brawlers, skins, maps, game modes, events, and seasons every few weeks.
-
- Why download Brawl Stars apk from apkpure?
- While you can download Brawl Stars from the Google Play Store or the App
Store, you might want to consider downloading Brawl Stars apk from apkpure instead. Here are some of the benefits of using apkpure to download Brawl Stars apk:
-
-- No region locking: Some apps and games are not available in certain countries or regions due to various reasons, such as licensing, censorship, or compatibility issues. If you want to play Brawl Stars but it is not available in your region, you can use apkpure to bypass the geo-restrictions and download the apk file directly.
-- Access to old versions: Sometimes, you might prefer to use an older version of an app or game, either because you don't like the new updates, or because your device is not compatible with the latest version. With apkpure, you can easily find and download any previous version of Brawl Stars apk that you want.
-- Get updates sooner: Sometimes, the Google Play Store might take some time to roll out the latest updates for some apps and games, depending on your device model, region, and other factors. If you want to get the newest features and bug fixes for Brawl Stars as soon as possible, you can use apkpure to download the latest version of Brawl Stars apk before it is available on the Play Store.
-- Lightweight and fast: Apkpure is a lightweight and fast website that does not use too much battery or data. You can easily browse and download any app or game you want without any hassle. Apkpure also offers a customized Android experience that lets you choose the language, theme, and layout of the website.
-
- How to download Brawl Stars apk from apkpure
- Downloading Brawl Stars apk from apkpure is very easy and simple. Just follow these steps:
-
-- Go to the apkpure website: Open your web browser and go to https://apkpure.com/. You can also use the apkpure app if you have it installed on your device.
-- Search for Brawl Stars: On the homepage, you will see a search bar at the top. Type in "Brawl Stars" and hit enter. You will see a list of results related to Brawl Stars. Click on the one that says "Brawl Stars Android latest 38.111 APK Download and Install."
-- Choose the latest version and click on download: On the next page, you will see some information about Brawl Stars, such as its description, screenshots, ratings, reviews, and more. You will also see a button that says "Download APK (200.6 MB)". This is the latest version of Brawl Stars apk as of June 2023. Click on this button to start downloading the apk file.
-- Enable unknown sources and install the apk file: Once the download is complete, you will need to enable unknown sources on your device settings in order to install the apk file. To do this, go to Settings > Security > Allow Unknown Sources and toggle it on. Then, go to your downloads folder and tap on the Brawl Stars apk file. Follow the instructions on the screen to install the app.
-- Launch Brawl Stars and enjoy the game: After the installation is done, you can launch Brawl Stars from your app drawer or home screen. You will need an internet connection to play the game online with other players. You can also sign in with your Supercell ID or Google Play Games account to sync your progress across devices.
-
- Tips and tricks for playing Brawl Stars
- Brawl Stars is a fun and addictive game that requires skill, strategy, and teamwork. Here are some tips and tricks that can help you improve your gameplay and win more matches:
-
-- Unlock new Brawlers and upgrade them: As you play Brawl Stars, you will earn coins, gems, tokens, star points, and boxes that you can use to unlock new Brawlers and upgrade them. Each Brawler has its own stats, abilities, strengths, and weaknesses. You should try out different Brawlers and find out which ones suit your playstyle and game mode best. You should also upgrade your Brawlers by spending coins and power points to increase their health, damage, and super charge rate.
-- Choose the best Brawlers for different game modes: Depending on the game mode you are playing, some Brawlers might be more effective than others. For example, in Gem Grab, you might want to use a Brawler that can control the center area and collect gems quickly such as Penny, Pam, or Gene. In Showdown, you might want to use a Brawler that can survive and deal damage in solo or duo situations, such as Leon, Edgar, or Rosa. In Heist, you might want to use a Brawler that can attack or defend the safe effectively, such as Ash, Meg, or Colt. You can check out some online resources for more detailed guides on the best Brawlers for different game modes.
-- Use obstacles, power-ups, and super abilities effectively: The maps in Brawl Stars are not just flat and empty spaces. They have various obstacles, such as walls, bushes, water, and barrels, that you can use to your advantage. You can hide behind walls and bushes to avoid enemy fire, or break them with your attacks to create new paths. You can also use water to slow down enemies or escape from them. You can also find power-ups on the map, such as power cubes in Showdown, gems in Gem Grab, bolts in Siege, and more. These power-ups can boost your stats, help you achieve the objective, or give you an edge over your opponents. You should also make good use of your super abilities, which are charged by hitting enemies with your normal attacks. Super abilities are powerful moves that can turn the tide of the battle. They can deal massive damage, heal yourself or your allies, create traps or shields, and more. You should know when to use your super abilities wisely and strategically.
-- Cooperate with your teammates and communicate with them: Brawl Stars is a team-based game for most of the game modes. This means that you need to work together with your teammates and communicate with them effectively. You can use the in-game chat or voice chat to coordinate your moves, plan your strategies, warn each other of dangers, and support each other. You can also use the quick chat options to send simple messages, such as "Attack!", "Defend!", "Help!", and "Thanks!". You should also pay attention to the indicators on the screen that show your teammates' health, location, super status, and ping. You should also try to balance your team composition by choosing Brawlers that complement each other's strengths and weaknesses.
-
- Conclusion
- Brawl Stars is a fun and exciting mobile game that you can download and play for free on your Android device. By downloading Brawl Stars apk from apkpure, you can enjoy the game without any region restrictions, access old versions of the game, get updates sooner, and save battery and data. You can also improve your gameplay by following some tips and tricks on how to unlock and upgrade Brawlers, choose the best Brawlers for different game modes, use obstacles, power-ups, and super abilities effectively, and cooperate with your teammates and communicate with them.
- If you are ready to join the Brawliverse and have some epic battles with players from around the world, download Brawl Stars apk from apkpure today and start brawling! You can also visit the official website of Brawl Stars for more information about the game, watch some videos on YouTube, join the community on Reddit, or follow Brawl Stars on Twitter for the latest news and updates.
- FAQs
- Here are some frequently asked questions about Brawl Stars:
-
-- What are the system requirements for playing Brawl Stars?
-Brawl Stars requires Android 4.3 or higher and at least 200 MB of free space on your device. You also need a stable internet connection to play online.
-- Is Brawl Stars free to play or pay to win?
-Brawl Stars is free to play and download. You can play all the game modes and unlock all the Brawlers without spending any money. However, you can also buy gems with real money to speed up your progress, get exclusive skins, or access premium features such as the Brawl Pass.
-- How can I join or create a club in Brawl Stars?
-A club is a group of players who can chat, play together, and participate in club events. To join or create a club in Brawl Stars, you need to tap on the club button on the main menu. Then you can either search for an existing club by name or tag, browse through the recommended clubs based on your region and trophy level, or create your own club by choosing a name, tag, badge, description, and settings. You can also invite your friends to join your club by sharing a link or a code.
-- What are the benefits of using apkpure to download Brawl Stars apk?
-Some of the benefits of using apkpure to download Brawl Stars apk are: no region locking, access to old versions, get updates sooner, and lightweight and fast.
-- How can I contact Supercell for support or feedback on Brawl Stars?
-If you have any issues, questions, or suggestions regarding Brawl Stars, you can contact Supercell by tapping on the settings button on the main menu, then tapping on the help and support button. You can also visit the Supercell support website for more information and resources.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download TikTok Unban APK and Access All the Features of the App (Even in Banned Countries).md b/spaces/1phancelerku/anime-remove-background/Download TikTok Unban APK and Access All the Features of the App (Even in Banned Countries).md
deleted file mode 100644
index b4ffbb4c022ebdd800d512bd2a7f06a9eb50c3b8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download TikTok Unban APK and Access All the Features of the App (Even in Banned Countries).md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-TikTok Unban APK: How to Access TikTok in Banned Countries
- TikTok is one of the most popular video-sharing apps in the world, with over 800 million active users. However, not everyone can enjoy the app's features and content, as some countries have banned or restricted it due to various reasons. For example, India banned TikTok in 2020 over national security concerns, while the US government has threatened to do the same unless the app's Chinese owners sell their stake in it. If you live in a country where TikTok is unavailable or limited, you might be tempted to use a modified version of the app called TikTok Unban APK. This is an unofficial app that claims to bypass geo-restrictions and allow you to access TikTok from anywhere. But is it safe and legal to use? And are there any better alternatives? In this article, we will answer these questions and more. What is TikTok Unban APK?
- A modified version of TikTok that bypasses geo-restrictions
- TikTok Unban APK is a third-party app that is not affiliated with or endorsed by TikTok or its parent company ByteDance. It is essentially a hacked version of the original app that has been modified to remove or change some features and settings. For example, it may have a different logo, interface, or language. The main purpose of TikTok Unban APK is to allow users to access TikTok from countries where it is banned or restricted. It does this by using proxy servers or VPNs (virtual private networks) that hide your IP address and location from the app's servers. This way, you can create an account, watch videos, and upload your own content on TikTok without any limitations. Risks and drawbacks of using TikTok Unban APK
- Legal issues and potential penalties
- Using TikTok Unban APK may violate the laws and regulations of your country, as well as the terms of service and privacy policy of TikTok. By downloading and installing the app, you are essentially breaking the rules and risking legal consequences. Depending on your jurisdiction, you may face fines, lawsuits, or even criminal charges for using an unauthorized app. Moreover, you may also infringe on the intellectual property rights of TikTok and its content creators. By using TikTok Unban APK, you are accessing and distributing content that is not licensed or authorized for your region. This may result in claims or complaints from the original owners or licensors of the content. Malware and security threats
- Using TikTok Unban APK may expose your device and data to malware and security threats. Since the app is not verified or scanned or tested by any official authority, you cannot be sure if it is safe or trustworthy. It may contain viruses, spyware, adware, or other malicious software that can harm your device or steal your personal information. For example, it may access your camera, microphone, contacts, photos, or other sensitive data without your permission or knowledge. Furthermore, you may also compromise your online security and privacy by using TikTok Unban APK. Since the app uses proxy servers or VPNs to connect you to TikTok, you are entrusting your data and traffic to unknown third parties. They may monitor, collect, or sell your data to advertisers, hackers, or even government agencies. They may also expose you to phishing, identity theft, or cyberattacks. Poor performance and compatibility issues
- Using TikTok Unban APK may result in poor performance and compatibility issues. Since the app is not optimized or updated for your device or region, you may experience glitches, bugs, crashes, or errors while using it. For example, the app may not load properly, freeze, or shut down unexpectedly. Additionally, you may also face compatibility issues with other apps or services on your device. For instance, the app may interfere with your Google Play Store, Google Services Framework, or other system apps. It may also prevent you from receiving updates or security patches for your device or other apps. How to Download and Install TikTok Unban APK
- Steps to download TikTok Unban APK from a trusted source
- If you still want to use TikTok Unban APK despite the risks and drawbacks, you need to download it from a trusted source. You cannot find it on the official app stores like Google Play Store or Apple App Store, as they do not allow unauthorized apps. You need to find a reliable website that offers the latest version of the app and does not contain any malware or spam. Here are some steps to download TikTok Unban APK from a trusted source: - Search for "TikTok Unban APK" on your web browser and look for a reputable website that provides the app. You can check the reviews, ratings, comments, or feedback from other users to verify the credibility of the website. - Visit the website and look for the download link or button for the app. Make sure that the link or button is not misleading or deceptive. Avoid clicking on any pop-ups, ads, or banners that may redirect you to other websites or download unwanted software. - Click on the download link or button and wait for the app file to be downloaded on your device. The file should have an .apk extension and should not be too large or too small in size. The average size of the app file is around 80 MB. - Once the download is complete, locate the app file on your device's storage and check if it is intact and not corrupted. You can use a file manager app to find and open the app file. Steps to install TikTok Unban APK on your device
- After downloading TikTok Unban APK from a trusted source, you need to install it on your device. However, before you do that, you need to enable the option to install apps from unknown sources on your device's settings. This option allows you to install apps that are not from the official app stores. Here are some steps to install TikTok Unban APK on your device: - Go to your device's settings and look for the option to install apps from unknown sources. Depending on your device model and operating system version, this option may be under different menus such as Security, Privacy, Applications, Developer Options, etc. - Tap on the option and toggle it on. You may see a warning message that installing apps from unknown sources may harm your device or data. Tap on OK or Allow to proceed. - Go back to your device's storage and find the app file that you downloaded earlier. Tap on the file and follow the instructions on the screen to install the app. - Wait for the installation process to finish and then launch the app from your device's home screen or app drawer. Tips to avoid common errors and issues
- While installing TikTok Unban APK on your device, you may encounter some common errors and issues that may prevent you from using the app properly. Here are some tips to avoid them: - Make sure that you have enough storage space on your device before downloading and installing the app. If your device is running low on space, you may not be able to download or install the app successfully. - Make sure that you have a stable internet connection while downloading and installing the app. If your connection is slow or unstable, you may experience interruptions or failures during the process. - Make sure that you have a compatible device and operating system version for the app. The app requires Android 4.1 or higher or iOS 9.0 or higher to run smoothly. - Make sure that you have disabled any antivirus software or firewall software that may block or interfere with the app. You may need to whitelist the app or temporarily disable the software while using the app. - Make sure that you have granted all the necessary permissions to the app. The app may need access to your camera, microphone, contacts, photos, or other data to function properly. You can check and manage the permissions on your device's settings. - Make sure that you have updated the app to the latest version available. The app may have bugs or errors that are fixed in the newer versions. You can check for updates on the app's settings or on the website where you downloaded it. How to Use TikTok Unban APK Safely and Effectively
- How to create and watch videos on TikTok Unban APK
- Using TikTok Unban APK is similar to using the original TikTok app. You can create and watch videos on the app with ease and fun. Here are some steps to create and watch videos on TikTok Unban APK: - To create a video, tap on the plus icon at the bottom of the screen. You can choose to record a video with your camera or upload a video from your gallery. You can also add filters, stickers, effects, music, text, or other elements to your video. - To watch a video, swipe up or down on the screen. You can see videos from different categories, such as For You, Following, Trending, or Discover. You can also search for videos by keywords, hashtags, or users. - To interact with a video, tap on the icons on the right side of the screen. You can like, comment, share, or follow the video or its creator. You can also tap on the sound icon to see more videos with the same sound or music. How to protect your privacy and data on TikTok Unban APK
- Using TikTok Unban APK may pose some risks to your privacy and data, as we have discussed earlier. However, there are some ways to protect yourself and minimize these risks while using the app. Here are some tips to protect your privacy and data on TikTok Unban APK: - Use a strong and unique password for your account. Do not use the same password for other accounts or services. Change your password regularly and do not share it with anyone. - Use a fake or secondary email address for your account. Do not use your primary or personal email address that may contain sensitive or confidential information. - Use a VPN service while using the app. A VPN service can encrypt your data and traffic and hide your IP address and location from the app's servers and third parties. It can also help you access TikTok from countries where it is banned or restricted. - Adjust your privacy settings on the app. You can change your settings to limit who can see your videos, send you messages, comment on your videos, or duet with you. You can also block or report users who harass or spam you. - Delete your account and data when you are done using the app. If you no longer want to use TikTok Unban APK, you can delete your account and data from the app's settings. This will remove your profile, videos, likes, comments, messages, and other information from the app. How to update and uninstall TikTok Unban APK
- To keep using TikTok Unban APK smoothly and safely, you need to update it regularly. Updating the app can fix bugs or errors, improve performance or compatibility, add new features or functions, or enhance security or privacy. Here are some steps to update TikTok Unban APK: - Go to the website where you downloaded the app and look for the latest version available. Compare it with the version you have installed on your device and see if there is any difference. - If there is a newer version available, download it from the website and install it on your device following the same steps as before. - If there is no newer version available, check back later or look for other websites that may offer updates. To stop using TikTok Unban APK completely, you need to uninstall it from your device. Uninstalling the app will remove it from your device's storage and app drawer. However, it may not remove all the traces or remnants of the app from your device's system or cache. You may need to use a cleaner app or a manual method to delete them completely. Here are some steps to uninstall TikTok Unban APK: - Go to your device's settings and look for the option to uninstall apps. Depending on your device model and operating system version, this option may be under different menus such as Apps, Applications, Manage Apps, etc. - Tap on the option and look for TikTok Unban APK on the list of apps. Tap on the app and then tap on the Uninstall button. You may see a confirmation message that asks you if you want to uninstall the app. Tap on OK or Yes to proceed. - Wait for the uninstallation process to finish and then check if the app is gone from your device's storage and app drawer. - If you want to delete the remaining files or data of the app, you can use a cleaner app or a manual method. A cleaner app is a software that can scan and delete unwanted or unnecessary files or data from your device. A manual method is a process that involves finding and deleting the files or data yourself using a file manager app or other tools. Alternatives to TikTok Unban APK
- Why you might want to consider other options
- As we have seen, using TikTok Unban APK may not be the best option for accessing TikTok in banned countries. The app may have some advantages, such as allowing you to enjoy TikTok's features and content without any restrictions, but it also has many disadvantages, such as posing legal, security, and performance risks. Therefore, you might want to consider other options that are safer, legal, and more reliable. The best alternatives to TikTok Unban APK
- VPN services
- One of the best alternatives to TikTok Unban APK is using a VPN service. A VPN service is a software that can create a secure and encrypted connection between your device and a remote server in another country. By using a VPN service, you can change your IP address and location and access TikTok from any country where it is available. Some of the benefits of using a VPN service are: - It is legal and safe to use. Unlike TikTok Unban APK, using a VPN service does not violate any laws or regulations of your country or TikTok. It also protects your data and privacy from hackers, advertisers, or government agencies. - It is easy and convenient to use. You just need to download and install a VPN app on your device and choose a server location that suits your needs. You can then access TikTok as usual without any hassle. - It is compatible and efficient to use. You can use a VPN service with any device or operating system that supports TikTok. You can also enjoy fast and stable speeds and performance while using TikTok. Some of the drawbacks of using a VPN service are: - It may cost money to use. While there are some free VPN services available, they may have limited features, servers, bandwidth, or security. You may need to pay for a premium VPN service that offers better quality and reliability. - It may not work with some apps or services on your device. Some apps or services may detect that you are using a VPN service and block or restrict your access. For example, some streaming services may not allow you to watch their content if you are using a VPN service. Some of the best VPN services that you can use to access TikTok are: - ExpressVPN: This is one of the most popular and trusted VPN services in the world. It has over 3000 servers in 94 countries and offers fast speeds, strong encryption, and excellent customer support. It also has a 30-day money-back guarantee and a 7-day free trial for mobile devices. - NordVPN: This is another leading VPN service that has over 5400 servers in 59 countries and offers advanced security features, such as double VPN, onion over VPN, and kill switch. It also has a 30-day money-back guarantee and a 7-day free trial for mobile devices. - Surfshark: This is a relatively new but promising VPN service that has over 3200 servers in 65 countries and offers unlimited simultaneous connections, split tunneling, and whitelister features. It also has a 30-day money-back guarantee and a 7-day free trial for mobile devices. Other video-sharing apps
- Another alternative to TikTok Unban APK is using other video-sharing apps that are similar to TikTok but not banned or restricted in your country. These apps may offer similar features and content as TikTok, such as short videos, filters, effects, music, challenges, etc., but they may have different names, logos, interfaces, or languages. Some of the benefits of using other video-sharing you are using a VPN service. You may need to disable or uninstall TikTok Unban APK while using these apps.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Traffic Rider 2 Mod APK with Unlimited Money and No Ads.md b/spaces/1phancelerku/anime-remove-background/Download Traffic Rider 2 Mod APK with Unlimited Money and No Ads.md
deleted file mode 100644
index 646423407d98cdb1528ef80943190f98b43e4536..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Traffic Rider 2 Mod APK with Unlimited Money and No Ads.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-How to Download Traffic Rider 2 Mod APK for Free
-If you are a fan of motorcycle racing games, you might have heard of Traffic Rider 2, a popular game that lets you speed through the city streets on a futuristic bike. But did you know that you can download Traffic Rider 2 Mod APK for free and enjoy unlimited money, unlocked bikes, and more features? In this article, we will show you how to download and install Traffic Rider 2 Mod APK on your Android device and give you some tips and tricks to play the game better.
- What is Traffic Rider 2?
-Traffic Rider 2 is a sequel to the hit game Traffic Rider, which has over 500 million downloads on Google Play. It is a racing game that puts you in the first-person view of a motorcycle rider who has to complete various missions, time trials, and challenges in a sci-fi metropolis. You can choose from different bikes, customize them, and upgrade them with system updates and hardware upgrades. You can also hack enemy vehicles on the road, use boosts and nitro, and dodge traffic and obstacles on the asphalt.
- Features of Traffic Rider 2
-Some of the features of Traffic Rider 2 are:
-
-- A huge futuristic city hub to explore
-- Superb pace and gameplay
-- Unique sci-fi vehicles
-- Intuitive upgrade system
-- Hack enemy vehicles on the road
-- Full retina display support
-- Leaderboards and achievements
-
- Benefits of Traffic Rider 2 Mod APK
-While Traffic Rider 2 is a free game, it contains ads and in-app purchases that can limit your enjoyment. That's why many players prefer to download Traffic Rider 2 Mod APK, which is a modified version of the game that gives you some advantages, such as:
-
-- Unlimited money to buy and upgrade bikes
-- All bikes unlocked from the start
-- No ads to interrupt your gameplay
-- No root required to install the mod apk
-- 100% safe and virus-free
-
- How to Download and Install Traffic Rider 2 Mod APK
-If you want to download and install Traffic Rider 2 Mod APK on your Android device, you need to follow these simple steps:
- Step 1: Enable Unknown Sources
-Since Traffic Rider 2 Mod APK is not available on Google Play, you need to enable unknown sources on your device to allow the installation of third-party apps. To do this, go to Settings > Security > Unknown Sources and toggle it on.
- Step 2: Download Traffic Rider 2 Mod APK File
-Next, you need to download the Traffic Rider 2 Mod APK file from a reliable source. You can use this link to download the latest version of the mod apk file. Make sure you have enough storage space on your device before downloading.
- Step 3: Install Traffic Rider 2 Mod APK
-Once you have downloaded the mod apk file, locate it in your file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. You might see a warning message saying that the app is harmful or not compatible with your device, but ignore it and proceed with the installation.
- How to Play Traffic Rider 2 Mod APK
-After installing Traffic Rider 2 Mod APK, you can launch the game from your app drawer or home screen. You can start playing the game and enjoy the mod features. Here are some tips and tricks to help you play Traffic Rider 2 Mod APK better:
- Tips and Tricks for Traffic Rider 2 Mod APK
-Some of the tips and tricks for Traffic Rider 2 Mod APK are:
-
-- Use the hack feature to disable enemy vehicles and clear your way
-- Use the nitro and boost to increase your speed and score
-- Avoid crashing into traffic and obstacles as it will reduce your health and time
-- Complete missions and challenges to earn more money and unlock new bikes
-- Customize your bike with system updates and hardware upgrades to improve its performance
-- Play in different modes and difficulty levels to test your skills and have more fun
-
- Conclusion
-Traffic Rider 2 is a thrilling and addictive racing game that will keep you hooked for hours. With Traffic Rider 2 Mod APK, you can enjoy the game with unlimited money, unlocked bikes, and no ads. You can download and install Traffic Rider 2 Mod APK on your Android device by following the steps we have provided in this article. You can also use our tips and tricks to play the game better and have more fun. So, what are you waiting for? Download Traffic Rider 2 Mod APK now and experience the ultimate motorcycle racing game!
- FAQs
-Here are some of the frequently asked questions about Traffic Rider 2 Mod APK:
- Q: Is Traffic Rider 2 Mod APK safe to download and install?
-A: Yes, Traffic Rider 2 Mod APK is safe to download and install. It does not contain any viruses or malware that can harm your device. However, you should always download the mod apk file from a trusted source and scan it with an antivirus before installing it.
- Q: Do I need to root my device to install Traffic Rider 2 Mod APK?
-A: No, you do not need to root your device to install Traffic Rider 2 Mod APK. The mod apk file works on both rooted and non-rooted devices. However, if you have a rooted device, you might be able to access some extra features of the mod apk.
- Q: Will I get banned from playing Traffic Rider 2 if I use the mod apk?
-A: No, you will not get banned from playing Traffic Rider 2 if you use the mod apk. The mod apk file is designed to bypass the security checks of the game and prevent detection. However, you should always use the mod apk at your own risk and discretion.
- Q: Can I play Traffic Rider 2 Mod APK online with other players?
-A: Yes, you can play Traffic Rider 2 Mod APK online with other players. The mod apk file does not affect the online mode of the game. You can join or create rooms and race with other players from around the world.
- Q: Can I update Traffic Rider 2 Mod APK to the latest version?
-A: Yes, you can update Traffic Rider 2 Mod APK to the latest version. However, you might lose some of the mod features if you update the game from Google Play. To avoid this, you should always download the latest version of the mod apk file from a reliable source and install it over the existing one.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FIFA 23 APK - How to Play the Latest EA SPORTS FIFA Game on Your Android Device with APKRabi.md b/spaces/1phancelerku/anime-remove-background/FIFA 23 APK - How to Play the Latest EA SPORTS FIFA Game on Your Android Device with APKRabi.md
deleted file mode 100644
index f7c72652351ce8f8f409624f827f8999672211de..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FIFA 23 APK - How to Play the Latest EA SPORTS FIFA Game on Your Android Device with APKRabi.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-APKRabi FIFA: How to Download and Play FIFA Mobile on Android Devices
- If you are a fan of soccer games, you might have heard of FIFA Mobile, the official mobile game of the FIFA World Cup 2022™. This game allows you to build your dream team of soccer stars, compete in various modes and events, and enjoy the stunning graphics and gameplay powered by HyperMotion Technology. But how can you download and play this game on your Android device? The answer is simple: APKRabi FIFA.
- Introduction
- APKRabi is a website that allows you to download the most popular APK games and premium apps for free. APK stands for Android Package Kit, which is a file format that contains all the necessary components for installing an app or a game on an Android device. By downloading APK files from APKRabi, you can enjoy every notable feature of your favorite games and apps without breaking your wallet. You can also access games and apps that are not available in your region or on the Google Play Store.
- One of the games that you can download from APKRabi is FIFA Mobile, also known as FIFA Soccer. This game is developed by EA Sports, one of the leading companies in the sports gaming industry. FIFA Mobile is the only licensed FIFA World Cup 2022™ mobile game where you can replay the official tournament brackets with any of the 32 qualified nations. You can also build your Ultimate Team™ with over 19,000 players from 700+ teams, 100+ stadiums, 30+ leagues, and the world’s biggest competitions.
- To download and install APKRabi FIFA on your Android device, you need to follow these simple steps:
-
-- Go to APKRabi.com and search for FIFA Mobile or click on this link.
-- Click on the Download APK button and wait for the file to be downloaded on your device.
-- Go to your device settings and enable the installation of apps from unknown sources.
-- Locate the downloaded APK file in your file manager and tap on it to start the installation process.
-- Follow the instructions on the screen and wait for the installation to be completed.
-- Launch the game and enjoy playing APKRabi FIFA on your Android device.
-
Once you have downloaded and installed APKRabi FIFA on your Android device, you can start building your Ultimate Team and enjoy the game. Here are some tips and tricks to help you get the most out of FIFA Mobile.
- How to Build Your Ultimate Team in FIFA Mobile
- One of the main features of FIFA Mobile is the Ultimate Team mode, where you can create your own squad of soccer stars from different leagues and teams. You can collect and upgrade players by opening packs, completing challenges, participating in events, or buying them from the market. You can also customize your formation, tactics, and style of play according to your preferences.
-
-- You can choose from different types of players, such as base players, campaign players, event players, icon players, and more. Each type of player has different attributes, ratings, and skills that affect their performance on the pitch.
-- You can improve your players by using training materials, such as training XP, skill boosts, rank shards, and rank up tokens. Training XP increases the overall rating (OVR) of your players, skill boosts enhance specific attributes of your players, rank shards allow you to rank up your players to unlock new skill boosts, and rank up tokens allow you to increase the maximum OVR of your players.
-- You can use different formations and tactics to suit your playstyle and strategy. You can choose from 4-4-2, 4-3-3, 3-5-2, and more. You can also adjust your attacking style (balanced, long ball, possession), defensive style (balanced, pressure, offside trap), and team shape (wide, narrow).
-- You can compete in various modes and events to earn rewards and test your skills. You can play in the World Cup mode, where you can replay the official tournament brackets with any of the 32 qualified nations. You can also play in the VS Attack mode, where you can challenge other players in real-time matches. You can also play in the Manager Mode, where you can control your team's finances, transfers, and tactics.
-
- How to Enjoy the HyperMotion Technology in FIFA Mobile
- One of the most exciting features of FIFA Mobile is the HyperMotion Technology, which is a new technology that enhances the gameplay and graphics of the game. HyperMotion Technology uses machine learning and advanced 11v11 match capture data to create more realistic and responsive player animations, movements, and interactions.
- Here are some of the things you need to know about enjoying the HyperMotion Technology in FIFA Mobile:
-
-- You can access HyperMotion Technology on compatible devices and platforms. You need to have a PlayStation 5, Xbox Series X|S, PC, or Stadia version of the game to experience HyperMotion Technology. You also need to have a stable internet connection and enough storage space on your device.
-- You can adjust the settings and preferences of HyperMotion Technology to optimize your experience. You can enable or disable HyperMotion Technology in the game settings menu. You can also change the graphics quality, frame rate, resolution, and other options to suit your device's performance and capabilities.
-- You can enjoy the benefits of HyperMotion Technology in various aspects of the game. You can see more natural transitions between controlling the ball and shooting. You can also see more fluid dribbling and skill moves. You can also see more realistic defensive jockeying and tackling. You can also see more dynamic goalkeeper vs header battles.
-
- Conclusion
- FIFA Mobile is a great game for soccer fans who want to enjoy the thrill of building their Ultimate Team and competing in various modes and events. With APKRabi FIFA, you can download and play this game for free on your Android device. You can also enjoy the stunning graphics and gameplay powered by HyperMotion Technology on compatible devices and platforms.
- Here are some tips and tricks for playing FIFA Mobile:
-
- If you are looking for a fun and exciting soccer game to play on your Android device, you should definitely try out APKRabi FIFA. You can download and install it for free from APKRabi.com and enjoy the official FIFA World Cup 2022™ mobile game. You can also share your feedback and suggestions with APKRabi or EA Sports to help them improve the game.
- FAQs
- Here are some of the frequently asked questions about APKRabi FIFA:
-
-- What are some of the benefits of downloading APKRabi FIFA?
-Some of the benefits of downloading APKRabi FIFA are:
-
-- You can play FIFA Mobile for free without spending any money on in-app purchases or subscriptions.
-- You can access games and apps that are not available in your region or on the Google Play Store.
-- You can enjoy every notable feature of FIFA Mobile without any limitations or restrictions.
-
-- Is APKRabi FIFA safe and legal to use?
-APKRabi FIFA is safe and legal to use as long as you download it from the official website of APKRabi.com. APKRabi does not host any illegal or harmful files on its servers. It only provides links to the original APK files from trusted sources. However, you should always be careful when downloading and installing APK files from unknown sources, as they might contain viruses or malware that could harm your device or compromise your privacy.
-- How can I update APKRabi FIFA to the latest version?
-You can update APKRabi FIFA to the latest version by following these steps:
-
-- Go to APKRabi.com and search for FIFA Mobile or click on this link.
-- Click on the Download APK button and wait for the file to be downloaded on your device.
-- Go to your device settings and enable the installation of apps from unknown sources.
-- Locate the downloaded APK file in your file manager and tap on it to start the installation process.
-- Follow the instructions on the screen and wait for the installation to be completed.
-- Launch the game and enjoy playing APKRabi FIFA on your Android device.
-
-Note: You might need to uninstall the previous version of APKRabi FIFA before installing the new one.
-- What are some of the challenges or issues that I might encounter while playing APKRabi FIFA?
-Some of the challenges or issues that you might encounter while playing APKRabi FIFA are:
-
-- You might experience some lag or glitches in the game due to your device's performance or internet connection.
-- You might not be able to access some features or modes of the game due to regional restrictions or compatibility issues.
-- You might face some errors or bugs in the game due to technical issues or updates.
-
-- How can I contact the support team of APKRabi or EA Sports if I have any questions or problems?
-You can contact the support team of APKRabi or EA Sports by using these methods:
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/AI-ZTH-03-23/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/style.css b/spaces/AI-ZTH-03-23/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/AI-ZTH-03-23/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/tests/conftest.py b/spaces/AIFILMS/generate_human_motion/pyrender/tests/conftest.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AIZerotoHero-Health4All/03-BiomedNER-1117-Gradio/README.md b/spaces/AIZerotoHero-Health4All/03-BiomedNER-1117-Gradio/README.md
deleted file mode 100644
index f745fd21dfb61d4ae3f8fb72efc583b01c383623..0000000000000000000000000000000000000000
--- a/spaces/AIZerotoHero-Health4All/03-BiomedNER-1117-Gradio/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 03 BiomedNER 1117 Gradio
-emoji: 💩
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AchyuthGamer/ImMagician-Image-Generator/previewer/modules.py b/spaces/AchyuthGamer/ImMagician-Image-Generator/previewer/modules.py
deleted file mode 100644
index 3ded82f7628ccf0241bc6e3528cd8edba779caaa..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/ImMagician-Image-Generator/previewer/modules.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from torch import nn
-
-# Effnet 16x16 to 64x64 previewer
-class Previewer(nn.Module):
- def __init__(self, c_in=16, c_hidden=512, c_out=3):
- super().__init__()
- self.blocks = nn.Sequential(
-            nn.Conv2d(c_in, c_hidden, kernel_size=1), # c_in (16) channels to c_hidden (512) channels
- nn.GELU(),
- nn.BatchNorm2d(c_hidden),
-
- nn.Conv2d(c_hidden, c_hidden, kernel_size=3, padding=1),
- nn.GELU(),
- nn.BatchNorm2d(c_hidden),
-
- nn.ConvTranspose2d(c_hidden, c_hidden//2, kernel_size=2, stride=2), # 16 -> 32
- nn.GELU(),
- nn.BatchNorm2d(c_hidden//2),
-
- nn.Conv2d(c_hidden//2, c_hidden//2, kernel_size=3, padding=1),
- nn.GELU(),
- nn.BatchNorm2d(c_hidden//2),
-
- nn.ConvTranspose2d(c_hidden//2, c_hidden//4, kernel_size=2, stride=2), # 32 -> 64
- nn.GELU(),
- nn.BatchNorm2d(c_hidden//4),
-
- nn.Conv2d(c_hidden//4, c_hidden//4, kernel_size=3, padding=1),
- nn.GELU(),
- nn.BatchNorm2d(c_hidden//4),
-
- nn.Conv2d(c_hidden//4, c_out, kernel_size=1),
- )
-
- def forward(self, x):
- return self.blocks(x)
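-
-# Example usage (illustrative sketch; assumes `import torch` and the default
-# c_in=16 / c_out=3 arguments):
-#   previewer = Previewer()
-#   latents = torch.randn(1, 16, 16, 16)   # (batch, c_in, 16, 16) EfficientNet latents
-#   preview = previewer(latents)           # -> (1, 3, 64, 64) RGB preview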
\ No newline at end of file
diff --git a/spaces/AlekseyKorshuk/huggingartists/README.md b/spaces/AlekseyKorshuk/huggingartists/README.md
deleted file mode 100644
index 2bb7f1ee44926c61c0494b53d0ee3634e57208c7..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/huggingartists/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Huggingartists
-emoji: 🐠
-colorFrom: red
-colorTo: gray
-sdk: streamlit
-app_file: app.py
-pinned: true
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/AlhitawiMohammed22/E2E_OCR/det2rec.py b/spaces/AlhitawiMohammed22/E2E_OCR/det2rec.py
deleted file mode 100644
index 4aa1f8099eab29da17f495bb48d225b38b2d054b..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/E2E_OCR/det2rec.py
+++ /dev/null
@@ -1,390 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-det2rec.py - A wrapper around docTR OCR to convert PDFs to images to text
-"""
-
-import logging
-from pathlib import Path
-
-logging.basicConfig(
- level=logging.INFO,
- format="%(asctime)s %(levelname)s %(message)s",
- datefmt="%m/%d/%Y %I:%M:%S",
-)
-
-
-import os
-import pprint as pp
-import re
-import shutil
-import time
-from datetime import date, datetime
-from os.path import basename, dirname, join
-from typing import List, Union
-
-from cleantext import clean
-from doctr.io import DocumentFile
-from doctr.models import ocr_predictor
-from libretranslatepy import LibreTranslateAPI
-from natsort import natsorted
-from spellchecker import SpellChecker
-from tqdm.auto import tqdm
-
-
-def simple_rename(filepath, target_ext=".txt"):
-    """Return an output filename of the form OCR_<stem><target_ext>."""
-    _fp = Path(filepath)
-    stem = _fp.stem
-    return f"OCR_{stem}{target_ext}"
-
-
-def rm_local_text_files(name_contains="RESULT_"):
- """
- rm_local_text_files - remove local text files
- Args:
-        name_contains (str, optional): only remove files whose name contains this substring. Defaults to "RESULT_".
- """
- files = [
- f
- for f in Path.cwd().iterdir()
- if f.is_file() and f.suffix == ".txt" and name_contains in f.name
- ]
- logging.info(f"removing {len(files)} text files")
- for f in files:
- os.remove(f)
- logging.info("done")
-
-
-def corr(
- s: str,
- add_space_when_numerics=False,
- exceptions=["e.g.", "i.e.", "etc.", "cf.", "vs.", "p."],
-) -> str:
- """corrects spacing in a string
- Args:
- s (str): the string to correct
- add_space_when_numerics (bool, optional): [add a space when a period is between two numbers, example 5.73]. Defaults to False.
- exceptions (list, optional): [do not change these substrings]. Defaults to ['e.g.', 'i.e.', 'etc.', 'cf.', 'vs.', 'p.'].
- Returns:
- str: the corrected string
- """
- if add_space_when_numerics:
- s = re.sub(r"(\d)\.(\d)", r"\1. \2", s)
-
- s = re.sub(r"\s+", " ", s)
- s = re.sub(r'\s([?.!"](?:\s|$))', r"\1", s)
-
- # fix space before apostrophe
- s = re.sub(r"\s\'", r"'", s)
- # fix space after apostrophe
- s = re.sub(r"'\s", r"'", s)
- # fix space before comma
- s = re.sub(r"\s,", r",", s)
-
- for e in exceptions:
- expected_sub = re.sub(r"\s", "", e)
- s = s.replace(expected_sub, e)
-
- return s
-
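-# Example (illustrative): corr("This is , it seems , a test .")
-# returns "This is, it seems, a test." -- stray spaces before punctuation are removed.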
-
-def fix_punct_spaces(string):
- """
- fix_punct_spaces - replace spaces around punctuation with punctuation. For example, "hello , there" -> "hello, there"
- Parameters
- ----------
- string : str, required, input string to be corrected
- Returns
- -------
- str, corrected string
- """
-
- fix_spaces = re.compile(r"\s*([?!.,]+(?:\s+[?!.,]+)*)\s*")
- string = fix_spaces.sub(lambda x: "{} ".format(x.group(1).replace(" ", "")), string)
- string = string.replace(" ' ", "'")
- string = string.replace(' " ', '"')
- return string.strip()
-
-
-def clean_OCR(ugly_text: str):
- """
- clean_OCR - clean the OCR text files.
- Parameters
- ----------
- ugly_text : str, required, input string to be cleaned
- Returns
- -------
- str, cleaned string
- """
- # Remove all the newlines.
- cleaned_text = ugly_text.replace("\n", " ")
- # Remove all the tabs.
- cleaned_text = cleaned_text.replace("\t", " ")
- # Remove all the double spaces.
-    cleaned_text = cleaned_text.replace("  ", " ")
- # Remove all the spaces at the beginning of the text.
- cleaned_text = cleaned_text.lstrip()
- # remove all instances of "- " and " - "
- cleaned_text = cleaned_text.replace("- ", "")
- cleaned_text = cleaned_text.replace(" -", "")
- return fix_punct_spaces(cleaned_text)
-
-
-def move2completed(from_dir, filename, new_folder="completed", verbose=False):
-
- # this is the better version
- old_filepath = join(from_dir, filename)
-
- new_filedirectory = join(from_dir, new_folder)
-
- if not os.path.isdir(new_filedirectory):
- os.mkdir(new_filedirectory)
- if verbose:
- print("created new directory for files at: \n", new_filedirectory)
- new_filepath = join(new_filedirectory, filename)
-
- try:
- shutil.move(old_filepath, new_filepath)
- logging.info("successfully moved the file {} to */completed.".format(filename))
-    except Exception as e:
-        logging.error(
-            "unable to move file to \n{}: {}. Please investigate".format(
-                new_filepath, e
-            )
-        )
-
-
-"""## pdf2text functions
-"""
-
-
-custom_replace_list = {
- "t0": "to",
- "'$": "'s",
- ",,": ", ",
- "_ ": " ",
- " '": "'",
-}
-
-replace_corr_exceptions = {
- "i. e.": "i.e.",
- "e. g.": "e.g.",
- "e. g": "e.g.",
- " ,": ",",
-}
-
-
-spell = SpellChecker()
-
-
-def check_word_spelling(word: str) -> bool:
- """
- check_word_spelling - check the spelling of a word
- Args:
- word (str): word to check
- Returns:
- bool: True if word is spelled correctly, False if not
- """
-
- misspelled = spell.unknown([word])
-
- return len(misspelled) == 0
-
-
-def eval_and_replace(text: str, match_token: str = "- ") -> str:
- """
- eval_and_replace - conditionally replace all instances of a substring in a string based on whether the eliminated substring results in a valid word
- Args:
- text (str): text to evaluate
- match_token (str, optional): token to replace. Defaults to "- ".
- Returns:
- str: text with replaced tokens
- """
-
- try:
- if match_token not in text:
- return text
- else:
- while True:
- full_before_text = text.split(match_token, maxsplit=1)[0]
- before_text = [
- char for char in full_before_text.split()[-1] if char.isalpha()
- ]
- before_text = "".join(before_text)
- full_after_text = text.split(match_token, maxsplit=1)[-1]
- after_text = [char for char in full_after_text.split()[0] if char.isalpha()]
- after_text = "".join(after_text)
- full_text = before_text + after_text
- if check_word_spelling(full_text):
- text = full_before_text + full_after_text
- else:
- text = full_before_text + " " + full_after_text
- if match_token not in text:
- break
- except Exception as e:
- logging.error(f"Error spell-checking OCR output, returning default text:\t{e}")
- return text
-
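-# Example (illustrative; assumes "statement" and "overall" are in the spell checker's dictionary):
-#   eval_and_replace("a state- ment about over- all results")
-#   returns "a statement about overall results"; if a joined word fails the
-#   spell check, the "- " is replaced with a plain space instead.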
-
-def cleantxt_ocr(ugly_text, lower=False, lang: str = "en") -> str:
- """
- cleantxt_ocr - clean text from OCR
- Args:
- ugly_text (str): text to clean
- lower (bool, optional): _description_. Defaults to False.
- lang (str, optional): _description_. Defaults to "en".
- Returns:
- str: cleaned text
- """
- # a wrapper for clean text with options different than default
-
- # https://pypi.org/project/clean-text/
- cleaned_text = clean(
- ugly_text,
- fix_unicode=True, # fix various unicode errors
- to_ascii=True, # transliterate to closest ASCII representation
- lower=lower, # lowercase text
- no_line_breaks=True, # fully strip line breaks as opposed to only normalizing them
- no_urls=True, # replace all URLs with a special token
- no_emails=True, # replace all email addresses with a special token
- no_phone_numbers=False, # replace all phone numbers with a special token
- no_numbers=False, # replace all numbers with a special token
- no_digits=False, # replace all digits with a special token
- no_currency_symbols=False, # replace all currency symbols with a special token
- no_punct=False, # remove punctuations
- replace_with_punct="", # instead of removing punctuations you may replace them
- replace_with_url="",
- replace_with_email="",
- replace_with_phone_number="",
- replace_with_number="",
- replace_with_digit="0",
- replace_with_currency_symbol="",
- lang=lang, # set to 'de' for German special handling
- )
-
- return cleaned_text
-
-
-def format_ocr_out(OCR_data):
-
- if isinstance(OCR_data, list):
- text = " ".join(OCR_data)
- else:
- text = str(OCR_data)
- _clean = cleantxt_ocr(text)
- return corr(_clean)
-
-
-def postprocess(text: str) -> str:
- """to be used after recombining the lines"""
-
- proc = corr(cleantxt_ocr(text))
-
- for k, v in custom_replace_list.items():
- proc = proc.replace(str(k), str(v))
-
- proc = corr(proc)
-
- for k, v in replace_corr_exceptions.items():
- proc = proc.replace(str(k), str(v))
-
- return eval_and_replace(proc)
-
-
-def result2text(result, as_text=False) -> Union[str, List[str]]:
- """Convert OCR result to text"""
-
- full_doc = []
- for i, page in enumerate(result.pages, start=1):
- text = ""
- for block in page.blocks:
- text += "\n\t"
- for line in block.lines:
- for word in line.words:
- # print(dir(word))
- text += word.value + " "
- full_doc.append(text)
-
- return "\n".join(full_doc) if as_text else full_doc
-
-
-def convert_PDF_to_Text(
- PDF_file,
- ocr_model=None,
- max_pages: int = 20,
-):
-
- st = time.perf_counter()
- PDF_file = Path(PDF_file)
- ocr_model = ocr_predictor(pretrained=True) if ocr_model is None else ocr_model
- logging.info(f"starting OCR on {PDF_file.name}")
- doc = DocumentFile.from_pdf(PDF_file)
- truncated = False
- if len(doc) > max_pages:
- logging.warning(
- f"PDF has {len(doc)} pages, which is more than {max_pages}.. truncating"
- )
- doc = doc[:max_pages]
- truncated = True
-
- # Analyze
- logging.info(f"running OCR on {len(doc)} pages")
- result = ocr_model(doc)
- raw_text = result2text(result)
- proc_text = [format_ocr_out(r) for r in raw_text]
- fin_text = [postprocess(t) for t in proc_text]
-
- ocr_results = "\n\n".join(fin_text)
-
- fn_rt = time.perf_counter() - st
-
- logging.info("OCR complete")
-
- results_dict = {
- "num_pages": len(doc),
- "runtime": round(fn_rt, 2),
- "date": str(date.today()),
- "converted_text": ocr_results,
- "truncated": truncated,
- "length": len(ocr_results),
- }
-
- return results_dict
-
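-# Example usage (sketch; "example.pdf" is a placeholder path):
-#   ocr_results = convert_PDF_to_Text("example.pdf", max_pages=5)
-#   print(ocr_results["num_pages"], ocr_results["runtime"])
-#   print(ocr_results["converted_text"][:500])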
-
-# @title translation functions
-
-lt = LibreTranslateAPI("https://translate.astian.org/")
-
-
-def translate_text(text, source_l, target_l="en"):
-
- return str(lt.translate(text, source_l, target_l))
-
-
-def translate_doc(filepath, lang_start, lang_end="en", verbose=False):
- """translate a document from lang_start to lang_end
- {'code': 'en', 'name': 'English'},
- {'code': 'fr', 'name': 'French'},
- {'code': 'de', 'name': 'German'},
- {'code': 'it', 'name': 'Italian'},"""
-
- src_folder = dirname(filepath)
- src_folder = Path(src_folder)
- trgt_folder = src_folder / f"translated_{lang_end}"
- trgt_folder.mkdir(exist_ok=True)
- with open(filepath, "r", encoding="utf-8", errors="ignore") as f:
- foreign_t = f.readlines()
- in_name = basename(filepath)
- translated_doc = []
- for line in tqdm(
- foreign_t, total=len(foreign_t), desc="translating {}...".format(in_name[:10])
- ):
- translated_line = translate_text(line, lang_start, lang_end)
- translated_doc.append(translated_line)
-    t_out_name = "[To {}]".format(lang_end) + simple_rename(in_name)
- out_path = join(trgt_folder, t_out_name)
- with open(out_path, "w", encoding="utf-8", errors="ignore") as f_o:
- f_o.writelines(translated_doc)
- if verbose:
- print("finished translating the document! - ", datetime.now())
- return out_path
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/create_dataset.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/create_dataset.md
deleted file mode 100644
index 9c4f4de5390439ca09a2ee8965ad31a4cafa793b..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/create_dataset.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# Create a dataset for training
-
-There are many datasets on the [Hub](https://huggingface.co/datasets?task_categories=task_categories:text-to-image&sort=downloads) to train a model on, but if you can't find one you're interested in or want to use your own, you can create a dataset with the 🤗 [Datasets](hf.co/docs/datasets) library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation.
-
-This guide will show you two ways to create a dataset to finetune on:
-
-- provide a folder of images to the `--train_data_dir` argument
-- upload a dataset to the Hub and pass the dataset repository id to the `--dataset_name` argument
-
-<Tip>
-
-💡 Learn more about how to create an image dataset for training in the [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset) guide.
-
-</Tip>
-
-## Provide a dataset as a folder
-
-For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the [`ImageFolder`](https://huggingface.co/docs/datasets/en/image_dataset#imagefolder) builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like:
-
-```bash
-data_dir/xxx.png
-data_dir/xxy.png
-data_dir/[...]/xxz.png
-```
-
-Pass the path to the dataset directory to the `--train_data_dir` argument, and then you can start training:
-
-```bash
-accelerate launch train_unconditional.py \
-    --train_data_dir <path-to-train-directory> \
-    <other-arguments>
-```
-
-## Upload your data to the Hub
-
-<Tip>
-
-💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the [Image search with 🤗 Datasets](https://huggingface.co/blog/image-search-datasets) post.
-
-</Tip>
-
-Start by creating a dataset with the [`ImageFolder`](https://huggingface.co/docs/datasets/image_load#imagefolder) feature, which creates an `image` column containing the PIL-encoded images.
-
-You can use the `data_dir` or `data_files` parameters to specify the location of the dataset. The `data_files` parameter supports mapping specific files to dataset splits like `train` or `test`:
-
-```python
-from datasets import load_dataset
-
-# example 1: local folder
-dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
-
-# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
-dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
-
-# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
-dataset = load_dataset(
- "imagefolder",
- data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip",
-)
-
-# example 4: providing several splits
-dataset = load_dataset(
- "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]}
-)
-```
-
-Then use the [`~datasets.Dataset.push_to_hub`] method to upload the dataset to the Hub:
-
-```python
-# assuming you have run the huggingface-cli login command in a terminal
-dataset.push_to_hub("name_of_your_dataset")
-
-# if you want to push to a private repo, simply pass private=True:
-dataset.push_to_hub("name_of_your_dataset", private=True)
-```
-
-Now the dataset is available for training by passing the dataset name to the `--dataset_name` argument:
-
-```bash
-accelerate launch --mixed_precision="fp16" train_text_to_image.py \
- --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
- --dataset_name="name_of_your_dataset" \
-  <other-arguments>
-```
-
-## Next steps
-
-Now that you've created a dataset, you can plug it into the `train_data_dir` (if your dataset is local) or `dataset_name` (if your dataset is on the Hub) arguments of a training script.
-
-For your next steps, feel free to try and use your dataset to train a model for [unconditional generation](unconditional_training) or [text-to-image generation](text2image)!
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/using_safetensors.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/using_safetensors.md
deleted file mode 100644
index a7bc0a7c9c1c3a4b8e5394ba33b093cb325157e2..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/using_safetensors.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Load safetensors
-
-[[open-in-colab]]
-
-[safetensors](https://github.com/huggingface/safetensors) is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or *pickled* into a `.bin` file with Python's [`pickle`](https://docs.python.org/3/library/pickle.html) utility. However, `pickle` is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to `pickle`, making it ideal for sharing model weights.
-
-This guide will show you how to load `.safetensors` files, and how to convert Stable Diffusion model weights stored in other formats to `.safetensors`. Before you start, make sure you have safetensors installed:
-
-```py
-# uncomment to install the necessary libraries in Colab
-#!pip install safetensors
-```
-
-If you look at the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main) repository, you'll see weights inside the `text_encoder`, `unet` and `vae` subfolders are stored in the `.safetensors` format. By default, 🤗 Diffusers automatically loads these `.safetensors` files from their subfolders if they're available in the model repository.
-
-For more explicit control, you can optionally set `use_safetensors=True` (if `safetensors` is not installed, you'll get an error message asking you to install it):
-
-```py
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
-```
-
-However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single `.safetensors` file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the [`~diffusers.loaders.FromSingleFileMixin.from_single_file`] method:
-
-```py
-from diffusers import StableDiffusionPipeline
-
-pipeline = StableDiffusionPipeline.from_single_file(
- "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
-)
-```
-
-## Convert to safetensors
-
-Not all weights on the Hub are available in the `.safetensors` format, and you may encounter weights stored as `.bin`. In this case, use the [Convert Space](https://huggingface.co/spaces/diffusers/convert) to convert the weights to `.safetensors`. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted `.safetensors` file on the Hub. This way, if there is any malicious code contained in the pickled files, they're uploaded to the Hub - which has a [security scanner](https://huggingface.co/docs/hub/security-pickle#hubs-security-scanner) to detect unsafe files and suspicious pickle imports - instead of your computer.
-
-You can use the model with the new `.safetensors` weights by specifying the reference to the Pull Request in the `revision` parameter (you can also test it in this [Check PR](https://huggingface.co/spaces/diffusers/check_pr) Space on the Hub), for example `refs/pr/22`:
-
-```py
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")
-```
-
-## Why use safetensors?
-
-There are several reasons for using safetensors:
-
-- Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don't contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files.
-- Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to `pickle` if you're loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You'll only notice the performance difference if the model is already loaded, and not if you're downloading the weights or loading the model for the first time.
-
- The time it takes to load the entire pipeline:
-
- ```py
- from diffusers import StableDiffusionPipeline
-
- pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
- "Loaded in safetensors 0:00:02.033658"
- "Loaded in PyTorch 0:00:02.663379"
- ```
-
- But the actual time it takes to load 500MB of the model weights is only:
-
- ```bash
- safetensors: 3.4873ms
- PyTorch: 172.7537ms
- ```
-
-- Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the [BLOOM](https://huggingface.co/bigscience/bloom) model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights.
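-
-As a rough sketch (the file name and tensor key below are placeholders, not part of any specific checkpoint), partial loading goes through the `safe_open` context manager from the `safetensors` library:
-
-```py
-from safetensors import safe_open
-
-# only the header is parsed up front; tensors are read from disk on demand
-with safe_open("model.safetensors", framework="pt", device="cpu") as f:
-    keys = f.keys()                        # list of tensor names, no data loaded yet
-    weight = f.get_tensor(keys[0])         # load a single tensor
-    first_row = f.get_slice(keys[0])[0:1]  # or only a slice of it
-```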
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_activations.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_activations.py
deleted file mode 100644
index 4e8e51453e98157a753fc178ce146849e189a5a1..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_activations.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import unittest
-
-import torch
-from torch import nn
-
-from diffusers.models.activations import get_activation
-
-
-class ActivationsTests(unittest.TestCase):
- def test_swish(self):
- act = get_activation("swish")
-
- self.assertIsInstance(act, nn.SiLU)
-
- self.assertEqual(act(torch.tensor(-100, dtype=torch.float32)).item(), 0)
- self.assertNotEqual(act(torch.tensor(-1, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(0, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(20, dtype=torch.float32)).item(), 20)
-
- def test_silu(self):
- act = get_activation("silu")
-
- self.assertIsInstance(act, nn.SiLU)
-
- self.assertEqual(act(torch.tensor(-100, dtype=torch.float32)).item(), 0)
- self.assertNotEqual(act(torch.tensor(-1, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(0, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(20, dtype=torch.float32)).item(), 20)
-
- def test_mish(self):
- act = get_activation("mish")
-
- self.assertIsInstance(act, nn.Mish)
-
- self.assertEqual(act(torch.tensor(-200, dtype=torch.float32)).item(), 0)
- self.assertNotEqual(act(torch.tensor(-1, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(0, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(20, dtype=torch.float32)).item(), 20)
-
- def test_gelu(self):
- act = get_activation("gelu")
-
- self.assertIsInstance(act, nn.GELU)
-
- self.assertEqual(act(torch.tensor(-100, dtype=torch.float32)).item(), 0)
- self.assertNotEqual(act(torch.tensor(-1, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(0, dtype=torch.float32)).item(), 0)
- self.assertEqual(act(torch.tensor(20, dtype=torch.float32)).item(), 20)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_doc_toc.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_doc_toc.py
deleted file mode 100644
index ff9285c63f16865d0b7a7e6672ee93552b15f77a..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_doc_toc.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-from collections import defaultdict
-
-import yaml
-
-
-PATH_TO_TOC = "docs/source/en/_toctree.yml"
-
-
-def clean_doc_toc(doc_list):
- """
- Cleans the table of content of the model documentation by removing duplicates and sorting models alphabetically.
- """
- counts = defaultdict(int)
- overview_doc = []
- new_doc_list = []
- for doc in doc_list:
- if "local" in doc:
- counts[doc["local"]] += 1
-
- if doc["title"].lower() == "overview":
- overview_doc.append({"local": doc["local"], "title": doc["title"]})
- else:
- new_doc_list.append(doc)
-
- doc_list = new_doc_list
- duplicates = [key for key, value in counts.items() if value > 1]
-
- new_doc = []
- for duplicate_key in duplicates:
- titles = list({doc["title"] for doc in doc_list if doc["local"] == duplicate_key})
- if len(titles) > 1:
- raise ValueError(
- f"{duplicate_key} is present several times in the documentation table of content at "
- "`docs/source/en/_toctree.yml` with different *Title* values. Choose one of those and remove the "
- "others."
- )
- # Only add this once
- new_doc.append({"local": duplicate_key, "title": titles[0]})
-
-    # Add non-duplicate entries
-    new_doc.extend([doc for doc in doc_list if "local" not in doc or counts[doc["local"]] == 1])
- new_doc = sorted(new_doc, key=lambda s: s["title"].lower())
-
- # "overview" gets special treatment and is always first
- if len(overview_doc) > 1:
-        raise ValueError(f"{doc_list} has two 'overview' docs which is not allowed.")
-
- overview_doc.extend(new_doc)
-
- # Sort
- return overview_doc
-
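-# Example (illustrative):
-#   clean_doc_toc([
-#       {"local": "a", "title": "Overview"},
-#       {"local": "b", "title": "Zeta"},
-#       {"local": "b", "title": "Zeta"},
-#   ])
-#   returns [{"local": "a", "title": "Overview"}, {"local": "b", "title": "Zeta"}]
-#   (the duplicate is dropped and "Overview" is kept first).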
-
-def check_scheduler_doc(overwrite=False):
- with open(PATH_TO_TOC, encoding="utf-8") as f:
- content = yaml.safe_load(f.read())
-
- # Get to the API doc
- api_idx = 0
- while content[api_idx]["title"] != "API":
- api_idx += 1
- api_doc = content[api_idx]["sections"]
-
- # Then to the model doc
- scheduler_idx = 0
- while api_doc[scheduler_idx]["title"] != "Schedulers":
- scheduler_idx += 1
-
- scheduler_doc = api_doc[scheduler_idx]["sections"]
- new_scheduler_doc = clean_doc_toc(scheduler_doc)
-
- diff = False
- if new_scheduler_doc != scheduler_doc:
- diff = True
- if overwrite:
- api_doc[scheduler_idx]["sections"] = new_scheduler_doc
-
- if diff:
- if overwrite:
- content[api_idx]["sections"] = api_doc
- with open(PATH_TO_TOC, "w", encoding="utf-8") as f:
- f.write(yaml.dump(content, allow_unicode=True))
- else:
- raise ValueError(
- "The model doc part of the table of content is not properly sorted, run `make style` to fix this."
- )
-
-
-def check_pipeline_doc(overwrite=False):
- with open(PATH_TO_TOC, encoding="utf-8") as f:
- content = yaml.safe_load(f.read())
-
- # Get to the API doc
- api_idx = 0
- while content[api_idx]["title"] != "API":
- api_idx += 1
- api_doc = content[api_idx]["sections"]
-
- # Then to the model doc
- pipeline_idx = 0
- while api_doc[pipeline_idx]["title"] != "Pipelines":
- pipeline_idx += 1
-
- diff = False
- pipeline_docs = api_doc[pipeline_idx]["sections"]
- new_pipeline_docs = []
-
- # sort sub pipeline docs
- for pipeline_doc in pipeline_docs:
- if "section" in pipeline_doc:
- sub_pipeline_doc = pipeline_doc["section"]
- new_sub_pipeline_doc = clean_doc_toc(sub_pipeline_doc)
- if overwrite:
- pipeline_doc["section"] = new_sub_pipeline_doc
- new_pipeline_docs.append(pipeline_doc)
-
- # sort overall pipeline doc
- new_pipeline_docs = clean_doc_toc(new_pipeline_docs)
-
- if new_pipeline_docs != pipeline_docs:
- diff = True
- if overwrite:
- api_doc[pipeline_idx]["sections"] = new_pipeline_docs
-
- if diff:
- if overwrite:
- content[api_idx]["sections"] = api_doc
- with open(PATH_TO_TOC, "w", encoding="utf-8") as f:
- f.write(yaml.dump(content, allow_unicode=True))
- else:
- raise ValueError(
- "The model doc part of the table of content is not properly sorted, run `make style` to fix this."
- )
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
- args = parser.parse_args()
-
- check_scheduler_doc(args.fix_and_overwrite)
- check_pipeline_doc(args.fix_and_overwrite)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/voc0712.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/voc0712.py
deleted file mode 100644
index ae09acdd5c9580217815300abbad9f08b71b37ed..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/voc0712.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# dataset settings
-dataset_type = 'VOCDataset'
-data_root = 'data/VOCdevkit/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1000, 600),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type='RepeatDataset',
- times=3,
- dataset=dict(
- type=dataset_type,
- ann_file=[
- data_root + 'VOC2007/ImageSets/Main/trainval.txt',
- data_root + 'VOC2012/ImageSets/Main/trainval.txt'
- ],
- img_prefix=[data_root + 'VOC2007/', data_root + 'VOC2012/'],
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
- img_prefix=data_root + 'VOC2007/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
- img_prefix=data_root + 'VOC2007/',
- pipeline=test_pipeline))
-evaluation = dict(interval=1, metric='mAP')
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py
deleted file mode 100644
index 0fcc558018b69beedbd05781163c8043d93f7277..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py
+++ /dev/null
@@ -1,3 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_gn-all_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron/resnet101_gn', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index e36c83ba601884b81c06ee69445a94e76224c828..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3plus_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/autocompletion.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/autocompletion.py
deleted file mode 100644
index 226fe84dc0d0c4eb78f9b3c603df20cef0fdfda4..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/autocompletion.py
+++ /dev/null
@@ -1,171 +0,0 @@
-"""Logic that powers autocompletion installed by ``pip completion``.
-"""
-
-import optparse
-import os
-import sys
-from itertools import chain
-from typing import Any, Iterable, List, Optional
-
-from pip._internal.cli.main_parser import create_main_parser
-from pip._internal.commands import commands_dict, create_command
-from pip._internal.metadata import get_default_environment
-
-
-def autocomplete() -> None:
- """Entry Point for completion of main and subcommand options."""
- # Don't complete if user hasn't sourced bash_completion file.
- if "PIP_AUTO_COMPLETE" not in os.environ:
- return
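-    # The shell function installed by ``pip completion`` invokes pip with these
-    # variables set, e.g. (illustrative): COMP_WORDS="pip inst" COMP_CWORD=1 PIP_AUTO_COMPLETE=1 pip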
- cwords = os.environ["COMP_WORDS"].split()[1:]
- cword = int(os.environ["COMP_CWORD"])
- try:
- current = cwords[cword - 1]
- except IndexError:
- current = ""
-
- parser = create_main_parser()
- subcommands = list(commands_dict)
- options = []
-
- # subcommand
- subcommand_name: Optional[str] = None
- for word in cwords:
- if word in subcommands:
- subcommand_name = word
- break
- # subcommand options
- if subcommand_name is not None:
- # special case: 'help' subcommand has no options
- if subcommand_name == "help":
- sys.exit(1)
- # special case: list locally installed dists for show and uninstall
- should_list_installed = not current.startswith("-") and subcommand_name in [
- "show",
- "uninstall",
- ]
- if should_list_installed:
- env = get_default_environment()
- lc = current.lower()
- installed = [
- dist.canonical_name
- for dist in env.iter_installed_distributions(local_only=True)
- if dist.canonical_name.startswith(lc)
- and dist.canonical_name not in cwords[1:]
- ]
- # if there are no dists installed, fall back to option completion
- if installed:
- for dist in installed:
- print(dist)
- sys.exit(1)
-
- should_list_installables = (
- not current.startswith("-") and subcommand_name == "install"
- )
- if should_list_installables:
- for path in auto_complete_paths(current, "path"):
- print(path)
- sys.exit(1)
-
- subcommand = create_command(subcommand_name)
-
- for opt in subcommand.parser.option_list_all:
- if opt.help != optparse.SUPPRESS_HELP:
- for opt_str in opt._long_opts + opt._short_opts:
- options.append((opt_str, opt.nargs))
-
- # filter out previously specified options from available options
- prev_opts = [x.split("=")[0] for x in cwords[1 : cword - 1]]
- options = [(x, v) for (x, v) in options if x not in prev_opts]
- # filter options by current input
- options = [(k, v) for k, v in options if k.startswith(current)]
- # get completion type given cwords and available subcommand options
- completion_type = get_path_completion_type(
- cwords,
- cword,
- subcommand.parser.option_list_all,
- )
- # get completion files and directories if ``completion_type`` is
-    # ``<file>``, ``<dir>`` or ``<path>``
- if completion_type:
- paths = auto_complete_paths(current, completion_type)
- options = [(path, 0) for path in paths]
- for option in options:
- opt_label = option[0]
- # append '=' to options which require args
- if option[1] and option[0][:2] == "--":
- opt_label += "="
- print(opt_label)
- else:
- # show main parser options only when necessary
-
- opts = [i.option_list for i in parser.option_groups]
- opts.append(parser.option_list)
- flattened_opts = chain.from_iterable(opts)
- if current.startswith("-"):
- for opt in flattened_opts:
- if opt.help != optparse.SUPPRESS_HELP:
- subcommands += opt._long_opts + opt._short_opts
- else:
- # get completion type given cwords and all available options
- completion_type = get_path_completion_type(cwords, cword, flattened_opts)
- if completion_type:
- subcommands = list(auto_complete_paths(current, completion_type))
-
- print(" ".join([x for x in subcommands if x.startswith(current)]))
- sys.exit(1)
-
-
-def get_path_completion_type(
- cwords: List[str], cword: int, opts: Iterable[Any]
-) -> Optional[str]:
- """Get the type of path completion (``file``, ``dir``, ``path`` or None)
-
- :param cwords: same as the environmental variable ``COMP_WORDS``
- :param cword: same as the environmental variable ``COMP_CWORD``
- :param opts: The available options to check
- :return: path completion type (``file``, ``dir``, ``path`` or None)
- """
- if cword < 2 or not cwords[cword - 2].startswith("-"):
- return None
- for opt in opts:
- if opt.help == optparse.SUPPRESS_HELP:
- continue
- for o in str(opt).split("/"):
- if cwords[cword - 2].split("=")[0] == o:
- if not opt.metavar or any(
- x in ("path", "file", "dir") for x in opt.metavar.split("/")
- ):
- return opt.metavar
- return None
-
-
-def auto_complete_paths(current: str, completion_type: str) -> Iterable[str]:
- """If ``completion_type`` is ``file`` or ``path``, list all regular files
- and directories starting with ``current``; otherwise only list directories
- starting with ``current``.
-
- :param current: The word to be completed
- :param completion_type: path completion type(``file``, ``path`` or ``dir``)
- :return: A generator of regular files and/or directories
- """
- directory, filename = os.path.split(current)
- current_path = os.path.abspath(directory)
- # Don't complete paths if they can't be accessed
- if not os.access(current_path, os.R_OK):
- return
- filename = os.path.normcase(filename)
- # list all files that start with ``filename``
- file_list = (
- x for x in os.listdir(current_path) if os.path.normcase(x).startswith(filename)
- )
- for f in file_list:
- opt = os.path.join(current_path, f)
- comp_file = os.path.normcase(os.path.join(directory, f))
-        # complete regular files when there is not <dir> after option
-        # complete directories when there is <file>, <path> or <dir>
-        # after option
- if completion_type != "dir" and os.path.isfile(opt):
- yield comp_file
- elif os.path.isdir(opt):
- yield os.path.join(comp_file, "")
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/exceptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/exceptions.py
deleted file mode 100644
index 7d92ba699832b01c7fee5e9d08762b3ad4cb4dfd..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/exceptions.py
+++ /dev/null
@@ -1,733 +0,0 @@
-"""Exceptions used throughout package.
-
-This module MUST NOT try to import from anything within `pip._internal` to
-operate. This is expected to be importable from any/all files within the
-subpackage and, thus, should not depend on them.
-"""
-
-import configparser
-import contextlib
-import locale
-import logging
-import pathlib
-import re
-import sys
-from itertools import chain, groupby, repeat
-from typing import TYPE_CHECKING, Dict, Iterator, List, Optional, Union
-
-from pip._vendor.requests.models import Request, Response
-from pip._vendor.rich.console import Console, ConsoleOptions, RenderResult
-from pip._vendor.rich.markup import escape
-from pip._vendor.rich.text import Text
-
-if TYPE_CHECKING:
- from hashlib import _Hash
- from typing import Literal
-
- from pip._internal.metadata import BaseDistribution
- from pip._internal.req.req_install import InstallRequirement
-
-logger = logging.getLogger(__name__)
-
-
-#
-# Scaffolding
-#
-def _is_kebab_case(s: str) -> bool:
- return re.match(r"^[a-z]+(-[a-z]+)*$", s) is not None
-
-
-def _prefix_with_indent(
- s: Union[Text, str],
- console: Console,
- *,
- prefix: str,
- indent: str,
-) -> Text:
- if isinstance(s, Text):
- text = s
- else:
- text = console.render_str(s)
-
- return console.render_str(prefix, overflow="ignore") + console.render_str(
- f"\n{indent}", overflow="ignore"
- ).join(text.split(allow_blank=True))
-
-
-class PipError(Exception):
- """The base pip error."""
-
-
-class DiagnosticPipError(PipError):
- """An error, that presents diagnostic information to the user.
-
- This contains a bunch of logic, to enable pretty presentation of our error
- messages. Each error gets a unique reference. Each error can also include
- additional context, a hint and/or a note -- which are presented with the
- main error message in a consistent style.
-
- This is adapted from the error output styling in `sphinx-theme-builder`.
- """
-
- reference: str
-
- def __init__(
- self,
- *,
- kind: 'Literal["error", "warning"]' = "error",
- reference: Optional[str] = None,
- message: Union[str, Text],
- context: Optional[Union[str, Text]],
- hint_stmt: Optional[Union[str, Text]],
- note_stmt: Optional[Union[str, Text]] = None,
- link: Optional[str] = None,
- ) -> None:
- # Ensure a proper reference is provided.
- if reference is None:
- assert hasattr(self, "reference"), "error reference not provided!"
- reference = self.reference
- assert _is_kebab_case(reference), "error reference must be kebab-case!"
-
- self.kind = kind
- self.reference = reference
-
- self.message = message
- self.context = context
-
- self.note_stmt = note_stmt
- self.hint_stmt = hint_stmt
-
- self.link = link
-
- super().__init__(f"<{self.__class__.__name__}: {self.reference}>")
-
- def __repr__(self) -> str:
- return (
- f"<{self.__class__.__name__}("
- f"reference={self.reference!r}, "
- f"message={self.message!r}, "
- f"context={self.context!r}, "
- f"note_stmt={self.note_stmt!r}, "
- f"hint_stmt={self.hint_stmt!r}"
- ")>"
- )
-
- def __rich_console__(
- self,
- console: Console,
- options: ConsoleOptions,
- ) -> RenderResult:
- colour = "red" if self.kind == "error" else "yellow"
-
- yield f"[{colour} bold]{self.kind}[/]: [bold]{self.reference}[/]"
- yield ""
-
- if not options.ascii_only:
- # Present the main message, with relevant context indented.
- if self.context is not None:
- yield _prefix_with_indent(
- self.message,
- console,
- prefix=f"[{colour}]×[/] ",
- indent=f"[{colour}]│[/] ",
- )
- yield _prefix_with_indent(
- self.context,
- console,
- prefix=f"[{colour}]╰─>[/] ",
- indent=f"[{colour}] [/] ",
- )
- else:
- yield _prefix_with_indent(
- self.message,
- console,
- prefix="[red]×[/] ",
- indent=" ",
- )
- else:
- yield self.message
- if self.context is not None:
- yield ""
- yield self.context
-
- if self.note_stmt is not None or self.hint_stmt is not None:
- yield ""
-
- if self.note_stmt is not None:
- yield _prefix_with_indent(
- self.note_stmt,
- console,
- prefix="[magenta bold]note[/]: ",
- indent=" ",
- )
- if self.hint_stmt is not None:
- yield _prefix_with_indent(
- self.hint_stmt,
- console,
- prefix="[cyan bold]hint[/]: ",
- indent=" ",
- )
-
- if self.link is not None:
- yield ""
- yield f"Link: {self.link}"
-
-
-#
-# Actual Errors
-#
-class ConfigurationError(PipError):
- """General exception in configuration"""
-
-
-class InstallationError(PipError):
- """General exception during installation"""
-
-
-class UninstallationError(PipError):
- """General exception during uninstallation"""
-
-
-class MissingPyProjectBuildRequires(DiagnosticPipError):
- """Raised when pyproject.toml has `build-system`, but no `build-system.requires`."""
-
- reference = "missing-pyproject-build-system-requires"
-
- def __init__(self, *, package: str) -> None:
- super().__init__(
- message=f"Can not process {escape(package)}",
- context=Text(
- "This package has an invalid pyproject.toml file.\n"
- "The [build-system] table is missing the mandatory `requires` key."
- ),
- note_stmt="This is an issue with the package mentioned above, not pip.",
- hint_stmt=Text("See PEP 518 for the detailed specification."),
- )
-
-
-class InvalidPyProjectBuildRequires(DiagnosticPipError):
- """Raised when pyproject.toml an invalid `build-system.requires`."""
-
- reference = "invalid-pyproject-build-system-requires"
-
- def __init__(self, *, package: str, reason: str) -> None:
- super().__init__(
- message=f"Can not process {escape(package)}",
- context=Text(
- "This package has an invalid `build-system.requires` key in "
- f"pyproject.toml.\n{reason}"
- ),
- note_stmt="This is an issue with the package mentioned above, not pip.",
- hint_stmt=Text("See PEP 518 for the detailed specification."),
- )
-
-
-class NoneMetadataError(PipError):
- """Raised when accessing a Distribution's "METADATA" or "PKG-INFO".
-
- This signifies an inconsistency, when the Distribution claims to have
- the metadata file (if not, raise ``FileNotFoundError`` instead), but is
- not actually able to produce its content. This may be due to permission
- errors.
- """
-
- def __init__(
- self,
- dist: "BaseDistribution",
- metadata_name: str,
- ) -> None:
- """
- :param dist: A Distribution object.
- :param metadata_name: The name of the metadata being accessed
- (can be "METADATA" or "PKG-INFO").
- """
- self.dist = dist
- self.metadata_name = metadata_name
-
- def __str__(self) -> str:
- # Use `dist` in the error message because its stringification
- # includes more information, like the version and location.
- return "None {} metadata found for distribution: {}".format(
- self.metadata_name,
- self.dist,
- )
-
-
-class UserInstallationInvalid(InstallationError):
- """A --user install is requested on an environment without user site."""
-
- def __str__(self) -> str:
- return "User base directory is not specified"
-
-
-class InvalidSchemeCombination(InstallationError):
- def __str__(self) -> str:
- before = ", ".join(str(a) for a in self.args[:-1])
- return f"Cannot set {before} and {self.args[-1]} together"
-
-
-class DistributionNotFound(InstallationError):
- """Raised when a distribution cannot be found to satisfy a requirement"""
-
-
-class RequirementsFileParseError(InstallationError):
- """Raised when a general error occurs parsing a requirements file line."""
-
-
-class BestVersionAlreadyInstalled(PipError):
- """Raised when the most up-to-date version of a package is already
- installed."""
-
-
-class BadCommand(PipError):
- """Raised when virtualenv or a command is not found"""
-
-
-class CommandError(PipError):
- """Raised when there is an error in command-line arguments"""
-
-
-class PreviousBuildDirError(PipError):
- """Raised when there's a previous conflicting build directory"""
-
-
-class NetworkConnectionError(PipError):
- """HTTP connection error"""
-
- def __init__(
- self,
- error_msg: str,
- response: Optional[Response] = None,
- request: Optional[Request] = None,
- ) -> None:
- """
- Initialize NetworkConnectionError with `request` and `response`
- objects.
- """
- self.response = response
- self.request = request
- self.error_msg = error_msg
- if (
- self.response is not None
- and not self.request
- and hasattr(response, "request")
- ):
- self.request = self.response.request
- super().__init__(error_msg, response, request)
-
- def __str__(self) -> str:
- return str(self.error_msg)
-
-
-class InvalidWheelFilename(InstallationError):
- """Invalid wheel filename."""
-
-
-class UnsupportedWheel(InstallationError):
- """Unsupported wheel."""
-
-
-class InvalidWheel(InstallationError):
- """Invalid (e.g. corrupt) wheel."""
-
- def __init__(self, location: str, name: str):
- self.location = location
- self.name = name
-
- def __str__(self) -> str:
- return f"Wheel '{self.name}' located at {self.location} is invalid."
-
-
-class MetadataInconsistent(InstallationError):
- """Built metadata contains inconsistent information.
-
- This is raised when the metadata contains values (e.g. name and version)
- that do not match the information previously obtained from sdist filename,
- user-supplied ``#egg=`` value, or an install requirement name.
- """
-
- def __init__(
- self, ireq: "InstallRequirement", field: str, f_val: str, m_val: str
- ) -> None:
- self.ireq = ireq
- self.field = field
- self.f_val = f_val
- self.m_val = m_val
-
- def __str__(self) -> str:
- return (
- f"Requested {self.ireq} has inconsistent {self.field}: "
- f"expected {self.f_val!r}, but metadata has {self.m_val!r}"
- )
-
-
-class InstallationSubprocessError(DiagnosticPipError, InstallationError):
- """A subprocess call failed."""
-
- reference = "subprocess-exited-with-error"
-
- def __init__(
- self,
- *,
- command_description: str,
- exit_code: int,
- output_lines: Optional[List[str]],
- ) -> None:
- if output_lines is None:
- output_prompt = Text("See above for output.")
- else:
- output_prompt = (
- Text.from_markup(f"[red][{len(output_lines)} lines of output][/]\n")
- + Text("".join(output_lines))
- + Text.from_markup(R"[red]\[end of output][/]")
- )
-
- super().__init__(
- message=(
- f"[green]{escape(command_description)}[/] did not run successfully.\n"
- f"exit code: {exit_code}"
- ),
- context=output_prompt,
- hint_stmt=None,
- note_stmt=(
- "This error originates from a subprocess, and is likely not a "
- "problem with pip."
- ),
- )
-
- self.command_description = command_description
- self.exit_code = exit_code
-
- def __str__(self) -> str:
- return f"{self.command_description} exited with {self.exit_code}"
-
-
-class MetadataGenerationFailed(InstallationSubprocessError, InstallationError):
- reference = "metadata-generation-failed"
-
- def __init__(
- self,
- *,
- package_details: str,
- ) -> None:
- super(InstallationSubprocessError, self).__init__(
- message="Encountered error while generating package metadata.",
- context=escape(package_details),
- hint_stmt="See above for details.",
- note_stmt="This is an issue with the package mentioned above, not pip.",
- )
-
- def __str__(self) -> str:
- return "metadata generation failed"
-
-
-class HashErrors(InstallationError):
- """Multiple HashError instances rolled into one for reporting"""
-
- def __init__(self) -> None:
- self.errors: List["HashError"] = []
-
- def append(self, error: "HashError") -> None:
- self.errors.append(error)
-
- def __str__(self) -> str:
- lines = []
- self.errors.sort(key=lambda e: e.order)
- for cls, errors_of_cls in groupby(self.errors, lambda e: e.__class__):
- lines.append(cls.head)
- lines.extend(e.body() for e in errors_of_cls)
- if lines:
- return "\n".join(lines)
- return ""
-
- def __bool__(self) -> bool:
- return bool(self.errors)
-
-
-class HashError(InstallationError):
- """
- A failure to verify a package against known-good hashes
-
- :cvar order: An int sorting hash exception classes by difficulty of
- recovery (lower being harder), so the user doesn't bother fretting
- about unpinned packages when he has deeper issues, like VCS
- dependencies, to deal with. Also keeps error reports in a
- deterministic order.
- :cvar head: A section heading for display above potentially many
- exceptions of this kind
- :ivar req: The InstallRequirement that triggered this error. This is
- pasted on after the exception is instantiated, because it's not
- typically available earlier.
-
- """
-
- req: Optional["InstallRequirement"] = None
- head = ""
- order: int = -1
-
- def body(self) -> str:
- """Return a summary of me for display under the heading.
-
- This default implementation simply prints a description of the
- triggering requirement.
-
- :param req: The InstallRequirement that provoked this error, with
- its link already populated by the resolver's _populate_link().
-
- """
- return f" {self._requirement_name()}"
-
- def __str__(self) -> str:
- return f"{self.head}\n{self.body()}"
-
- def _requirement_name(self) -> str:
- """Return a description of the requirement that triggered me.
-
- This default implementation returns long description of the req, with
- line numbers
-
- """
- return str(self.req) if self.req else "unknown package"
-
-
-class VcsHashUnsupported(HashError):
- """A hash was provided for a version-control-system-based requirement, but
- we don't have a method for hashing those."""
-
- order = 0
- head = (
- "Can't verify hashes for these requirements because we don't "
- "have a way to hash version control repositories:"
- )
-
-
-class DirectoryUrlHashUnsupported(HashError):
- """A hash was provided for a version-control-system-based requirement, but
- we don't have a method for hashing those."""
-
- order = 1
- head = (
- "Can't verify hashes for these file:// requirements because they "
- "point to directories:"
- )
-
-
-class HashMissing(HashError):
- """A hash was needed for a requirement but is absent."""
-
- order = 2
- head = (
- "Hashes are required in --require-hashes mode, but they are "
- "missing from some requirements. Here is a list of those "
- "requirements along with the hashes their downloaded archives "
- "actually had. Add lines like these to your requirements files to "
- "prevent tampering. (If you did not enable --require-hashes "
- "manually, note that it turns on automatically when any package "
- "has a hash.)"
- )
-
- def __init__(self, gotten_hash: str) -> None:
- """
- :param gotten_hash: The hash of the (possibly malicious) archive we
- just downloaded
- """
- self.gotten_hash = gotten_hash
-
- def body(self) -> str:
- # Dodge circular import.
- from pip._internal.utils.hashes import FAVORITE_HASH
-
- package = None
- if self.req:
- # In the case of URL-based requirements, display the original URL
- # seen in the requirements file rather than the package name,
- # so the output can be directly copied into the requirements file.
- package = (
- self.req.original_link
- if self.req.original_link
- # In case someone feeds something downright stupid
- # to InstallRequirement's constructor.
- else getattr(self.req, "req", None)
- )
- return " {} --hash={}:{}".format(
- package or "unknown package", FAVORITE_HASH, self.gotten_hash
- )
-
-
-class HashUnpinned(HashError):
- """A requirement had a hash specified but was not pinned to a specific
- version."""
-
- order = 3
- head = (
- "In --require-hashes mode, all requirements must have their "
- "versions pinned with ==. These do not:"
- )
-
-
-class HashMismatch(HashError):
- """
- Distribution file hash values don't match.
-
- :ivar package_name: The name of the package that triggered the hash
-        mismatch. Feel free to write to this after the exception is raised to
- improve its error message.
-
- """
-
- order = 4
- head = (
- "THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS "
- "FILE. If you have updated the package versions, please update "
- "the hashes. Otherwise, examine the package contents carefully; "
- "someone may have tampered with them."
- )
-
- def __init__(self, allowed: Dict[str, List[str]], gots: Dict[str, "_Hash"]) -> None:
- """
- :param allowed: A dict of algorithm names pointing to lists of allowed
- hex digests
- :param gots: A dict of algorithm names pointing to hashes we
- actually got from the files under suspicion
- """
- self.allowed = allowed
- self.gots = gots
-
- def body(self) -> str:
- return " {}:\n{}".format(self._requirement_name(), self._hash_comparison())
-
- def _hash_comparison(self) -> str:
- """
- Return a comparison of actual and expected hash values.
-
- Example::
-
- Expected sha256 abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde
- or 123451234512345123451234512345123451234512345
- Got bcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdef
-
- """
-
- def hash_then_or(hash_name: str) -> "chain[str]":
- # For now, all the decent hashes have 6-char names, so we can get
- # away with hard-coding space literals.
- return chain([hash_name], repeat(" or"))
-
- lines: List[str] = []
- for hash_name, expecteds in self.allowed.items():
- prefix = hash_then_or(hash_name)
- lines.extend(
- (" Expected {} {}".format(next(prefix), e)) for e in expecteds
- )
- lines.append(
- " Got {}\n".format(self.gots[hash_name].hexdigest())
- )
- return "\n".join(lines)
-
-
-class UnsupportedPythonVersion(InstallationError):
- """Unsupported python version according to Requires-Python package
- metadata."""
-
-
-class ConfigurationFileCouldNotBeLoaded(ConfigurationError):
- """When there are errors while loading a configuration file"""
-
- def __init__(
- self,
- reason: str = "could not be loaded",
- fname: Optional[str] = None,
- error: Optional[configparser.Error] = None,
- ) -> None:
- super().__init__(error)
- self.reason = reason
- self.fname = fname
- self.error = error
-
- def __str__(self) -> str:
- if self.fname is not None:
- message_part = f" in {self.fname}."
- else:
- assert self.error is not None
- message_part = f".\n{self.error}\n"
- return f"Configuration file {self.reason}{message_part}"
-
-
-_DEFAULT_EXTERNALLY_MANAGED_ERROR = f"""\
-The Python environment under {sys.prefix} is managed externally, and may not be
-manipulated by the user. Please use specific tooling from the distributor of
-the Python installation to interact with this environment instead.
-"""
-
-
-class ExternallyManagedEnvironment(DiagnosticPipError):
- """The current environment is externally managed.
-
- This is raised when the current environment is externally managed, as
- defined by `PEP 668`_. The ``EXTERNALLY-MANAGED`` configuration is checked
- and displayed when the error is bubbled up to the user.
-
- :param error: The error message read from ``EXTERNALLY-MANAGED``.
- """
-
- reference = "externally-managed-environment"
-
- def __init__(self, error: Optional[str]) -> None:
- if error is None:
- context = Text(_DEFAULT_EXTERNALLY_MANAGED_ERROR)
- else:
- context = Text(error)
- super().__init__(
- message="This environment is externally managed",
- context=context,
- note_stmt=(
- "If you believe this is a mistake, please contact your "
- "Python installation or OS distribution provider. "
- "You can override this, at the risk of breaking your Python "
- "installation or OS, by passing --break-system-packages."
- ),
- hint_stmt=Text("See PEP 668 for the detailed specification."),
- )
-
- @staticmethod
- def _iter_externally_managed_error_keys() -> Iterator[str]:
- # LC_MESSAGES is in POSIX, but not the C standard. The most common
- # platform that does not implement this category is Windows, where
- # using other categories for console message localization is equally
- # unreliable, so we fall back to the locale-less vendor message. This
- # can always be re-evaluated when a vendor proposes a new alternative.
- try:
- category = locale.LC_MESSAGES
- except AttributeError:
- lang: Optional[str] = None
- else:
- lang, _ = locale.getlocale(category)
- if lang is not None:
- yield f"Error-{lang}"
- for sep in ("-", "_"):
- before, found, _ = lang.partition(sep)
- if not found:
- continue
- yield f"Error-{before}"
- yield "Error"
-
- @classmethod
- def from_config(
- cls,
- config: Union[pathlib.Path, str],
- ) -> "ExternallyManagedEnvironment":
- parser = configparser.ConfigParser(interpolation=None)
- try:
- parser.read(config, encoding="utf-8")
- section = parser["externally-managed"]
- for key in cls._iter_externally_managed_error_keys():
- with contextlib.suppress(KeyError):
- return cls(section[key])
- except KeyError:
- pass
- except (OSError, UnicodeDecodeError, configparser.ParsingError):
- from pip._internal.utils._log import VERBOSE
-
- exc_info = logger.isEnabledFor(VERBOSE)
- logger.warning("Failed to read %s", config, exc_info=exc_info)
- return cls(None)
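For illustration only (this snippet is not part of the diff above): the ``EXTERNALLY-MANAGED`` marker read by ``from_config`` is a small INI file with an ``[externally-managed]`` section. The sketch below assumes the deleted module is importable as ``pip._internal.exceptions``; the temporary path and error text are placeholders.

```python
# Sketch only: write an EXTERNALLY-MANAGED marker file and let from_config() read it.
import pathlib
import tempfile

from pip._internal.exceptions import ExternallyManagedEnvironment

marker = pathlib.Path(tempfile.mkdtemp()) / "EXTERNALLY-MANAGED"
marker.write_text(
    "[externally-managed]\n"
    "Error=This Python is managed by the OS package manager; use pipx or a venv.\n",
    encoding="utf-8",
)

exc = ExternallyManagedEnvironment.from_config(marker)
print(exc.message)  # "This environment is externally managed"
print(exc.context)  # the Error= text; locale-specific keys are tried first
```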
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/__init__.py
deleted file mode 100644
index 292e0c6d4a73d5e2b8003394fe316dc3317d9e92..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/__init__.py
+++ /dev/null
@@ -1,1047 +0,0 @@
-import os
-import re
-import abc
-import csv
-import sys
-from .. import zipp
-import email
-import pathlib
-import operator
-import textwrap
-import warnings
-import functools
-import itertools
-import posixpath
-import collections
-
-from . import _adapters, _meta
-from ._collections import FreezableDefaultDict, Pair
-from ._compat import (
- NullFinder,
- install,
- pypy_partial,
-)
-from ._functools import method_cache, pass_none
-from ._itertools import always_iterable, unique_everseen
-from ._meta import PackageMetadata, SimplePath
-
-from contextlib import suppress
-from importlib import import_module
-from importlib.abc import MetaPathFinder
-from itertools import starmap
-from typing import List, Mapping, Optional, Union
-
-
-__all__ = [
- 'Distribution',
- 'DistributionFinder',
- 'PackageMetadata',
- 'PackageNotFoundError',
- 'distribution',
- 'distributions',
- 'entry_points',
- 'files',
- 'metadata',
- 'packages_distributions',
- 'requires',
- 'version',
-]
-
-
-class PackageNotFoundError(ModuleNotFoundError):
- """The package was not found."""
-
- def __str__(self):
- return f"No package metadata was found for {self.name}"
-
- @property
- def name(self):
- (name,) = self.args
- return name
-
-
-class Sectioned:
- """
- A simple entry point config parser for performance
-
- >>> for item in Sectioned.read(Sectioned._sample):
- ... print(item)
- Pair(name='sec1', value='# comments ignored')
- Pair(name='sec1', value='a = 1')
- Pair(name='sec1', value='b = 2')
- Pair(name='sec2', value='a = 2')
-
- >>> res = Sectioned.section_pairs(Sectioned._sample)
- >>> item = next(res)
- >>> item.name
- 'sec1'
- >>> item.value
- Pair(name='a', value='1')
- >>> item = next(res)
- >>> item.value
- Pair(name='b', value='2')
- >>> item = next(res)
- >>> item.name
- 'sec2'
- >>> item.value
- Pair(name='a', value='2')
- >>> list(res)
- []
- """
-
- _sample = textwrap.dedent(
- """
- [sec1]
- # comments ignored
- a = 1
- b = 2
-
- [sec2]
- a = 2
- """
- ).lstrip()
-
- @classmethod
- def section_pairs(cls, text):
- return (
- section._replace(value=Pair.parse(section.value))
- for section in cls.read(text, filter_=cls.valid)
- if section.name is not None
- )
-
- @staticmethod
- def read(text, filter_=None):
- lines = filter(filter_, map(str.strip, text.splitlines()))
- name = None
- for value in lines:
- section_match = value.startswith('[') and value.endswith(']')
- if section_match:
- name = value.strip('[]')
- continue
- yield Pair(name, value)
-
- @staticmethod
- def valid(line):
- return line and not line.startswith('#')
-
-
-class DeprecatedTuple:
- """
- Provide subscript item access for backward compatibility.
-
- >>> recwarn = getfixture('recwarn')
- >>> ep = EntryPoint(name='name', value='value', group='group')
- >>> ep[:]
- ('name', 'value', 'group')
- >>> ep[0]
- 'name'
- >>> len(recwarn)
- 1
- """
-
- _warn = functools.partial(
- warnings.warn,
- "EntryPoint tuple interface is deprecated. Access members by name.",
- DeprecationWarning,
- stacklevel=pypy_partial(2),
- )
-
- def __getitem__(self, item):
- self._warn()
- return self._key()[item]
-
-
-class EntryPoint(DeprecatedTuple):
- """An entry point as defined by Python packaging conventions.
-
- See `the packaging docs on entry points
-    <https://packaging.python.org/en/latest/specifications/entry-points/>`_
- for more information.
- """
-
- pattern = re.compile(
-        r'(?P<module>[\w.]+)\s*'
-        r'(:\s*(?P<attr>[\w.]+)\s*)?'
-        r'((?P<extras>\[.*\])\s*)?$'
- )
- """
- A regular expression describing the syntax for an entry point,
- which might look like:
-
- - module
- - package.module
- - package.module:attribute
- - package.module:object.attribute
- - package.module:attr [extra1, extra2]
-
- Other combinations are possible as well.
-
- The expression is lenient about whitespace around the ':',
- following the attr, and following any extras.
- """
-
- dist: Optional['Distribution'] = None
-
- def __init__(self, name, value, group):
- vars(self).update(name=name, value=value, group=group)
-
- def load(self):
- """Load the entry point from its definition. If only a module
- is indicated by the value, return that module. Otherwise,
- return the named object.
- """
- match = self.pattern.match(self.value)
- module = import_module(match.group('module'))
- attrs = filter(None, (match.group('attr') or '').split('.'))
- return functools.reduce(getattr, attrs, module)
-
- @property
- def module(self):
- match = self.pattern.match(self.value)
- return match.group('module')
-
- @property
- def attr(self):
- match = self.pattern.match(self.value)
- return match.group('attr')
-
- @property
- def extras(self):
- match = self.pattern.match(self.value)
- return list(re.finditer(r'\w+', match.group('extras') or ''))
-
- def _for(self, dist):
- vars(self).update(dist=dist)
- return self
-
- def __iter__(self):
- """
- Supply iter so one may construct dicts of EntryPoints by name.
- """
- msg = (
- "Construction of dict of EntryPoints is deprecated in "
- "favor of EntryPoints."
- )
- warnings.warn(msg, DeprecationWarning)
- return iter((self.name, self))
-
- def matches(self, **params):
- attrs = (getattr(self, param) for param in params)
- return all(map(operator.eq, params.values(), attrs))
-
- def _key(self):
- return self.name, self.value, self.group
-
- def __lt__(self, other):
- return self._key() < other._key()
-
- def __eq__(self, other):
- return self._key() == other._key()
-
- def __setattr__(self, name, value):
- raise AttributeError("EntryPoint objects are immutable.")
-
- def __repr__(self):
- return (
- f'EntryPoint(name={self.name!r}, value={self.value!r}, '
- f'group={self.group!r})'
- )
-
- def __hash__(self):
- return hash(self._key())
-
-
-class DeprecatedList(list):
- """
- Allow an otherwise immutable object to implement mutability
- for compatibility.
-
- >>> recwarn = getfixture('recwarn')
- >>> dl = DeprecatedList(range(3))
- >>> dl[0] = 1
- >>> dl.append(3)
- >>> del dl[3]
- >>> dl.reverse()
- >>> dl.sort()
- >>> dl.extend([4])
- >>> dl.pop(-1)
- 4
- >>> dl.remove(1)
- >>> dl += [5]
- >>> dl + [6]
- [1, 2, 5, 6]
- >>> dl + (6,)
- [1, 2, 5, 6]
- >>> dl.insert(0, 0)
- >>> dl
- [0, 1, 2, 5]
- >>> dl == [0, 1, 2, 5]
- True
- >>> dl == (0, 1, 2, 5)
- True
- >>> len(recwarn)
- 1
- """
-
- __slots__ = ()
-
- _warn = functools.partial(
- warnings.warn,
- "EntryPoints list interface is deprecated. Cast to list if needed.",
- DeprecationWarning,
- stacklevel=pypy_partial(2),
- )
-
- def _wrap_deprecated_method(method_name: str): # type: ignore
- def wrapped(self, *args, **kwargs):
- self._warn()
- return getattr(super(), method_name)(*args, **kwargs)
-
- return method_name, wrapped
-
- locals().update(
- map(
- _wrap_deprecated_method,
- '__setitem__ __delitem__ append reverse extend pop remove '
- '__iadd__ insert sort'.split(),
- )
- )
-
- def __add__(self, other):
- if not isinstance(other, tuple):
- self._warn()
- other = tuple(other)
- return self.__class__(tuple(self) + other)
-
- def __eq__(self, other):
- if not isinstance(other, tuple):
- self._warn()
- other = tuple(other)
-
- return tuple(self).__eq__(other)
-
-
-class EntryPoints(DeprecatedList):
- """
- An immutable collection of selectable EntryPoint objects.
- """
-
- __slots__ = ()
-
- def __getitem__(self, name): # -> EntryPoint:
- """
- Get the EntryPoint in self matching name.
- """
- if isinstance(name, int):
- warnings.warn(
- "Accessing entry points by index is deprecated. "
- "Cast to tuple if needed.",
- DeprecationWarning,
- stacklevel=2,
- )
- return super().__getitem__(name)
- try:
- return next(iter(self.select(name=name)))
- except StopIteration:
- raise KeyError(name)
-
- def select(self, **params):
- """
- Select entry points from self that match the
- given parameters (typically group and/or name).
- """
- return EntryPoints(ep for ep in self if ep.matches(**params))
-
- @property
- def names(self):
- """
- Return the set of all names of all entry points.
- """
- return {ep.name for ep in self}
-
- @property
- def groups(self):
- """
- Return the set of all groups of all entry points.
-
- For coverage while SelectableGroups is present.
- >>> EntryPoints().groups
- set()
- """
- return {ep.group for ep in self}
-
- @classmethod
- def _from_text_for(cls, text, dist):
- return cls(ep._for(dist) for ep in cls._from_text(text))
-
- @staticmethod
- def _from_text(text):
- return (
- EntryPoint(name=item.value.name, value=item.value.value, group=item.name)
- for item in Sectioned.section_pairs(text or '')
- )
-
-
-class Deprecated:
- """
- Compatibility add-in for mapping to indicate that
- mapping behavior is deprecated.
-
- >>> recwarn = getfixture('recwarn')
- >>> class DeprecatedDict(Deprecated, dict): pass
- >>> dd = DeprecatedDict(foo='bar')
- >>> dd.get('baz', None)
- >>> dd['foo']
- 'bar'
- >>> list(dd)
- ['foo']
- >>> list(dd.keys())
- ['foo']
- >>> 'foo' in dd
- True
- >>> list(dd.values())
- ['bar']
- >>> len(recwarn)
- 1
- """
-
- _warn = functools.partial(
- warnings.warn,
- "SelectableGroups dict interface is deprecated. Use select.",
- DeprecationWarning,
- stacklevel=pypy_partial(2),
- )
-
- def __getitem__(self, name):
- self._warn()
- return super().__getitem__(name)
-
- def get(self, name, default=None):
- self._warn()
- return super().get(name, default)
-
- def __iter__(self):
- self._warn()
- return super().__iter__()
-
- def __contains__(self, *args):
- self._warn()
- return super().__contains__(*args)
-
- def keys(self):
- self._warn()
- return super().keys()
-
- def values(self):
- self._warn()
- return super().values()
-
-
-class SelectableGroups(Deprecated, dict):
- """
- A backward- and forward-compatible result from
- entry_points that fully implements the dict interface.
- """
-
- @classmethod
- def load(cls, eps):
- by_group = operator.attrgetter('group')
- ordered = sorted(eps, key=by_group)
- grouped = itertools.groupby(ordered, by_group)
- return cls((group, EntryPoints(eps)) for group, eps in grouped)
-
- @property
- def _all(self):
- """
- Reconstruct a list of all entrypoints from the groups.
- """
- groups = super(Deprecated, self).values()
- return EntryPoints(itertools.chain.from_iterable(groups))
-
- @property
- def groups(self):
- return self._all.groups
-
- @property
- def names(self):
- """
- for coverage:
- >>> SelectableGroups().names
- set()
- """
- return self._all.names
-
- def select(self, **params):
- if not params:
- return self
- return self._all.select(**params)
-
-
-class PackagePath(pathlib.PurePosixPath):
- """A reference to a path in a package"""
-
- def read_text(self, encoding='utf-8'):
- with self.locate().open(encoding=encoding) as stream:
- return stream.read()
-
- def read_binary(self):
- with self.locate().open('rb') as stream:
- return stream.read()
-
- def locate(self):
- """Return a path-like object for this path"""
- return self.dist.locate_file(self)
-
-
-class FileHash:
- def __init__(self, spec):
- self.mode, _, self.value = spec.partition('=')
-
- def __repr__(self):
-        return f'<FileHash mode: {self.mode} value: {self.value}>'
-
-
-class Distribution:
- """A Python distribution package."""
-
- @abc.abstractmethod
- def read_text(self, filename):
- """Attempt to load metadata file given by the name.
-
- :param filename: The name of the file in the distribution info.
- :return: The text if found, otherwise None.
- """
-
- @abc.abstractmethod
- def locate_file(self, path):
- """
- Given a path to a file in this distribution, return a path
- to it.
- """
-
- @classmethod
- def from_name(cls, name):
- """Return the Distribution for the given package name.
-
- :param name: The name of the distribution package to search for.
- :return: The Distribution instance (or subclass thereof) for the named
- package, if found.
- :raises PackageNotFoundError: When the named package's distribution
- metadata cannot be found.
- """
- for resolver in cls._discover_resolvers():
- dists = resolver(DistributionFinder.Context(name=name))
- dist = next(iter(dists), None)
- if dist is not None:
- return dist
- else:
- raise PackageNotFoundError(name)
-
- @classmethod
- def discover(cls, **kwargs):
- """Return an iterable of Distribution objects for all packages.
-
- Pass a ``context`` or pass keyword arguments for constructing
- a context.
-
- :context: A ``DistributionFinder.Context`` object.
- :return: Iterable of Distribution objects for all packages.
- """
- context = kwargs.pop('context', None)
- if context and kwargs:
- raise ValueError("cannot accept context and kwargs")
- context = context or DistributionFinder.Context(**kwargs)
- return itertools.chain.from_iterable(
- resolver(context) for resolver in cls._discover_resolvers()
- )
-
- @staticmethod
- def at(path):
- """Return a Distribution for the indicated metadata path
-
- :param path: a string or path-like object
- :return: a concrete Distribution instance for the path
- """
- return PathDistribution(pathlib.Path(path))
-
- @staticmethod
- def _discover_resolvers():
- """Search the meta_path for resolvers."""
- declared = (
- getattr(finder, 'find_distributions', None) for finder in sys.meta_path
- )
- return filter(None, declared)
-
- @property
- def metadata(self) -> _meta.PackageMetadata:
- """Return the parsed metadata for this Distribution.
-
- The returned object will have keys that name the various bits of
- metadata. See PEP 566 for details.
- """
- text = (
- self.read_text('METADATA')
- or self.read_text('PKG-INFO')
- # This last clause is here to support old egg-info files. Its
- # effect is to just end up using the PathDistribution's self._path
- # (which points to the egg-info file) attribute unchanged.
- or self.read_text('')
- )
- return _adapters.Message(email.message_from_string(text))
-
- @property
- def name(self):
- """Return the 'Name' metadata for the distribution package."""
- return self.metadata['Name']
-
- @property
- def _normalized_name(self):
- """Return a normalized version of the name."""
- return Prepared.normalize(self.name)
-
- @property
- def version(self):
- """Return the 'Version' metadata for the distribution package."""
- return self.metadata['Version']
-
- @property
- def entry_points(self):
- return EntryPoints._from_text_for(self.read_text('entry_points.txt'), self)
-
- @property
- def files(self):
- """Files in this distribution.
-
- :return: List of PackagePath for this distribution or None
-
- Result is `None` if the metadata file that enumerates files
- (i.e. RECORD for dist-info or SOURCES.txt for egg-info) is
- missing.
- Result may be empty if the metadata exists but is empty.
- """
-
- def make_file(name, hash=None, size_str=None):
- result = PackagePath(name)
- result.hash = FileHash(hash) if hash else None
- result.size = int(size_str) if size_str else None
- result.dist = self
- return result
-
- @pass_none
- def make_files(lines):
- return list(starmap(make_file, csv.reader(lines)))
-
- return make_files(self._read_files_distinfo() or self._read_files_egginfo())
-
- def _read_files_distinfo(self):
- """
- Read the lines of RECORD
- """
- text = self.read_text('RECORD')
- return text and text.splitlines()
-
- def _read_files_egginfo(self):
- """
- SOURCES.txt might contain literal commas, so wrap each line
- in quotes.
- """
- text = self.read_text('SOURCES.txt')
- return text and map('"{}"'.format, text.splitlines())
-
- @property
- def requires(self):
- """Generated requirements specified for this Distribution"""
- reqs = self._read_dist_info_reqs() or self._read_egg_info_reqs()
- return reqs and list(reqs)
-
- def _read_dist_info_reqs(self):
- return self.metadata.get_all('Requires-Dist')
-
- def _read_egg_info_reqs(self):
- source = self.read_text('requires.txt')
- return pass_none(self._deps_from_requires_text)(source)
-
- @classmethod
- def _deps_from_requires_text(cls, source):
- return cls._convert_egg_info_reqs_to_simple_reqs(Sectioned.read(source))
-
- @staticmethod
- def _convert_egg_info_reqs_to_simple_reqs(sections):
- """
- Historically, setuptools would solicit and store 'extra'
- requirements, including those with environment markers,
- in separate sections. More modern tools expect each
- dependency to be defined separately, with any relevant
- extras and environment markers attached directly to that
- requirement. This method converts the former to the
- latter. See _test_deps_from_requires_text for an example.
- """
-
- def make_condition(name):
- return name and f'extra == "{name}"'
-
- def quoted_marker(section):
- section = section or ''
- extra, sep, markers = section.partition(':')
- if extra and markers:
- markers = f'({markers})'
- conditions = list(filter(None, [markers, make_condition(extra)]))
- return '; ' + ' and '.join(conditions) if conditions else ''
-
- def url_req_space(req):
- """
- PEP 508 requires a space between the url_spec and the quoted_marker.
- Ref python/importlib_metadata#357.
- """
- # '@' is uniquely indicative of a url_req.
- return ' ' * ('@' in req)
-
- for section in sections:
- space = url_req_space(section.value)
- yield section.value + space + quoted_marker(section.name)
-
-
-class DistributionFinder(MetaPathFinder):
- """
- A MetaPathFinder capable of discovering installed distributions.
- """
-
- class Context:
- """
- Keyword arguments presented by the caller to
- ``distributions()`` or ``Distribution.discover()``
- to narrow the scope of a search for distributions
- in all DistributionFinders.
-
- Each DistributionFinder may expect any parameters
- and should attempt to honor the canonical
- parameters defined below when appropriate.
- """
-
- name = None
- """
- Specific name for which a distribution finder should match.
- A name of ``None`` matches all distributions.
- """
-
- def __init__(self, **kwargs):
- vars(self).update(kwargs)
-
- @property
- def path(self):
- """
-            The sequence of directory paths that a distribution finder
- should search.
-
- Typically refers to Python installed package paths such as
- "site-packages" directories and defaults to ``sys.path``.
- """
- return vars(self).get('path', sys.path)
-
- @abc.abstractmethod
- def find_distributions(self, context=Context()):
- """
- Find distributions.
-
- Return an iterable of all Distribution instances capable of
- loading the metadata for packages matching the ``context``,
- a DistributionFinder.Context instance.
- """
-
-
-class FastPath:
- """
- Micro-optimized class for searching a path for
- children.
-
- >>> FastPath('').children()
- ['...']
- """
-
- @functools.lru_cache() # type: ignore
- def __new__(cls, root):
- return super().__new__(cls)
-
- def __init__(self, root):
- self.root = str(root)
-
- def joinpath(self, child):
- return pathlib.Path(self.root, child)
-
- def children(self):
- with suppress(Exception):
- return os.listdir(self.root or '.')
- with suppress(Exception):
- return self.zip_children()
- return []
-
- def zip_children(self):
- zip_path = zipp.Path(self.root)
- names = zip_path.root.namelist()
- self.joinpath = zip_path.joinpath
-
- return dict.fromkeys(child.split(posixpath.sep, 1)[0] for child in names)
-
- def search(self, name):
- return self.lookup(self.mtime).search(name)
-
- @property
- def mtime(self):
- with suppress(OSError):
- return os.stat(self.root).st_mtime
- self.lookup.cache_clear()
-
- @method_cache
- def lookup(self, mtime):
- return Lookup(self)
-
-
-class Lookup:
- def __init__(self, path: FastPath):
- base = os.path.basename(path.root).lower()
- base_is_egg = base.endswith(".egg")
- self.infos = FreezableDefaultDict(list)
- self.eggs = FreezableDefaultDict(list)
-
- for child in path.children():
- low = child.lower()
- if low.endswith((".dist-info", ".egg-info")):
- # rpartition is faster than splitext and suitable for this purpose.
- name = low.rpartition(".")[0].partition("-")[0]
- normalized = Prepared.normalize(name)
- self.infos[normalized].append(path.joinpath(child))
- elif base_is_egg and low == "egg-info":
- name = base.rpartition(".")[0].partition("-")[0]
- legacy_normalized = Prepared.legacy_normalize(name)
- self.eggs[legacy_normalized].append(path.joinpath(child))
-
- self.infos.freeze()
- self.eggs.freeze()
-
- def search(self, prepared):
- infos = (
- self.infos[prepared.normalized]
- if prepared
- else itertools.chain.from_iterable(self.infos.values())
- )
- eggs = (
- self.eggs[prepared.legacy_normalized]
- if prepared
- else itertools.chain.from_iterable(self.eggs.values())
- )
- return itertools.chain(infos, eggs)
-
-
-class Prepared:
- """
- A prepared search for metadata on a possibly-named package.
- """
-
- normalized = None
- legacy_normalized = None
-
- def __init__(self, name):
- self.name = name
- if name is None:
- return
- self.normalized = self.normalize(name)
- self.legacy_normalized = self.legacy_normalize(name)
-
- @staticmethod
- def normalize(name):
- """
- PEP 503 normalization plus dashes as underscores.
- """
- return re.sub(r"[-_.]+", "-", name).lower().replace('-', '_')
-
- @staticmethod
- def legacy_normalize(name):
- """
- Normalize the package name as found in the convention in
- older packaging tools versions and specs.
- """
- return name.lower().replace('-', '_')
-
- def __bool__(self):
- return bool(self.name)
-
-
-@install
-class MetadataPathFinder(NullFinder, DistributionFinder):
- """A degenerate finder for distribution packages on the file system.
-
- This finder supplies only a find_distributions() method for versions
- of Python that do not have a PathFinder find_distributions().
- """
-
- def find_distributions(self, context=DistributionFinder.Context()):
- """
- Find distributions.
-
- Return an iterable of all Distribution instances capable of
- loading the metadata for packages matching ``context.name``
- (or all names if ``None`` indicated) along the paths in the list
- of directories ``context.path``.
- """
- found = self._search_paths(context.name, context.path)
- return map(PathDistribution, found)
-
- @classmethod
- def _search_paths(cls, name, paths):
- """Find metadata directories in paths heuristically."""
- prepared = Prepared(name)
- return itertools.chain.from_iterable(
- path.search(prepared) for path in map(FastPath, paths)
- )
-
- def invalidate_caches(cls):
- FastPath.__new__.cache_clear()
-
-
-class PathDistribution(Distribution):
- def __init__(self, path: SimplePath):
- """Construct a distribution.
-
- :param path: SimplePath indicating the metadata directory.
- """
- self._path = path
-
- def read_text(self, filename):
- with suppress(
- FileNotFoundError,
- IsADirectoryError,
- KeyError,
- NotADirectoryError,
- PermissionError,
- ):
- return self._path.joinpath(filename).read_text(encoding='utf-8')
-
- read_text.__doc__ = Distribution.read_text.__doc__
-
- def locate_file(self, path):
- return self._path.parent / path
-
- @property
- def _normalized_name(self):
- """
- Performance optimization: where possible, resolve the
- normalized name from the file system path.
- """
- stem = os.path.basename(str(self._path))
- return self._name_from_stem(stem) or super()._normalized_name
-
- def _name_from_stem(self, stem):
- name, ext = os.path.splitext(stem)
- if ext not in ('.dist-info', '.egg-info'):
- return
- name, sep, rest = stem.partition('-')
- return name
-
-
-def distribution(distribution_name):
- """Get the ``Distribution`` instance for the named package.
-
- :param distribution_name: The name of the distribution package as a string.
- :return: A ``Distribution`` instance (or subclass thereof).
- """
- return Distribution.from_name(distribution_name)
-
-
-def distributions(**kwargs):
- """Get all ``Distribution`` instances in the current environment.
-
- :return: An iterable of ``Distribution`` instances.
- """
- return Distribution.discover(**kwargs)
-
-
-def metadata(distribution_name) -> _meta.PackageMetadata:
- """Get the metadata for the named package.
-
- :param distribution_name: The name of the distribution package to query.
- :return: A PackageMetadata containing the parsed metadata.
- """
- return Distribution.from_name(distribution_name).metadata
-
-
-def version(distribution_name):
- """Get the version string for the named package.
-
- :param distribution_name: The name of the distribution package to query.
- :return: The version string for the package as defined in the package's
- "Version" metadata key.
- """
- return distribution(distribution_name).version
-
-
-def entry_points(**params) -> Union[EntryPoints, SelectableGroups]:
- """Return EntryPoint objects for all installed packages.
-
- Pass selection parameters (group or name) to filter the
- result to entry points matching those properties (see
- EntryPoints.select()).
-
- For compatibility, returns ``SelectableGroups`` object unless
- selection parameters are supplied. In the future, this function
- will return ``EntryPoints`` instead of ``SelectableGroups``
- even when no selection parameters are supplied.
-
- For maximum future compatibility, pass selection parameters
- or invoke ``.select`` with parameters on the result.
-
- :return: EntryPoints or SelectableGroups for all installed packages.
- """
- norm_name = operator.attrgetter('_normalized_name')
- unique = functools.partial(unique_everseen, key=norm_name)
- eps = itertools.chain.from_iterable(
- dist.entry_points for dist in unique(distributions())
- )
- return SelectableGroups.load(eps).select(**params)
-
-
-def files(distribution_name):
- """Return a list of files for the named package.
-
- :param distribution_name: The name of the distribution package to query.
- :return: List of files composing the distribution.
- """
- return distribution(distribution_name).files
-
-
-def requires(distribution_name):
- """
- Return a list of requirements for the named package.
-
- :return: An iterator of requirements, suitable for
- packaging.requirement.Requirement.
- """
- return distribution(distribution_name).requires
-
-
-def packages_distributions() -> Mapping[str, List[str]]:
- """
- Return a mapping of top-level packages to their
- distributions.
-
- >>> import collections.abc
- >>> pkgs = packages_distributions()
- >>> all(isinstance(dist, collections.abc.Sequence) for dist in pkgs.values())
- True
- """
- pkg_to_dist = collections.defaultdict(list)
- for dist in distributions():
- for pkg in _top_level_declared(dist) or _top_level_inferred(dist):
- pkg_to_dist[pkg].append(dist.metadata['Name'])
- return dict(pkg_to_dist)
-
-
-def _top_level_declared(dist):
- return (dist.read_text('top_level.txt') or '').split()
-
-
-def _top_level_inferred(dist):
- return {
- f.parts[0] if len(f.parts) > 1 else f.with_suffix('').name
- for f in always_iterable(dist.files)
- if f.suffix == ".py"
- }
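For orientation, a short usage sketch of the public API exported in ``__all__`` above. It assumes the standalone ``importlib_metadata`` distribution (or the equivalent stdlib ``importlib.metadata``) is importable, and ``setuptools`` stands in for any installed distribution.

```python
# Illustrative sketch of the module-level helpers defined above.
from importlib_metadata import (
    distribution,
    entry_points,
    packages_distributions,
    requires,
    version,
)

print(version("setuptools"))               # the "Version" metadata field
dist = distribution("setuptools")          # a Distribution (PathDistribution) instance
print(dist.metadata["Name"], dist.version)

# Selecting by group returns an EntryPoints collection; .names is a set of names.
scripts = entry_points(group="console_scripts")
print(sorted(scripts.names)[:5])

print(requires("setuptools"))              # list of PEP 508 strings, or None
print(list(packages_distributions())[:5])  # top-level module -> [distribution names]
```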
diff --git a/spaces/AutoLLM/AutoAgents/autoagents/tools/__init__.py b/spaces/AutoLLM/AutoAgents/autoagents/tools/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Bakar31/PotterQuest/README.md b/spaces/Bakar31/PotterQuest/README.md
deleted file mode 100644
index f4bc4a86aebd42dc19b2387bcbb5ea42340d09ed..0000000000000000000000000000000000000000
--- a/spaces/Bakar31/PotterQuest/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PotterQuest
-emoji: 📚
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/i18n.py b/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/i18n.py
deleted file mode 100644
index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = "es_ES"
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "es_ES"
-        language = "es_ES"  # unconditional override: Spanish ("es_ES") is always used, regardless of the checks above
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- # print("Use Language:", self.language)
- print("")
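A brief usage sketch for the wrapper above, not part of the diff. The import path mirrors the deleted file's location (``utils/i18n.py``) and the ``./i18n/es_ES.json`` contents are assumptions for the example.

```python
# Sketch only: I18nAuto always resolves to es_ES in the code above, then maps keys
# through ./i18n/es_ES.json, falling back to the key itself when a key is missing.
from utils.i18n import I18nAuto  # hypothetical import path for the module above

i18n = I18nAuto()
print(i18n("Processing"))   # translated string if the key exists in es_ES.json
print(i18n("missing key"))  # unknown keys are returned unchanged
```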
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/pyproject.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/pyproject.py
deleted file mode 100644
index eb8e12b2dec992dc38c87510055d6ccb5f66c828..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/pyproject.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import importlib.util
-import os
-from collections import namedtuple
-from typing import Any, List, Optional
-
-from pip._vendor import tomli
-from pip._vendor.packaging.requirements import InvalidRequirement, Requirement
-
-from pip._internal.exceptions import (
- InstallationError,
- InvalidPyProjectBuildRequires,
- MissingPyProjectBuildRequires,
-)
-
-
-def _is_list_of_str(obj: Any) -> bool:
- return isinstance(obj, list) and all(isinstance(item, str) for item in obj)
-
-
-def make_pyproject_path(unpacked_source_directory: str) -> str:
- return os.path.join(unpacked_source_directory, "pyproject.toml")
-
-
-BuildSystemDetails = namedtuple(
- "BuildSystemDetails", ["requires", "backend", "check", "backend_path"]
-)
-
-
-def load_pyproject_toml(
- use_pep517: Optional[bool], pyproject_toml: str, setup_py: str, req_name: str
-) -> Optional[BuildSystemDetails]:
- """Load the pyproject.toml file.
-
- Parameters:
- use_pep517 - Has the user requested PEP 517 processing? None
- means the user hasn't explicitly specified.
- pyproject_toml - Location of the project's pyproject.toml file
- setup_py - Location of the project's setup.py file
- req_name - The name of the requirement we're processing (for
- error reporting)
-
- Returns:
- None if we should use the legacy code path, otherwise a tuple
- (
- requirements from pyproject.toml,
- name of PEP 517 backend,
- requirements we should check are installed after setting
- up the build environment
- directory paths to import the backend from (backend-path),
- relative to the project root.
- )
- """
- has_pyproject = os.path.isfile(pyproject_toml)
- has_setup = os.path.isfile(setup_py)
-
- if not has_pyproject and not has_setup:
- raise InstallationError(
- f"{req_name} does not appear to be a Python project: "
- f"neither 'setup.py' nor 'pyproject.toml' found."
- )
-
- if has_pyproject:
- with open(pyproject_toml, encoding="utf-8") as f:
- pp_toml = tomli.loads(f.read())
- build_system = pp_toml.get("build-system")
- else:
- build_system = None
-
- # The following cases must use PEP 517
- # We check for use_pep517 being non-None and falsey because that means
- # the user explicitly requested --no-use-pep517. The value 0 as
- # opposed to False can occur when the value is provided via an
- # environment variable or config file option (due to the quirk of
- # strtobool() returning an integer in pip's configuration code).
- if has_pyproject and not has_setup:
- if use_pep517 is not None and not use_pep517:
- raise InstallationError(
- "Disabling PEP 517 processing is invalid: "
- "project does not have a setup.py"
- )
- use_pep517 = True
- elif build_system and "build-backend" in build_system:
- if use_pep517 is not None and not use_pep517:
- raise InstallationError(
- "Disabling PEP 517 processing is invalid: "
- "project specifies a build backend of {} "
- "in pyproject.toml".format(build_system["build-backend"])
- )
- use_pep517 = True
-
- # If we haven't worked out whether to use PEP 517 yet,
- # and the user hasn't explicitly stated a preference,
- # we do so if the project has a pyproject.toml file
- # or if we cannot import setuptools or wheels.
-
- # We fallback to PEP 517 when without setuptools or without the wheel package,
- # so setuptools can be installed as a default build backend.
- # For more info see:
- # https://discuss.python.org/t/pip-without-setuptools-could-the-experience-be-improved/11810/9
- # https://github.com/pypa/pip/issues/8559
- elif use_pep517 is None:
- use_pep517 = (
- has_pyproject
- or not importlib.util.find_spec("setuptools")
- or not importlib.util.find_spec("wheel")
- )
-
- # At this point, we know whether we're going to use PEP 517.
- assert use_pep517 is not None
-
- # If we're using the legacy code path, there is nothing further
- # for us to do here.
- if not use_pep517:
- return None
-
- if build_system is None:
- # Either the user has a pyproject.toml with no build-system
- # section, or the user has no pyproject.toml, but has opted in
- # explicitly via --use-pep517.
- # In the absence of any explicit backend specification, we
- # assume the setuptools backend that most closely emulates the
- # traditional direct setup.py execution, and require wheel and
- # a version of setuptools that supports that backend.
-
- build_system = {
- "requires": ["setuptools>=40.8.0", "wheel"],
- "build-backend": "setuptools.build_meta:__legacy__",
- }
-
- # If we're using PEP 517, we have build system information (either
- # from pyproject.toml, or defaulted by the code above).
- # Note that at this point, we do not know if the user has actually
- # specified a backend, though.
- assert build_system is not None
-
- # Ensure that the build-system section in pyproject.toml conforms
- # to PEP 518.
-
- # Specifying the build-system table but not the requires key is invalid
- if "requires" not in build_system:
- raise MissingPyProjectBuildRequires(package=req_name)
-
- # Error out if requires is not a list of strings
- requires = build_system["requires"]
- if not _is_list_of_str(requires):
- raise InvalidPyProjectBuildRequires(
- package=req_name,
- reason="It is not a list of strings.",
- )
-
- # Each requirement must be valid as per PEP 508
- for requirement in requires:
- try:
- Requirement(requirement)
- except InvalidRequirement as error:
- raise InvalidPyProjectBuildRequires(
- package=req_name,
- reason=f"It contains an invalid requirement: {requirement!r}",
- ) from error
-
- backend = build_system.get("build-backend")
- backend_path = build_system.get("backend-path", [])
- check: List[str] = []
- if backend is None:
- # If the user didn't specify a backend, we assume they want to use
- # the setuptools backend. But we can't be sure they have included
- # a version of setuptools which supplies the backend. So we
- # make a note to check that this requirement is present once
- # we have set up the environment.
- # This is quite a lot of work to check for a very specific case. But
- # the problem is, that case is potentially quite common - projects that
- # adopted PEP 518 early for the ability to specify requirements to
- # execute setup.py, but never considered needing to mention the build
- # tools themselves. The original PEP 518 code had a similar check (but
- # implemented in a different way).
- backend = "setuptools.build_meta:__legacy__"
- check = ["setuptools>=40.8.0"]
-
- return BuildSystemDetails(requires, backend, check, backend_path)
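As a rough illustration of how the loader above is driven (not part of the diff): ``load_pyproject_toml`` is called with the unpacked source directory's ``pyproject.toml`` and ``setup.py`` paths, and either returns ``None`` (legacy code path) or a ``BuildSystemDetails`` tuple. The directory and requirement name below are placeholders, and the module is assumed to be importable as ``pip._internal.pyproject``.

```python
# Sketch only: exercising load_pyproject_toml() from the deleted module above.
import os

from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path

source_dir = "/tmp/example-project"  # hypothetical unpacked sdist directory
details = load_pyproject_toml(
    use_pep517=None,                 # let pip decide from the files present
    pyproject_toml=make_pyproject_path(source_dir),
    setup_py=os.path.join(source_dir, "setup.py"),
    req_name="example-project",
)
if details is None:
    print("legacy setup.py code path")
else:
    print(details.backend, details.requires, details.check, details.backend_path)
```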
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__init__.py
deleted file mode 100644
index 7686fe85a7cc94188da76bfb1c10ad2a10821256..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__init__.py
+++ /dev/null
@@ -1,54 +0,0 @@
-from .distro import (
- NORMALIZED_DISTRO_ID,
- NORMALIZED_LSB_ID,
- NORMALIZED_OS_ID,
- LinuxDistribution,
- __version__,
- build_number,
- codename,
- distro_release_attr,
- distro_release_info,
- id,
- info,
- like,
- linux_distribution,
- lsb_release_attr,
- lsb_release_info,
- major_version,
- minor_version,
- name,
- os_release_attr,
- os_release_info,
- uname_attr,
- uname_info,
- version,
- version_parts,
-)
-
-__all__ = [
- "NORMALIZED_DISTRO_ID",
- "NORMALIZED_LSB_ID",
- "NORMALIZED_OS_ID",
- "LinuxDistribution",
- "build_number",
- "codename",
- "distro_release_attr",
- "distro_release_info",
- "id",
- "info",
- "like",
- "linux_distribution",
- "lsb_release_attr",
- "lsb_release_info",
- "major_version",
- "minor_version",
- "name",
- "os_release_attr",
- "os_release_info",
- "uname_attr",
- "uname_info",
- "version",
- "version_parts",
-]
-
-__version__ = __version__
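For reference, the names re-exported above come from ``distro.distro`` and are typically used as below. This is a sketch rather than part of the diff, and the printed values depend on the host Linux distribution.

```python
# Sketch only: typical calls into the distro API re-exported above
# (pip's copy lives at pip._vendor.distro; the standalone package is `distro`).
import distro

print(distro.id())               # e.g. "ubuntu" or "fedora"
print(distro.name(pretty=True))  # e.g. "Ubuntu 22.04.4 LTS"
print(distro.version(best=True)) # most precise version string available
print(distro.info())             # dict: id, version, codename, ...
```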
diff --git a/spaces/Bonp/B/README.md b/spaces/Bonp/B/README.md
deleted file mode 100644
index f364dcfecc56844bb011c2779817a39e1f2624dc..0000000000000000000000000000000000000000
--- a/spaces/Bonp/B/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: B
-emoji: 🌖
-colorFrom: green
-colorTo: pink
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/managed_memory_pointer.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/managed_memory_pointer.h
deleted file mode 100644
index c6a4c9756be37a9ba03806132ba6fb3381c21354..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/managed_memory_pointer.h
+++ /dev/null
@@ -1,195 +0,0 @@
-/*
- * Copyright 2020 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace cuda
-{
-namespace detail
-{
-
-// forward decl for iterator traits:
-template <typename T>
-class managed_memory_pointer;
-
-} // end namespace detail
-} // end namespace cuda
-} // end namespace system
-
-// Specialize iterator traits to define `pointer` to something meaningful.
-template <typename Element, typename Tag, typename Reference>
-struct iterator_traits<thrust::pointer<Element, Tag, Reference, thrust::system::cuda::detail::managed_memory_pointer<Element> > > {
-private:
- typedef thrust::pointer<
- Element,
- Tag,
- Reference,
-    thrust::system::cuda::detail::managed_memory_pointer<Element> >
- ptr;
-
-public:
- typedef typename ptr::iterator_category iterator_category;
- typedef typename ptr::value_type value_type;
- typedef typename ptr::difference_type difference_type;
- typedef Element* pointer;
- typedef typename ptr::reference reference;
-}; // end iterator_traits
-
-namespace system
-{
-namespace cuda
-{
-namespace detail
-{
-
-/*! A version of thrust::cuda_cub::pointer that uses c++ references instead
- * of thrust::cuda::reference. This is to allow managed memory pointers to
- * be used with host-side code in standard libraries that are not compatible
- * with proxy references.
- */
-template <typename T>
-class managed_memory_pointer
- : public thrust::pointer<
- T,
- thrust::cuda_cub::tag,
-      typename thrust::detail::add_reference<T>::type,
-      thrust::system::cuda::detail::managed_memory_pointer<T> >
-{
-private:
- typedef thrust::pointer<
- T,
- thrust::cuda_cub::tag,
-    typename thrust::detail::add_reference<T>::type,
-    thrust::system::cuda::detail::managed_memory_pointer<T> >
- super_t;
-
-public:
- typedef typename super_t::raw_pointer pointer;
-
- /*! \p managed_memory_pointer's no-argument constructor initializes its
- * encapsulated pointer to \c 0.
- */
- __host__ __device__ managed_memory_pointer()
- : super_t()
- {}
-
-#if THRUST_CPP_DIALECT >= 2011
- // NOTE: This is needed so that Thrust smart pointers can be used in
- // `std::unique_ptr`.
- __host__ __device__ managed_memory_pointer(decltype(nullptr))
- : super_t(nullptr)
- {}
-#endif
-
-  /*! This constructor allows construction of a managed_memory_pointer from a
- * T*.
- *
- * \param ptr A raw pointer to copy from, presumed to point to a location
- * in memory accessible by the \p cuda system. \tparam OtherT \p OtherT
- * shall be convertible to \p T.
- */
-  template <typename OtherT>
- __host__ __device__ explicit managed_memory_pointer(OtherT* ptr)
- : super_t(ptr)
- {}
-
- /*! This constructor allows construction from another pointer-like object
- * with related type.
- *
- * \param other The \p OtherPointer to copy.
- * \tparam OtherPointer The system tag associated with \p OtherPointer
- * shall be convertible to \p thrust::system::cuda::tag and its element
- * type shall be convertible to \p T.
- */
-  template <typename OtherPointer>
- __host__ __device__ managed_memory_pointer(
- const OtherPointer& other,
- typename thrust::detail::enable_if_pointer_is_convertible<
- OtherPointer,
- managed_memory_pointer>::type* = 0)
- : super_t(other)
- {}
-
- /*! This constructor allows construction from another pointer-like object
- * with \p void type.
- *
- * \param other The \p OtherPointer to copy.
- * \tparam OtherPointer The system tag associated with \p OtherPointer
- * shall be convertible to \p thrust::system::cuda::tag and its element
- * type shall be \p void.
- */
-  template <typename OtherPointer>
- __host__ __device__ explicit managed_memory_pointer(
- const OtherPointer& other,
- typename thrust::detail::enable_if_void_pointer_is_system_convertible<
- OtherPointer,
- managed_memory_pointer>::type* = 0)
- : super_t(other)
- {}
-
- /*! Assignment operator allows assigning from another pointer-like object
- * with related type.
- *
- * \param other The other pointer-like object to assign from.
- * \tparam OtherPointer The system tag associated with \p OtherPointer
- * shall be convertible to \p thrust::system::cuda::tag and its element
- * type shall be convertible to \p T.
- */
-  template <typename OtherPointer>
- __host__ __device__ typename thrust::detail::enable_if_pointer_is_convertible<
- OtherPointer,
- managed_memory_pointer,
- managed_memory_pointer&>::type
- operator=(const OtherPointer& other)
- {
- return super_t::operator=(other);
- }
-
-#if THRUST_CPP_DIALECT >= 2011
- // NOTE: This is needed so that Thrust smart pointers can be used in
- // `std::unique_ptr`.
- __host__ __device__ managed_memory_pointer& operator=(decltype(nullptr))
- {
- super_t::operator=(nullptr);
- return *this;
- }
-#endif
-
- __host__ __device__
- pointer operator->() const
- {
- return this->get();
- }
-
-}; // class managed_memory_pointer
-
-} // namespace detail
-} // namespace cuda
-} // namespace system
-} // namespace thrust
diff --git a/spaces/CVPR/regionclip-demo/detectron2/config/defaults.py b/spaces/CVPR/regionclip-demo/detectron2/config/defaults.py
deleted file mode 100644
index 47e171e783f017a345ccdae98329fd786d3300b0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/config/defaults.py
+++ /dev/null
@@ -1,786 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .config import CfgNode as CN
-
-# -----------------------------------------------------------------------------
-# Convention about Training / Test specific parameters
-# -----------------------------------------------------------------------------
-# Whenever an argument can be either used for training or for testing, the
-# corresponding name will be post-fixed by a _TRAIN for a training parameter,
-# or _TEST for a test-specific parameter.
-# For example, the number of images during training will be
-# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be
-# IMAGES_PER_BATCH_TEST
-
-# -----------------------------------------------------------------------------
-# Config definition
-# -----------------------------------------------------------------------------
-
-_C = CN()
-
-# The version number, to upgrade from old configs to new ones if any
-# changes happen. It's recommended to keep a VERSION in your config file.
-_C.VERSION = 2
-
-_C.MODEL = CN()
-_C.MODEL.LOAD_PROPOSALS = False
-_C.MODEL.MASK_ON = False
-_C.MODEL.KEYPOINT_ON = False
-_C.MODEL.DEVICE = "cpu"
-_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN"
-
-# Path (a file path, or URL like detectron2://.., https://..) to a checkpoint file
-# to be loaded to the model. You can find available models in the model zoo.
-_C.MODEL.WEIGHTS = ""
-
-# Values to be used for image normalization (BGR order, since INPUT.FORMAT defaults to BGR).
-# To train on images of different number of channels, just set different mean & std.
-# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
-_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675]
-# When using pre-trained models in Detectron1 or any MSRA models,
-# std has been absorbed into its conv1 weights, so the std needs to be set 1.
-# Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std)
-_C.MODEL.PIXEL_STD = [1.0, 1.0, 1.0]
-
-
-# -----------------------------------------------------------------------------
-# INPUT
-# -----------------------------------------------------------------------------
-_C.INPUT = CN()
-# Size of the smallest side of the image during training
-_C.INPUT.MIN_SIZE_TRAIN = (800,)
-# Sample size of smallest side by choice or random selection from range given by
-# INPUT.MIN_SIZE_TRAIN
-_C.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"
-# Maximum size of the side of the image during training
-_C.INPUT.MAX_SIZE_TRAIN = 1333
-# Size of the smallest side of the image during testing. Set to zero to disable resize in testing.
-_C.INPUT.MIN_SIZE_TEST = 800
-# Maximum size of the side of the image during testing
-_C.INPUT.MAX_SIZE_TEST = 1333
-# Mode for flipping images used in data augmentation during training
-# choose one of ["horizontal, "vertical", "none"]
-_C.INPUT.RANDOM_FLIP = "horizontal"
-
-# `True` if cropping is used for data augmentation during training
-_C.INPUT.CROP = CN({"ENABLED": False})
-# Cropping type. See documentation of `detectron2.data.transforms.RandomCrop` for explanation.
-_C.INPUT.CROP.TYPE = "relative_range"
-# Size of crop in range (0, 1] if CROP.TYPE is "relative" or "relative_range" and in number of
-# pixels if CROP.TYPE is "absolute"
-_C.INPUT.CROP.SIZE = [0.9, 0.9]
-
-
-# Whether the model needs RGB, YUV, HSV etc.
-# Should be one of the modes defined here, as we use PIL to read the image:
-# https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes
-# with BGR being the one exception. One can set image format to BGR, we will
-# internally use RGB for conversion and flip the channels over
-_C.INPUT.FORMAT = "BGR"
-# The ground truth mask format that the model will use.
-# Mask R-CNN supports either "polygon" or "bitmask" as ground truth.
-_C.INPUT.MASK_FORMAT = "polygon" # alternative: "bitmask"
-
-################### Text Tokenizer from MSR-CLIP ##################
-_C.INPUT.TEXT_TOKENIZER = "openai_bpe" # "bert-base-cased"
-
-################## Data Augmentation from MSR-CLIP ##################
-_C.AUG = CN()
-_C.AUG.SCALE = (0.08, 1.0)
-_C.AUG.RATIO = (3.0/4.0, 4.0/3.0)
-_C.AUG.COLOR_JITTER = [0.4, 0.4, 0.4, 0.1, 0.0]
-_C.AUG.GRAY_SCALE = 0.0
-_C.AUG.GAUSSIAN_BLUR = 0.0
-_C.AUG.DROPBLOCK_LAYERS = [3, 4]
-_C.AUG.DROPBLOCK_KEEP_PROB = 1.0
-_C.AUG.DROPBLOCK_BLOCK_SIZE = 7
-_C.AUG.MIXUP_PROB = 0.0
-_C.AUG.MIXUP = 0.0
-_C.AUG.MIXCUT = 0.0
-_C.AUG.MIXCUT_MINMAX = []
-_C.AUG.MIXUP_SWITCH_PROB = 0.5
-_C.AUG.MIXUP_MODE = 'batch'
-_C.AUG.MIXCUT_AND_MIXUP = False
-_C.AUG.INTERPOLATION = 3
-_C.AUG.USE_TIMM = False
-_C.AUG.TIMM_AUG = CN(new_allowed=True)
-_C.AUG.TIMM_AUG.USE_LOADER = False
-_C.AUG.TIMM_AUG.USE_TRANSFORM = False
-
-_C.AUG.TRAIN = CN()
-_C.AUG.TRAIN.IMAGE_SIZE = [224, 224] # width * height, ex: 192 * 256
-_C.AUG.TRAIN.MAX_SIZE = None # the maximum size for longer edge after resizing
-_C.AUG.TEST = CN()
-_C.AUG.TEST.IMAGE_SIZE = [224, 224] # width * height, ex: 192 * 256
-_C.AUG.TEST.MAX_SIZE = None # the maximum size for longer edge after resizing
-_C.AUG.TEST.CENTER_CROP = False
-_C.AUG.TEST.INTERPOLATION = 3
-
-
-# -----------------------------------------------------------------------------
-# Dataset
-# -----------------------------------------------------------------------------
-_C.DATASETS = CN()
-# List of the dataset names for training. Must be registered in DatasetCatalog
-# Samples from these datasets will be merged and used as one dataset.
-_C.DATASETS.TRAIN = ()
-# List of the pre-computed proposal files for training, which must be consistent
-# with datasets listed in DATASETS.TRAIN.
-_C.DATASETS.PROPOSAL_FILES_TRAIN = ()
-# Number of top scoring precomputed proposals to keep for training
-_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN = 2000
-# List of the dataset names for testing. Must be registered in DatasetCatalog
-_C.DATASETS.TEST = ()
-# List of the pre-computed proposal files for test, which must be consistent
-# with datasets listed in DATASETS.TEST.
-_C.DATASETS.PROPOSAL_FILES_TEST = ()
-# Number of top scoring precomputed proposals to keep for test
-_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST = 1000
-################## Data Loading from MSR-CLIP ##################
-# List of dataset class names for training
-_C.DATASETS.FACTORY_TRAIN = ()
-# List of dataset folder for training
-_C.DATASETS.PATH_TRAIN = ()
-# List of the dataset names for auxiliary training, as present in paths_catalog.py
-_C.DATASETS.AUX = ()
-# List of dataset class names for auxiliary training
-_C.DATASETS.FACTORY_AUX = ()
-# List of dataset folder for auxiliary training
-_C.DATASETS.PATH_AUX = ()
-# List of dataset class names for testing
-_C.DATASETS.FACTORY_TEST = ()
-# List of dataset folder for testing
-_C.DATASETS.PATH_TEST = ()
-# Labelmap file to convert to tsv or for demo purpose
-_C.DATASETS.LABELMAP_FILE = ''
-_C.DATASETS.ATTR_LABELMAP_FILE = ''
-_C.DATASETS.FILTERED_CLASSIFICATION_DATASETS = ''
-# hierarchy file for test time score aggregation (developed on OpenImages)
-_C.DATASETS.HIERARCHY_FILE = ''
-# List of box extra fields for training/testing
-# If given, will not infer from the other cfgs.
-_C.DATASETS.BOX_EXTRA_FIELDS = ()
-
-_C.DATASETS.NUM_CLASSES = 0
-_C.DATASETS.ROOT = ''
-_C.DATASETS.TRAIN_SET = 'train'
-_C.DATASETS.VAL_SET = ''
-_C.DATASETS.TEST_SET = 'val'
-
-# The maximum total input sequence length after WordPiece tokenization
-# Sequences longer than this will be truncated, and sequences shorter than this will be padded.
-_C.DATASETS.MAX_SEQ_LENGTH = 35
-
-# -----------------------------------------------------------------------------
-# DataLoader
-# -----------------------------------------------------------------------------
-_C.DATALOADER = CN()
-# Number of data loading threads
-_C.DATALOADER.NUM_WORKERS = 4
-# If True, each batch should contain only images for which the aspect ratio
-# is compatible. This groups portrait images together, and landscape images
-# are not batched with portrait images.
-_C.DATALOADER.ASPECT_RATIO_GROUPING = True
-# Options: TrainingSampler, RepeatFactorTrainingSampler
-_C.DATALOADER.SAMPLER_TRAIN = "TrainingSampler"
-# Repeat threshold for RepeatFactorTrainingSampler
-_C.DATALOADER.REPEAT_THRESHOLD = 0.0
-# If True, when working on datasets that have instance annotations, the
-# training dataloader will filter out images without associated annotations
-_C.DATALOADER.FILTER_EMPTY_ANNOTATIONS = True
-
-# ---------------------------------------------------------------------------- #
-# CLIP options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.CLIP = CN()
-
-_C.MODEL.CLIP.CROP_REGION_TYPE = "" # options: "GT", "RPN"
-_C.MODEL.CLIP.BB_RPN_WEIGHTS = None # the weights of pretrained MaskRCNN
-_C.MODEL.CLIP.IMS_PER_BATCH_TEST = 8 # the #images during inference per batch
-
-_C.MODEL.CLIP.USE_TEXT_EMB_CLASSIFIER = False # if True, use the CLIP text embedding as the classifier's weights
-_C.MODEL.CLIP.TEXT_EMB_PATH = None # "/mnt/output_storage/trained_models/lvis_cls_emb/lvis_1203_cls_emb.pth"
-_C.MODEL.CLIP.OFFLINE_RPN_CONFIG = None # option: all configs of pretrained RPN
-_C.MODEL.CLIP.NO_BOX_DELTA = False # if True, during inference, no box delta will be applied to region proposals
-
-_C.MODEL.CLIP.BG_CLS_LOSS_WEIGHT = None # if not None, it is the loss weight for bg regions
-_C.MODEL.CLIP.ONLY_SAMPLE_FG_PROPOSALS = False # if True, during training, ignore all bg proposals and only sample fg proposals
-_C.MODEL.CLIP.MULTIPLY_RPN_SCORE = False # if True, during inference, multiply RPN scores with classification scores
-
-_C.MODEL.CLIP.OPENSET_TEST_NUM_CLASSES = None # if an integer, it is #all_cls in test
-_C.MODEL.CLIP.OPENSET_TEST_TEXT_EMB_PATH = None # if not None, enables the openset/zero-shot training, the category embeddings during test
-
-_C.MODEL.CLIP.CLSS_TEMP = None # if None, dot product wo normalization & temperature; if float, normalization plus temperature
-_C.MODEL.CLIP.RUN_CVPR_OVR = False # if True, train CVPR OVR model with their text embeddings
-_C.MODEL.CLIP.FOCAL_SCALED_LOSS = None # if not None (float value for gamma), apply focal loss scaling idea to standard cross-entropy loss
-
-_C.MODEL.CLIP.OFFLINE_RPN_NMS_THRESH = None # the threshold of NMS in offline RPN
-_C.MODEL.CLIP.PRETRAIN_IMG_TXT_LEVEL = True # if True, pretrain model using image-text level matching
-_C.MODEL.CLIP.PRETRAIN_ONLY_EOT = False # if True, use end-of-token emb to match region features, in image-text level matching
-_C.MODEL.CLIP.PRETRAIN_RPN_REGIONS = None # if not None, the number of RPN regions per image during pretraining
-_C.MODEL.CLIP.PRETRAIN_SAMPLE_REGIONS = None # if not None, the number of regions per image during pretraining after sampling, to avoid overfitting
-_C.MODEL.CLIP.GATHER_GPUS = False # if True, gather tensors across GPUS to increase batch size
-_C.MODEL.CLIP.GRID_REGIONS = False # if True, use grid boxes to extract grid features, instead of object proposals
-_C.MODEL.CLIP.CONCEPT_POOL_EMB = None # if not None, it provides the file path of embs of concept pool and thus enables region-concept matching
-_C.MODEL.CLIP.CONCEPT_THRES = None # if not None, the threshold to filter out the regions with low matching score with concept embs, dependent on temp (default: 0.01)
-
-_C.MODEL.CLIP.OFFLINE_RPN_LSJ_PRETRAINED = False # if True, use large-scale jittering (LSJ) pretrained RPN
-_C.MODEL.CLIP.TEACHER_RESNETS_DEPTH = 50 # the type of visual encoder of teacher model, such as ResNet 50, 101, 200 (a flag for 50x4)
-_C.MODEL.CLIP.TEACHER_CONCEPT_POOL_EMB = None # if not None, it uses the same concept embedding as student model; otherwise, uses a separate embedding of teacher model
-_C.MODEL.CLIP.TEACHER_POOLER_RESOLUTION = 14 # RoIpooling resolution of teacher model
-
-_C.MODEL.CLIP.TEXT_EMB_DIM = 1024 # the dimension of precomputed class embeddings
-
-# ---------------------------------------------------------------------------- #
-# Backbone options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.BACKBONE = CN()
-
-_C.MODEL.BACKBONE.NAME = "build_resnet_backbone"
-# Freeze the first several stages so they are not trained.
-# There are 5 stages in ResNet. The first is a convolution, and the following
-# stages are each group of residual blocks.
-_C.MODEL.BACKBONE.FREEZE_AT = 2
-
-_C.MODEL.TEXT_BACKBONE = CN()
-_C.MODEL.TEXT_BACKBONE.NAME = "build_clip_swin_text_backbone"
-
-
-# ---------------------------------------------------------------------------- #
-# FPN options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.FPN = CN()
-# Names of the input feature maps to be used by FPN
-# They must have contiguous power of 2 strides
-# e.g., ["res2", "res3", "res4", "res5"]
-_C.MODEL.FPN.IN_FEATURES = []
-_C.MODEL.FPN.OUT_CHANNELS = 256
-
-# Options: "" (no norm), "GN"
-_C.MODEL.FPN.NORM = ""
-
-# Types for fusing the FPN top-down and lateral features. Can be either "sum" or "avg"
-_C.MODEL.FPN.FUSE_TYPE = "sum"
-
-
-# ---------------------------------------------------------------------------- #
-# Proposal generator options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.PROPOSAL_GENERATOR = CN()
-# Current proposal generators include "RPN", "RRPN" and "PrecomputedProposals"
-_C.MODEL.PROPOSAL_GENERATOR.NAME = "RPN"
-# Proposal height and width both need to be greater than MIN_SIZE
-# (at the scale used during training or inference)
-_C.MODEL.PROPOSAL_GENERATOR.MIN_SIZE = 0
-
-
-# ---------------------------------------------------------------------------- #
-# Anchor generator options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ANCHOR_GENERATOR = CN()
-# The generator can be any name in the ANCHOR_GENERATOR registry
-_C.MODEL.ANCHOR_GENERATOR.NAME = "DefaultAnchorGenerator"
-# Anchor sizes (i.e. sqrt of area) in absolute pixels w.r.t. the network input.
-# Format: list[list[float]]. SIZES[i] specifies the list of sizes to use for
-# IN_FEATURES[i]; len(SIZES) must be equal to len(IN_FEATURES) or 1.
-# When len(SIZES) == 1, SIZES[0] is used for all IN_FEATURES.
-_C.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64, 128, 256, 512]]
-# Anchor aspect ratios. For each area given in `SIZES`, anchors with different aspect
-# ratios are generated by an anchor generator.
-# Format: list[list[float]]. ASPECT_RATIOS[i] specifies the list of aspect ratios (H/W)
-# to use for IN_FEATURES[i]; len(ASPECT_RATIOS) == len(IN_FEATURES) must be true,
-# or len(ASPECT_RATIOS) == 1 is true and aspect ratio list ASPECT_RATIOS[0] is used
-# for all IN_FEATURES.
-_C.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.5, 1.0, 2.0]]
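# Illustration of the SIZES/IN_FEATURES pairing described above (an example, not a
# default set in this file): an FPN setup with IN_FEATURES ["p2", "p3", "p4", "p5", "p6"]
# commonly uses SIZES = [[32], [64], [128], [256], [512]], one size list per level,
# together with a single ASPECT_RATIOS = [[0.5, 1.0, 2.0]] broadcast to every level.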
-# Anchor angles.
-# list[list[float]], the angle in degrees, for each input feature map.
-# ANGLES[i] specifies the list of angles for IN_FEATURES[i].
-_C.MODEL.ANCHOR_GENERATOR.ANGLES = [[-90, 0, 90]]
-# Relative offset between the center of the first anchor and the top-left corner of the image
-# Value has to be in [0, 1). Recommend to use 0.5, which means half stride.
-# The value is not expected to affect model accuracy.
-_C.MODEL.ANCHOR_GENERATOR.OFFSET = 0.0
-
-# ---------------------------------------------------------------------------- #
-# RPN options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RPN = CN()
-_C.MODEL.RPN.HEAD_NAME = "StandardRPNHead" # used by RPN_HEAD_REGISTRY
-
-# Names of the input feature maps to be used by RPN
-# e.g., ["p2", "p3", "p4", "p5", "p6"] for FPN
-_C.MODEL.RPN.IN_FEATURES = ["res4"]
-# Remove RPN anchors that go outside the image by BOUNDARY_THRESH pixels
-# Set to -1 or a large value, e.g. 100000, to disable pruning anchors
-_C.MODEL.RPN.BOUNDARY_THRESH = -1
-# IOU overlap ratios [BG_IOU_THRESHOLD, FG_IOU_THRESHOLD]
-# Minimum overlap required between an anchor and ground-truth box for the
-# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD
-# ==> positive RPN example: 1)
-# Maximum overlap allowed between an anchor and ground-truth box for the
-# (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD
-# ==> negative RPN example: 0)
-# Anchors with overlap in between (BG_IOU_THRESHOLD <= IoU < FG_IOU_THRESHOLD)
-# are ignored (-1)
-_C.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.7]
-_C.MODEL.RPN.IOU_LABELS = [0, -1, 1]
-# Number of regions per image used to train RPN
-_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256
-# Target fraction of foreground (positive) examples per RPN minibatch
-_C.MODEL.RPN.POSITIVE_FRACTION = 0.5
-# Options are: "smooth_l1", "giou"
-_C.MODEL.RPN.BBOX_REG_LOSS_TYPE = "smooth_l1"
-_C.MODEL.RPN.BBOX_REG_LOSS_WEIGHT = 1.0
-# Weights on (dx, dy, dw, dh) for normalizing RPN anchor regression targets
-_C.MODEL.RPN.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
-# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1.
-_C.MODEL.RPN.SMOOTH_L1_BETA = 0.0
-_C.MODEL.RPN.LOSS_WEIGHT = 1.0
-# Number of top scoring RPN proposals to keep before applying NMS
-# When FPN is used, this is *per FPN level* (not total)
-_C.MODEL.RPN.PRE_NMS_TOPK_TRAIN = 12000
-_C.MODEL.RPN.PRE_NMS_TOPK_TEST = 6000
-# Number of top scoring RPN proposals to keep after applying NMS
-# When FPN is used, this limit is applied per level and then again to the union
-# of proposals from all levels
-# NOTE: When FPN is used, the meaning of this config is different from Detectron1.
-# It means per-batch topk in Detectron1, but per-image topk here.
-# See the "find_top_rpn_proposals" function for details.
-_C.MODEL.RPN.POST_NMS_TOPK_TRAIN = 2000
-_C.MODEL.RPN.POST_NMS_TOPK_TEST = 1000
-# NMS threshold used on RPN proposals
-_C.MODEL.RPN.NMS_THRESH = 0.7
-# Set this to -1 to use the same number of output channels as input channels.
-_C.MODEL.RPN.CONV_DIMS = [-1]
-
-# ---------------------------------------------------------------------------- #
-# ROI HEADS options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_HEADS = CN()
-_C.MODEL.ROI_HEADS.NAME = "Res5ROIHeads"
-# Number of foreground classes
-_C.MODEL.ROI_HEADS.NUM_CLASSES = 80
-# Names of the input feature maps to be used by ROI heads
-# Currently all heads (box, mask, ...) use the same input feature map list
-# e.g., ["p2", "p3", "p4", "p5"] is commonly used for FPN
-_C.MODEL.ROI_HEADS.IN_FEATURES = ["res4"]
-# IOU overlap ratios [IOU_THRESHOLD]
-# Overlap threshold for an RoI to be considered background (if < IOU_THRESHOLD)
-# Overlap threshold for an RoI to be considered foreground (if >= IOU_THRESHOLD)
-_C.MODEL.ROI_HEADS.IOU_THRESHOLDS = [0.5]
-_C.MODEL.ROI_HEADS.IOU_LABELS = [0, 1]
-# RoI minibatch size *per image* (number of regions of interest [ROIs])
-# Total number of RoIs per training minibatch =
-# ROI_HEADS.BATCH_SIZE_PER_IMAGE * SOLVER.IMS_PER_BATCH
-# E.g., a common configuration is: 512 * 16 = 8192
-_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
-# Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0)
-_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25
-
-# Only used on test mode
-
-# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to
-# balance obtaining high recall with not having too many low precision
-# detections that will slow down inference post processing steps (like NMS)
-# A default threshold of 0.0 increases AP by ~0.2-0.3 but significantly slows down
-# inference.
-_C.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05
-# Overlap threshold used for non-maximum suppression (suppress boxes with
-# IoU >= this threshold)
-_C.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.5
-# If True, augment proposals with ground-truth boxes before sampling proposals to
-# train ROI heads.
-_C.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT = True
-
-# Use soft NMS instead of standard NMS if set to True
-_C.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False
-# See soft NMS paper for definition of these options
-_C.MODEL.ROI_HEADS.SOFT_NMS_METHOD = "gaussian" # "linear"
-_C.MODEL.ROI_HEADS.SOFT_NMS_SIGMA = 0.5
-# For the linear_threshold we use NMS_THRESH_TEST
-_C.MODEL.ROI_HEADS.SOFT_NMS_PRUNE = 0.001
-
-# ---------------------------------------------------------------------------- #
-# Box Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_BOX_HEAD = CN()
-# C4 doesn't use the head name option
-# Options for non-C4 models: FastRCNNConvFCHead,
-_C.MODEL.ROI_BOX_HEAD.NAME = ""
-# Options are: "smooth_l1", "giou"
-_C.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE = "smooth_l1"
-# The final scaling coefficient on the box regression loss, used to balance the magnitude of its
-# gradients with other losses in the model. See also `MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT`.
-_C.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT = 1.0
-# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets
-# These are empirically chosen to approximately lead to unit variance targets
-_C.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10.0, 10.0, 5.0, 5.0)
-# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1.
-_C.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA = 0.0
-_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0
-# Type of pooling operation applied to the incoming feature map for each RoI
-_C.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2"
-
-_C.MODEL.ROI_BOX_HEAD.NUM_FC = 0
-# Hidden layer dimension for FC layers in the RoI box head
-_C.MODEL.ROI_BOX_HEAD.FC_DIM = 1024
-_C.MODEL.ROI_BOX_HEAD.NUM_CONV = 0
-# Channel dimension for Conv layers in the RoI box head
-_C.MODEL.ROI_BOX_HEAD.CONV_DIM = 256
-# Normalization method for the convolution layers.
-# Options: "" (no norm), "GN", "SyncBN".
-_C.MODEL.ROI_BOX_HEAD.NORM = ""
-# Whether to use class-agnostic bbox regression
-_C.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG = False
-# If true, RoI heads use bounding boxes predicted by the box head rather than proposal boxes.
-_C.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES = False
-
-# ---------------------------------------------------------------------------- #
-# Cascaded Box Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_BOX_CASCADE_HEAD = CN()
-# The number of cascade stages is implicitly defined by the length of the following two configs.
-_C.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS = (
- (10.0, 10.0, 5.0, 5.0),
- (20.0, 20.0, 10.0, 10.0),
- (30.0, 30.0, 15.0, 15.0),
-)
-_C.MODEL.ROI_BOX_CASCADE_HEAD.IOUS = (0.5, 0.6, 0.7)
-
-
-# ---------------------------------------------------------------------------- #
-# Mask Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_MASK_HEAD = CN()
-_C.MODEL.ROI_MASK_HEAD.NAME = "MaskRCNNConvUpsampleHead"
-_C.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO = 0
-_C.MODEL.ROI_MASK_HEAD.NUM_CONV = 0 # The number of convs in the mask head
-_C.MODEL.ROI_MASK_HEAD.CONV_DIM = 256
-# Normalization method for the convolution layers.
-# Options: "" (no norm), "GN", "SyncBN".
-_C.MODEL.ROI_MASK_HEAD.NORM = ""
-# Whether to use class-agnostic mask prediction
-_C.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK = False
-# Type of pooling operation applied to the incoming feature map for each RoI
-_C.MODEL.ROI_MASK_HEAD.POOLER_TYPE = "ROIAlignV2"
-
-
-# ---------------------------------------------------------------------------- #
-# Keypoint Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_KEYPOINT_HEAD = CN()
-_C.MODEL.ROI_KEYPOINT_HEAD.NAME = "KRCNNConvDeconvUpsampleHead"
-_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO = 0
-_C.MODEL.ROI_KEYPOINT_HEAD.CONV_DIMS = tuple(512 for _ in range(8))
-_C.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 17 # 17 is the number of keypoints in COCO.
-
-# Images with too few (or no) keypoints are excluded from training.
-_C.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE = 1
-# Normalize by the total number of visible keypoints in the minibatch if True.
-# Otherwise, normalize by the total number of keypoints that could ever exist
-# in the minibatch.
-# The keypoint softmax loss is only calculated on visible keypoints.
-# Since the number of visible keypoints can vary significantly between
-# minibatches, this has the effect of up-weighting the importance of
-# minibatches with few visible keypoints. (Imagine the extreme case of
-# only one visible keypoint versus N: in the case of N, each one
-# contributes 1/N to the gradient compared to the single keypoint
-# determining the gradient direction). Instead, we can normalize the
-# loss by the total number of keypoints, if it were the case that all
-# keypoints were visible in a full minibatch. (Returning to the example,
-# this means that the one visible keypoint contributes as much as each
-# of the N keypoints.)
-_C.MODEL.ROI_KEYPOINT_HEAD.NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True
-# Multi-task loss weight to use for keypoints
-# Recommended values:
-# - use 1.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is True
-# - use 4.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is False
-_C.MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT = 1.0
-# Type of pooling operation applied to the incoming feature map for each RoI
-_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE = "ROIAlignV2"
-
-# ---------------------------------------------------------------------------- #
-# Semantic Segmentation Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.SEM_SEG_HEAD = CN()
-_C.MODEL.SEM_SEG_HEAD.NAME = "SemSegFPNHead"
-_C.MODEL.SEM_SEG_HEAD.IN_FEATURES = ["p2", "p3", "p4", "p5"]
-# Label in the semantic segmentation ground truth that is ignored, i.e., no loss is calculated for
-# the corresponding pixel.
-_C.MODEL.SEM_SEG_HEAD.IGNORE_VALUE = 255
-# Number of classes in the semantic segmentation head
-_C.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 54
-# Number of channels in the 3x3 convs inside semantic-FPN heads.
-_C.MODEL.SEM_SEG_HEAD.CONVS_DIM = 128
-# Outputs from semantic-FPN heads are up-scaled to the COMMON_STRIDE stride.
-_C.MODEL.SEM_SEG_HEAD.COMMON_STRIDE = 4
-# Normalization method for the convolution layers. Options: "" (no norm), "GN".
-_C.MODEL.SEM_SEG_HEAD.NORM = "GN"
-_C.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT = 1.0
-
-_C.MODEL.PANOPTIC_FPN = CN()
-# Scaling of all losses from instance detection / segmentation head.
-_C.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT = 1.0
-
-# options when combining instance & semantic segmentation outputs
-_C.MODEL.PANOPTIC_FPN.COMBINE = CN({"ENABLED": True}) # "COMBINE.ENABLED" is deprecated & not used
-_C.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH = 0.5
-_C.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT = 4096
-_C.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = 0.5
-
-
-# ---------------------------------------------------------------------------- #
-# RetinaNet Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RETINANET = CN()
-
-# This is the number of foreground classes.
-_C.MODEL.RETINANET.NUM_CLASSES = 80
-
-_C.MODEL.RETINANET.IN_FEATURES = ["p3", "p4", "p5", "p6", "p7"]
-
-# Convolutions to use in the cls and bbox tower
-# NOTE: this doesn't include the last conv for logits
-_C.MODEL.RETINANET.NUM_CONVS = 4
-
-# IoU overlap ratio [bg, fg] for labeling anchors.
-# Anchors with < bg are labeled negative (0)
-# Anchors with >= bg and < fg are ignored (-1)
-# Anchors with >= fg are labeled positive (1)
-_C.MODEL.RETINANET.IOU_THRESHOLDS = [0.4, 0.5]
-_C.MODEL.RETINANET.IOU_LABELS = [0, -1, 1]
-
-# Prior prob for rare case (i.e. foreground) at the beginning of training.
-# This is used to set the bias for the logits layer of the classifier subnet.
-# This improves training stability in the case of heavy class imbalance.
-_C.MODEL.RETINANET.PRIOR_PROB = 0.01
-
-# Inference cls score threshold, only anchors with score > INFERENCE_TH are
-# considered for inference (to improve speed)
-_C.MODEL.RETINANET.SCORE_THRESH_TEST = 0.05
-# Select topk candidates before NMS
-_C.MODEL.RETINANET.TOPK_CANDIDATES_TEST = 1000
-_C.MODEL.RETINANET.NMS_THRESH_TEST = 0.5
-
-# Weights on (dx, dy, dw, dh) for normalizing Retinanet anchor regression targets
-_C.MODEL.RETINANET.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
-
-# Loss parameters
-_C.MODEL.RETINANET.FOCAL_LOSS_GAMMA = 2.0
-_C.MODEL.RETINANET.FOCAL_LOSS_ALPHA = 0.25
-_C.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA = 0.1
-# Options are: "smooth_l1", "giou"
-_C.MODEL.RETINANET.BBOX_REG_LOSS_TYPE = "smooth_l1"
-
-# One of BN, SyncBN, FrozenBN, GN
-# Only supports GN until unshared norm is implemented
-_C.MODEL.RETINANET.NORM = ""
-
-
-# ---------------------------------------------------------------------------- #
-# ResNe[X]t options (ResNets = {ResNet, ResNeXt})
-# Note that parts of a resnet may be used for both the backbone and the head
-# These options apply to both
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RESNETS = CN()
-
-_C.MODEL.RESNETS.DEPTH = 50
-_C.MODEL.RESNETS.OUT_FEATURES = ["res4"] # res4 for C4 backbone, res2..5 for FPN backbone
-
-# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt
-_C.MODEL.RESNETS.NUM_GROUPS = 1
-
-# Options: FrozenBN, GN, "SyncBN", "BN"
-_C.MODEL.RESNETS.NORM = "FrozenBN"
-
-# Baseline width of each group.
-# Scaling this parameter will scale the width of all bottleneck layers.
-_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64
-
-# Place the stride 2 conv on the 1x1 filter
-# Use True only for the original MSRA ResNet; use False for C2 and Torch models
-_C.MODEL.RESNETS.STRIDE_IN_1X1 = True
-
-# Apply dilation in stage "res5"
-_C.MODEL.RESNETS.RES5_DILATION = 1
-
-# Output width of res2. Scaling this parameter will scale the width of all 1x1 convs in ResNet
-# For R18 and R34, this needs to be set to 64
-_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256
-_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64
-
-# Apply Deformable Convolution in stages
-# Specify if apply deform_conv on Res2, Res3, Res4, Res5
-_C.MODEL.RESNETS.DEFORM_ON_PER_STAGE = [False, False, False, False]
-# Use True to use modulated deform_conv (DeformableV2, https://arxiv.org/abs/1811.11168);
-# Use False for DeformableV1.
-_C.MODEL.RESNETS.DEFORM_MODULATED = False
-# Number of groups in deformable conv.
-_C.MODEL.RESNETS.DEFORM_NUM_GROUPS = 1
-
-
-# ---------------------------------------------------------------------------- #
-# Swin options
-# These options configure the Swin-based vision encoder and the CLIP-style text
-# encoder defined under MODEL.SPEC below.
-# ---------------------------------------------------------------------------- #
-_C.MODEL.SPEC = CN()
-_C.MODEL.SPEC.EMBED_DIM = 512
-
-_C.MODEL.SPEC.VISION = CN()
-_C.MODEL.SPEC.VISION.PATCH_SIZE = 4
-_C.MODEL.SPEC.VISION.IN_CHANS = 3
-_C.MODEL.SPEC.VISION.EMBED_DIM = 96
-_C.MODEL.SPEC.VISION.DEPTHS = [2, 2, 6, 2]
-_C.MODEL.SPEC.VISION.NUM_HEADS = [3, 6, 12, 24]
-_C.MODEL.SPEC.VISION.WINDOW_SIZE = 7
-_C.MODEL.SPEC.VISION.MLP_RATIO = 4.
-_C.MODEL.SPEC.VISION.DROP_RATE = .0
-_C.MODEL.SPEC.VISION.ATTN_DROP_RATE = .0
-_C.MODEL.SPEC.VISION.DROP_PATH_RATE = .0
-_C.MODEL.SPEC.VISION.QKV_BIAS = True
-_C.MODEL.SPEC.VISION.QK_SCALE = False
-_C.MODEL.SPEC.VISION.APE = False
-_C.MODEL.SPEC.VISION.PATCH_NORM = True
-_C.MODEL.SPEC.VISION.OUT_FEATURES = ["stage2", "stage3", "stage4", "stage5"]
-
-_C.MODEL.SPEC.TEXT = CN()
-_C.MODEL.SPEC.TEXT.NAME = 'transformer'
-_C.MODEL.SPEC.TEXT.LOAD_PRETRAINED = False
-_C.MODEL.SPEC.TEXT.PRETRAINED = ''
-_C.MODEL.SPEC.TEXT.TOKENIZER = 'clip'
-_C.MODEL.SPEC.TEXT.CONTEXT_LENGTH = 77
-_C.MODEL.SPEC.TEXT.WIDTH = 512
-_C.MODEL.SPEC.TEXT.HEADS = 8
-_C.MODEL.SPEC.TEXT.LAYERS = 12
-_C.MODEL.SPEC.TEXT.AUTOGRESSIVE = True
-
-# ---------------------------------------------------------------------------- #
-# Solver
-# ---------------------------------------------------------------------------- #
-_C.SOLVER = CN()
-
-# See detectron2/solver/build.py for LR scheduler options
-_C.SOLVER.LR_SCHEDULER_NAME = "WarmupMultiStepLR"
-
-_C.SOLVER.MAX_ITER = 40000
-
-_C.SOLVER.BASE_LR = 0.001
-
-_C.SOLVER.MOMENTUM = 0.9
-
-_C.SOLVER.NESTEROV = False
-
-_C.SOLVER.WEIGHT_DECAY = 0.0001
-# The weight decay that's applied to parameters of normalization layers
-# (typically the affine transformation)
-_C.SOLVER.WEIGHT_DECAY_NORM = 0.0
-
-_C.SOLVER.GAMMA = 0.1
-# The iteration number to decrease learning rate by GAMMA.
-_C.SOLVER.STEPS = (30000,)
-
-_C.SOLVER.WARMUP_FACTOR = 1.0 / 1000
-_C.SOLVER.WARMUP_ITERS = 1000
-_C.SOLVER.WARMUP_METHOD = "linear"
-
-# Save a checkpoint after every this number of iterations
-_C.SOLVER.CHECKPOINT_PERIOD = 5000
-
-# Number of images per batch across all machines. This is also the number
-# of training images per step (i.e. per iteration). If we use 16 GPUs
-# and IMS_PER_BATCH = 32, each GPU will see 2 images per batch.
-# May be adjusted automatically if REFERENCE_WORLD_SIZE is set.
-_C.SOLVER.IMS_PER_BATCH = 16
-
-# The reference number of workers (GPUs) this config is meant to train with.
-# It takes no effect when set to 0.
-# With a non-zero value, it will be used by DefaultTrainer to compute a desired
-# per-worker batch size, and then scale the other related configs (total batch size,
-# learning rate, etc) to match the per-worker batch size.
-# See documentation of `DefaultTrainer.auto_scale_workers` for details:
-_C.SOLVER.REFERENCE_WORLD_SIZE = 0
-
-# Detectron v1 (and previous detection code) used a 2x higher LR and 0 WD for
-# biases. This is not useful (at least for recent models). You should avoid
-# changing these and they exist only to reproduce Detectron v1 training if
-# desired.
-_C.SOLVER.BIAS_LR_FACTOR = 1.0
-_C.SOLVER.WEIGHT_DECAY_BIAS = _C.SOLVER.WEIGHT_DECAY
-
-# Gradient clipping
-_C.SOLVER.CLIP_GRADIENTS = CN({"ENABLED": False})
-# Type of gradient clipping, currently 2 values are supported:
-# - "value": the absolute values of elements of each gradients are clipped
-# - "norm": the norm of the gradient for each parameter is clipped thus
-# affecting all elements in the parameter
-_C.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value"
-# Maximum absolute value used for clipping gradients
-_C.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0
-# Floating point number p for L-p norm to be used with the "norm"
-# gradient clipping type; for L-inf, please specify .inf
-_C.SOLVER.CLIP_GRADIENTS.NORM_TYPE = 2.0
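# For reference (an assumption about how the trainer consumes these keys, not something
# stated in this file): "value" clipping typically maps onto
# torch.nn.utils.clip_grad_value_(params, clip_value=CLIP_VALUE), while "norm" clipping
# maps onto torch.nn.utils.clip_grad_norm_(params, max_norm=CLIP_VALUE, norm_type=NORM_TYPE).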
-
-# Enable automatic mixed precision for training
-# Note that this does not change model's inference behavior.
-# To use AMP in inference, run inference under autocast()
-_C.SOLVER.AMP = CN({"ENABLED": False})
-
-# ---------------------------------------------------------------------------- #
-# Specific test options
-# ---------------------------------------------------------------------------- #
-_C.TEST = CN()
-# For end-to-end tests to verify the expected accuracy.
-# Each item is [task, metric, value, tolerance]
-# e.g.: [['bbox', 'AP', 38.5, 0.2]]
-_C.TEST.EXPECTED_RESULTS = []
-# The period (in terms of steps) to evaluate the model during training.
-# Set to 0 to disable.
-_C.TEST.EVAL_PERIOD = 0
-# The sigmas used to calculate keypoint OKS. See http://cocodataset.org/#keypoints-eval
-# When empty, it will use the defaults in COCO.
-# Otherwise it should be a list[float] with the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS.
-_C.TEST.KEYPOINT_OKS_SIGMAS = []
-# Maximum number of detections to return per image during inference (100 is
-# based on the limit established for the COCO dataset).
-_C.TEST.DETECTIONS_PER_IMAGE = 100
-
-_C.TEST.AUG = CN({"ENABLED": False})
-_C.TEST.AUG.MIN_SIZES = (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
-_C.TEST.AUG.MAX_SIZE = 4000
-_C.TEST.AUG.FLIP = True
-
-_C.TEST.PRECISE_BN = CN({"ENABLED": False})
-_C.TEST.PRECISE_BN.NUM_ITER = 200
-
-# ---------------------------------------------------------------------------- #
-# Misc options
-# ---------------------------------------------------------------------------- #
-# Directory where output files are written
-_C.OUTPUT_DIR = "./output"
-# Set seed to negative to fully randomize everything.
-# Set seed to positive to use a fixed seed. Note that a fixed seed increases
-# reproducibility but does not guarantee fully deterministic behavior.
-# Disabling all parallelism further increases reproducibility.
-_C.SEED = -1
-# Benchmark different cudnn algorithms.
-# If input images have very different sizes, this option will have large overhead
-# for about 10k iterations. It usually hurts total time, but can benefit for certain models.
-# If input images have the same or similar sizes, benchmark is often helpful.
-_C.CUDNN_BENCHMARK = False
-# The period (in terms of steps) for minibatch visualization at train time.
-# Set to 0 to disable.
-_C.VIS_PERIOD = 0
-
-# global config is for quick hack purposes.
-# You can set them in command line or config files,
-# and access it with:
-#
-# from detectron2.config import global_cfg
-# print(global_cfg.HACK)
-#
-# Do not commit any configs into it.
-_C.GLOBAL = CN()
-_C.GLOBAL.HACK = 1.0
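The defaults above follow the yacs/detectron2 CfgNode pattern, so downstream code normally clones the global `_C`, overrides a handful of keys, and freezes the result before training or inference. A minimal sketch of that workflow, assuming a detectron2-style `get_cfg()` helper that returns `_C.clone()` (the helper and its import path are an assumption, not shown in this diff):

# sketch_config_usage.py -- hypothetical helper script, not part of the deleted repo
from detectron2.config import get_cfg  # assumed to return a clone of the _C defaults above

cfg = get_cfg()
cfg.merge_from_list([                      # override individual keys, e.g. from a CLI
    "SOLVER.BASE_LR", 0.0005,
    "MODEL.CLIP.CROP_REGION_TYPE", "RPN",
])
cfg.freeze()                               # make the config immutable before use
print(cfg.SOLVER.BASE_LR, cfg.MODEL.CLIP.CROP_REGION_TYPE)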
diff --git a/spaces/CVPR/regionclip-demo/detectron2/export/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/export/__init__.py
deleted file mode 100644
index 78c27d64fa42760eeacd14d241cf28d58e3da490..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/export/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from .api import *
-from .flatten import TracingAdapter
-from .torchscript import scripting_with_instances, dump_torchscript_IR
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/Cletrason/Cletrason-toad-mario-movie/README.md b/spaces/Cletrason/Cletrason-toad-mario-movie/README.md
deleted file mode 100644
index 4740d8c7746c55ea060303bbbe0335d4e1a6791d..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/Cletrason-toad-mario-movie/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Cletrason Toad Mario Movie
-emoji: 🐠
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CobaltZvc/Hyper_Bot/index.html b/spaces/CobaltZvc/Hyper_Bot/index.html
deleted file mode 100644
index 9b813bc6841faeb8d9baa1a38e5368f697753f9d..0000000000000000000000000000000000000000
--- a/spaces/CobaltZvc/Hyper_Bot/index.html
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-
- Example
-
-
-
-
-
-
-
-
diff --git a/spaces/CompVis/stable-diffusion-license/index.html b/spaces/CompVis/stable-diffusion-license/index.html
deleted file mode 100644
index 5dacb08ef3076530e5c3f13144d2668b22527d05..0000000000000000000000000000000000000000
--- a/spaces/CompVis/stable-diffusion-license/index.html
+++ /dev/null
@@ -1,242 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
CreativeML Open RAIL-M
dated August 22, 2022
Section I: PREAMBLE
Multimodal generative models are being widely adopted and used, and have
the potential to transform the way artists, among other individuals,
conceive and benefit from AI or ML technologies as a tool for content
creation.
Notwithstanding the current and potential benefits that these artifacts
can bring to society at large, there are also concerns about potential
misuses of them, either due to their technical limitations or ethical
considerations.
In short, this license strives for both the open and responsible
downstream use of the accompanying model. When it comes to the open
character, we took inspiration from open source permissive licenses
regarding the grant of IP rights. Referring to the downstream responsible
use, we added use-based restrictions not permitting the use of the Model
in very specific scenarios, in order for the licensor to be able to
enforce the license in case potential misuses of the Model may occur. At
the same time, we strive to promote open and responsible research on
generative models for art and content generation.
Even though downstream derivative versions of the model could be released
under different licensing terms, the latter will always have to include -
at minimum - the same use-based restrictions as the ones in the original
license (this license). We believe in the intersection between open and
responsible AI development; thus, this License aims to strike a balance
between both in order to enable responsible open-science in the field of
AI.
This License governs the use of the model (and its derivatives) and is
informed by the model card associated with the model.
NOW THEREFORE, You and Licensor agree as follows:
1. Definitions
- "License" means the terms and conditions for use, reproduction, and
Distribution as defined in this document.
- "Data" means a collection of information and/or content extracted from
the dataset used with the Model, including to train, pretrain, or
otherwise evaluate the Model. The Data is not licensed under this
License.
- "Output" means the results of operating a Model as embodied in
informational content resulting therefrom.
- "Model" means any accompanying machine-learning based assemblies
(including checkpoints), consisting of learnt weights, parameters
(including optimizer states), corresponding to the model architecture as
-
embodied in the Complementary Material, that have been trained or tuned,
in whole or in part on the Data, using the Complementary Material.
- "Derivatives of the Model" means all modifications to the Model, works
based on the Model, or any other model which is created or initialized by
transfer of patterns of the weights, parameters, activations or output of
the Model, to the other model, in order to cause the other model to
perform similarly to the Model, including - but not limited to -
distillation methods entailing the use of intermediate data
representations or methods based on the generation of synthetic data by
the Model for training the other model.
- "Complementary Material" means the accompanying source code and scripts
used to define, run, load, benchmark or evaluate the Model, and used to
prepare data for training or evaluation, if any. This includes any
accompanying documentation, tutorials, examples, etc, if any.
- "Distribution" means any transmission, reproduction, publication or
other sharing of the Model or Derivatives of the Model to a third party,
including providing the Model as a hosted service made available by
electronic or other remote means - e.g. API-based or web access.
- "Licensor" means the copyright owner or entity authorized by the
copyright owner that is granting the License, including the persons or
entities that may have rights in the Model and/or distributing the Model.
- "You" (or "Your") means an individual or Legal Entity exercising
permissions granted by this License and/or making use of the Model for
whichever purpose and in any field of use, including usage of the Model
in an end-use application - e.g. chatbot, translator, image generator.
- "Third Parties" means individuals or legal entities that are not under
common control with Licensor or You.
- "Contribution" means any work of authorship, including the original
version of the Model and any modifications or additions to that Model or
Derivatives of the Model thereof, that is intentionally submitted to
Licensor for inclusion in the Model by the copyright owner or by an
individual or Legal Entity authorized to submit on behalf of the
copyright owner. For the purposes of this definition, "submitted" means
any form of electronic, verbal, or written communication sent to the
Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Model, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
- "Contributor" means Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Model.
Section II: INTELLECTUAL PROPERTY RIGHTS
Both copyright and patent grants apply to the Model, Derivatives of the
Model and Complementary Material. The Model and Derivatives of the Model
are subject to additional terms as described in Section III.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright
license to reproduce, prepare, publicly display, publicly perform,
-
sublicense, and distribute the Complementary Material, the Model, and
Derivatives of the Model.
3. Grant of Patent License. Subject to the terms and conditions of this
License and where and as applicable, each Contributor hereby grants to
You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this paragraph) patent license to make,
have made, use, offer to sell, sell, import, and otherwise transfer the
Model and the Complementary Material, where such license applies only to
those patent claims licensable by such Contributor that are necessarily
infringed by their Contribution(s) alone or by combination of their
Contribution(s) with the Model to which such Contribution(s) was
submitted. If You institute patent litigation against any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the
Model and/or Complementary Material or a Contribution incorporated within
the Model and/or Complementary Material constitutes direct or
contributory patent infringement, then any patent licenses granted to You
under this License for the Model and/or Work shall terminate as of the
date such litigation is asserted or filed.
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
4. Distribution and Redistribution. You may host for Third Party remote
access purposes (e.g. software-as-a-service), reproduce and distribute
copies of the Model or Derivatives of the Model thereof in any medium,
with or without modifications, provided that You meet the following
conditions:
Use-based restrictions as referenced in paragraph 5 MUST be included as
an enforceable provision by You in any type of legal agreement (e.g. a
license) governing the use and/or distribution of the Model or
Derivatives of the Model, and You shall give notice to subsequent users
You Distribute to, that the Model or Derivatives of the Model are subject
to paragraph 5. This provision does not apply to the use of Complementary
Material.
You must give any Third Party recipients of the Model or Derivatives of
the Model a copy of this License;
You must cause any modified files to carry prominent notices stating that
You changed the files;
You must retain all copyright, patent, trademark, and attribution notices
excluding those notices that do not pertain to any part of the Model,
Derivatives of the Model.
You may add Your own copyright statement to Your modifications and may
provide additional or different license terms and conditions - respecting
paragraph 4.a. - for use, reproduction, or Distribution of Your
modifications, or for any such Derivatives of the Model as a whole,
provided Your use, reproduction, and Distribution of the Model otherwise
complies with the conditions stated in this License.
5. Use-based restrictions. The restrictions set forth in Attachment A are
considered Use-based restrictions. Therefore You cannot use the Model and
the Derivatives of the Model for the specified restricted uses. You may
use the Model subject to this License, including only for lawful purposes
and in accordance with the License. Use may include creating any content
with, finetuning, updating, running, training, evaluating and/or
reparametrizing the Model. You shall require all of Your users who use
-
the Model or a Derivative of the Model to comply with the terms of this
paragraph (paragraph 5).
6. The Output You Generate. Except as set forth herein, Licensor claims
no rights in the Output You generate using the Model. You are accountable
for the Output you generate and its subsequent uses. No use of the output
can contravene any provision as stated in the License.
Section IV: OTHER PROVISIONS
7. Updates and Runtime Restrictions. To the maximum extent permitted by
law, Licensor reserves the right to restrict (remotely or otherwise)
usage of the Model in violation of this License, update the Model through
electronic means, or modify the Output of the Model based on updates. You
shall undertake reasonable efforts to use the latest version of the
Model.
8. Trademarks and related. Nothing in this License permits You to make
use of Licensors’ trademarks, trade names, logos or to otherwise suggest
endorsement or misrepresent the relationship between the parties; and any
rights not expressly granted herein are reserved by the Licensors.
9. Disclaimer of Warranty. Unless required by applicable law or agreed to
in writing, Licensor provides the Model and the Complementary Material
(and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
You are solely responsible for determining the appropriateness of using
or redistributing the Model, Derivatives of the Model, and the
Complementary Material and assume any risks associated with Your exercise
of permissions under this License.
10. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise, unless
required by applicable law (such as deliberate and grossly negligent
acts) or agreed to in writing, shall any Contributor be liable to You for
damages, including any direct, indirect, special, incidental, or
consequential damages of any character arising as a result of this
License or out of the use or inability to use the Model and the
Complementary Material (including but not limited to damages for loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor has been
advised of the possibility of such damages.
11. Accepting Warranty or Additional Liability. While redistributing the
Model, Derivatives of the Model and the Complementary Material thereof,
You may choose to offer, and charge a fee for, acceptance of support,
warranty, indemnity, or other liability obligations and/or rights
consistent with this License. However, in accepting such obligations, You
may act only on Your own behalf and on Your sole responsibility, not on
behalf of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability incurred by,
or claims asserted against, such Contributor by reason of your accepting
any such warranty or additional liability.
12. If any provision of this License is held to be invalid, illegal or
unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
-
END OF TERMS AND CONDITIONS
Attachment A
Use Restrictions
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national, federal, state, local
or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm
minors in any way;
- To generate or disseminate verifiably false information and/or content
with the purpose of harming others;
- To generate or disseminate personal identifiable information that can
be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an
individual’s legal rights or otherwise creates or modifies a binding,
enforceable obligation;
- For any use intended to or which has the effect of discriminating
against or harming individuals or groups based on online or offline
social behavior or known or predicted personal or personality
characteristics;
- To exploit any of the vulnerabilities of a specific group of persons
based on their age, social, physical or mental characteristics, in order
to materially distort the behavior of a person pertaining to that group
in a manner that causes or is likely to cause that person or another
person physical or psychological harm;
- For any use intended to or which has the effect of discriminating
against individuals or groups based on legally protected characteristics
or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for
administration of justice, law enforcement, immigration or asylum
processes, such as predicting an individual will commit fraud/crime
commitment (e.g. by text profiling, drawing causal relationships between
assertions made in documents, indiscriminate and arbitrarily-targeted
use).
-
-
-
-
-
-
diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/models/__init__.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/models/__init__.py
deleted file mode 100644
index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000
--- a/spaces/Cropinky/hana_hanak_houses/realesrgan/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import importlib
-from basicsr.utils import scandir
-from os import path as osp
-
-# automatically scan and import model modules for registry
-# scan all the files that end with '_model.py' under the model folder
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
-# import all the model modules
-_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
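The scan-and-import above only works because each `*_model.py` registers its class as an import side effect. A minimal sketch of such a module, assuming the BasicSR registry API (`MODEL_REGISTRY`) that Real-ESRGAN builds on; the module and class names here are hypothetical:

# realesrgan/models/toy_model.py -- hypothetical example module picked up by the scan
from basicsr.models.sr_model import SRModel
from basicsr.utils.registry import MODEL_REGISTRY


@MODEL_REGISTRY.register()  # registration happens when the module is imported
class ToySRModel(SRModel):
    """Placeholder model that simply reuses the stock SRModel behaviour."""
    pass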
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/path/parser.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/path/parser.py
deleted file mode 100644
index 70ae4c17eac8bb1e0deb7f8584e979be65dfd09b..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/path/parser.py
+++ /dev/null
@@ -1,321 +0,0 @@
-# SVG Path specification parser.
-# This is an adaptation from 'svg.path' by Lennart Regebro (@regebro),
-# modified so that the parser takes a FontTools Pen object instead of
-# returning a list of svg.path Path objects.
-# The original code can be found at:
-# https://github.com/regebro/svg.path/blob/4f9b6e3/src/svg/path/parser.py
-# Copyright (c) 2013-2014 Lennart Regebro
-# License: MIT
-
-from .arc import EllipticalArc
-import re
-
-
-COMMANDS = set("MmZzLlHhVvCcSsQqTtAa")
-ARC_COMMANDS = set("Aa")
-UPPERCASE = set("MZLHVCSQTA")
-
-COMMAND_RE = re.compile("([MmZzLlHhVvCcSsQqTtAa])")
-
-# https://www.w3.org/TR/css-syntax-3/#number-token-diagram
-# but -6.e-5 will be tokenized as "-6" then "-5" and confuse parsing
-FLOAT_RE = re.compile(
- r"[-+]?" # optional sign
- r"(?:"
- r"(?:0|[1-9][0-9]*)(?:\.[0-9]+)?(?:[eE][-+]?[0-9]+)?" # int/float
- r"|"
- r"(?:\.[0-9]+(?:[eE][-+]?[0-9]+)?)" # float with leading dot (e.g. '.42')
- r")"
-)
-BOOL_RE = re.compile("^[01]")
-SEPARATOR_RE = re.compile(f"[, \t]")
-
-
-def _tokenize_path(pathdef):
- arc_cmd = None
- for x in COMMAND_RE.split(pathdef):
- if x in COMMANDS:
- arc_cmd = x if x in ARC_COMMANDS else None
- yield x
- continue
-
- if arc_cmd:
- try:
- yield from _tokenize_arc_arguments(x)
- except ValueError as e:
- raise ValueError(f"Invalid arc command: '{arc_cmd}{x}'") from e
- else:
- for token in FLOAT_RE.findall(x):
- yield token
-
-
-ARC_ARGUMENT_TYPES = (
- ("rx", FLOAT_RE),
- ("ry", FLOAT_RE),
- ("x-axis-rotation", FLOAT_RE),
- ("large-arc-flag", BOOL_RE),
- ("sweep-flag", BOOL_RE),
- ("x", FLOAT_RE),
- ("y", FLOAT_RE),
-)
-
-
-def _tokenize_arc_arguments(arcdef):
- raw_args = [s for s in SEPARATOR_RE.split(arcdef) if s]
- if not raw_args:
- raise ValueError(f"Not enough arguments: '{arcdef}'")
- raw_args.reverse()
-
- i = 0
- while raw_args:
- arg = raw_args.pop()
-
- name, pattern = ARC_ARGUMENT_TYPES[i]
- match = pattern.search(arg)
- if not match:
- raise ValueError(f"Invalid argument for '{name}' parameter: {arg!r}")
-
- j, k = match.span()
- yield arg[j:k]
- arg = arg[k:]
-
- if arg:
- raw_args.append(arg)
-
- # wrap around every 7 consecutive arguments
- if i == 6:
- i = 0
- else:
- i += 1
-
- if i != 0:
- raise ValueError(f"Not enough arguments: '{arcdef}'")
-
-
-def parse_path(pathdef, pen, current_pos=(0, 0), arc_class=EllipticalArc):
- """Parse SVG path definition (i.e. "d" attribute of elements)
- and call a 'pen' object's moveTo, lineTo, curveTo, qCurveTo and closePath
- methods.
-
- If 'current_pos' (2-float tuple) is provided, the initial moveTo will
- be relative to that instead being absolute.
-
- If the pen has an "arcTo" method, it is called with the original values
- of the elliptical arc curve commands:
-
- pen.arcTo(rx, ry, rotation, arc_large, arc_sweep, (x, y))
-
- Otherwise, the arcs are approximated by series of cubic Bezier segments
- ("curveTo"), one every 90 degrees.
- """
- # In the SVG specs, initial movetos are absolute, even if
- # specified as 'm'. This is the default behavior here as well.
- # But if you pass in a current_pos variable, the initial moveto
- # will be relative to that current_pos. This is useful.
- current_pos = complex(*current_pos)
-
- elements = list(_tokenize_path(pathdef))
- # Reverse for easy use of .pop()
- elements.reverse()
-
- start_pos = None
- command = None
- last_control = None
-
- have_arcTo = hasattr(pen, "arcTo")
-
- while elements:
-
- if elements[-1] in COMMANDS:
- # New command.
- last_command = command # Used by S and T
- command = elements.pop()
- absolute = command in UPPERCASE
- command = command.upper()
- else:
- # If this element starts with numbers, it is an implicit command
- # and we don't change the command. Check that it's allowed:
- if command is None:
- raise ValueError(
- "Unallowed implicit command in %s, position %s"
- % (pathdef, len(pathdef.split()) - len(elements))
- )
- last_command = command # Used by S and T
-
- if command == "M":
- # Moveto command.
- x = elements.pop()
- y = elements.pop()
- pos = float(x) + float(y) * 1j
- if absolute:
- current_pos = pos
- else:
- current_pos += pos
-
- # M is not preceded by Z; it's an open subpath
- if start_pos is not None:
- pen.endPath()
-
- pen.moveTo((current_pos.real, current_pos.imag))
-
- # when M is called, reset start_pos
- # This behavior of Z is defined in svg spec:
- # http://www.w3.org/TR/SVG/paths.html#PathDataClosePathCommand
- start_pos = current_pos
-
- # Implicit moveto commands are treated as lineto commands.
- # So we set command to lineto here, in case there are
- # further implicit commands after this moveto.
- command = "L"
-
- elif command == "Z":
- # Close path
- if current_pos != start_pos:
- pen.lineTo((start_pos.real, start_pos.imag))
- pen.closePath()
- current_pos = start_pos
- start_pos = None
- command = None # You can't have implicit commands after closing.
-
- elif command == "L":
- x = elements.pop()
- y = elements.pop()
- pos = float(x) + float(y) * 1j
- if not absolute:
- pos += current_pos
- pen.lineTo((pos.real, pos.imag))
- current_pos = pos
-
- elif command == "H":
- x = elements.pop()
- pos = float(x) + current_pos.imag * 1j
- if not absolute:
- pos += current_pos.real
- pen.lineTo((pos.real, pos.imag))
- current_pos = pos
-
- elif command == "V":
- y = elements.pop()
- pos = current_pos.real + float(y) * 1j
- if not absolute:
- pos += current_pos.imag * 1j
- pen.lineTo((pos.real, pos.imag))
- current_pos = pos
-
- elif command == "C":
- control1 = float(elements.pop()) + float(elements.pop()) * 1j
- control2 = float(elements.pop()) + float(elements.pop()) * 1j
- end = float(elements.pop()) + float(elements.pop()) * 1j
-
- if not absolute:
- control1 += current_pos
- control2 += current_pos
- end += current_pos
-
- pen.curveTo(
- (control1.real, control1.imag),
- (control2.real, control2.imag),
- (end.real, end.imag),
- )
- current_pos = end
- last_control = control2
-
- elif command == "S":
- # Smooth curve. First control point is the "reflection" of
- # the second control point in the previous path.
-
- if last_command not in "CS":
- # If there is no previous command or if the previous command
- # was not an C, c, S or s, assume the first control point is
- # coincident with the current point.
- control1 = current_pos
- else:
- # The first control point is assumed to be the reflection of
- # the second control point on the previous command relative
- # to the current point.
- control1 = current_pos + current_pos - last_control
-
- control2 = float(elements.pop()) + float(elements.pop()) * 1j
- end = float(elements.pop()) + float(elements.pop()) * 1j
-
- if not absolute:
- control2 += current_pos
- end += current_pos
-
- pen.curveTo(
- (control1.real, control1.imag),
- (control2.real, control2.imag),
- (end.real, end.imag),
- )
- current_pos = end
- last_control = control2
-
- elif command == "Q":
- control = float(elements.pop()) + float(elements.pop()) * 1j
- end = float(elements.pop()) + float(elements.pop()) * 1j
-
- if not absolute:
- control += current_pos
- end += current_pos
-
- pen.qCurveTo((control.real, control.imag), (end.real, end.imag))
- current_pos = end
- last_control = control
-
- elif command == "T":
- # Smooth curve. Control point is the "reflection" of
- # the second control point in the previous path.
-
- if last_command not in "QT":
- # If there is no previous command or if the previous command
- # was not an Q, q, T or t, assume the first control point is
- # coincident with the current point.
- control = current_pos
- else:
- # The control point is assumed to be the reflection of
- # the control point on the previous command relative
- # to the current point.
- control = current_pos + current_pos - last_control
-
- end = float(elements.pop()) + float(elements.pop()) * 1j
-
- if not absolute:
- end += current_pos
-
- pen.qCurveTo((control.real, control.imag), (end.real, end.imag))
- current_pos = end
- last_control = control
-
- elif command == "A":
- rx = abs(float(elements.pop()))
- ry = abs(float(elements.pop()))
- rotation = float(elements.pop())
- arc_large = bool(int(elements.pop()))
- arc_sweep = bool(int(elements.pop()))
- end = float(elements.pop()) + float(elements.pop()) * 1j
-
- if not absolute:
- end += current_pos
-
- # if the pen supports arcs, pass the values unchanged, otherwise
- # approximate the arc with a series of cubic bezier curves
- if have_arcTo:
- pen.arcTo(
- rx,
- ry,
- rotation,
- arc_large,
- arc_sweep,
- (end.real, end.imag),
- )
- else:
- arc = arc_class(
- current_pos, rx, ry, rotation, arc_large, arc_sweep, end
- )
- arc.draw(pen)
-
- current_pos = end
-
- # no final Z command, it's an open path
- if start_pos is not None:
- pen.endPath()
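A small usage sketch for the parser above: feeding a path string and a fontTools RecordingPen (a real pen class in fontTools.pens.recordingPen) records the pen calls the parser emits, which makes the behaviour easy to inspect.

# usage sketch, assuming fontTools is installed and exposes parse_path at this path
from fontTools.pens.recordingPen import RecordingPen
from fontTools.svgLib.path.parser import parse_path

pen = RecordingPen()
parse_path("M10 10 L100 10 Z", pen)
print(pen.value)
# [('moveTo', ((10.0, 10.0),)), ('lineTo', ((100.0, 10.0),)),
#  ('lineTo', ((10.0, 10.0),)), ('closePath', ())]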
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_B_D_T_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_B_D_T_.py
deleted file mode 100644
index e9e2d5fde9cc5a72a17105d40e5c1c95ff09d824..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_B_D_T_.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Matt Fontaine
-
-
-from fontTools.misc.textTools import bytesjoin
-from fontTools.misc import sstruct
-from . import E_B_D_T_
-from .BitmapGlyphMetrics import (
- BigGlyphMetrics,
- bigGlyphMetricsFormat,
- SmallGlyphMetrics,
- smallGlyphMetricsFormat,
-)
-from .E_B_D_T_ import (
- BitmapGlyph,
- BitmapPlusSmallMetricsMixin,
- BitmapPlusBigMetricsMixin,
-)
-import struct
-
-
-class table_C_B_D_T_(E_B_D_T_.table_E_B_D_T_):
-
- # Change the data locator table being referenced.
- locatorName = "CBLC"
-
- # Modify the format class accessor for color bitmap use.
- def getImageFormatClass(self, imageFormat):
- try:
- return E_B_D_T_.table_E_B_D_T_.getImageFormatClass(self, imageFormat)
- except KeyError:
- return cbdt_bitmap_classes[imageFormat]
-
-
-# Helper method for removing export features not supported by color bitmaps.
-# Write data in the parent class will default to raw if an option is unsupported.
-def _removeUnsupportedForColor(dataFunctions):
- dataFunctions = dict(dataFunctions)
- del dataFunctions["row"]
- return dataFunctions
-
-
-class ColorBitmapGlyph(BitmapGlyph):
-
- fileExtension = ".png"
- xmlDataFunctions = _removeUnsupportedForColor(BitmapGlyph.xmlDataFunctions)
-
-
-class cbdt_bitmap_format_17(BitmapPlusSmallMetricsMixin, ColorBitmapGlyph):
- def decompile(self):
- self.metrics = SmallGlyphMetrics()
- dummy, data = sstruct.unpack2(smallGlyphMetricsFormat, self.data, self.metrics)
- (dataLen,) = struct.unpack(">L", data[:4])
- data = data[4:]
-
- # For the image data cut it to the size specified by dataLen.
- assert dataLen <= len(data), "Data overun in format 17"
- self.imageData = data[:dataLen]
-
- def compile(self, ttFont):
- dataList = []
- dataList.append(sstruct.pack(smallGlyphMetricsFormat, self.metrics))
- dataList.append(struct.pack(">L", len(self.imageData)))
- dataList.append(self.imageData)
- return bytesjoin(dataList)
-
-
-class cbdt_bitmap_format_18(BitmapPlusBigMetricsMixin, ColorBitmapGlyph):
- def decompile(self):
- self.metrics = BigGlyphMetrics()
- dummy, data = sstruct.unpack2(bigGlyphMetricsFormat, self.data, self.metrics)
- (dataLen,) = struct.unpack(">L", data[:4])
- data = data[4:]
-
- # For the image data cut it to the size specified by dataLen.
- assert dataLen <= len(data), "Data overun in format 18"
- self.imageData = data[:dataLen]
-
- def compile(self, ttFont):
- dataList = []
- dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics))
- dataList.append(struct.pack(">L", len(self.imageData)))
- dataList.append(self.imageData)
- return bytesjoin(dataList)
-
-
-class cbdt_bitmap_format_19(ColorBitmapGlyph):
- def decompile(self):
- (dataLen,) = struct.unpack(">L", self.data[:4])
- data = self.data[4:]
-
-        assert dataLen <= len(data), "Data overrun in format 19"
- self.imageData = data[:dataLen]
-
- def compile(self, ttFont):
- return struct.pack(">L", len(self.imageData)) + self.imageData
-
-
-# Dict for CBDT extended formats.
-cbdt_bitmap_classes = {
- 17: cbdt_bitmap_format_17,
- 18: cbdt_bitmap_format_18,
- 19: cbdt_bitmap_format_19,
-}
diff --git a/spaces/Devaholic/fruit-demo/utils/__init__.py b/spaces/Devaholic/fruit-demo/utils/__init__.py
deleted file mode 100644
index eadbc2edb1629f7748069e287827d8274c88e98c..0000000000000000000000000000000000000000
--- a/spaces/Devaholic/fruit-demo/utils/__init__.py
+++ /dev/null
@@ -1,54 +0,0 @@
-from PIL import Image
-import os
-import base64
-from io import BytesIO
-import requests
-
-def get_labels() -> list:
- cur_dir = os.getcwd()
- labels = os.listdir(cur_dir + '/data/Training')
- return labels
-
-def remove_number(label: str) -> str:
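-    # Drop purely numeric tokens from a label, e.g. "Apple 10" -> "Apple".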
- words = label.split()
- words = [word for word in words if not word.isdigit()]
- return ' '.join(words)
-
-def get_image_from_url(url: str):
- """
-    Accepts a base64-encoded JPEG or PNG data URL, or a regular image URL
- """
- try:
- if 'data:image/jpeg;base64,' in url:
- base_string = url.replace("data:image/jpeg;base64,", "")
- decoded_img = base64.b64decode(base_string)
- img = Image.open(BytesIO(decoded_img))
- return img
- elif 'data:image/png;base64,' in url:
- base_string = url.replace("data:image/png;base64,", "")
- decoded_img = base64.b64decode(base_string)
- img = Image.open(BytesIO(decoded_img))
- return img
- else:
- response = requests.get(url)
- img = Image.open(BytesIO(response.content))
- return img
- except Exception as e:
- print(e)
- return None
-
-def delete_in_folder(folder: str) -> None:
- """
- Delete all files in a folder
- """
- for file in os.listdir(folder):
- file_path = os.path.join(folder, file)
- try:
- if os.path.isfile(file_path):
- os.remove(file_path)
- except Exception as e:
- print(e)
- return None
-
-if __name__ == '__main__':
- print(get_labels())
\ No newline at end of file
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/index_func.py b/spaces/Dorado607/ChuanhuChatGPT/modules/index_func.py
deleted file mode 100644
index b03a3c48911c8184e2701fbac44157b98ad3e582..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/modules/index_func.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import os
-import logging
-
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_index_name(file_src):
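-    # The index name is the MD5 hash of the contents of all uploaded files, so identical uploads reuse the same cached index.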
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-
-def get_documents(file_src):
- from langchain.schema import Document
- from langchain.text_splitter import TokenTextSplitter
- text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30)
-
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filename)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
- except:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- texts = [Document(page_content=pdftext,
- metadata={"source": filepath})]
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- from langchain.document_loaders import UnstructuredWordDocumentLoader
- loader = UnstructuredWordDocumentLoader(filepath)
- texts = loader.load()
- elif file_type == ".pptx":
- logging.debug("Loading PowerPoint...")
- from langchain.document_loaders import UnstructuredPowerPointLoader
- loader = UnstructuredPowerPointLoader(filepath)
- texts = loader.load()
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- from langchain.document_loaders import UnstructuredEPubLoader
- loader = UnstructuredEPubLoader(filepath)
- texts = loader.load()
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- texts = []
- for elem in text_list:
- texts.append(Document(page_content=elem,
- metadata={"source": filepath}))
- else:
- logging.debug("Loading text file...")
- from langchain.document_loaders import TextLoader
- loader = TextLoader(filepath, "utf8")
- texts = loader.load()
- except Exception as e:
- import traceback
- logging.error(f"Error loading file: {filename}")
- traceback.print_exc()
-
- texts = text_splitter.split_documents(texts)
- documents.extend(texts)
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.vectorstores import FAISS
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
-        # Due to a silly design in one of the dependencies, an API key must be present here
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- index_name = get_index_name(file_src)
- index_path = f"./index/{index_name}"
- if local_embedding:
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- embeddings = HuggingFaceEmbeddings(
- model_name="sentence-transformers/distiluse-base-multilingual-cased-v2")
- else:
- from langchain.embeddings import OpenAIEmbeddings
- if os.environ.get("OPENAI_API_TYPE", "openai") == "openai":
- embeddings = OpenAIEmbeddings(openai_api_base=os.environ.get(
- "OPENAI_API_BASE", None), openai_api_key=os.environ.get("OPENAI_EMBEDDING_API_KEY", api_key))
- else:
- embeddings = OpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"], openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
- model=os.environ["AZURE_EMBEDDING_MODEL_NAME"], openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"], openai_api_type="azure")
- if os.path.exists(index_path):
- logging.info("找到了缓存的索引文件,加载中……")
- return FAISS.load_local(index_path, embeddings)
- else:
- try:
- documents = get_documents(file_src)
- logging.info("构建索引中……")
- with retrieve_proxy():
- index = FAISS.from_documents(documents, embeddings)
- logging.debug("索引构建完成!")
- os.makedirs("./index", exist_ok=True)
- index.save_local(index_path)
- logging.debug("索引已保存至本地!")
- return index
-
- except Exception as e:
- import traceback
-            logging.error("Failed to build the index! %s", e)
- traceback.print_exc()
- return None
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/utils/ImagesDataset.py b/spaces/DragGan/DragGan-Inversion/PTI/utils/ImagesDataset.py
deleted file mode 100644
index 4d36e8665270f4f6dee5a2d58a36c564e1543771..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/utils/ImagesDataset.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import os
-
-from torch.utils.data import Dataset
-from PIL import Image
-
-from PTI.utils.data_utils import make_dataset
-from torchvision import transforms
-
-
-class Image2Dataset(Dataset):
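-    # Wraps a single in-memory PIL image as a one-item dataset so it can be fed to a DataLoader.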
- def __init__(self, image) -> None:
- super().__init__()
- self.image = image
- self.transform = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
- ]
- )
-
- def __len__(self):
- return 1
-
- def __getitem__(self, index):
- return "customIMG", self.transform(self.image)
-
-
-class ImagesDataset(Dataset):
- def __init__(self, source_root, source_transform=None):
- self.source_paths = sorted(make_dataset(source_root))
- self.source_transform = source_transform
-
- def __len__(self):
- return len(self.source_paths)
-
- def __getitem__(self, index):
- fname, from_path = self.source_paths[index]
- from_im = Image.open(from_path).convert("RGB").resize([1024, 1024])
-
- if self.source_transform:
- from_im = self.source_transform(from_im)
-
- return fname, from_im
diff --git a/spaces/Dusan/clickbaitonator/fudge/constants.py b/spaces/Dusan/clickbaitonator/fudge/constants.py
deleted file mode 100644
index 928fd3a693882807aa5444052aa49c32f3cf476f..0000000000000000000000000000000000000000
--- a/spaces/Dusan/clickbaitonator/fudge/constants.py
+++ /dev/null
@@ -1,32 +0,0 @@
-PAD_TOKEN = '[PAD]'
-EOT_TOKEN = '<|endoftext|>'
-SEP = 50256 # just use the weird eot token
-
-TOPIC_MODEL_STRING = 'gpt2-medium'
-FORMALITY_MODEL_STRING = 'Helsinki-NLP/opus-mt-es-en'
-
-DIR_END_SPLIT_POSITIONS = 32
-
-TOPIC_VAL_SIZE = 100000
-FORMALITY_VAL_SIZE = 2000
-VOCAB_SIZE = 50000
-
-FORMALITY_MAX_LEN = 200
-
-GLOVE_PRINT_PROGRESS_FREQ = 1000000
-GLOVE_DIM = 300
-HIDDEN_DIM = 300
-RNN_DIM = 150
-
-MIN_SENTENCE_LENGTH = 3
-
-POETRY_LINE_SYLLABLES = 10
-MAX_SYLLABLES_PER_WORD = 10 # no way anything is more
-MAX_COUNT_SYLLABLE_DIST = 10
-MAX_COUNT_SYLLABLE_INPUT_LENGTH = 25 # for just a couplet, shouldn't need more
-COUNT_SYLLABLE_DIM = 100
-UNKNOWN_RHYME_GROUP = 'UNKNOWN_RHYME_GROUP'
-PHRASE_ENDS = '.?!'
-
-POETRY_BANNED_TOKENS = [198, 50256, 628, 220] # newlines and eos and such
-
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/tests/test_utils.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/tests/test_utils.py
deleted file mode 100644
index 7919b74905495b4b6f4aa957a1f0b5d7a174c782..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/tests/test_utils.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-from basicsr.archs.rrdbnet_arch import RRDBNet
-
-from realesrgan.utils import RealESRGANer
-
-
-def test_realesrganer():
- # initialize with default model
- restorer = RealESRGANer(
- scale=4,
- model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
- model=None,
- tile=10,
- tile_pad=10,
- pre_pad=2,
- half=False)
- assert isinstance(restorer.model, RRDBNet)
- assert restorer.half is False
- # initialize with user-defined model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
- restorer = RealESRGANer(
- scale=4,
- model_path='experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth',
- model=model,
- tile=10,
- tile_pad=10,
- pre_pad=2,
- half=True)
- # test attribute
- assert isinstance(restorer.model, RRDBNet)
- assert restorer.half is True
-
- # ------------------ test pre_process ---------------- #
- img = np.random.random((12, 12, 3)).astype(np.float32)
- restorer.pre_process(img)
- assert restorer.img.shape == (1, 3, 14, 14)
- # with modcrop
- restorer.scale = 1
- restorer.pre_process(img)
- assert restorer.img.shape == (1, 3, 16, 16)
-
- # ------------------ test process ---------------- #
- restorer.process()
- assert restorer.output.shape == (1, 3, 64, 64)
-
- # ------------------ test post_process ---------------- #
- restorer.mod_scale = 4
- output = restorer.post_process()
- assert output.shape == (1, 3, 60, 60)
-
- # ------------------ test tile_process ---------------- #
- restorer.scale = 4
- img = np.random.random((12, 12, 3)).astype(np.float32)
- restorer.pre_process(img)
- restorer.tile_process()
- assert restorer.output.shape == (1, 3, 64, 64)
-
- # ------------------ test enhance ---------------- #
- img = np.random.random((12, 12, 3)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (24, 24, 3)
- assert result[1] == 'RGB'
-
- # ------------------ test enhance with 16-bit image---------------- #
- img = np.random.random((4, 4, 3)).astype(np.uint16) + 512
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8, 3)
- assert result[1] == 'RGB'
-
- # ------------------ test enhance with gray image---------------- #
- img = np.random.random((4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8)
- assert result[1] == 'L'
-
- # ------------------ test enhance with RGBA---------------- #
- img = np.random.random((4, 4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8, 4)
- assert result[1] == 'RGBA'
-
- # ------------------ test enhance with RGBA, alpha_upsampler---------------- #
- restorer.tile_size = 0
- img = np.random.random((4, 4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2, alpha_upsampler=None)
- assert result[0].shape == (8, 8, 4)
- assert result[1] == 'RGBA'
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract_feature_print.py b/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract_feature_print.py
deleted file mode 100644
index f771dd9b8ba92262e6844e7b5781de43c342833a..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract_feature_print.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import os
-import sys
-import traceback
-
-os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
-
-device = sys.argv[1]
-n_part = int(sys.argv[2])
-i_part = int(sys.argv[3])
-if len(sys.argv) == 6:
- exp_dir = sys.argv[4]
- version = sys.argv[5]
-else:
- i_gpu = sys.argv[4]
- exp_dir = sys.argv[5]
- os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu)
- version = sys.argv[6]
-import fairseq
-import numpy as np
-import soundfile as sf
-import torch
-import torch.nn.functional as F
-
-if "privateuseone" not in device:
- device = "cpu"
- if torch.cuda.is_available():
- device = "cuda"
- elif torch.backends.mps.is_available():
- device = "mps"
-else:
- import torch_directml
-
- device = torch_directml.device(torch_directml.default_device())
-
- def forward_dml(ctx, x, scale):
- ctx.scale = scale
- res = x.clone().detach()
- return res
-
- fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml
-
-f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
-
-
-def printt(strr):
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
-
-
-printt(sys.argv)
-model_path = "assets/hubert/hubert_base.pt"
-
-printt(exp_dir)
-wavPath = "%s/1_16k_wavs" % exp_dir
-outPath = (
- "%s/3_feature256" % exp_dir if version == "v1" else "%s/3_feature768" % exp_dir
-)
-os.makedirs(outPath, exist_ok=True)
-
-
-# wave must be 16k, hop_size=320
-def readwave(wav_path, normalize=False):
- wav, sr = sf.read(wav_path)
- assert sr == 16000
- feats = torch.from_numpy(wav).float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- if normalize:
- with torch.no_grad():
- feats = F.layer_norm(feats, feats.shape)
- feats = feats.view(1, -1)
- return feats
-
-
-# HuBERT model
-printt("load model(s) from {}".format(model_path))
-# check that the HuBERT model exists
-if not os.access(model_path, os.F_OK):
-    printt(
-        "Error: feature extraction stopped because %s does not exist; you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main"
-        % model_path
-    )
- exit(0)
-models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
-)
-model = models[0]
-model = model.to(device)
-printt("move model to %s" % device)
-if device not in ["mps", "cpu"]:
- model = model.half()
-model.eval()
-
-todo = sorted(list(os.listdir(wavPath)))[i_part::n_part]
-n = max(1, len(todo) // 10) # print progress at most ten times
-if len(todo) == 0:
- printt("no-feature-todo")
-else:
- printt("all-feature-%s" % len(todo))
- for idx, file in enumerate(todo):
- try:
- if file.endswith(".wav"):
- wav_path = "%s/%s" % (wavPath, file)
- out_path = "%s/%s" % (outPath, file.replace("wav", "npy"))
-
- if os.path.exists(out_path):
- continue
-
- feats = readwave(wav_path, normalize=saved_cfg.task.normalize)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.half().to(device)
- if device not in ["mps", "cpu"]
- else feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if version == "v1" else 12, # layer 9
- }
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = (
- model.final_proj(logits[0]) if version == "v1" else logits[0]
- )
-
- feats = feats.squeeze(0).float().cpu().numpy()
- if np.isnan(feats).sum() == 0:
- np.save(out_path, feats, allow_pickle=False)
- else:
- printt("%s-contains nan" % file)
- if idx % n == 0:
- printt("now-%s,all-%s,%s,%s" % (len(todo), idx, file, feats.shape))
- except:
- printt(traceback.format_exc())
- printt("all-feature-done")
diff --git a/spaces/Felix123456/bingo/src/components/providers.tsx b/spaces/Felix123456/bingo/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/Felix123456/bingo/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
- return (
-    <NextThemesProvider {...props}>
-      <TooltipProvider>{children}</TooltipProvider>
-    </NextThemesProvider>
- )
-}
diff --git a/spaces/Fernando22/freegpt-webui/g4f/utils.py b/spaces/Fernando22/freegpt-webui/g4f/utils.py
deleted file mode 100644
index d5ab41c79b44ab81e1843d209cb342bd83dafb42..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/g4f/utils.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import browser_cookie3
-
-
-class Utils:
- browsers = [
- browser_cookie3.chrome, # 62.74% market share
- browser_cookie3.safari, # 24.12% market share
- browser_cookie3.firefox, # 4.56% market share
- browser_cookie3.edge, # 2.85% market share
- browser_cookie3.opera, # 1.69% market share
- browser_cookie3.brave, # 0.96% market share
- browser_cookie3.opera_gx, # 0.64% market share
- browser_cookie3.vivaldi, # 0.32% market share
- ]
-
- def get_cookies(domain: str, setName: str = None, setBrowser: str = False) -> dict:
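-        # Collect cookies for the given domain from installed browsers; optionally restrict to one browser or return a single named cookie.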
- cookies = {}
-
- if setBrowser != False:
- for browser in Utils.browsers:
- if browser.__name__ == setBrowser:
- try:
- for c in browser(domain_name=domain):
- if c.name not in cookies:
- cookies = cookies | {c.name: c.value}
-
- except Exception as e:
- pass
-
- else:
- for browser in Utils.browsers:
- try:
- for c in browser(domain_name=domain):
- if c.name not in cookies:
- cookies = cookies | {c.name: c.value}
-
- except Exception as e:
- pass
-
- if setName:
- try:
- return {setName: cookies[setName]}
-
- except ValueError:
- print(f'Error: could not find {setName} cookie in any browser.')
- exit(1)
-
- else:
- return cookies
diff --git a/spaces/FredZhang7/paint-journey-demo/README.md b/spaces/FredZhang7/paint-journey-demo/README.md
deleted file mode 100644
index 3bcfdc0b34b81f44a3dd2b38881686fe407f2e14..0000000000000000000000000000000000000000
--- a/spaces/FredZhang7/paint-journey-demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Paint Journey Demo
-emoji: 😻
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/__init__.py.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/__init__.py.py
deleted file mode 100644
index 9f53b2d3f7025b2d71369dababa4e6f2a4affc48..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/__init__.py.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import os
-import sys
-import contextlib
-import torch
-import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-from .hijacks import ipex_hijacks
-from .attention import attention_init
-
-# pylint: disable=protected-access, missing-function-docstring, line-too-long
-
-def ipex_init(): # pylint: disable=too-many-statements
- try:
- #Replace cuda with xpu:
- torch.cuda.current_device = torch.xpu.current_device
- torch.cuda.current_stream = torch.xpu.current_stream
- torch.cuda.device = torch.xpu.device
- torch.cuda.device_count = torch.xpu.device_count
- torch.cuda.device_of = torch.xpu.device_of
- torch.cuda.getDeviceIdListForCard = torch.xpu.getDeviceIdListForCard
- torch.cuda.get_device_name = torch.xpu.get_device_name
- torch.cuda.get_device_properties = torch.xpu.get_device_properties
- torch.cuda.init = torch.xpu.init
- torch.cuda.is_available = torch.xpu.is_available
- torch.cuda.is_initialized = torch.xpu.is_initialized
- torch.cuda.is_current_stream_capturing = lambda: False
- torch.cuda.set_device = torch.xpu.set_device
- torch.cuda.stream = torch.xpu.stream
- torch.cuda.synchronize = torch.xpu.synchronize
- torch.cuda.Event = torch.xpu.Event
- torch.cuda.Stream = torch.xpu.Stream
- torch.cuda.FloatTensor = torch.xpu.FloatTensor
- torch.Tensor.cuda = torch.Tensor.xpu
- torch.Tensor.is_cuda = torch.Tensor.is_xpu
- torch.cuda._initialization_lock = torch.xpu.lazy_init._initialization_lock
- torch.cuda._initialized = torch.xpu.lazy_init._initialized
- torch.cuda._lazy_seed_tracker = torch.xpu.lazy_init._lazy_seed_tracker
- torch.cuda._queued_calls = torch.xpu.lazy_init._queued_calls
- torch.cuda._tls = torch.xpu.lazy_init._tls
- torch.cuda.threading = torch.xpu.lazy_init.threading
- torch.cuda.traceback = torch.xpu.lazy_init.traceback
- torch.cuda.Optional = torch.xpu.Optional
- torch.cuda.__cached__ = torch.xpu.__cached__
- torch.cuda.__loader__ = torch.xpu.__loader__
- torch.cuda.ComplexFloatStorage = torch.xpu.ComplexFloatStorage
- torch.cuda.Tuple = torch.xpu.Tuple
- torch.cuda.streams = torch.xpu.streams
- torch.cuda._lazy_new = torch.xpu._lazy_new
- torch.cuda.FloatStorage = torch.xpu.FloatStorage
- torch.cuda.Any = torch.xpu.Any
- torch.cuda.__doc__ = torch.xpu.__doc__
- torch.cuda.default_generators = torch.xpu.default_generators
- torch.cuda.HalfTensor = torch.xpu.HalfTensor
- torch.cuda._get_device_index = torch.xpu._get_device_index
- torch.cuda.__path__ = torch.xpu.__path__
- torch.cuda.Device = torch.xpu.Device
- torch.cuda.IntTensor = torch.xpu.IntTensor
- torch.cuda.ByteStorage = torch.xpu.ByteStorage
- torch.cuda.set_stream = torch.xpu.set_stream
- torch.cuda.BoolStorage = torch.xpu.BoolStorage
- torch.cuda.os = torch.xpu.os
- torch.cuda.torch = torch.xpu.torch
- torch.cuda.BFloat16Storage = torch.xpu.BFloat16Storage
- torch.cuda.Union = torch.xpu.Union
- torch.cuda.DoubleTensor = torch.xpu.DoubleTensor
- torch.cuda.ShortTensor = torch.xpu.ShortTensor
- torch.cuda.LongTensor = torch.xpu.LongTensor
- torch.cuda.IntStorage = torch.xpu.IntStorage
- torch.cuda.LongStorage = torch.xpu.LongStorage
- torch.cuda.__annotations__ = torch.xpu.__annotations__
- torch.cuda.__package__ = torch.xpu.__package__
- torch.cuda.__builtins__ = torch.xpu.__builtins__
- torch.cuda.CharTensor = torch.xpu.CharTensor
- torch.cuda.List = torch.xpu.List
- torch.cuda._lazy_init = torch.xpu._lazy_init
- torch.cuda.BFloat16Tensor = torch.xpu.BFloat16Tensor
- torch.cuda.DoubleStorage = torch.xpu.DoubleStorage
- torch.cuda.ByteTensor = torch.xpu.ByteTensor
- torch.cuda.StreamContext = torch.xpu.StreamContext
- torch.cuda.ComplexDoubleStorage = torch.xpu.ComplexDoubleStorage
- torch.cuda.ShortStorage = torch.xpu.ShortStorage
- torch.cuda._lazy_call = torch.xpu._lazy_call
- torch.cuda.HalfStorage = torch.xpu.HalfStorage
- torch.cuda.random = torch.xpu.random
- torch.cuda._device = torch.xpu._device
- torch.cuda.classproperty = torch.xpu.classproperty
- torch.cuda.__name__ = torch.xpu.__name__
- torch.cuda._device_t = torch.xpu._device_t
- torch.cuda.warnings = torch.xpu.warnings
- torch.cuda.__spec__ = torch.xpu.__spec__
- torch.cuda.BoolTensor = torch.xpu.BoolTensor
- torch.cuda.CharStorage = torch.xpu.CharStorage
- torch.cuda.__file__ = torch.xpu.__file__
- torch.cuda._is_in_bad_fork = torch.xpu.lazy_init._is_in_bad_fork
- #torch.cuda.is_current_stream_capturing = torch.xpu.is_current_stream_capturing
-
- #Memory:
- torch.cuda.memory = torch.xpu.memory
- if 'linux' in sys.platform and "WSL2" in os.popen("uname -a").read():
- torch.xpu.empty_cache = lambda: None
- torch.cuda.empty_cache = torch.xpu.empty_cache
- torch.cuda.memory_stats = torch.xpu.memory_stats
- torch.cuda.memory_summary = torch.xpu.memory_summary
- torch.cuda.memory_snapshot = torch.xpu.memory_snapshot
- torch.cuda.memory_allocated = torch.xpu.memory_allocated
- torch.cuda.max_memory_allocated = torch.xpu.max_memory_allocated
- torch.cuda.memory_reserved = torch.xpu.memory_reserved
- torch.cuda.memory_cached = torch.xpu.memory_reserved
- torch.cuda.max_memory_reserved = torch.xpu.max_memory_reserved
- torch.cuda.max_memory_cached = torch.xpu.max_memory_reserved
- torch.cuda.reset_peak_memory_stats = torch.xpu.reset_peak_memory_stats
- torch.cuda.reset_max_memory_cached = torch.xpu.reset_peak_memory_stats
- torch.cuda.reset_max_memory_allocated = torch.xpu.reset_peak_memory_stats
- torch.cuda.memory_stats_as_nested_dict = torch.xpu.memory_stats_as_nested_dict
- torch.cuda.reset_accumulated_memory_stats = torch.xpu.reset_accumulated_memory_stats
-
- #RNG:
- torch.cuda.get_rng_state = torch.xpu.get_rng_state
- torch.cuda.get_rng_state_all = torch.xpu.get_rng_state_all
- torch.cuda.set_rng_state = torch.xpu.set_rng_state
- torch.cuda.set_rng_state_all = torch.xpu.set_rng_state_all
- torch.cuda.manual_seed = torch.xpu.manual_seed
- torch.cuda.manual_seed_all = torch.xpu.manual_seed_all
- torch.cuda.seed = torch.xpu.seed
- torch.cuda.seed_all = torch.xpu.seed_all
- torch.cuda.initial_seed = torch.xpu.initial_seed
-
- #AMP:
- torch.cuda.amp = torch.xpu.amp
- if not hasattr(torch.cuda.amp, "common"):
- torch.cuda.amp.common = contextlib.nullcontext()
- torch.cuda.amp.common.amp_definitely_not_available = lambda: False
- try:
- torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler
- except Exception: # pylint: disable=broad-exception-caught
- try:
- from .gradscaler import gradscaler_init # pylint: disable=import-outside-toplevel, import-error
- gradscaler_init()
- torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler
- except Exception: # pylint: disable=broad-exception-caught
- torch.cuda.amp.GradScaler = ipex.cpu.autocast._grad_scaler.GradScaler
-
- #C
- torch._C._cuda_getCurrentRawStream = ipex._C._getCurrentStream
- ipex._C._DeviceProperties.major = 2023
- ipex._C._DeviceProperties.minor = 2
-
- #Fix functions with ipex:
- torch.cuda.mem_get_info = lambda device=None: [(torch.xpu.get_device_properties(device).total_memory - torch.xpu.memory_allocated(device)), torch.xpu.get_device_properties(device).total_memory]
- torch._utils._get_available_device_type = lambda: "xpu"
- torch.has_cuda = True
- torch.cuda.has_half = True
- torch.cuda.is_bf16_supported = lambda *args, **kwargs: True
- torch.cuda.is_fp16_supported = lambda *args, **kwargs: True
- torch.version.cuda = "11.7"
- torch.cuda.get_device_capability = lambda *args, **kwargs: [11,7]
- torch.cuda.get_device_properties.major = 11
- torch.cuda.get_device_properties.minor = 7
- torch.cuda.ipc_collect = lambda *args, **kwargs: None
- torch.cuda.utilization = lambda *args, **kwargs: 0
-
- ipex_hijacks()
- attention_init()
- except Exception as e:
- return False, e
- return True, None
\ No newline at end of file
diff --git a/spaces/GT-RIPL/GPT-K/model/qformer.py b/spaces/GT-RIPL/GPT-K/model/qformer.py
deleted file mode 100644
index e71b12375e10511858a9c505dc795181e6ce5603..0000000000000000000000000000000000000000
--- a/spaces/GT-RIPL/GPT-K/model/qformer.py
+++ /dev/null
@@ -1,1216 +0,0 @@
-"""
- * Copyright (c) 2023, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
- * Based on huggingface code base
- * https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/bert
-"""
-
-import math
-import os
-import warnings
-from dataclasses import dataclass
-from typing import Optional, Tuple, Dict, Any
-
-import torch
-from torch import Tensor, device, dtype, nn
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import CrossEntropyLoss
-import torch.nn.functional as F
-
-from transformers.activations import ACT2FN
-from transformers.file_utils import (
- ModelOutput,
-)
-from transformers.modeling_outputs import (
- BaseModelOutputWithPastAndCrossAttentions,
- BaseModelOutputWithPoolingAndCrossAttentions,
- CausalLMOutputWithCrossAttentions,
- MaskedLMOutput,
- MultipleChoiceModelOutput,
- NextSentencePredictorOutput,
- QuestionAnsweringModelOutput,
- SequenceClassifierOutput,
- TokenClassifierOutput,
-)
-from transformers.modeling_utils import (
- PreTrainedModel,
- apply_chunking_to_forward,
- find_pruneable_heads_and_indices,
- prune_linear_layer,
-)
-from transformers.utils import logging
-from transformers.models.bert.configuration_bert import BertConfig
-
-logger = logging.get_logger(__name__)
-
-
-class BertEmbeddings(nn.Module):
- """Construct the embeddings from word and position embeddings."""
-
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(
- config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id
- )
- self.position_embeddings = nn.Embedding(
- config.max_position_embeddings, config.hidden_size
- )
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer(
- "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))
- )
- self.position_embedding_type = getattr(
- config, "position_embedding_type", "absolute"
- )
-
- self.config = config
-
- def forward(
- self,
- input_ids=None,
- position_ids=None,
- query_embeds=None,
- past_key_values_length=0,
- ):
- if input_ids is not None:
- seq_length = input_ids.size()[1]
- else:
- seq_length = 0
-
- if position_ids is None:
- position_ids = self.position_ids[
- :, past_key_values_length : seq_length + past_key_values_length
- ].clone()
-
- if input_ids is not None:
- embeddings = self.word_embeddings(input_ids)
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings = embeddings + position_embeddings
-
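-            # Prepend the learned query embeddings (if provided) to the token embeddings.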
- if query_embeds is not None:
- embeddings = torch.cat((query_embeds, embeddings), dim=1)
- else:
- embeddings = query_embeds
-
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class BertSelfAttention(nn.Module):
- def __init__(self, config, is_cross_attention):
- super().__init__()
- self.config = config
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(
- config, "embedding_size"
- ):
- raise ValueError(
- "The hidden size (%d) is not a multiple of the number of attention "
- "heads (%d)" % (config.hidden_size, config.num_attention_heads)
- )
-
- self.num_attention_heads = config.num_attention_heads
- self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
- self.all_head_size = self.num_attention_heads * self.attention_head_size
-
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
- if is_cross_attention:
- self.key = nn.Linear(config.encoder_width, self.all_head_size)
- self.value = nn.Linear(config.encoder_width, self.all_head_size)
- else:
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
-
- self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.position_embedding_type = getattr(
- config, "position_embedding_type", "absolute"
- )
- if (
- self.position_embedding_type == "relative_key"
- or self.position_embedding_type == "relative_key_query"
- ):
- self.max_position_embeddings = config.max_position_embeddings
- self.distance_embedding = nn.Embedding(
- 2 * config.max_position_embeddings - 1, self.attention_head_size
- )
- self.save_attention = False
-
- def save_attn_gradients(self, attn_gradients):
- self.attn_gradients = attn_gradients
-
- def get_attn_gradients(self):
- return self.attn_gradients
-
- def save_attention_map(self, attention_map):
- self.attention_map = attention_map
-
- def get_attention_map(self):
- return self.attention_map
-
- def transpose_for_scores(self, x):
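-        # Reshape (batch, seq_len, all_head_size) into (batch, num_heads, seq_len, head_size) for multi-head attention.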
- new_x_shape = x.size()[:-1] + (
- self.num_attention_heads,
- self.attention_head_size,
- )
- x = x.view(*new_x_shape)
- return x.permute(0, 2, 1, 3)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- ):
-
- # If this is instantiated as a cross-attention module, the keys
- # and values come from an encoder; the attention mask needs to be
- # such that the encoder's padding tokens are not attended to.
- is_cross_attention = encoder_hidden_states is not None
-
- if is_cross_attention:
- key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
- value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
- attention_mask = encoder_attention_mask
- elif past_key_value is not None:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
- key_layer = torch.cat([past_key_value[0], key_layer], dim=2)
- value_layer = torch.cat([past_key_value[1], value_layer], dim=2)
- else:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
-
- mixed_query_layer = self.query(hidden_states)
-
- query_layer = self.transpose_for_scores(mixed_query_layer)
-
- past_key_value = (key_layer, value_layer)
-
- # Take the dot product between "query" and "key" to get the raw attention scores.
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
-
- if (
- self.position_embedding_type == "relative_key"
- or self.position_embedding_type == "relative_key_query"
- ):
- seq_length = hidden_states.size()[1]
- position_ids_l = torch.arange(
- seq_length, dtype=torch.long, device=hidden_states.device
- ).view(-1, 1)
- position_ids_r = torch.arange(
- seq_length, dtype=torch.long, device=hidden_states.device
- ).view(1, -1)
- distance = position_ids_l - position_ids_r
- positional_embedding = self.distance_embedding(
- distance + self.max_position_embeddings - 1
- )
- positional_embedding = positional_embedding.to(
- dtype=query_layer.dtype
- ) # fp16 compatibility
-
- if self.position_embedding_type == "relative_key":
- relative_position_scores = torch.einsum(
- "bhld,lrd->bhlr", query_layer, positional_embedding
- )
- attention_scores = attention_scores + relative_position_scores
- elif self.position_embedding_type == "relative_key_query":
- relative_position_scores_query = torch.einsum(
- "bhld,lrd->bhlr", query_layer, positional_embedding
- )
- relative_position_scores_key = torch.einsum(
- "bhrd,lrd->bhlr", key_layer, positional_embedding
- )
- attention_scores = (
- attention_scores
- + relative_position_scores_query
- + relative_position_scores_key
- )
-
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
- if attention_mask is not None:
- # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
- attention_scores = attention_scores + attention_mask
-
- # Normalize the attention scores to probabilities.
- attention_probs = nn.Softmax(dim=-1)(attention_scores)
-
- if is_cross_attention and self.save_attention:
- self.save_attention_map(attention_probs)
- attention_probs.register_hook(self.save_attn_gradients)
-
- # This is actually dropping out entire tokens to attend to, which might
- # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs_dropped = self.dropout(attention_probs)
-
- # Mask heads if we want to
- if head_mask is not None:
- attention_probs_dropped = attention_probs_dropped * head_mask
-
- context_layer = torch.matmul(attention_probs_dropped, value_layer)
-
- context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
- new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
- context_layer = context_layer.view(*new_context_layer_shape)
-
- outputs = (
- (context_layer, attention_probs) if output_attentions else (context_layer,)
- )
-
- outputs = outputs + (past_key_value,)
- return outputs
-
-
-class BertSelfOutput(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(self, hidden_states, input_tensor):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-class BertAttention(nn.Module):
- def __init__(self, config, is_cross_attention=False):
- super().__init__()
- self.self = BertSelfAttention(config, is_cross_attention)
- self.output = BertSelfOutput(config)
- self.pruned_heads = set()
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads,
- self.self.num_attention_heads,
- self.self.attention_head_size,
- self.pruned_heads,
- )
-
- # Prune linear layers
- self.self.query = prune_linear_layer(self.self.query, index)
- self.self.key = prune_linear_layer(self.self.key, index)
- self.self.value = prune_linear_layer(self.self.value, index)
- self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
-
- # Update hyper params and store pruned heads
- self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
- self.self.all_head_size = (
- self.self.attention_head_size * self.self.num_attention_heads
- )
- self.pruned_heads = self.pruned_heads.union(heads)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- ):
- self_outputs = self.self(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- )
- attention_output = self.output(self_outputs[0], hidden_states)
-
- outputs = (attention_output,) + self_outputs[
- 1:
- ] # add attentions if we output them
- return outputs
-
-
-class BertIntermediate(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
- if isinstance(config.hidden_act, str):
- self.intermediate_act_fn = ACT2FN[config.hidden_act]
- else:
- self.intermediate_act_fn = config.hidden_act
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.intermediate_act_fn(hidden_states)
- return hidden_states
-
-
-class BertOutput(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(self, hidden_states, input_tensor):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-class BertLayer(nn.Module):
- def __init__(self, config, layer_num):
- super().__init__()
- self.config = config
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
- self.attention = BertAttention(config)
- self.layer_num = layer_num
- if (
- self.config.add_cross_attention
- and layer_num % self.config.cross_attention_freq == 0
- ):
- self.crossattention = BertAttention(
- config, is_cross_attention=self.config.add_cross_attention
- )
- self.has_cross_attention = True
- else:
- self.has_cross_attention = False
- self.intermediate = BertIntermediate(config)
- self.output = BertOutput(config)
-
- self.intermediate_query = BertIntermediate(config)
- self.output_query = BertOutput(config)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- query_length=0,
- ):
- # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
- self_attn_past_key_value = (
- past_key_value[:2] if past_key_value is not None else None
- )
- self_attention_outputs = self.attention(
- hidden_states,
- attention_mask,
- head_mask,
- output_attentions=output_attentions,
- past_key_value=self_attn_past_key_value,
- )
- attention_output = self_attention_outputs[0]
- outputs = self_attention_outputs[1:-1]
-
- present_key_value = self_attention_outputs[-1]
-
- if query_length > 0:
- query_attention_output = attention_output[:, :query_length, :]
-
- if self.has_cross_attention:
- assert (
- encoder_hidden_states is not None
- ), "encoder_hidden_states must be given for cross-attention layers"
- cross_attention_outputs = self.crossattention(
- query_attention_output,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- output_attentions=output_attentions,
- )
- query_attention_output = cross_attention_outputs[0]
- outputs = (
- outputs + cross_attention_outputs[1:-1]
- ) # add cross attentions if we output attention weights
-
- layer_output = apply_chunking_to_forward(
- self.feed_forward_chunk_query,
- self.chunk_size_feed_forward,
- self.seq_len_dim,
- query_attention_output,
- )
- if attention_output.shape[1] > query_length:
- layer_output_text = apply_chunking_to_forward(
- self.feed_forward_chunk,
- self.chunk_size_feed_forward,
- self.seq_len_dim,
- attention_output[:, query_length:, :],
- )
- layer_output = torch.cat([layer_output, layer_output_text], dim=1)
- else:
- layer_output = apply_chunking_to_forward(
- self.feed_forward_chunk,
- self.chunk_size_feed_forward,
- self.seq_len_dim,
- attention_output,
- )
- outputs = (layer_output,) + outputs
-
- outputs = outputs + (present_key_value,)
-
- return outputs
-
- def feed_forward_chunk(self, attention_output):
- intermediate_output = self.intermediate(attention_output)
- layer_output = self.output(intermediate_output, attention_output)
- return layer_output
-
- def feed_forward_chunk_query(self, attention_output):
- intermediate_output = self.intermediate_query(attention_output)
- layer_output = self.output_query(intermediate_output, attention_output)
- return layer_output
-
-
-class BertEncoder(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.config = config
- self.layer = nn.ModuleList(
- [BertLayer(config, i) for i in range(config.num_hidden_layers)]
- )
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=False,
- output_hidden_states=False,
- return_dict=True,
- query_length=0,
- ):
- all_hidden_states = () if output_hidden_states else None
- all_self_attentions = () if output_attentions else None
- all_cross_attentions = (
- () if output_attentions and self.config.add_cross_attention else None
- )
-
- next_decoder_cache = () if use_cache else None
-
- for i in range(self.config.num_hidden_layers):
- layer_module = self.layer[i]
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- layer_head_mask = head_mask[i] if head_mask is not None else None
- past_key_value = past_key_values[i] if past_key_values is not None else None
-
- if getattr(self.config, "gradient_checkpointing", False) and self.training:
-
- if use_cache:
- logger.warn(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(
- *inputs, past_key_value, output_attentions, query_length
- )
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(layer_module),
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- )
- else:
- layer_outputs = layer_module(
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- query_length,
- )
-
- hidden_states = layer_outputs[0]
- if use_cache:
- next_decoder_cache += (layer_outputs[-1],)
- if output_attentions:
- all_self_attentions = all_self_attentions + (layer_outputs[1],)
- all_cross_attentions = all_cross_attentions + (layer_outputs[2],)
-
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [
- hidden_states,
- next_decoder_cache,
- all_hidden_states,
- all_self_attentions,
- all_cross_attentions,
- ]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=next_decoder_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-class BertPooler(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.activation = nn.Tanh()
-
- def forward(self, hidden_states):
- # We "pool" the model by simply taking the hidden state corresponding
- # to the first token.
- first_token_tensor = hidden_states[:, 0]
- pooled_output = self.dense(first_token_tensor)
- pooled_output = self.activation(pooled_output)
- return pooled_output
-
-
-class BertPredictionHeadTransform(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- if isinstance(config.hidden_act, str):
- self.transform_act_fn = ACT2FN[config.hidden_act]
- else:
- self.transform_act_fn = config.hidden_act
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.transform_act_fn(hidden_states)
- hidden_states = self.LayerNorm(hidden_states)
- return hidden_states
-
-
-class BertLMPredictionHead(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.transform = BertPredictionHeadTransform(config)
-
- # The output weights are the same as the input embeddings, but there is
- # an output-only bias for each token.
- self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
-
- # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
- self.decoder.bias = self.bias
-
- def forward(self, hidden_states):
- hidden_states = self.transform(hidden_states)
- hidden_states = self.decoder(hidden_states)
- return hidden_states
-
-
-class BertOnlyMLMHead(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.predictions = BertLMPredictionHead(config)
-
- def forward(self, sequence_output):
- prediction_scores = self.predictions(sequence_output)
- return prediction_scores
-
-
-class BertPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = BertConfig
- base_model_prefix = "bert"
- _keys_to_ignore_on_load_missing = [r"position_ids"]
-
- def _init_weights(self, module):
- """Initialize the weights"""
- if isinstance(module, (nn.Linear, nn.Embedding)):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
-
-
-class BertModel(BertPreTrainedModel):
- """
- The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
- cross-attention is added between the self-attention layers, following the architecture described in `Attention is
-    all you need <https://arxiv.org/abs/1706.03762>`__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
-    Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To be used as a decoder, the model needs to be
-    initialized with both the :obj:`is_decoder` argument and :obj:`add_cross_attention` set to :obj:`True`; an
-    :obj:`encoder_hidden_states` is then expected as an input to the forward pass.
- """
-
- def __init__(self, config, add_pooling_layer=False):
- super().__init__(config)
- self.config = config
-
- self.embeddings = BertEmbeddings(config)
-
- self.encoder = BertEncoder(config)
-
- self.pooler = BertPooler(config) if add_pooling_layer else None
-
- self.init_weights()
-
- def get_input_embeddings(self):
- return self.embeddings.word_embeddings
-
- def set_input_embeddings(self, value):
- self.embeddings.word_embeddings = value
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.layer[layer].attention.prune_heads(heads)
-
- def get_extended_attention_mask(
- self,
- attention_mask: Tensor,
- input_shape: Tuple[int],
- device: device,
- is_decoder: bool,
- has_query: bool = False,
- ) -> Tensor:
- """
- Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
-
- Arguments:
- attention_mask (:obj:`torch.Tensor`):
- Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
- input_shape (:obj:`Tuple[int]`):
- The shape of the input to the model.
- device: (:obj:`torch.device`):
- The device of the input to the model.
-
- Returns:
-            :obj:`torch.Tensor` The extended attention mask, with the same dtype as :obj:`attention_mask.dtype`.
- """
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- if attention_mask.dim() == 3:
- extended_attention_mask = attention_mask[:, None, :, :]
- elif attention_mask.dim() == 2:
- # Provided a padding mask of dimensions [batch_size, seq_length]
- # - if the model is a decoder, apply a causal mask in addition to the padding mask
- # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if is_decoder:
- batch_size, seq_length = input_shape
-
- seq_ids = torch.arange(seq_length, device=device)
- causal_mask = (
- seq_ids[None, None, :].repeat(batch_size, seq_length, 1)
- <= seq_ids[None, :, None]
- )
-
- # add a prefix ones mask to the causal mask
- # causal and attention masks must have same type with pytorch version < 1.3
- causal_mask = causal_mask.to(attention_mask.dtype)
-
- if causal_mask.shape[1] < attention_mask.shape[1]:
- prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1]
- if has_query: # UniLM style attention mask
- causal_mask = torch.cat(
- [
- torch.zeros(
- (batch_size, prefix_seq_len, seq_length),
- device=device,
- dtype=causal_mask.dtype,
- ),
- causal_mask,
- ],
- axis=1,
- )
- causal_mask = torch.cat(
- [
- torch.ones(
- (batch_size, causal_mask.shape[1], prefix_seq_len),
- device=device,
- dtype=causal_mask.dtype,
- ),
- causal_mask,
- ],
- axis=-1,
- )
- extended_attention_mask = (
- causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
- )
- else:
- extended_attention_mask = attention_mask[:, None, None, :]
- else:
- raise ValueError(
- "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format(
- input_shape, attention_mask.shape
- )
- )
-
- # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
- # masked positions, this operation will create a tensor which is 0.0 for
- # positions we want to attend and -10000.0 for masked positions.
- # Since we are adding it to the raw scores before the softmax, this is
- # effectively the same as removing these entirely.
- extended_attention_mask = extended_attention_mask.to(
- dtype=self.dtype
- ) # fp16 compatibility
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
- return extended_attention_mask
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- position_ids=None,
- head_mask=None,
- query_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- is_decoder=False,
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = (
- output_attentions
- if output_attentions is not None
- else self.config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.config.output_hidden_states
- )
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
-
- # use_cache = use_cache if use_cache is not None else self.config.use_cache
-
- if input_ids is None:
- assert (
- query_embeds is not None
- ), "You have to specify query_embeds when input_ids is None"
-
- # past_key_values_length
- past_key_values_length = (
- past_key_values[0][0].shape[2] - self.config.query_length
- if past_key_values is not None
- else 0
- )
-
- query_length = query_embeds.shape[1] if query_embeds is not None else 0
-
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- query_embeds=query_embeds,
- past_key_values_length=past_key_values_length,
- )
-
- input_shape = embedding_output.size()[:-1]
- batch_size, seq_length = input_shape
- device = embedding_output.device
-
- if attention_mask is None:
- attention_mask = torch.ones(
- ((batch_size, seq_length + past_key_values_length)), device=device
- )
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- if is_decoder:
- extended_attention_mask = self.get_extended_attention_mask(
- attention_mask,
- input_ids.shape,
- device,
- is_decoder,
- has_query=(query_embeds is not None),
- )
- else:
- extended_attention_mask = self.get_extended_attention_mask(
- attention_mask, input_shape, device, is_decoder
- )
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if encoder_hidden_states is not None:
- if type(encoder_hidden_states) == list:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[
- 0
- ].size()
- else:
- (
- encoder_batch_size,
- encoder_sequence_length,
- _,
- ) = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
-
- if type(encoder_attention_mask) == list:
- encoder_extended_attention_mask = [
- self.invert_attention_mask(mask) for mask in encoder_attention_mask
- ]
- elif encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(
- encoder_attention_mask
- )
- else:
- encoder_extended_attention_mask = self.invert_attention_mask(
- encoder_attention_mask
- )
- else:
- encoder_extended_attention_mask = None
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- query_length=query_length,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = (
- self.pooler(sequence_output) if self.pooler is not None else None
- )
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
-
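# Rough sketch of the masking convention documented in the forward() docstring
# above: a (batch, seq_len) mask of 1s (keep) and 0s (pad) is broadcast to
# (batch, 1, 1, seq_len) and turned into additive attention logits, so padded
# positions get a large negative value before the softmax. The -10000.0 fill
# value is an illustrative assumption; the helper methods used above choose it
# based on dtype.
import torch

mask = torch.tensor([[1, 1, 0]])                                 # last token is padding
extended = (1.0 - mask[:, None, None, :].float()) * -10000.0     # shape (1, 1, 1, 3)
print(extended)  # tensor([[[[-0., -0., -10000.]]]])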
-class BertLMHeadModel(BertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
- _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
-
- def __init__(self, config):
- super().__init__(config)
-
- self.bert = BertModel(config, add_pooling_layer=False)
- self.cls = BertOnlyMLMHead(config)
-
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.cls.predictions.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.cls.predictions.decoder = new_embeddings
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- position_ids=None,
- head_mask=None,
- query_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- labels=None,
- past_key_values=None,
- use_cache=True,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- return_logits=False,
- is_decoder=True,
- reduction="mean",
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
- ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are
- ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- Returns:
- Example::
- >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig
- >>> import torch
- >>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
- >>> config = BertConfig.from_pretrained("bert-base-cased")
- >>> model = BertLMHeadModel.from_pretrained('bert-base-cased', config=config)
- >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
- >>> outputs = model(**inputs)
- >>> prediction_logits = outputs.logits
- """
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
- if labels is not None:
- use_cache = False
- if past_key_values is not None:
- query_embeds = None
-
- outputs = self.bert(
- input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- head_mask=head_mask,
- query_embeds=query_embeds,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- is_decoder=is_decoder,
- )
-
- sequence_output = outputs[0]
- if query_embeds is not None:
- sequence_output = outputs[0][:, query_embeds.shape[1] :, :]
-
- prediction_scores = self.cls(sequence_output)
-
- if return_logits:
- return prediction_scores[:, :-1, :].contiguous()
-
- lm_loss = None
- if labels is not None:
- # we are doing next-token prediction; shift prediction scores and input ids by one
- shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
- labels = labels[:, 1:].contiguous()
- loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1)
- lm_loss = loss_fct(
- shifted_prediction_scores.view(-1, self.config.vocab_size),
- labels.view(-1),
- )
- if reduction == "none":
- lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1)
-
- if not return_dict:
- output = (prediction_scores,) + outputs[2:]
- return ((lm_loss,) + output) if lm_loss is not None else output
-
- return CausalLMOutputWithCrossAttentions(
- loss=lm_loss,
- logits=prediction_scores,
- past_key_values=outputs.past_key_values,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- cross_attentions=outputs.cross_attentions,
- )
-
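# Minimal sketch of the shift-by-one loss computed in forward() above: the
# logits at position t are scored against the label at position t + 1. Sizes
# and values here are toy placeholders (batch=1, seq=4, vocab=5).
import torch
from torch.nn import CrossEntropyLoss

logits = torch.randn(1, 4, 5)             # stands in for prediction_scores
labels = torch.tensor([[2, 3, 1, 4]])     # stands in for the label ids
shifted_logits = logits[:, :-1, :].contiguous()   # drop the last position
shifted_labels = labels[:, 1:].contiguous()       # drop the first label
loss = CrossEntropyLoss(label_smoothing=0.1)(
    shifted_logits.view(-1, 5), shifted_labels.view(-1)
)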
- def prepare_inputs_for_generation(
- self, input_ids, query_embeds, past=None, attention_mask=None, **model_kwargs
- ):
- # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
- if attention_mask is None:
- attention_mask = input_ids.new_ones(input_ids.shape)
- query_mask = input_ids.new_ones(query_embeds.shape[:-1])
- attention_mask = torch.cat([query_mask, attention_mask], dim=-1)
-
- # cut decoder_input_ids if past is used
- if past is not None:
- input_ids = input_ids[:, -1:]
-
- return {
- "input_ids": input_ids,
- "query_embeds": query_embeds,
- "attention_mask": attention_mask,
- "past_key_values": past,
- "encoder_hidden_states": model_kwargs.get("encoder_hidden_states", None),
- "encoder_attention_mask": model_kwargs.get("encoder_attention_mask", None),
- "is_decoder": True,
- }
-
- def _reorder_cache(self, past, beam_idx):
- reordered_past = ()
- for layer_past in past:
- reordered_past += (
- tuple(
- past_state.index_select(0, beam_idx) for past_state in layer_past
- ),
- )
- return reordered_past
-
-
-class BertForMaskedLM(BertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
- _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
-
- def __init__(self, config):
- super().__init__(config)
-
- self.bert = BertModel(config, add_pooling_layer=False)
- self.cls = BertOnlyMLMHead(config)
-
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.cls.predictions.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.cls.predictions.decoder = new_embeddings
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- position_ids=None,
- head_mask=None,
- query_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- return_logits=False,
- is_decoder=False,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
- config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
- (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
- """
-
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
-
- outputs = self.bert(
- input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- head_mask=head_mask,
- query_embeds=query_embeds,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- is_decoder=is_decoder,
- )
-
- # default to the full sequence; slice off the query tokens when query_embeds is given
- sequence_output = outputs[0]
- if query_embeds is not None:
- sequence_output = outputs[0][:, query_embeds.shape[1] :, :]
- prediction_scores = self.cls(sequence_output)
-
- if return_logits:
- return prediction_scores
-
- masked_lm_loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss() # -100 index = padding token
- masked_lm_loss = loss_fct(
- prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)
- )
-
- if not return_dict:
- output = (prediction_scores,) + outputs[2:]
- return (
- ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
- )
-
- return MaskedLMOutput(
- loss=masked_lm_loss,
- logits=prediction_scores,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
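# Minimal sketch of the -100 labelling convention described in the docstring
# above: CrossEntropyLoss ignores positions labelled -100, so only the masked
# tokens contribute to the MLM loss. Values are toy placeholders.
import torch
from torch.nn import CrossEntropyLoss

scores = torch.randn(1, 3, 5)               # (batch, seq, vocab)
labels = torch.tensor([[-100, 2, -100]])    # only the middle position is scored
loss = CrossEntropyLoss()(scores.view(-1, 5), labels.view(-1))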
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/__init__.py b/spaces/Gen-Sim/Gen-Sim/cliport/__init__.py
deleted file mode 100644
index d4cedd82c66da3bb9e701277035398b9c7528b5b..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-"""Package init."""
-
-from cliport import agents
-from cliport import models
-from cliport import tasks
-from cliport.dataset import RavensDataset
-from cliport.environments.environment import Environment
diff --git a/spaces/GenerationsAI/GenAi-Pix2Pix-Video/style.css b/spaces/GenerationsAI/GenAi-Pix2Pix-Video/style.css
deleted file mode 100644
index 3cf565d3e03852436a405cf632d1d22433bb4087..0000000000000000000000000000000000000000
--- a/spaces/GenerationsAI/GenAi-Pix2Pix-Video/style.css
+++ /dev/null
@@ -1,101 +0,0 @@
-#col-container {max-width: 820px; margin-left: auto; margin-right: auto;}
-#duplicate-container{
- display: flex;
- justify-content: space-between;
- align-items: center;
- line-height: 1em;
- flex-direction: row-reverse;
- font-size:1em;
-}
-a, a:hover, a:visited {
- text-decoration-line: underline;
- font-weight: 600;
- color: #1f2937 !important;
-}
-
-.dark a, .dark a:hover, .dark a:visited {
- color: #f3f4f6 !important;
-}
-
-.footer {
- margin-bottom: 45px;
- margin-top: 10px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-
-.footer>p {
- font-size: .8rem!important;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(26px);
- background: white;
-}
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-
-div#may-like-container > p {
- font-size: .8em;
- margin-bottom: 4px;
-}
-
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-
-#share-btn-container {
- display: flex;
- padding-left: 0.5rem !important;
- padding-right: 0.5rem !important;
- background-color: #000000;
- justify-content: center;
- align-items: center;
- border-radius: 9999px !important;
- max-width: 13rem;
-}
-
-#share-btn-container:hover {
- background-color: #060606;
-}
-
-#share-btn {
- all: initial;
- color: #ffffff;
- font-weight: 600;
- cursor:pointer;
- font-family: 'IBM Plex Sans', sans-serif;
- margin-left: 0.5rem !important;
- padding-top: 0.5rem !important;
- padding-bottom: 0.5rem !important;
- right:0;
-}
-
-#share-btn * {
- all: unset;
-}
-
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-
-#share-btn-container .wrap {
- display: none !important;
-}
-
-#share-btn-container.hidden {
- display: none!important;
-}
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/coco_instance_semantic.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/coco_instance_semantic.py
deleted file mode 100644
index f7c072ec92731af85952840128f6527bc799913a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/coco_instance_semantic.py
+++ /dev/null
@@ -1,53 +0,0 @@
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='SegRescale', scale_factor=1 / 8),
- dict(type='DefaultFormatBundle'),
- dict(
- type='Collect',
- keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_train2017.json',
- img_prefix=data_root + 'train2017/',
- seg_prefix=data_root + 'stuffthingmaps/train2017/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline))
-evaluation = dict(metric=['bbox', 'segm'])
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_3x_coco.py
deleted file mode 100644
index 12a9d17e5592ade405605e3ffb2d4d2fa632d03e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_3x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = './mask_rcnn_r101_fpn_gn-all_2x_coco.py'
-
-# learning policy
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/easydict.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/easydict.py
deleted file mode 100644
index 0188f524b87eef75c175772ff262b93b47919ba7..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/easydict.py
+++ /dev/null
@@ -1,126 +0,0 @@
-'''
-From https://github.com/makinacorpus/easydict.
-'''
-
-class EasyDict(dict):
- """
- Get attributes
-
- >>> d = EasyDict({'foo':3})
- >>> d['foo']
- 3
- >>> d.foo
- 3
- >>> d.bar
- Traceback (most recent call last):
- ...
- AttributeError: 'EasyDict' object has no attribute 'bar'
-
- Works recursively
-
- >>> d = EasyDict({'foo':3, 'bar':{'x':1, 'y':2}})
- >>> isinstance(d.bar, dict)
- True
- >>> d.bar.x
- 1
-
- Bullet-proof
-
- >>> EasyDict({})
- {}
- >>> EasyDict(d={})
- {}
- >>> EasyDict(None)
- {}
- >>> d = {'a': 1}
- >>> EasyDict(**d)
- {'a': 1}
-
- Set attributes
-
- >>> d = EasyDict()
- >>> d.foo = 3
- >>> d.foo
- 3
- >>> d.bar = {'prop': 'value'}
- >>> d.bar.prop
- 'value'
- >>> d
- {'foo': 3, 'bar': {'prop': 'value'}}
- >>> d.bar.prop = 'newer'
- >>> d.bar.prop
- 'newer'
-
-
- Values extraction
-
- >>> d = EasyDict({'foo':0, 'bar':[{'x':1, 'y':2}, {'x':3, 'y':4}]})
- >>> isinstance(d.bar, list)
- True
- >>> from operator import attrgetter
- >>> map(attrgetter('x'), d.bar)
- [1, 3]
- >>> map(attrgetter('y'), d.bar)
- [2, 4]
- >>> d = EasyDict()
- >>> d.keys()
- []
- >>> d = EasyDict(foo=3, bar=dict(x=1, y=2))
- >>> d.foo
- 3
- >>> d.bar.x
- 1
-
- Still like a dict though
-
- >>> o = EasyDict({'clean':True})
- >>> o.items()
- [('clean', True)]
-
- And like a class
-
- >>> class Flower(EasyDict):
- ... power = 1
- ...
- >>> f = Flower()
- >>> f.power
- 1
- >>> f = Flower({'height': 12})
- >>> f.height
- 12
- >>> f['power']
- 1
- >>> sorted(f.keys())
- ['height', 'power']
- """
- def __init__(self, d=None, **kwargs):
- if d is None:
- d = {}
- if kwargs:
- d.update(**kwargs)
- for k, v in d.items():
- setattr(self, k, v)
- # Class attributes
- for k in self.__class__.__dict__.keys():
- if not (k.startswith('__') and k.endswith('__')):
- setattr(self, k, getattr(self, k))
-
- def __setattr__(self, name, value):
- if isinstance(value, (list, tuple)):
- value = [self.__class__(x)
- if isinstance(x, dict) else x for x in value]
- elif isinstance(value, dict) and not isinstance(value, self.__class__):
- value = self.__class__(value)
- super(EasyDict, self).__setattr__(name, value)
- super(EasyDict, self).__setitem__(name, value)
-
- __setitem__ = __setattr__
-
-def load_json(filename):
- import json
- with open(filename) as f:
- return EasyDict(json.load(f))
-
-if __name__ == "__main__":
- import doctest
- doctest.testmod()
diff --git a/spaces/Hallucinate/demo/k_diffusion/sampling.py b/spaces/Hallucinate/demo/k_diffusion/sampling.py
deleted file mode 100644
index f050f88e43bf5d0073cddbbb9f085f7137835fd1..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/k_diffusion/sampling.py
+++ /dev/null
@@ -1,607 +0,0 @@
-import math
-
-from scipy import integrate
-import torch
-from torch import nn
-from torchdiffeq import odeint
-import torchsde
-from tqdm.auto import trange, tqdm
-
-from . import utils
-
-
-def append_zero(x):
- return torch.cat([x, x.new_zeros([1])])
-
-
-def get_sigmas_karras(n, sigma_min, sigma_max, rho=7., device='cpu'):
- """Constructs the noise schedule of Karras et al. (2022)."""
- ramp = torch.linspace(0, 1, n)
- min_inv_rho = sigma_min ** (1 / rho)
- max_inv_rho = sigma_max ** (1 / rho)
- sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
- return append_zero(sigmas).to(device)
-
-
-def get_sigmas_exponential(n, sigma_min, sigma_max, device='cpu'):
- """Constructs an exponential noise schedule."""
- sigmas = torch.linspace(math.log(sigma_max), math.log(sigma_min), n, device=device).exp()
- return append_zero(sigmas)
-
-
-def get_sigmas_polyexponential(n, sigma_min, sigma_max, rho=1., device='cpu'):
- """Constructs an polynomial in log sigma noise schedule."""
- ramp = torch.linspace(1, 0, n, device=device) ** rho
- sigmas = torch.exp(ramp * (math.log(sigma_max) - math.log(sigma_min)) + math.log(sigma_min))
- return append_zero(sigmas)
-
-
-def get_sigmas_vp(n, beta_d=19.9, beta_min=0.1, eps_s=1e-3, device='cpu'):
- """Constructs a continuous VP noise schedule."""
- t = torch.linspace(1, eps_s, n, device=device)
- sigmas = torch.sqrt(torch.exp(beta_d * t ** 2 / 2 + beta_min * t) - 1)
- return append_zero(sigmas)
-
-
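# Small usage sketch for the schedules above: get_sigmas_karras returns n
# descending noise levels plus a trailing zero (from append_zero), which the
# samplers defined later in this file step through pairwise. The constant
# "denoiser" below is a stand-in assumption for a trained model, not part of
# this module.
import torch

sigmas = get_sigmas_karras(n=10, sigma_min=0.03, sigma_max=14.6)    # shape (11,)
x = torch.randn(1, 3, 8, 8) * sigmas[0]
denoiser = lambda x_t, sigma: torch.zeros_like(x_t)                 # toy model
x0 = sample_euler(denoiser, x, sigmas, disable=True)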
-def to_d(x, sigma, denoised):
- """Converts a denoiser output to a Karras ODE derivative."""
- return (x - denoised) / utils.append_dims(sigma, x.ndim)
-
-
-def get_ancestral_step(sigma_from, sigma_to, eta=1.):
- """Calculates the noise level (sigma_down) to step down to and the amount
- of noise to add (sigma_up) when doing an ancestral sampling step."""
- if not eta:
- return sigma_to, 0.
- sigma_up = min(sigma_to, eta * (sigma_to ** 2 * (sigma_from ** 2 - sigma_to ** 2) / sigma_from ** 2) ** 0.5)
- sigma_down = (sigma_to ** 2 - sigma_up ** 2) ** 0.5
- return sigma_down, sigma_up
-
-
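# Numeric sketch of get_ancestral_step above: with eta=1 the step is split so
# that sigma_down**2 + sigma_up**2 == sigma_to**2, i.e. the deterministic step
# down plus the re-injected noise land exactly on the next noise level.
sigma_from, sigma_to = 2.0, 1.0
sigma_down, sigma_up = get_ancestral_step(sigma_from, sigma_to, eta=1.0)
assert abs(sigma_down ** 2 + sigma_up ** 2 - sigma_to ** 2) < 1e-9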
-def default_noise_sampler(x):
- return lambda sigma, sigma_next: torch.randn_like(x)
-
-
-class BatchedBrownianTree:
- """A wrapper around torchsde.BrownianTree that enables batches of entropy."""
-
- def __init__(self, x, t0, t1, seed=None, **kwargs):
- t0, t1, self.sign = self.sort(t0, t1)
- w0 = kwargs.get('w0', torch.zeros_like(x))
- if seed is None:
- seed = torch.randint(0, 2 ** 63 - 1, []).item()
- self.batched = True
- try:
- assert len(seed) == x.shape[0]
- w0 = w0[0]
- except TypeError:
- seed = [seed]
- self.batched = False
- self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
-
- @staticmethod
- def sort(a, b):
- return (a, b, 1) if a < b else (b, a, -1)
-
- def __call__(self, t0, t1):
- t0, t1, sign = self.sort(t0, t1)
- w = torch.stack([tree(t0, t1) for tree in self.trees]) * (self.sign * sign)
- return w if self.batched else w[0]
-
-
-class BrownianTreeNoiseSampler:
- """A noise sampler backed by a torchsde.BrownianTree.
-
- Args:
- x (Tensor): The tensor whose shape, device and dtype to use to generate
- random samples.
- sigma_min (float): The low end of the valid interval.
- sigma_max (float): The high end of the valid interval.
- seed (int or List[int]): The random seed. If a list of seeds is
- supplied instead of a single integer, then the noise sampler will
- use one BrownianTree per batch item, each with its own seed.
- transform (callable): A function that maps sigma to the sampler's
- internal timestep.
- """
-
- def __init__(self, x, sigma_min, sigma_max, seed=None, transform=lambda x: x):
- self.transform = transform
- t0, t1 = self.transform(torch.as_tensor(sigma_min)), self.transform(torch.as_tensor(sigma_max))
- self.tree = BatchedBrownianTree(x, t0, t1, seed)
-
- def __call__(self, sigma, sigma_next):
- t0, t1 = self.transform(torch.as_tensor(sigma)), self.transform(torch.as_tensor(sigma_next))
- return self.tree(t0, t1) / (t1 - t0).abs().sqrt()
-
-
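# Usage sketch for BrownianTreeNoiseSampler above: unlike default_noise_sampler,
# the noise drawn between two sigmas is deterministic given the seed, so a run
# can be repeated and see consistent noise. The sigma values are arbitrary
# placeholders.
import torch

x = torch.zeros(2, 3, 8, 8)
ns = BrownianTreeNoiseSampler(x, sigma_min=0.03, sigma_max=14.6, seed=0)
noise = ns(torch.tensor(2.0), torch.tensor(1.0))   # same shape as x, repeatable for this seed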
-@torch.no_grad()
-def sample_euler(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
- """Implements Algorithm 2 (Euler steps) from Karras et al. (2022)."""
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- for i in trange(len(sigmas) - 1, disable=disable):
- gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
- eps = torch.randn_like(x) * s_noise
- sigma_hat = sigmas[i] * (gamma + 1)
- if gamma > 0:
- x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
- denoised = model(x, sigma_hat * s_in, **extra_args)
- d = to_d(x, sigma_hat, denoised)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
- dt = sigmas[i + 1] - sigma_hat
- # Euler method
- x = x + d * dt
- return x
-
-
-@torch.no_grad()
-def sample_euler_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
- """Ancestral sampling with Euler method steps."""
- extra_args = {} if extra_args is None else extra_args
- noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
- s_in = x.new_ones([x.shape[0]])
- for i in trange(len(sigmas) - 1, disable=disable):
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
- d = to_d(x, sigmas[i], denoised)
- # Euler method
- dt = sigma_down - sigmas[i]
- x = x + d * dt
- if sigmas[i + 1] > 0:
- x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
- return x
-
-
-@torch.no_grad()
-def sample_heun(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
- """Implements Algorithm 2 (Heun steps) from Karras et al. (2022)."""
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- for i in trange(len(sigmas) - 1, disable=disable):
- gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
- eps = torch.randn_like(x) * s_noise
- sigma_hat = sigmas[i] * (gamma + 1)
- if gamma > 0:
- x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
- denoised = model(x, sigma_hat * s_in, **extra_args)
- d = to_d(x, sigma_hat, denoised)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
- dt = sigmas[i + 1] - sigma_hat
- if sigmas[i + 1] == 0:
- # Euler method
- x = x + d * dt
- else:
- # Heun's method
- x_2 = x + d * dt
- denoised_2 = model(x_2, sigmas[i + 1] * s_in, **extra_args)
- d_2 = to_d(x_2, sigmas[i + 1], denoised_2)
- d_prime = (d + d_2) / 2
- x = x + d_prime * dt
- return x
-
-
-@torch.no_grad()
-def sample_dpm_2(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
- """A sampler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022)."""
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- for i in trange(len(sigmas) - 1, disable=disable):
- gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
- eps = torch.randn_like(x) * s_noise
- sigma_hat = sigmas[i] * (gamma + 1)
- if gamma > 0:
- x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
- denoised = model(x, sigma_hat * s_in, **extra_args)
- d = to_d(x, sigma_hat, denoised)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
- if sigmas[i + 1] == 0:
- # Euler method
- dt = sigmas[i + 1] - sigma_hat
- x = x + d * dt
- else:
- # DPM-Solver-2
- sigma_mid = sigma_hat.log().lerp(sigmas[i + 1].log(), 0.5).exp()
- dt_1 = sigma_mid - sigma_hat
- dt_2 = sigmas[i + 1] - sigma_hat
- x_2 = x + d * dt_1
- denoised_2 = model(x_2, sigma_mid * s_in, **extra_args)
- d_2 = to_d(x_2, sigma_mid, denoised_2)
- x = x + d_2 * dt_2
- return x
-
-
-@torch.no_grad()
-def sample_dpm_2_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
- """Ancestral sampling with DPM-Solver second-order steps."""
- extra_args = {} if extra_args is None else extra_args
- noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
- s_in = x.new_ones([x.shape[0]])
- for i in trange(len(sigmas) - 1, disable=disable):
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
- d = to_d(x, sigmas[i], denoised)
- if sigma_down == 0:
- # Euler method
- dt = sigma_down - sigmas[i]
- x = x + d * dt
- else:
- # DPM-Solver-2
- sigma_mid = sigmas[i].log().lerp(sigma_down.log(), 0.5).exp()
- dt_1 = sigma_mid - sigmas[i]
- dt_2 = sigma_down - sigmas[i]
- x_2 = x + d * dt_1
- denoised_2 = model(x_2, sigma_mid * s_in, **extra_args)
- d_2 = to_d(x_2, sigma_mid, denoised_2)
- x = x + d_2 * dt_2
- x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
- return x
-
-
-def linear_multistep_coeff(order, t, i, j):
- if order - 1 > i:
- raise ValueError(f'Order {order} too high for step {i}')
- def fn(tau):
- prod = 1.
- for k in range(order):
- if j == k:
- continue
- prod *= (tau - t[i - k]) / (t[i - j] - t[i - k])
- return prod
- return integrate.quad(fn, t[i], t[i + 1], epsrel=1e-4)[0]
-
-
-@torch.no_grad()
-def sample_lms(model, x, sigmas, extra_args=None, callback=None, disable=None, order=4):
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- sigmas_cpu = sigmas.detach().cpu().numpy()
- ds = []
- for i in trange(len(sigmas) - 1, disable=disable):
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- d = to_d(x, sigmas[i], denoised)
- ds.append(d)
- if len(ds) > order:
- ds.pop(0)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
- cur_order = min(i + 1, order)
- coeffs = [linear_multistep_coeff(cur_order, sigmas_cpu, i, j) for j in range(cur_order)]
- x = x + sum(coeff * d for coeff, d in zip(coeffs, reversed(ds)))
- return x
-
-
-@torch.no_grad()
-def log_likelihood(model, x, sigma_min, sigma_max, extra_args=None, atol=1e-4, rtol=1e-4):
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- v = torch.randint_like(x, 2) * 2 - 1
- fevals = 0
- def ode_fn(sigma, x):
- nonlocal fevals
- with torch.enable_grad():
- x = x[0].detach().requires_grad_()
- denoised = model(x, sigma * s_in, **extra_args)
- d = to_d(x, sigma, denoised)
- fevals += 1
- grad = torch.autograd.grad((d * v).sum(), x)[0]
- d_ll = (v * grad).flatten(1).sum(1)
- return d.detach(), d_ll
- x_min = x, x.new_zeros([x.shape[0]])
- t = x.new_tensor([sigma_min, sigma_max])
- sol = odeint(ode_fn, x_min, t, atol=atol, rtol=rtol, method='dopri5')
- latent, delta_ll = sol[0][-1], sol[1][-1]
- ll_prior = torch.distributions.Normal(0, sigma_max).log_prob(latent).flatten(1).sum(1)
- return ll_prior + delta_ll, {'fevals': fevals}
-
-
-class PIDStepSizeController:
- """A PID controller for ODE adaptive step size control."""
- def __init__(self, h, pcoeff, icoeff, dcoeff, order=1, accept_safety=0.81, eps=1e-8):
- self.h = h
- self.b1 = (pcoeff + icoeff + dcoeff) / order
- self.b2 = -(pcoeff + 2 * dcoeff) / order
- self.b3 = dcoeff / order
- self.accept_safety = accept_safety
- self.eps = eps
- self.errs = []
-
- def limiter(self, x):
- return 1 + math.atan(x - 1)
-
- def propose_step(self, error):
- inv_error = 1 / (float(error) + self.eps)
- if not self.errs:
- self.errs = [inv_error, inv_error, inv_error]
- self.errs[0] = inv_error
- factor = self.errs[0] ** self.b1 * self.errs[1] ** self.b2 * self.errs[2] ** self.b3
- factor = self.limiter(factor)
- accept = factor >= self.accept_safety
- if accept:
- self.errs[2] = self.errs[1]
- self.errs[1] = self.errs[0]
- self.h *= factor
- return accept
-
-
-class DPMSolver(nn.Module):
- """DPM-Solver. See https://arxiv.org/abs/2206.00927."""
-
- def __init__(self, model, extra_args=None, eps_callback=None, info_callback=None):
- super().__init__()
- self.model = model
- self.extra_args = {} if extra_args is None else extra_args
- self.eps_callback = eps_callback
- self.info_callback = info_callback
-
- def t(self, sigma):
- return -sigma.log()
-
- def sigma(self, t):
- return t.neg().exp()
-
- def eps(self, eps_cache, key, x, t, *args, **kwargs):
- if key in eps_cache:
- return eps_cache[key], eps_cache
- sigma = self.sigma(t) * x.new_ones([x.shape[0]])
- eps = (x - self.model(x, sigma, *args, **self.extra_args, **kwargs)) / self.sigma(t)
- if self.eps_callback is not None:
- self.eps_callback()
- return eps, {key: eps, **eps_cache}
-
- def dpm_solver_1_step(self, x, t, t_next, eps_cache=None):
- eps_cache = {} if eps_cache is None else eps_cache
- h = t_next - t
- eps, eps_cache = self.eps(eps_cache, 'eps', x, t)
- x_1 = x - self.sigma(t_next) * h.expm1() * eps
- return x_1, eps_cache
-
- def dpm_solver_2_step(self, x, t, t_next, r1=1 / 2, eps_cache=None):
- eps_cache = {} if eps_cache is None else eps_cache
- h = t_next - t
- eps, eps_cache = self.eps(eps_cache, 'eps', x, t)
- s1 = t + r1 * h
- u1 = x - self.sigma(s1) * (r1 * h).expm1() * eps
- eps_r1, eps_cache = self.eps(eps_cache, 'eps_r1', u1, s1)
- x_2 = x - self.sigma(t_next) * h.expm1() * eps - self.sigma(t_next) / (2 * r1) * h.expm1() * (eps_r1 - eps)
- return x_2, eps_cache
-
- def dpm_solver_3_step(self, x, t, t_next, r1=1 / 3, r2=2 / 3, eps_cache=None):
- eps_cache = {} if eps_cache is None else eps_cache
- h = t_next - t
- eps, eps_cache = self.eps(eps_cache, 'eps', x, t)
- s1 = t + r1 * h
- s2 = t + r2 * h
- u1 = x - self.sigma(s1) * (r1 * h).expm1() * eps
- eps_r1, eps_cache = self.eps(eps_cache, 'eps_r1', u1, s1)
- u2 = x - self.sigma(s2) * (r2 * h).expm1() * eps - self.sigma(s2) * (r2 / r1) * ((r2 * h).expm1() / (r2 * h) - 1) * (eps_r1 - eps)
- eps_r2, eps_cache = self.eps(eps_cache, 'eps_r2', u2, s2)
- x_3 = x - self.sigma(t_next) * h.expm1() * eps - self.sigma(t_next) / r2 * (h.expm1() / h - 1) * (eps_r2 - eps)
- return x_3, eps_cache
-
- def dpm_solver_fast(self, x, t_start, t_end, nfe, eta=0., s_noise=1., noise_sampler=None):
- noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
- if not t_end > t_start and eta:
- raise ValueError('eta must be 0 for reverse sampling')
-
- m = math.floor(nfe / 3) + 1
- ts = torch.linspace(t_start, t_end, m + 1, device=x.device)
-
- if nfe % 3 == 0:
- orders = [3] * (m - 2) + [2, 1]
- else:
- orders = [3] * (m - 1) + [nfe % 3]
-
- for i in range(len(orders)):
- eps_cache = {}
- t, t_next = ts[i], ts[i + 1]
- if eta:
- sd, su = get_ancestral_step(self.sigma(t), self.sigma(t_next), eta)
- t_next_ = torch.minimum(t_end, self.t(sd))
- su = (self.sigma(t_next) ** 2 - self.sigma(t_next_) ** 2) ** 0.5
- else:
- t_next_, su = t_next, 0.
-
- eps, eps_cache = self.eps(eps_cache, 'eps', x, t)
- denoised = x - self.sigma(t) * eps
- if self.info_callback is not None:
- self.info_callback({'x': x, 'i': i, 't': ts[i], 't_up': t, 'denoised': denoised})
-
- if orders[i] == 1:
- x, eps_cache = self.dpm_solver_1_step(x, t, t_next_, eps_cache=eps_cache)
- elif orders[i] == 2:
- x, eps_cache = self.dpm_solver_2_step(x, t, t_next_, eps_cache=eps_cache)
- else:
- x, eps_cache = self.dpm_solver_3_step(x, t, t_next_, eps_cache=eps_cache)
-
- x = x + su * s_noise * noise_sampler(self.sigma(t), self.sigma(t_next))
-
- return x
-
- def dpm_solver_adaptive(self, x, t_start, t_end, order=3, rtol=0.05, atol=0.0078, h_init=0.05, pcoeff=0., icoeff=1., dcoeff=0., accept_safety=0.81, eta=0., s_noise=1., noise_sampler=None):
- noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
- if order not in {2, 3}:
- raise ValueError('order should be 2 or 3')
- forward = t_end > t_start
- if not forward and eta:
- raise ValueError('eta must be 0 for reverse sampling')
- h_init = abs(h_init) * (1 if forward else -1)
- atol = torch.tensor(atol)
- rtol = torch.tensor(rtol)
- s = t_start
- x_prev = x
- accept = True
- pid = PIDStepSizeController(h_init, pcoeff, icoeff, dcoeff, 1.5 if eta else order, accept_safety)
- info = {'steps': 0, 'nfe': 0, 'n_accept': 0, 'n_reject': 0}
-
- while s < t_end - 1e-5 if forward else s > t_end + 1e-5:
- eps_cache = {}
- t = torch.minimum(t_end, s + pid.h) if forward else torch.maximum(t_end, s + pid.h)
- if eta:
- sd, su = get_ancestral_step(self.sigma(s), self.sigma(t), eta)
- t_ = torch.minimum(t_end, self.t(sd))
- su = (self.sigma(t) ** 2 - self.sigma(t_) ** 2) ** 0.5
- else:
- t_, su = t, 0.
-
- eps, eps_cache = self.eps(eps_cache, 'eps', x, s)
- denoised = x - self.sigma(s) * eps
-
- if order == 2:
- x_low, eps_cache = self.dpm_solver_1_step(x, s, t_, eps_cache=eps_cache)
- x_high, eps_cache = self.dpm_solver_2_step(x, s, t_, eps_cache=eps_cache)
- else:
- x_low, eps_cache = self.dpm_solver_2_step(x, s, t_, r1=1 / 3, eps_cache=eps_cache)
- x_high, eps_cache = self.dpm_solver_3_step(x, s, t_, eps_cache=eps_cache)
- delta = torch.maximum(atol, rtol * torch.maximum(x_low.abs(), x_prev.abs()))
- error = torch.linalg.norm((x_low - x_high) / delta) / x.numel() ** 0.5
- accept = pid.propose_step(error)
- if accept:
- x_prev = x_low
- x = x_high + su * s_noise * noise_sampler(self.sigma(s), self.sigma(t))
- s = t
- info['n_accept'] += 1
- else:
- info['n_reject'] += 1
- info['nfe'] += order
- info['steps'] += 1
-
- if self.info_callback is not None:
- self.info_callback({'x': x, 'i': info['steps'] - 1, 't': s, 't_up': s, 'denoised': denoised, 'error': error, 'h': pid.h, **info})
-
- return x, info
-
-
-@torch.no_grad()
-def sample_dpm_fast(model, x, sigma_min, sigma_max, n, extra_args=None, callback=None, disable=None, eta=0., s_noise=1., noise_sampler=None):
- """DPM-Solver-Fast (fixed step size). See https://arxiv.org/abs/2206.00927."""
- if sigma_min <= 0 or sigma_max <= 0:
- raise ValueError('sigma_min and sigma_max must not be 0')
- with tqdm(total=n, disable=disable) as pbar:
- dpm_solver = DPMSolver(model, extra_args, eps_callback=pbar.update)
- if callback is not None:
- dpm_solver.info_callback = lambda info: callback({'sigma': dpm_solver.sigma(info['t']), 'sigma_hat': dpm_solver.sigma(info['t_up']), **info})
- return dpm_solver.dpm_solver_fast(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), n, eta, s_noise, noise_sampler)
-
-
-@torch.no_grad()
-def sample_dpm_adaptive(model, x, sigma_min, sigma_max, extra_args=None, callback=None, disable=None, order=3, rtol=0.05, atol=0.0078, h_init=0.05, pcoeff=0., icoeff=1., dcoeff=0., accept_safety=0.81, eta=0., s_noise=1., noise_sampler=None, return_info=False):
- """DPM-Solver-12 and 23 (adaptive step size). See https://arxiv.org/abs/2206.00927."""
- if sigma_min <= 0 or sigma_max <= 0:
- raise ValueError('sigma_min and sigma_max must not be 0')
- with tqdm(disable=disable) as pbar:
- dpm_solver = DPMSolver(model, extra_args, eps_callback=pbar.update)
- if callback is not None:
- dpm_solver.info_callback = lambda info: callback({'sigma': dpm_solver.sigma(info['t']), 'sigma_hat': dpm_solver.sigma(info['t_up']), **info})
- x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
- if return_info:
- return x, info
- return x
-
-
-@torch.no_grad()
-def sample_dpmpp_2s_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
- """Ancestral sampling with DPM-Solver++(2S) second-order steps."""
- extra_args = {} if extra_args is None else extra_args
- noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
- s_in = x.new_ones([x.shape[0]])
- sigma_fn = lambda t: t.neg().exp()
- t_fn = lambda sigma: sigma.log().neg()
-
- for i in trange(len(sigmas) - 1, disable=disable):
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
- if sigma_down == 0:
- # Euler method
- d = to_d(x, sigmas[i], denoised)
- dt = sigma_down - sigmas[i]
- x = x + d * dt
- else:
- # DPM-Solver++(2S)
- t, t_next = t_fn(sigmas[i]), t_fn(sigma_down)
- r = 1 / 2
- h = t_next - t
- s = t + r * h
- x_2 = (sigma_fn(s) / sigma_fn(t)) * x - (-h * r).expm1() * denoised
- denoised_2 = model(x_2, sigma_fn(s) * s_in, **extra_args)
- x = (sigma_fn(t_next) / sigma_fn(t)) * x - (-h).expm1() * denoised_2
- # Noise addition
- if sigmas[i + 1] > 0:
- x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
- return x
-
-
-@torch.no_grad()
-def sample_dpmpp_sde(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None, r=1 / 2):
- """DPM-Solver++ (stochastic)."""
- sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
- noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max) if noise_sampler is None else noise_sampler
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- sigma_fn = lambda t: t.neg().exp()
- t_fn = lambda sigma: sigma.log().neg()
-
- for i in trange(len(sigmas) - 1, disable=disable):
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
- if sigmas[i + 1] == 0:
- # Euler method
- d = to_d(x, sigmas[i], denoised)
- dt = sigmas[i + 1] - sigmas[i]
- x = x + d * dt
- else:
- # DPM-Solver++
- t, t_next = t_fn(sigmas[i]), t_fn(sigmas[i + 1])
- h = t_next - t
- s = t + h * r
- fac = 1 / (2 * r)
-
- # Step 1
- sd, su = get_ancestral_step(sigma_fn(t), sigma_fn(s), eta)
- s_ = t_fn(sd)
- x_2 = (sigma_fn(s_) / sigma_fn(t)) * x - (t - s_).expm1() * denoised
- x_2 = x_2 + noise_sampler(sigma_fn(t), sigma_fn(s)) * s_noise * su
- denoised_2 = model(x_2, sigma_fn(s) * s_in, **extra_args)
-
- # Step 2
- sd, su = get_ancestral_step(sigma_fn(t), sigma_fn(t_next), eta)
- t_next_ = t_fn(sd)
- denoised_d = (1 - fac) * denoised + fac * denoised_2
- x = (sigma_fn(t_next_) / sigma_fn(t)) * x - (t - t_next_).expm1() * denoised_d
- x = x + noise_sampler(sigma_fn(t), sigma_fn(t_next)) * s_noise * su
- return x
-
-
-@torch.no_grad()
-def sample_dpmpp_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):
- """DPM-Solver++(2M)."""
- extra_args = {} if extra_args is None else extra_args
- s_in = x.new_ones([x.shape[0]])
- sigma_fn = lambda t: t.neg().exp()
- t_fn = lambda sigma: sigma.log().neg()
- old_denoised = None
-
- for i in trange(len(sigmas) - 1, disable=disable):
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- if callback is not None:
- callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
- t, t_next = t_fn(sigmas[i]), t_fn(sigmas[i + 1])
- h = t_next - t
- if old_denoised is None or sigmas[i + 1] == 0:
- x = (sigma_fn(t_next) / sigma_fn(t)) * x - (-h).expm1() * denoised
- else:
- h_last = t - t_fn(sigmas[i - 1])
- r = h_last / h
- denoised_d = (1 + 1 / (2 * r)) * denoised - (1 / (2 * r)) * old_denoised
- x = (sigma_fn(t_next) / sigma_fn(t)) * x - (-h).expm1() * denoised_d
- old_denoised = denoised
- return x
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/noisychannel/rerank_generate.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/noisychannel/rerank_generate.py
deleted file mode 100644
index daeeae059a677a9fcd7c370be087f1f5c189bc52..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/noisychannel/rerank_generate.py
+++ /dev/null
@@ -1,397 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Generate n-best translations using a trained model.
-"""
-
-import os
-import subprocess
-from contextlib import redirect_stdout
-
-from fairseq import options
-from fairseq_cli import generate, preprocess
-
-from examples.noisychannel import rerank_options, rerank_utils
-
-
-def gen_and_reprocess_nbest(args):
- if args.score_dict_dir is None:
- args.score_dict_dir = args.data
- if args.prefix_len is not None:
- assert (
- args.right_to_left1 is False
- ), "prefix length not compatible with right to left models"
- assert (
- args.right_to_left2 is False
- ), "prefix length not compatible with right to left models"
-
- if args.nbest_list is not None:
- assert args.score_model2 is None
-
- if args.backwards1:
- scorer1_src = args.target_lang
- scorer1_tgt = args.source_lang
- else:
- scorer1_src = args.source_lang
- scorer1_tgt = args.target_lang
-
- store_data = (
- os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name
- )
- if not os.path.exists(store_data):
- os.makedirs(store_data)
-
- (
- pre_gen,
- left_to_right_preprocessed_dir,
- right_to_left_preprocessed_dir,
- backwards_preprocessed_dir,
- lm_preprocessed_dir,
- ) = rerank_utils.get_directories(
- args.data_dir_name,
- args.num_rescore,
- args.gen_subset,
- args.gen_model_name,
- args.shard_id,
- args.num_shards,
- args.sampling,
- args.prefix_len,
- args.target_prefix_frac,
- args.source_prefix_frac,
- )
- assert not (
- args.right_to_left1 and args.backwards1
- ), "backwards right to left not supported"
- assert not (
- args.right_to_left2 and args.backwards2
- ), "backwards right to left not supported"
- assert not (
- args.prefix_len is not None and args.target_prefix_frac is not None
- ), "target prefix frac and target prefix len incompatible"
-
- # make directory to store generation results
- if not os.path.exists(pre_gen):
- os.makedirs(pre_gen)
-
- rerank1_is_gen = (
- args.gen_model == args.score_model1 and args.source_prefix_frac is None
- )
- rerank2_is_gen = (
- args.gen_model == args.score_model2 and args.source_prefix_frac is None
- )
-
- if args.nbest_list is not None:
- rerank2_is_gen = True
-
- # make directories to store preprocessed nbest list for reranking
- if not os.path.exists(left_to_right_preprocessed_dir):
- os.makedirs(left_to_right_preprocessed_dir)
- if not os.path.exists(right_to_left_preprocessed_dir):
- os.makedirs(right_to_left_preprocessed_dir)
- if not os.path.exists(lm_preprocessed_dir):
- os.makedirs(lm_preprocessed_dir)
- if not os.path.exists(backwards_preprocessed_dir):
- os.makedirs(backwards_preprocessed_dir)
-
- score1_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model1_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards1,
- )
- if args.score_model2 is not None:
- score2_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model2_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards2,
- )
-
- predictions_bpe_file = pre_gen + "/generate_output_bpe.txt"
-
- using_nbest = args.nbest_list is not None
-
- if using_nbest:
- print("Using predefined n-best list from interactive.py")
- predictions_bpe_file = args.nbest_list
-
- else:
- if not os.path.isfile(predictions_bpe_file):
- print("STEP 1: generate predictions using the p(T|S) model with bpe")
- print(args.data)
- param1 = [
- args.data,
- "--path",
- args.gen_model,
- "--shard-id",
- str(args.shard_id),
- "--num-shards",
- str(args.num_shards),
- "--nbest",
- str(args.num_rescore),
- "--batch-size",
- str(args.batch_size),
- "--beam",
- str(args.num_rescore),
- "--batch-size",
- str(args.num_rescore),
- "--gen-subset",
- args.gen_subset,
- "--source-lang",
- args.source_lang,
- "--target-lang",
- args.target_lang,
- ]
- if args.sampling:
- param1 += ["--sampling"]
-
- gen_parser = options.get_generation_parser()
- input_args = options.parse_args_and_arch(gen_parser, param1)
-
- print(input_args)
- with open(predictions_bpe_file, "w") as f:
- with redirect_stdout(f):
- generate.main(input_args)
-
- gen_output = rerank_utils.BitextOutputFromGen(
- predictions_bpe_file,
- bpe_symbol=args.post_process,
- nbest=using_nbest,
- prefix_len=args.prefix_len,
- target_prefix_frac=args.target_prefix_frac,
- )
-
- if args.diff_bpe:
- rerank_utils.write_reprocessed(
- gen_output.no_bpe_source,
- gen_output.no_bpe_hypo,
- gen_output.no_bpe_target,
- pre_gen + "/source_gen_bpe." + args.source_lang,
- pre_gen + "/target_gen_bpe." + args.target_lang,
- pre_gen + "/reference_gen_bpe." + args.target_lang,
- )
- bitext_bpe = args.rescore_bpe_code
- bpe_src_param = [
- "-c",
- bitext_bpe,
- "--input",
- pre_gen + "/source_gen_bpe." + args.source_lang,
- "--output",
- pre_gen + "/rescore_data." + args.source_lang,
- ]
- bpe_tgt_param = [
- "-c",
- bitext_bpe,
- "--input",
- pre_gen + "/target_gen_bpe." + args.target_lang,
- "--output",
- pre_gen + "/rescore_data." + args.target_lang,
- ]
-
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_src_param,
- shell=False,
- )
-
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_tgt_param,
- shell=False,
- )
-
- if (not os.path.isfile(score1_file) and not rerank1_is_gen) or (
- args.score_model2 is not None
- and not os.path.isfile(score2_file)
- and not rerank2_is_gen
- ):
- print(
- "STEP 2: process the output of generate.py so we have clean text files with the translations"
- )
-
- rescore_file = "/rescore_data"
- if args.prefix_len is not None:
- prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len)
- if args.target_prefix_frac is not None:
- target_prefix_frac_rescore_file = (
- rescore_file + "target_prefix_frac" + str(args.target_prefix_frac)
- )
- if args.source_prefix_frac is not None:
- source_prefix_frac_rescore_file = (
- rescore_file + "source_prefix_frac" + str(args.source_prefix_frac)
- )
-
- if not args.right_to_left1 or not args.right_to_left2:
- if not args.diff_bpe:
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + rescore_file + "." + args.source_lang,
- pre_gen + rescore_file + "." + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- )
- if args.prefix_len is not None:
- bw_rescore_file = prefix_len_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + prefix_len_rescore_file + "." + args.source_lang,
- pre_gen + prefix_len_rescore_file + "." + args.target_lang,
- pre_gen + "/reference_file",
- prefix_len=args.prefix_len,
- bpe_symbol=args.post_process,
- )
- elif args.target_prefix_frac is not None:
- bw_rescore_file = target_prefix_frac_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen
- + target_prefix_frac_rescore_file
- + "."
- + args.source_lang,
- pre_gen
- + target_prefix_frac_rescore_file
- + "."
- + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- target_prefix_frac=args.target_prefix_frac,
- )
- else:
- bw_rescore_file = rescore_file
-
- if args.source_prefix_frac is not None:
- fw_rescore_file = source_prefix_frac_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen
- + source_prefix_frac_rescore_file
- + "."
- + args.source_lang,
- pre_gen
- + source_prefix_frac_rescore_file
- + "."
- + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- source_prefix_frac=args.source_prefix_frac,
- )
- else:
- fw_rescore_file = rescore_file
-
- if args.right_to_left1 or args.right_to_left2:
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + "/right_to_left_rescore_data." + args.source_lang,
- pre_gen + "/right_to_left_rescore_data." + args.target_lang,
- pre_gen + "/right_to_left_reference_file",
- right_to_left=True,
- bpe_symbol=args.post_process,
- )
-
- print("STEP 3: binarize the translations")
- if (
- not args.right_to_left1
- or args.score_model2 is not None
- and not args.right_to_left2
- or not rerank1_is_gen
- ):
-
- if args.backwards1 or args.backwards2:
- if args.backwards_score_dict_dir is not None:
- bw_dict = args.backwards_score_dict_dir
- else:
- bw_dict = args.score_dict_dir
- bw_preprocess_param = [
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- "--trainpref",
- pre_gen + bw_rescore_file,
- "--srcdict",
- bw_dict + "/dict." + scorer1_src + ".txt",
- "--tgtdict",
- bw_dict + "/dict." + scorer1_tgt + ".txt",
- "--destdir",
- backwards_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(bw_preprocess_param)
- preprocess.main(input_args)
-
- preprocess_param = [
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- "--trainpref",
- pre_gen + fw_rescore_file,
- "--srcdict",
- args.score_dict_dir + "/dict." + scorer1_src + ".txt",
- "--tgtdict",
- args.score_dict_dir + "/dict." + scorer1_tgt + ".txt",
- "--destdir",
- left_to_right_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_param)
- preprocess.main(input_args)
-
- if args.right_to_left1 or args.right_to_left2:
- preprocess_param = [
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- "--trainpref",
- pre_gen + "/right_to_left_rescore_data",
- "--srcdict",
- args.score_dict_dir + "/dict." + scorer1_src + ".txt",
- "--tgtdict",
- args.score_dict_dir + "/dict." + scorer1_tgt + ".txt",
- "--destdir",
- right_to_left_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_param)
- preprocess.main(input_args)
-
- return gen_output
-
-
-def cli_main():
- parser = rerank_options.get_reranking_parser()
- args = options.parse_args_and_arch(parser)
- gen_and_reprocess_nbest(args)
-
-
-if __name__ == "__main__":
- cli_main()
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/speech_to_text_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/speech_to_text_dataset.py
deleted file mode 100644
index 164bf413e4fd41b895348c9ef0bb57421843eb17..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/speech_to_text_dataset.py
+++ /dev/null
@@ -1,525 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import csv
-import io
-import logging
-import re
-from collections import defaultdict
-from pathlib import Path
-from typing import Dict, List, Optional
-from dataclasses import dataclass
-
-import numpy as np
-import torch
-from fairseq.data import (
- ConcatDataset,
- Dictionary,
- FairseqDataset,
- ResamplingDataset,
- data_utils as fairseq_data_utils,
-)
-from fairseq.data.audio.audio_utils import (
- get_fbank,
- get_waveform,
- read_from_stored_zip,
- is_npy_data,
- is_sf_audio_data,
- parse_path,
- FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS,
-)
-from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform
-from fairseq.data.audio.data_cfg import S2TDataConfig
-
-
-logger = logging.getLogger(__name__)
-
-
-def get_features_from_npy_or_audio(path):
- ext = Path(path).suffix
- if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS:
- raise ValueError(f'Unsupported file format for "{path}"')
- return np.load(path) if ext == ".npy" else get_fbank(path)
-
-
-def get_features_or_waveform_from_stored_zip(
- path, byte_offset, byte_size, need_waveform=False, use_sample_rate=None,
-):
- assert path.endswith(".zip")
- data = read_from_stored_zip(path, byte_offset, byte_size)
- f = io.BytesIO(data)
- if is_npy_data(data):
- features_or_waveform = np.load(f)
- elif is_sf_audio_data(data):
- features_or_waveform = \
- get_waveform(
- f, always_2d=False, output_sample_rate=use_sample_rate
- )[0] if need_waveform else get_fbank(f)
- else:
- raise ValueError(f'Unknown file format for "{path}"')
- return features_or_waveform
-
-
-def get_features_or_waveform(
- path: str, need_waveform=False, use_sample_rate=None
-):
- """Get speech features from .npy file or waveform from .wav/.flac file.
- The file may be inside an uncompressed ZIP file and is accessed via byte
- offset and length.
-
- Args:
- path (str): File path in the format of "<.npy/.wav/.flac path>" or
- "::".
- need_waveform (bool): return waveform instead of features.
- use_sample_rate (int): change sample rate for the input wave file
-
- Returns:
- features_or_waveform (numpy.ndarray): speech features or waveform.
- """
- _path, slice_ptr = parse_path(path)
- if len(slice_ptr) == 0:
- if need_waveform:
- return get_waveform(
- _path, always_2d=False, output_sample_rate=use_sample_rate
- )[0]
- return get_features_from_npy_or_audio(_path)
- elif len(slice_ptr) == 2:
- features_or_waveform = get_features_or_waveform_from_stored_zip(
- _path, slice_ptr[0], slice_ptr[1], need_waveform=need_waveform,
- use_sample_rate=use_sample_rate
- )
- else:
- raise ValueError(f"Invalid path: {path}")
-
- return features_or_waveform
-
-
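# Illustrative usage sketch (not part of the original file): load features from
# a plain .npy path; the zipped form "archive.zip:<byte offset>:<byte size>"
# described in the docstring above is resolved the same way via parse_path.
import os
import tempfile

_npy = os.path.join(tempfile.mkdtemp(), "utt1.npy")
np.save(_npy, np.zeros((5, 80), dtype=np.float32))
assert get_features_or_waveform(_npy).shape == (5, 80)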
-def _collate_frames(
- frames: List[torch.Tensor], is_audio_input: bool = False
-) -> torch.Tensor:
- """
- Convert a list of 2D frames into a padded 3D tensor
- Args:
- frames (list): list of 2D frames of size L[i]*f_dim. Where L[i] is
- length of i-th frame and f_dim is static dimension of features
- Returns:
- 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i]
- """
- max_len = max(frame.size(0) for frame in frames)
- if is_audio_input:
- out = frames[0].new_zeros((len(frames), max_len))
- else:
- out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1)))
- for i, v in enumerate(frames):
- out[i, : v.size(0)] = v
- return out
-
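# Illustrative sketch (not part of the original file): two feature matrices of
# lengths 3 and 2 with f_dim = 4 are padded into a (2, 3, 4) batch; the shorter
# item is zero-padded along the time axis.
_a, _b = torch.ones(3, 4), torch.ones(2, 4)
_batch = _collate_frames([_a, _b])
assert _batch.shape == (2, 3, 4)
assert _batch[1, 2].sum() == 0  # padding row of the shorter item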
-
-@dataclass
-class SpeechToTextDatasetItem(object):
- index: int
- source: torch.Tensor
- target: Optional[torch.Tensor] = None
- speaker_id: Optional[int] = None
-
-
-class SpeechToTextDataset(FairseqDataset):
- LANG_TAG_TEMPLATE = "<lang:{}>"
-
- def __init__(
- self,
- split: str,
- is_train_split: bool,
- cfg: S2TDataConfig,
- audio_paths: List[str],
- n_frames: List[int],
- src_texts: Optional[List[str]] = None,
- tgt_texts: Optional[List[str]] = None,
- speakers: Optional[List[str]] = None,
- src_langs: Optional[List[str]] = None,
- tgt_langs: Optional[List[str]] = None,
- ids: Optional[List[str]] = None,
- tgt_dict: Optional[Dictionary] = None,
- pre_tokenizer=None,
- bpe_tokenizer=None,
- n_frames_per_step=1,
- speaker_to_id=None
- ):
- self.split, self.is_train_split = split, is_train_split
- self.cfg = cfg
- self.audio_paths, self.n_frames = audio_paths, n_frames
- self.n_samples = len(audio_paths)
- assert len(n_frames) == self.n_samples > 0
- assert src_texts is None or len(src_texts) == self.n_samples
- assert tgt_texts is None or len(tgt_texts) == self.n_samples
- assert speakers is None or len(speakers) == self.n_samples
- assert src_langs is None or len(src_langs) == self.n_samples
- assert tgt_langs is None or len(tgt_langs) == self.n_samples
- assert ids is None or len(ids) == self.n_samples
- assert (tgt_dict is None and tgt_texts is None) or (
- tgt_dict is not None and tgt_texts is not None
- )
- self.src_texts, self.tgt_texts = src_texts, tgt_texts
- self.src_langs, self.tgt_langs = src_langs, tgt_langs
- self.speakers = speakers
- self.tgt_dict = tgt_dict
- self.check_tgt_lang_tag()
- self.ids = ids
- self.shuffle = cfg.shuffle if is_train_split else False
-
- self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict(
- self.cfg.get_feature_transforms(split, is_train_split)
- )
-
- self.pre_tokenizer = pre_tokenizer
- self.bpe_tokenizer = bpe_tokenizer
- self.n_frames_per_step = n_frames_per_step
- self.speaker_to_id = speaker_to_id
-
- self.tgt_lens = self.get_tgt_lens_and_check_oov()
-
- logger.info(self.__repr__())
-
- def get_tgt_lens_and_check_oov(self):
- if self.tgt_texts is None:
- return [0 for _ in range(self.n_samples)]
- tgt_lens = []
- n_tokens, n_oov_tokens = 0, 0
- for i in range(self.n_samples):
- tokenized = self.get_tokenized_tgt_text(i).split(" ")
- oov_tokens = [
- t
- for t in tokenized
- if self.tgt_dict.index(t) == self.tgt_dict.unk_index
- ]
- n_tokens += len(tokenized)
- n_oov_tokens += len(oov_tokens)
- tgt_lens.append(len(tokenized))
- logger.info(f"'{self.split}' has {n_oov_tokens / n_tokens * 100:.2f}% OOV")
- return tgt_lens
-
- def __repr__(self):
- return (
- self.__class__.__name__
- + f'(split="{self.split}", n_samples={self.n_samples:_}, '
- f"prepend_tgt_lang_tag={self.cfg.prepend_tgt_lang_tag}, "
- f"shuffle={self.shuffle}, transforms={self.feature_transforms}, "
- f"n_frames_per_step={self.n_frames_per_step}"
- )
-
- @classmethod
- def is_lang_tag(cls, token):
- pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)")
- return re.match(pattern, token)
-
- def check_tgt_lang_tag(self):
- if self.cfg.prepend_tgt_lang_tag:
- assert self.tgt_langs is not None and self.tgt_dict is not None
- tgt_lang_tags = [
- self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs)
- ]
- assert all(t in self.tgt_dict for t in tgt_lang_tags)
-
- @classmethod
- def tokenize(cls, tokenizer, text: str):
- return text if tokenizer is None else tokenizer.encode(text)
-
- def get_tokenized_tgt_text(self, index: int):
- text = self.tokenize(self.pre_tokenizer, self.tgt_texts[index])
- text = self.tokenize(self.bpe_tokenizer, text)
- return text
-
- def pack_frames(self, feature: torch.Tensor):
- if self.n_frames_per_step == 1:
- return feature
- n_packed_frames = feature.shape[0] // self.n_frames_per_step
- feature = feature[:self.n_frames_per_step * n_packed_frames]
- return feature.reshape(n_packed_frames, -1)
-
- @classmethod
- def get_lang_tag_idx(cls, lang: str, dictionary: Dictionary):
- lang_tag_idx = dictionary.index(cls.LANG_TAG_TEMPLATE.format(lang))
- assert lang_tag_idx != dictionary.unk()
- return lang_tag_idx
-
- def __getitem__(self, index: int) -> SpeechToTextDatasetItem:
- source = get_features_or_waveform(
- self.audio_paths[index],
- need_waveform=self.cfg.use_audio_input,
- use_sample_rate=self.cfg.use_sample_rate,
- )
- if self.feature_transforms is not None:
- assert not self.cfg.use_audio_input
- source = self.feature_transforms(source)
- source = torch.from_numpy(source).float()
- source = self.pack_frames(source)
-
- target = None
- if self.tgt_texts is not None:
- tokenized = self.get_tokenized_tgt_text(index)
- target = self.tgt_dict.encode_line(
- tokenized, add_if_not_exist=False, append_eos=True
- ).long()
- if self.cfg.prepend_tgt_lang_tag:
- lang_tag_idx = self.get_lang_tag_idx(
- self.tgt_langs[index], self.tgt_dict
- )
- target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0)
-
- speaker_id = None
- if self.speaker_to_id is not None:
- speaker_id = self.speaker_to_id[self.speakers[index]]
- return SpeechToTextDatasetItem(
- index=index, source=source, target=target, speaker_id=speaker_id
- )
-
- def __len__(self):
- return self.n_samples
-
- def collater(
- self, samples: List[SpeechToTextDatasetItem], return_order: bool = False
- ) -> Dict:
- if len(samples) == 0:
- return {}
- indices = torch.tensor([x.index for x in samples], dtype=torch.long)
- frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input)
- # sort samples by descending number of frames
- n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long)
- n_frames, order = n_frames.sort(descending=True)
- indices = indices.index_select(0, order)
- frames = frames.index_select(0, order)
-
- target, target_lengths = None, None
- prev_output_tokens = None
- ntokens = None
- if self.tgt_texts is not None:
- target = fairseq_data_utils.collate_tokens(
- [x.target for x in samples],
- self.tgt_dict.pad(),
- self.tgt_dict.eos(),
- left_pad=False,
- move_eos_to_beginning=False,
- )
- target = target.index_select(0, order)
- target_lengths = torch.tensor(
- [x.target.size(0) for x in samples], dtype=torch.long
- ).index_select(0, order)
- prev_output_tokens = fairseq_data_utils.collate_tokens(
- [x.target for x in samples],
- self.tgt_dict.pad(),
- self.tgt_dict.eos(),
- left_pad=False,
- move_eos_to_beginning=True,
- )
- prev_output_tokens = prev_output_tokens.index_select(0, order)
- ntokens = sum(x.target.size(0) for x in samples)
-
- speaker = None
- if self.speaker_to_id is not None:
- speaker = torch.tensor(
- [s.speaker_id for s in samples], dtype=torch.long
- ).index_select(0, order).view(-1, 1)
-
- net_input = {
- "src_tokens": frames,
- "src_lengths": n_frames,
- "prev_output_tokens": prev_output_tokens,
- }
- out = {
- "id": indices,
- "net_input": net_input,
- "speaker": speaker,
- "target": target,
- "target_lengths": target_lengths,
- "ntokens": ntokens,
- "nsentences": len(samples),
- }
- if return_order:
- out["order"] = order
- return out
-
- def num_tokens(self, index):
- return self.n_frames[index]
-
- def size(self, index):
- return self.n_frames[index], self.tgt_lens[index]
-
- @property
- def sizes(self):
- return np.array(self.n_frames)
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return True
-
- def ordered_indices(self):
- if self.shuffle:
- order = [np.random.permutation(len(self))]
- else:
- order = [np.arange(len(self))]
- # first by descending order of # of frames then by original/random order
- order.append([-n for n in self.n_frames])
- return np.lexsort(order)
-
- def prefetch(self, indices):
- raise NotImplementedError
-
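# Illustrative sketch (not part of the original file) of the np.lexsort call in
# SpeechToTextDataset.ordered_indices above: the last key (negated frame
# counts) is the primary sort key, the first key only breaks ties.
assert list(np.lexsort(([0, 1, 2], [-5, -9, -9]))) == [1, 2, 0]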
-
-class SpeechToTextDatasetCreator(object):
- # mandatory columns
- KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames"
- KEY_TGT_TEXT = "tgt_text"
- # optional columns
- KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text"
- KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang"
- # default values
- DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = ""
-
- @classmethod
- def _from_list(
- cls,
- split_name: str,
- is_train_split,
- samples: List[Dict],
- cfg: S2TDataConfig,
- tgt_dict,
- pre_tokenizer,
- bpe_tokenizer,
- n_frames_per_step,
- speaker_to_id
- ) -> SpeechToTextDataset:
- audio_root = Path(cfg.audio_root)
- ids = [s[cls.KEY_ID] for s in samples]
- audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples]
- n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples]
- tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples]
- src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples]
- speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples]
- src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples]
- tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples]
- return SpeechToTextDataset(
- split_name,
- is_train_split,
- cfg,
- audio_paths,
- n_frames,
- src_texts=src_texts,
- tgt_texts=tgt_texts,
- speakers=speakers,
- src_langs=src_langs,
- tgt_langs=tgt_langs,
- ids=ids,
- tgt_dict=tgt_dict,
- pre_tokenizer=pre_tokenizer,
- bpe_tokenizer=bpe_tokenizer,
- n_frames_per_step=n_frames_per_step,
- speaker_to_id=speaker_to_id
- )
-
- @classmethod
- def get_size_ratios(
- cls, datasets: List[SpeechToTextDataset], alpha: float = 1.0
- ) -> List[float]:
- """Size ratios for temperature-based sampling
- (https://arxiv.org/abs/1907.05019)"""
-
- id_to_lp, lp_to_sz = {}, defaultdict(int)
- for ds in datasets:
- lang_pairs = {f"{s}->{t}" for s, t in zip(ds.src_langs, ds.tgt_langs)}
- assert len(lang_pairs) == 1
- lang_pair = list(lang_pairs)[0]
- id_to_lp[ds.split] = lang_pair
- lp_to_sz[lang_pair] += sum(ds.n_frames)
-
- sz_sum = sum(v for v in lp_to_sz.values())
- lp_to_prob = {k: v / sz_sum for k, v in lp_to_sz.items()}
- lp_to_tgt_prob = {k: v ** alpha for k, v in lp_to_prob.items()}
- prob_sum = sum(v for v in lp_to_tgt_prob.values())
- lp_to_tgt_prob = {k: v / prob_sum for k, v in lp_to_tgt_prob.items()}
- lp_to_sz_ratio = {
- k: (lp_to_tgt_prob[k] * sz_sum) / v for k, v in lp_to_sz.items()
- }
- size_ratio = [lp_to_sz_ratio[id_to_lp[ds.split]] for ds in datasets]
-
- p_formatted = {
- k: f"{lp_to_prob[k]:.3f}->{lp_to_tgt_prob[k]:.3f}" for k in lp_to_sz
- }
- logger.info(f"sampling probability balancing: {p_formatted}")
- sr_formatted = {ds.split: f"{r:.3f}" for ds, r in zip(datasets, size_ratio)}
- logger.info(f"balanced sampling size ratio: {sr_formatted}")
- return size_ratio
-
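    # Worked sketch of the formula above (not in the original file), for two
    # hypothetical language pairs with 90k and 10k frames and alpha = 0.5:
    #   p        = (0.90, 0.10)
    #   p^alpha  = (0.949, 0.316)  -> renormalized to (0.75, 0.25)
    #   ratio    = tgt_prob * total / size = (~0.83, ~2.5)
    # i.e. the low-resource pair is upsampled roughly 2.5x while the
    # high-resource pair is slightly downsampled.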
- @classmethod
- def _load_samples_from_tsv(cls, root: str, split: str):
- tsv_path = Path(root) / f"{split}.tsv"
- if not tsv_path.is_file():
- raise FileNotFoundError(f"Dataset not found: {tsv_path}")
- with open(tsv_path) as f:
- reader = csv.DictReader(
- f,
- delimiter="\t",
- quotechar=None,
- doublequote=False,
- lineterminator="\n",
- quoting=csv.QUOTE_NONE,
- )
- samples = [dict(e) for e in reader]
- if len(samples) == 0:
- raise ValueError(f"Empty manifest: {tsv_path}")
- return samples
-
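    # Illustrative manifest layout this loader expects (not in the original
    # file); tab-separated with a header row, optional columns may be added:
    #   id      audio             n_frames  tgt_text
    #   utt001  clips/utt001.npy  1432      hello world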
- @classmethod
- def _from_tsv(
- cls,
- root: str,
- cfg: S2TDataConfig,
- split: str,
- tgt_dict,
- is_train_split: bool,
- pre_tokenizer,
- bpe_tokenizer,
- n_frames_per_step,
- speaker_to_id
- ) -> SpeechToTextDataset:
- samples = cls._load_samples_from_tsv(root, split)
- return cls._from_list(
- split, is_train_split, samples, cfg, tgt_dict, pre_tokenizer,
- bpe_tokenizer, n_frames_per_step, speaker_to_id
- )
-
- @classmethod
- def from_tsv(
- cls,
- root: str,
- cfg: S2TDataConfig,
- splits: str,
- tgt_dict,
- pre_tokenizer,
- bpe_tokenizer,
- is_train_split: bool,
- epoch: int,
- seed: int,
- n_frames_per_step: int = 1,
- speaker_to_id=None
- ) -> SpeechToTextDataset:
- datasets = [
- cls._from_tsv(
- root, cfg, split, tgt_dict, is_train_split, pre_tokenizer,
- bpe_tokenizer, n_frames_per_step, speaker_to_id
- )
- for split in splits.split(",")
- ]
-
- if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0:
- # temperature-based sampling
- size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha)
- datasets = [
- ResamplingDataset(
- d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0)
- )
- for r, d in zip(size_ratios, datasets)
- ]
-
- return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0]
diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/utils/glow/prepare_iitm_data_glow.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/utils/glow/prepare_iitm_data_glow.py
deleted file mode 100644
index 9e1e5cb8cd85c88892371851917ec721c2c4b08e..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/utils/glow/prepare_iitm_data_glow.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import os
-from glob import glob
-import re
-import string
-import argparse
-import json
-
-import random
-random.seed(42)
-
-def replace_extra_chars(line):
- line = line.replace("(", "").replace(
- ")", ""
- ) # .replace('\u200d', ' ').replace('\ufeff', ' ').replace('\u200c', ' ').replace('\u200e', ' ')
- # line = line.replace('“', ' ').replace('”', ' ').replace(':', ' ')
-
- return line.strip()
-
-
-def write_txt(content, filename):
- with open(filename, "w+", encoding="utf-8") as f:
- f.write(content)
-
-
-def save_train_test_valid_split(annotations_txt, num_samples_valid, num_samples_test):
- with open(annotations_txt, encoding="utf-8") as f:
- all_lines = [line.strip() for line in f.readlines()]
- test_val_indices = random.sample(
- range(len(all_lines)), num_samples_valid + num_samples_test
- )
- valid_ix = test_val_indices[:num_samples_valid]
- test_ix = test_val_indices[num_samples_valid:]
- train = [line for i, line in enumerate(all_lines) if i not in test_val_indices]
- valid = [line for i, line in enumerate(all_lines) if i in valid_ix]
- test = [line for i, line in enumerate(all_lines) if i in test_ix]
-
- print(f"Num samples in train: {len(train)}")
- print(f"Num samples in valid: {len(valid)}")
- print(f"Num samples in test: {len(test)}")
-
- out_dir_path = "/".join(annotations_txt.split("/")[:-1])
- with open(os.path.join(out_dir_path, "train.txt"), "w+", encoding="utf-8") as f:
- for line in train:
- print(line, file=f)
- with open(os.path.join(out_dir_path, "valid.txt"), "w+", encoding="utf-8") as f:
- for line in valid:
- print(line, file=f)
- with open(os.path.join(out_dir_path, "test.txt"), "w+", encoding="utf-8") as f:
- for line in test:
- print(line, file=f)
- print(f"train, test and valid txts saved in {out_dir_path}")
-
-
-def save_txts_from_txt_done_data(
- text_path,
- wav_path_for_annotations_txt,
- out_path_for_txts,
- num_samples_valid,
- num_samples_test,
-):
- outfile = os.path.join(out_path_for_txts, "annotations.txt")
- with open(text_path) as file:
- file_lines = file.readlines()
-
- # print(file_lines[0])
-
- file_lines = [replace_extra_chars(line) for line in file_lines]
- # print(file_lines[0])
-
- fnames, ftexts = [], []
- for line in file_lines:
- elems = line.split('"')
- fnames.append(elems[0].strip())
- ftexts.append(elems[1].strip())
-
- all_chars = list(set("".join(ftexts)))
- punct_with_space = [i for i in all_chars if i in list(string.punctuation)] + [" "]
- chars = [i for i in all_chars if i not in punct_with_space if i.strip()]
- chars = "".join(chars)
- punct_with_space = "".join(punct_with_space)
-
- with open('../../config/glow/base_blank.json', 'r') as jfile:
- json_config = json.load(jfile)
-
- json_config["data"]["chars"] = chars
- json_config["data"]["punc"] = punct_with_space
- json_config["data"]["training_files"] = out_path_for_txts + '/train.txt'
- json_config["data"]["validation_files"] = out_path_for_txts + '/valid.txt'
- new_config_name = out_path_for_txts.split('/')[-1]
- with open(f'../../config/glow/{new_config_name}.json','w+') as jfile:
- json.dump(json_config, jfile)
-
- print(f"Characters: {chars}")
- print(f"Punctuation: {punct_with_space}")
- print(f"Config file is stored at ../../config/glow/{new_config_name}.json")
-
- outfile_f = open(outfile, "w+", encoding="utf-8")
- for f, t in zip(fnames, ftexts):
- print(
- os.path.join(wav_path_for_annotations_txt, f) + ".wav",
- t,
- sep="|",
- file=outfile_f,
- )
- outfile_f.close()
- write_txt(punct_with_space, os.path.join(out_path_for_txts, "punc.txt"))
- write_txt(chars, os.path.join(out_path_for_txts, "chars.txt"))
-
- save_train_test_valid_split(
- annotations_txt=outfile,
- num_samples_valid=num_samples_valid,
- num_samples_test=num_samples_test,
- )
-
-
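# Illustrative sketch (not part of the original file): the annotation format
# assumed here is a festival-style line '( <id> "<text>" )' (an assumption,
# not stated in this script); parentheses are stripped by replace_extra_chars
# and the remainder is split on double quotes into file name and transcript.
_line = replace_extra_chars('( sample_0001 "some transcript text" )')
_elems = _line.split('"')
assert _elems[0].strip() == "sample_0001"
assert _elems[1].strip() == "some transcript text"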
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("-i", "--text-path", type=str, required=True)
- parser.add_argument("-o", "--output-path", type=str, required=True)
- parser.add_argument("-w", "--wav-path", type=str, required=True)
- parser.add_argument("-v", "--valid-samples", type=int, default=100)
- parser.add_argument("-t", "--test-samples", type=int, default=10)
- args = parser.parse_args()
-
- save_txts_from_txt_done_data(
- args.text_path,
- args.wav_path,
- args.output_path,
- args.valid_samples,
- args.test_samples,
- )
diff --git a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/compiler/Utils.py b/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/compiler/Utils.py
deleted file mode 100644
index d84bae6559c8e752d4c034663cae22dd7b631952..0000000000000000000000000000000000000000
--- a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/compiler/Utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-__author__ = 'Taneem Jan, taneemishere.github.io'
-
-import string
-import random
-
-
-class Utils:
- @staticmethod
- def get_random_text(length_text=10, space_number=1, with_upper_case=True):
- results = []
- while len(results) < length_text:
- char = random.choice(string.ascii_letters[:26])
- results.append(char)
- if with_upper_case:
- results[0] = results[0].upper()
-
- current_spaces = []
- while len(current_spaces) < space_number:
- space_pos = random.randint(2, length_text - 3)
- if space_pos in current_spaces:
- break
- results[space_pos] = " "
- if with_upper_case:
- results[space_pos + 1] = results[space_pos - 1].upper()
-
- current_spaces.append(space_pos)
-
- return ''.join(results)
-
- @staticmethod
- def get_ios_id(length=10):
- results = []
-
- while len(results) < length:
- char = random.choice(string.digits + string.ascii_letters)
- results.append(char)
-
- results[3] = "-"
- results[6] = "-"
-
- return ''.join(results)
-
- @staticmethod
- def get_android_id(length=10):
- results = []
-
- while len(results) < length:
- char = random.choice(string.ascii_letters)
- results.append(char)
-
- return ''.join(results)
diff --git a/spaces/HgMenon/Transcribe_V0.2/tests/segments_test.py b/spaces/HgMenon/Transcribe_V0.2/tests/segments_test.py
deleted file mode 100644
index d829f1c77f74b3c96513fe4965d532cf2d1dceb4..0000000000000000000000000000000000000000
--- a/spaces/HgMenon/Transcribe_V0.2/tests/segments_test.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import sys
-import unittest
-
-sys.path.append('../whisper-webui')
-
-from src.segments import merge_timestamps
-
-class TestSegments(unittest.TestCase):
- def __init__(self, *args, **kwargs):
- super(TestSegments, self).__init__(*args, **kwargs)
-
- def test_merge_segments(self):
- segments = [
- {'start': 10.0, 'end': 20.0},
- {'start': 22.0, 'end': 27.0},
- {'start': 31.0, 'end': 35.0},
- {'start': 45.0, 'end': 60.0},
- {'start': 61.0, 'end': 65.0},
- {'start': 68.0, 'end': 98.0},
- {'start': 100.0, 'end': 102.0},
- {'start': 110.0, 'end': 112.0}
- ]
-
- result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1)
-
- self.assertListEqual(result, [
- {'start': 9.0, 'end': 36.0},
- {'start': 44.0, 'end': 66.0},
- {'start': 67.0, 'end': 99.0},
- {'start': 99.0, 'end': 103.0},
- {'start': 109.0, 'end': 113.0}
- ])
-
- def test_overlap_next(self):
- segments = [
- {'start': 5.0, 'end': 39.182},
- {'start': 39.986, 'end': 40.814}
- ]
-
- result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1)
-
- self.assertListEqual(result, [
- {'start': 4.0, 'end': 39.584},
- {'start': 39.584, 'end': 41.814}
- ])
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.64f1ca39.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.64f1ca39.js
deleted file mode 100644
index be6128db34c2cd62551dfa89e85f5aaa1cce955d..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.64f1ca39.js
+++ /dev/null
@@ -1,5 +0,0 @@
-import{S as me,i as ce,s as be,w as Pe,b as c,f as y,g as E,x as j,n as S,e as P,d as A,l as R,y as de,z as Ve,t as C,h as N,A as re,B as ue,a as z,C as ul,c as q,m as G,j as F,k as T,o as Z,D as ge,E as he,F as He,G as Ml,H as Fl,I as Me,J as Il,_ as Ue,K as X,L as ol,M as Re,N as Tl,O as _l,P as Bl,Q as Dl,X as Ll,R as Ul,T as Cl,U as Nl,V as Ol}from"./index.396f4a72.js";import{U as Kl}from"./Upload.5d0148e8.js";import{M as jl}from"./ModifyUpload.2cfe71e4.js";import{B as dl}from"./BlockLabel.37da86a3.js";import{n as zl}from"./utils.27234e1d.js";function Ql(l){let e,i,n,a;return{c(){e=Pe("svg"),i=Pe("path"),n=Pe("circle"),a=Pe("circle"),c(i,"d","M9 18V5l12-2v13"),c(n,"cx","6"),c(n,"cy","18"),c(n,"r","3"),c(a,"cx","18"),c(a,"cy","16"),c(a,"r","3"),c(e,"xmlns","http://www.w3.org/2000/svg"),c(e,"width","100%"),c(e,"height","100%"),c(e,"viewBox","0 0 24 24"),c(e,"fill","none"),c(e,"stroke","currentColor"),c(e,"stroke-width","1.5"),c(e,"stroke-linecap","round"),c(e,"stroke-linejoin","round"),c(e,"class","feather feather-music")},m(f,t){y(f,e,t),E(e,i),E(e,n),E(e,a)},p:j,i:j,o:j,d(f){f&&S(e)}}}class Be extends me{constructor(e){super(),ce(this,e,null,Ql,be,{})}}function Ce(l,e,i){const n=l.slice();return n[27]=e[i],n[29]=i,n}function Ne(l){let e,i,n,a,f=(l[6]==="label"||l[7]==="label")&&Oe(l);return{c(){e=P("span"),f&&f.c(),c(e,"class","pip first"),c(e,"style",i=l[14]+": 0%;"),A(e,"selected",l[17](l[0])),A(e,"in-range",l[16](l[0]))},m(t,u){y(t,e,u),f&&f.m(e,null),n||(a=[R(e,"click",function(){de(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}),R(e,"touchend",Ve(function(){de(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}))],n=!0)},p(t,u){l=t,l[6]==="label"||l[7]==="label"?f?f.p(l,u):(f=Oe(l),f.c(),f.m(e,null)):f&&(f.d(1),f=null),u&16384&&i!==(i=l[14]+": 0%;")&&c(e,"style",i),u&131073&&A(e,"selected",l[17](l[0])),u&65537&&A(e,"in-range",l[16](l[0]))},d(t){t&&S(e),f&&f.d(),n=!1,re(a)}}}function Oe(l){let e,i=l[12](l[0],0,0)+"",n,a=l[10]&&Ke(l),f=l[11]&&je(l);return{c(){e=P("span"),a&&a.c(),n=C(i),f&&f.c(),c(e,"class","pipVal")},m(t,u){y(t,e,u),a&&a.m(e,null),E(e,n),f&&f.m(e,null)},p(t,u){t[10]?a?a.p(t,u):(a=Ke(t),a.c(),a.m(e,n)):a&&(a.d(1),a=null),u&4097&&i!==(i=t[12](t[0],0,0)+"")&&N(n,i),t[11]?f?f.p(t,u):(f=je(t),f.c(),f.m(e,null)):f&&(f.d(1),f=null)},d(t){t&&S(e),a&&a.d(),f&&f.d()}}}function Ke(l){let e,i;return{c(){e=P("span"),i=C(l[10]),c(e,"class","pipVal-prefix")},m(n,a){y(n,e,a),E(e,i)},p(n,a){a&1024&&N(i,n[10])},d(n){n&&S(e)}}}function je(l){let e,i;return{c(){e=P("span"),i=C(l[11]),c(e,"class","pipVal-suffix")},m(n,a){y(n,e,a),E(e,i)},p(n,a){a&2048&&N(i,n[11])},d(n){n&&S(e)}}}function ze(l){let e,i=Array(l[19]+1),n=[];for(let a=0;ad}=e,{focus:U=void 0}=e,{orientationStart:Y=void 0}=e,{percentOf:ee=void 0}=e,{moveHandle:W=void 0}=e;function ae(d){W(void 0,d)}return l.$$set=d=>{"range"in d&&i(21,_=d.range),"min"in d&&i(0,g=d.min),"max"in d&&i(1,o=d.max),"step"in d&&i(22,s=d.step),"values"in d&&i(23,m=d.values),"vertical"in d&&i(2,k=d.vertical),"reversed"in d&&i(3,p=d.reversed),"hoverable"in d&&i(4,h=d.hoverable),"disabled"in d&&i(5,V=d.disabled),"pipstep"in d&&i(24,w=d.pipstep),"all"in d&&i(6,I=d.all),"first"in d&&i(7,Q=d.first),"last"in d&&i(8,D=d.last),"rest"in d&&i(9,O=d.rest),"prefix"in d&&i(10,B=d.prefix),"suffix"in d&&i(11,J=d.suffix),"formatter"in d&&i(12,$=d.formatter),"focus"in d&&i(13,U=d.focus),"orientationStart"in d&&i(14,Y=d.orientationStart),"percentOf"in d&&i(15,ee=d.percentOf),"moveHandle"in 
d&&i(25,W=d.moveHandle)},l.$$.update=()=>{l.$$.dirty&20971527&&i(26,n=w||((o-g)/s>=(k?50:100)?(o-g)/(k?10:20):1)),l.$$.dirty&71303171&&i(19,a=parseInt((o-g)/(s*n),10)),l.$$.dirty&71303169&&i(18,f=function(d){return g+d*s*n}),l.$$.dirty&8388608&&i(17,t=function(d){return m.some(se=>se===d)}),l.$$.dirty&10485760&&i(16,u=function(d){if(_==="min")return m[0]>d;if(_==="max")return m[0]d})},[g,o,k,p,h,V,I,Q,D,O,B,J,$,U,Y,ee,u,t,f,a,ae,_,s,m,w,W,n]}class ql extends me{constructor(e){super(),ce(this,e,Xl,Yl,be,{range:21,min:0,max:1,step:22,values:23,vertical:2,reversed:3,hoverable:4,disabled:5,pipstep:24,all:6,first:7,last:8,rest:9,prefix:10,suffix:11,formatter:12,focus:13,orientationStart:14,percentOf:15,moveHandle:25})}}function $e(l,e,i){const n=l.slice();return n[63]=e[i],n[65]=i,n}function el(l){let e,i=l[21](l[63],l[65],l[23](l[63]))+"",n,a=l[18]&&ll(l),f=l[19]&&nl(l);return{c(){e=P("span"),a&&a.c(),n=C(i),f&&f.c(),c(e,"class","rangeFloat")},m(t,u){y(t,e,u),a&&a.m(e,null),E(e,n),f&&f.m(e,null)},p(t,u){t[18]?a?a.p(t,u):(a=ll(t),a.c(),a.m(e,n)):a&&(a.d(1),a=null),u[0]&10485761&&i!==(i=t[21](t[63],t[65],t[23](t[63]))+"")&&N(n,i),t[19]?f?f.p(t,u):(f=nl(t),f.c(),f.m(e,null)):f&&(f.d(1),f=null)},d(t){t&&S(e),a&&a.d(),f&&f.d()}}}function ll(l){let e,i;return{c(){e=P("span"),i=C(l[18]),c(e,"class","rangeFloat-prefix")},m(n,a){y(n,e,a),E(e,i)},p(n,a){a[0]&262144&&N(i,n[18])},d(n){n&&S(e)}}}function nl(l){let e,i;return{c(){e=P("span"),i=C(l[19]),c(e,"class","rangeFloat-suffix")},m(n,a){y(n,e,a),E(e,i)},p(n,a){a[0]&524288&&N(i,n[19])},d(n){n&&S(e)}}}function il(l){let e,i,n,a,f,t,u,_,g,o,s,m,k,p=l[7]&&el(l);return{c(){e=P("span"),i=P("span"),n=z(),p&&p.c(),c(i,"class","rangeNub"),c(e,"role","slider"),c(e,"class","rangeHandle"),c(e,"data-handle",a=l[65]),c(e,"style",f=l[28]+": "+l[29][l[65]]+"%; z-index: "+(l[26]===l[65]?3:2)+";"),c(e,"aria-valuemin",t=l[2]===!0&&l[65]===1?l[0][0]:l[3]),c(e,"aria-valuemax",u=l[2]===!0&&l[65]===0?l[0][1]:l[4]),c(e,"aria-valuenow",_=l[63]),c(e,"aria-valuetext",g=""+(l[18]+l[21](l[63],l[65],l[23](l[63]))+l[19])),c(e,"aria-orientation",o=l[6]?"vertical":"horizontal"),c(e,"aria-disabled",l[10]),c(e,"disabled",l[10]),c(e,"tabindex",s=l[10]?-1:0),A(e,"active",l[24]&&l[26]===l[65]),A(e,"press",l[25]&&l[26]===l[65])},m(h,V){y(h,e,V),E(e,i),E(e,n),p&&p.m(e,null),m||(k=[R(e,"blur",l[33]),R(e,"focus",l[34]),R(e,"keydown",l[35])],m=!0)},p(h,V){h[7]?p?p.p(h,V):(p=el(h),p.c(),p.m(e,null)):p&&(p.d(1),p=null),V[0]&872415232&&f!==(f=h[28]+": "+h[29][h[65]]+"%; z-index: "+(h[26]===h[65]?3:2)+";")&&c(e,"style",f),V[0]&13&&t!==(t=h[2]===!0&&h[65]===1?h[0][0]:h[3])&&c(e,"aria-valuemin",t),V[0]&21&&u!==(u=h[2]===!0&&h[65]===0?h[0][1]:h[4])&&c(e,"aria-valuemax",u),V[0]&1&&_!==(_=h[63])&&c(e,"aria-valuenow",_),V[0]&11272193&&g!==(g=""+(h[18]+h[21](h[63],h[65],h[23](h[63]))+h[19]))&&c(e,"aria-valuetext",g),V[0]&64&&o!==(o=h[6]?"vertical":"horizontal")&&c(e,"aria-orientation",o),V[0]&1024&&c(e,"aria-disabled",h[10]),V[0]&1024&&c(e,"disabled",h[10]),V[0]&1024&&s!==(s=h[10]?-1:0)&&c(e,"tabindex",s),V[0]&83886080&&A(e,"active",h[24]&&h[26]===h[65]),V[0]&100663296&&A(e,"press",h[25]&&h[26]===h[65])},d(h){h&&S(e),p&&p.d(),m=!1,re(k)}}}function al(l){let e,i;return{c(){e=P("span"),c(e,"class","rangeBar"),c(e,"style",i=l[28]+": "+l[31](l[29])+"%; "+l[27]+": "+l[32](l[29])+"%;")},m(n,a){y(n,e,a)},p(n,a){a[0]&939524096&&i!==(i=n[28]+": "+n[31](n[29])+"%; "+n[27]+": "+n[32](n[29])+"%;")&&c(e,"style",i)},d(n){n&&S(e)}}}function fl(l){let e,i;return e=new 
ql({props:{values:l[0],min:l[3],max:l[4],step:l[5],range:l[2],vertical:l[6],reversed:l[8],orientationStart:l[28],hoverable:l[9],disabled:l[10],all:l[13],first:l[14],last:l[15],rest:l[16],pipstep:l[12],prefix:l[18],suffix:l[19],formatter:l[20],focus:l[24],percentOf:l[23],moveHandle:l[30]}}),{c(){q(e.$$.fragment)},m(n,a){G(e,n,a),i=!0},p(n,a){const f={};a[0]&1&&(f.values=n[0]),a[0]&8&&(f.min=n[3]),a[0]&16&&(f.max=n[4]),a[0]&32&&(f.step=n[5]),a[0]&4&&(f.range=n[2]),a[0]&64&&(f.vertical=n[6]),a[0]&256&&(f.reversed=n[8]),a[0]&268435456&&(f.orientationStart=n[28]),a[0]&512&&(f.hoverable=n[9]),a[0]&1024&&(f.disabled=n[10]),a[0]&8192&&(f.all=n[13]),a[0]&16384&&(f.first=n[14]),a[0]&32768&&(f.last=n[15]),a[0]&65536&&(f.rest=n[16]),a[0]&4096&&(f.pipstep=n[12]),a[0]&262144&&(f.prefix=n[18]),a[0]&524288&&(f.suffix=n[19]),a[0]&1048576&&(f.formatter=n[20]),a[0]&16777216&&(f.focus=n[24]),a[0]&8388608&&(f.percentOf=n[23]),e.$set(f)},i(n){i||(F(e.$$.fragment,n),i=!0)},o(n){T(e.$$.fragment,n),i=!1},d(n){Z(e,n)}}}function Gl(l){let e,i,n,a,f,t,u=l[0],_=[];for(let s=0;s{o=null}),he()),(!a||m[0]&131072)&&c(e,"id",s[17]),m[0]&4&&A(e,"range",s[2]),m[0]&1024&&A(e,"disabled",s[10]),m[0]&512&&A(e,"hoverable",s[9]),m[0]&64&&A(e,"vertical",s[6]),m[0]&256&&A(e,"reversed",s[8]),m[0]&16777216&&A(e,"focus",s[24]),m[0]&4&&A(e,"min",s[2]==="min"),m[0]&4&&A(e,"max",s[2]==="max"),m[0]&2048&&A(e,"pips",s[11]),m[0]&122880&&A(e,"pip-labels",s[13]==="label"||s[14]==="label"||s[15]==="label"||s[16]==="label")},i(s){a||(F(o),a=!0)},o(s){T(o),a=!1},d(s){s&&S(e),ul(_,s),g&&g.d(),o&&o.d(),l[49](null),f=!1,re(t)}}}function tl(l){if(!l)return-1;for(var e=0;l=l.previousElementSibling;)e++;return e}function Te(l){return l.type.includes("touch")?l.touches[0]:l}function Zl(l,e,i){let n,a,f,t,u,_,g=j,o=()=>(g(),g=Fl(ne,r=>i(29,_=r)),ne);l.$$.on_destroy.push(()=>g());let{slider:s}=e,{range:m=!1}=e,{pushy:k=!1}=e,{min:p=0}=e,{max:h=100}=e,{step:V=1}=e,{values:w=[(h+p)/2]}=e,{vertical:I=!1}=e,{float:Q=!1}=e,{reversed:D=!1}=e,{hoverable:O=!0}=e,{disabled:B=!1}=e,{pips:J=!1}=e,{pipstep:$=void 0}=e,{all:U=void 0}=e,{first:Y=void 0}=e,{last:ee=void 0}=e,{rest:W=void 0}=e,{id:ae=void 0}=e,{prefix:d=""}=e,{suffix:se=""}=e,{formatter:ke=(r,v,M)=>r}=e,{handleFormatter:Ee=ke}=e,{precision:x=2}=e,{springValues:pe={stiffness:.15,damping:.4}}=e;const we=He();let ve=0,le=!1,fe=!1,te=!1,Ae=!1,b=w.length-1,L,K,ne;function Fe(r){const v=s.querySelectorAll(".handle"),M=Array.prototype.includes.call(v,r),H=Array.prototype.some.call(v,ie=>ie.contains(r));return M||H}function Ie(r){return m==="min"||m==="max"?r.slice(0,1):m?r.slice(0,2):r}function oe(){return s.getBoundingClientRect()}function ye(r){const v=oe();let M=0,H=0,ie=0;I?(M=r.clientY-v.top,H=M/v.height*100,H=D?H:100-H):(M=r.clientX-v.left,H=M/v.width*100,H=D?100-H:H),ie=(h-p)/100*H+p;let Le;return m===!0&&w[0]===w[1]?ie>w[1]?1:0:(Le=w.indexOf([...w].sort((Rl,Hl)=>Math.abs(ie-Rl)-Math.abs(ie-Hl))[0]),Le)}function Se(r){const v=oe();let M=0,H=0,ie=0;I?(M=r.clientY-v.top,H=M/v.height*100,H=D?H:100-H):(M=r.clientX-v.left,H=M/v.width*100,H=D?100-H:H),ie=(h-p)/100*H+p,_e(b,ie)}function _e(r,v){return v=f(v),typeof r>"u"&&(r=b),m&&(r===0&&v>w[1]?k?i(0,w[1]=v,w):v=w[1]:r===1&&vf(r))})}function De(){!B&&we("stop",{activeHandle:b,startValue:L,value:w[b],values:w.map(r=>f(r))})}function El(){!B&&we("change",{activeHandle:b,startValue:L,previousValue:typeof K>"u"?L:K,value:w[b],values:w.map(r=>f(r))})}function Pl(r){Me[r?"unshift":"push"](()=>{s=r,i(1,s)})}return l.$$set=r=>{"slider"in r&&i(1,s=r.slider),"range"in 
r&&i(2,m=r.range),"pushy"in r&&i(43,k=r.pushy),"min"in r&&i(3,p=r.min),"max"in r&&i(4,h=r.max),"step"in r&&i(5,V=r.step),"values"in r&&i(0,w=r.values),"vertical"in r&&i(6,I=r.vertical),"float"in r&&i(7,Q=r.float),"reversed"in r&&i(8,D=r.reversed),"hoverable"in r&&i(9,O=r.hoverable),"disabled"in r&&i(10,B=r.disabled),"pips"in r&&i(11,J=r.pips),"pipstep"in r&&i(12,$=r.pipstep),"all"in r&&i(13,U=r.all),"first"in r&&i(14,Y=r.first),"last"in r&&i(15,ee=r.last),"rest"in r&&i(16,W=r.rest),"id"in r&&i(17,ae=r.id),"prefix"in r&&i(18,d=r.prefix),"suffix"in r&&i(19,se=r.suffix),"formatter"in r&&i(20,ke=r.formatter),"handleFormatter"in r&&i(21,Ee=r.handleFormatter),"precision"in r&&i(44,x=r.precision),"springValues"in r&&i(45,pe=r.springValues)},l.$$.update=()=>{l.$$.dirty[0]&24&&i(48,a=function(r){return r<=p?p:r>=h?h:r}),l.$$.dirty[0]&56|l.$$.dirty[1]&139264&&i(47,f=function(r){if(r<=p)return p;if(r>=h)return h;let v=(r-p)%V,M=r-v;return Math.abs(v)*2>=V&&(M+=v>0?V:-V),M=a(M),parseFloat(M.toFixed(x))}),l.$$.dirty[0]&24|l.$$.dirty[1]&8192&&i(23,n=function(r){let v=(r-p)/(h-p)*100;return isNaN(v)||v<=0?0:v>=100?100:parseFloat(v.toFixed(x))}),l.$$.dirty[0]&12582937|l.$$.dirty[1]&114688&&(Array.isArray(w)||(i(0,w=[(h+p)/2]),console.error("'values' prop should be an Array (https://github.com/simeydotme/svelte-range-slider-pips#slider-props)")),i(0,w=Ie(w.map(r=>f(r)))),ve!==w.length?o(i(22,ne=Ml(w.map(r=>n(r)),pe))):ne.set(w.map(r=>n(r))),i(46,ve=w.length)),l.$$.dirty[0]&320&&i(28,t=I?D?"top":"bottom":D?"right":"left"),l.$$.dirty[0]&320&&i(27,u=I?D?"bottom":"top":D?"left":"right")},[w,s,m,p,h,V,I,Q,D,O,B,J,$,U,Y,ee,W,ae,d,se,ke,Ee,ne,n,le,te,b,u,t,_,_e,ml,cl,bl,gl,hl,kl,pl,wl,vl,Al,yl,Sl,k,x,pe,ve,f,a,Pl]}class Jl extends me{constructor(e){super(),ce(this,e,Zl,Gl,be,{slider:1,range:2,pushy:43,min:3,max:4,step:5,values:0,vertical:6,float:7,reversed:8,hoverable:9,disabled:10,pips:11,pipstep:12,all:13,first:14,last:15,rest:16,id:17,prefix:18,suffix:19,formatter:20,handleFormatter:21,precision:44,springValues:45},null,[-1,-1,-1])}}function Wl(l){let e,i,n,a,f,t,u,_,g;e=new jl({props:{editable:!0,absolute:!1}}),e.$on("clear",l[15]),e.$on("edit",l[28]);let o=l[10]==="edit"&&l[11]?.duration&&sl(l);return{c(){q(e.$$.fragment),i=z(),n=P("audio"),f=z(),o&&o.c(),t=ue(),c(n,"class","w-full h-14 p-2"),n.controls=!0,c(n,"preload","metadata"),Re(n.src,a=l[1].data)||c(n,"src",a)},m(s,m){G(e,s,m),y(s,i,m),y(s,n,m),l[29](n),y(s,f,m),o&&o.m(s,m),y(s,t,m),u=!0,_||(g=[Tl(l[16].call(null,n)),R(n,"play",l[24]),R(n,"pause",l[25]),R(n,"ended",l[26])],_=!0)},p(s,m){(!u||m[0]&2&&!Re(n.src,a=s[1].data))&&c(n,"src",a),s[10]==="edit"&&s[11]?.duration?o?(o.p(s,m),m[0]&3072&&F(o,1)):(o=sl(s),o.c(),F(o,1),o.m(t.parentNode,t)):o&&(ge(),T(o,1,1,()=>{o=null}),he())},i(s){u||(F(e.$$.fragment,s),F(o),u=!0)},o(s){T(e.$$.fragment,s),T(o),u=!1},d(s){Z(e,s),s&&S(i),s&&S(n),l[29](null),s&&S(f),o&&o.d(s),s&&S(t),_=!1,re(g)}}}function xl(l){let e,i,n,a;const f=[en,$l],t=[];function u(_,g){return _[4]==="microphone"?0:_[4]==="upload"?1:-1}return~(e=u(l))&&(i=t[e]=f[e](l)),{c(){i&&i.c(),n=ue()},m(_,g){~e&&t[e].m(_,g),y(_,n,g),a=!0},p(_,g){let o=e;e=u(_),e===o?~e&&t[e].p(_,g):(i&&(ge(),T(t[o],1,1,()=>{t[o]=null}),he()),~e?(i=t[e],i?i.p(_,g):(i=t[e]=f[e](_),i.c()),F(i,1),i.m(n.parentNode,n)):i=null)},i(_){a||(F(i),a=!0)},o(_){T(i),a=!1},d(_){~e&&t[e].d(_),_&&S(n)}}}function sl(l){let e,i,n;function a(t){l[30](t)}let f={range:!0,min:0,max:100,step:1};return l[12]!==void 0&&(f.values=l[12]),e=new 
Jl({props:f}),Me.push(()=>_l(e,"values",a)),e.$on("change",l[17]),{c(){q(e.$$.fragment)},m(t,u){G(e,t,u),n=!0},p(t,u){const _={};!i&&u[0]&4096&&(i=!0,_.values=t[12],ol(()=>i=!1)),e.$set(_)},i(t){n||(F(e.$$.fragment,t),n=!0)},o(t){T(e.$$.fragment,t),n=!1},d(t){Z(e,t)}}}function $l(l){let e,i,n;function a(t){l[27](t)}let f={filetype:"audio/*",$$slots:{default:[ln]},$$scope:{ctx:l}};return l[0]!==void 0&&(f.dragging=l[0]),e=new Kl({props:f}),Me.push(()=>_l(e,"dragging",a)),e.$on("load",l[18]),{c(){q(e.$$.fragment)},m(t,u){G(e,t,u),n=!0},p(t,u){const _={};u[0]&448|u[1]&512&&(_.$$scope={dirty:u,ctx:t}),!i&&u[0]&1&&(i=!0,_.dragging=t[0],ol(()=>i=!1)),e.$set(_)},i(t){n||(F(e.$$.fragment,t),n=!0)},o(t){T(e.$$.fragment,t),n=!1},d(t){Z(e,t)}}}function en(l){let e;function i(f,t){return f[9]?an:nn}let n=i(l),a=n(l);return{c(){e=P("div"),a.c(),c(e,"class","mt-6 p-2")},m(f,t){y(f,e,t),a.m(e,null)},p(f,t){n===(n=i(f))&&a?a.p(f,t):(a.d(1),a=n(f),a&&(a.c(),a.m(e,null)))},i:j,o:j,d(f){f&&S(e),a.d()}}}function ln(l){let e,i,n,a,f,t,u,_,g;return{c(){e=P("div"),i=C(l[6]),n=z(),a=P("span"),f=C("- "),t=C(l[7]),u=C(" -"),_=z(),g=C(l[8]),c(a,"class","text-gray-300"),c(e,"class","flex flex-col")},m(o,s){y(o,e,s),E(e,i),E(e,n),E(e,a),E(a,f),E(a,t),E(a,u),E(e,_),E(e,g)},p(o,s){s[0]&64&&N(i,o[6]),s[0]&128&&N(t,o[7]),s[0]&256&&N(g,o[8])},d(o){o&&S(e)}}}function nn(l){let e,i,n;return{c(){e=P("button"),e.innerHTML=`
- Record from microphone
`,c(e,"class","gr-button text-gray-800")},m(a,f){y(a,e,f),i||(n=R(e,"click",l[13]),i=!0)},p:j,d(a){a&&S(e),i=!1,n()}}}function an(l){let e,i,n;return{c(){e=P("button"),e.innerHTML=`
-
- Stop recording
`,c(e,"class","gr-button !bg-red-500/10")},m(a,f){y(a,e,f),i||(n=R(e,"click",l[14]),i=!0)},p:j,d(a){a&&S(e),i=!1,n()}}}function fn(l){let e,i,n,a,f,t;e=new dl({props:{show_label:l[3],Icon:Be,label:l[2]||"Audio"}});const u=[xl,Wl],_=[];function g(o,s){return o[1]===null||o[5]?0:1}return n=g(l),a=_[n]=u[n](l),{c(){q(e.$$.fragment),i=z(),a.c(),f=ue()},m(o,s){G(e,o,s),y(o,i,s),_[n].m(o,s),y(o,f,s),t=!0},p(o,s){const m={};s[0]&8&&(m.show_label=o[3]),s[0]&4&&(m.label=o[2]||"Audio"),e.$set(m);let k=n;n=g(o),n===k?_[n].p(o,s):(ge(),T(_[k],1,1,()=>{_[k]=null}),he(),a=_[n],a?a.p(o,s):(a=_[n]=u[n](o),a.c()),F(a,1),a.m(f.parentNode,f))},i(o){t||(F(e.$$.fragment,o),F(a),t=!0)},o(o){T(e.$$.fragment,o),T(a),t=!1},d(o){Z(e,o),o&&S(i),_[n].d(o),o&&S(f)}}}const tn=500,rl=44;function sn(l){return new Promise((e,i)=>{let n=new FileReader;n.onerror=i,n.onload=()=>e(n.result),n.readAsDataURL(l)})}function rn(l,e,i){let{value:n=null}=e,{label:a}=e,{show_label:f}=e,{name:t}=e,{source:u}=e,{pending:_=!1}=e,{streaming:g=!1}=e,{drop_text:o="Drop an audio file"}=e,{or_text:s="or"}=e,{upload_text:m="click to upload"}=e,k=!1,p,h="",V,w=[],I=!1,Q,D=!1,O=[0,100],B=[],J;function $(){J=[Ue(()=>import("./module.2849491a.js"),["assets/module.2849491a.js","assets/module.e2741a44.js"]),Ue(()=>import("./module.d8037460.js"),["assets/module.d8037460.js","assets/module.e2741a44.js"])]}g&&$();const U=He(),Y=async(b,L)=>{let K=new Blob(b,{type:"audio/wav"});i(1,n={data:await sn(K),name:t}),U(L,n)};async function ee(){let b;try{b=await navigator.mediaDevices.getUserMedia({audio:!0})}catch(L){if(L instanceof DOMException&&L.name=="NotAllowedError"){U("error","Please allow access to the microphone for recording.");return}else throw L}if(b!=null){if(g){const[{MediaRecorder:L,register:K},{connect:ne}]=await Promise.all(J);await K(await ne()),p=new L(b,{mimeType:"audio/wav"});async function Fe(Ie){let oe=await Ie.data.arrayBuffer(),ye=new Uint8Array(oe);if(V||(i(21,V=new Uint8Array(oe.slice(0,rl))),ye=new Uint8Array(oe.slice(rl))),_)w.push(ye);else{let Se=[V].concat(w,[ye]);Y(Se,"stream"),i(22,w=[])}}p.addEventListener("dataavailable",Fe)}else p=new MediaRecorder(b),p.addEventListener("dataavailable",L=>{B.push(L.data)}),p.addEventListener("stop",async()=>{i(9,k=!1),await Y(B,"change"),B=[]});D=!0}}async function W(){i(9,k=!0),D||await ee(),i(21,V=void 0),g?p.start(tn):p.start()}Il(()=>{p&&p.state!=="inactive"&&p.stop()});const ae=async()=>{p.stop(),g&&(i(9,k=!1),_&&i(23,I=!0))};function d(){U("change"),U("clear"),i(10,h=""),i(1,n=null)}function se(b){function L(){const K=O[0]/100*b.duration,ne=O[1]/100*b.duration;b.currentTimene&&(b.currentTime=K,b.pause())}return b.addEventListener("timeupdate",L),{destroy:()=>b.removeEventListener("timeupdate",L)}}function ke({detail:{values:b}}){!n||(U("change",{data:n.data,name:t,crop_min:b[0],crop_max:b[1]}),U("edit"))}function Ee({detail:b}){i(1,n=b),U("change",{data:b.data,name:b.name}),U("upload",b)}let{dragging:x=!1}=e;function pe(b){X.call(this,l,b)}function we(b){X.call(this,l,b)}function ve(b){X.call(this,l,b)}function le(b){x=b,i(0,x)}const fe=()=>i(10,h="edit");function te(b){Me[b?"unshift":"push"](()=>{Q=b,i(11,Q)})}function Ae(b){O=b,i(12,O)}return l.$$set=b=>{"value"in b&&i(1,n=b.value),"label"in b&&i(2,a=b.label),"show_label"in b&&i(3,f=b.show_label),"name"in b&&i(19,t=b.name),"source"in b&&i(4,u=b.source),"pending"in b&&i(20,_=b.pending),"streaming"in b&&i(5,g=b.streaming),"drop_text"in b&&i(6,o=b.drop_text),"or_text"in b&&i(7,s=b.or_text),"upload_text"in 
b&&i(8,m=b.upload_text),"dragging"in b&&i(0,x=b.dragging)},l.$$.update=()=>{if(l.$$.dirty[0]&15728640&&I&&_===!1&&(i(23,I=!1),V&&w)){let b=[V].concat(w);i(22,w=[]),Y(b,"stream")}l.$$.dirty[0]&1&&U("drag",x)},[x,n,a,f,u,g,o,s,m,k,h,Q,O,W,ae,d,se,ke,Ee,t,_,V,w,I,pe,we,ve,le,fe,te,Ae]}class un extends me{constructor(e){super(),ce(this,e,rn,fn,be,{value:1,label:2,show_label:3,name:19,source:4,pending:20,streaming:5,drop_text:6,or_text:7,upload_text:8,dragging:0},null,[-1,-1])}}function on(l){let e,i,n,a;return{c(){e=P("audio"),c(e,"class","w-full h-14 p-2 mt-7"),e.controls=!0,c(e,"preload","metadata"),Re(e.src,i=l[0].data)||c(e,"src",i)},m(f,t){y(f,e,t),n||(a=[R(e,"play",l[4]),R(e,"pause",l[5]),R(e,"ended",l[6])],n=!0)},p(f,t){t&1&&!Re(e.src,i=f[0].data)&&c(e,"src",i)},i:j,o:j,d(f){f&&S(e),n=!1,re(a)}}}function _n(l){let e,i,n,a;return n=new Be({}),{c(){e=P("div"),i=P("div"),q(n.$$.fragment),c(i,"class","h-5 dark:text-white opacity-50"),c(e,"class","h-full min-h-[8rem] flex justify-center items-center")},m(f,t){y(f,e,t),E(e,i),G(n,i,null),a=!0},p:j,i(f){a||(F(n.$$.fragment,f),a=!0)},o(f){T(n.$$.fragment,f),a=!1},d(f){f&&S(e),Z(n)}}}function dn(l){let e,i,n,a,f,t;e=new dl({props:{show_label:l[2],Icon:Be,label:l[1]||"Audio"}});const u=[_n,on],_=[];function g(o,s){return o[0]===null?0:1}return n=g(l),a=_[n]=u[n](l),{c(){q(e.$$.fragment),i=z(),a.c(),f=ue()},m(o,s){G(e,o,s),y(o,i,s),_[n].m(o,s),y(o,f,s),t=!0},p(o,[s]){const m={};s&4&&(m.show_label=o[2]),s&2&&(m.label=o[1]||"Audio"),e.$set(m);let k=n;n=g(o),n===k?_[n].p(o,s):(ge(),T(_[k],1,1,()=>{_[k]=null}),he(),a=_[n],a?a.p(o,s):(a=_[n]=u[n](o),a.c()),F(a,1),a.m(f.parentNode,f))},i(o){t||(F(e.$$.fragment,o),F(a),t=!0)},o(o){T(e.$$.fragment,o),T(a),t=!1},d(o){Z(e,o),o&&S(i),_[n].d(o),o&&S(f)}}}function mn(l,e,i){let{value:n=null}=e,{label:a}=e,{name:f}=e,{show_label:t}=e;const u=He();function _(s){X.call(this,l,s)}function g(s){X.call(this,l,s)}function o(s){X.call(this,l,s)}return l.$$set=s=>{"value"in s&&i(0,n=s.value),"label"in s&&i(1,a=s.label),"name"in s&&i(3,f=s.name),"show_label"in s&&i(2,t=s.show_label)},l.$$.update=()=>{l.$$.dirty&9&&n&&u("change",{name:f,data:n?.data})},[n,a,t,f,_,g,o]}class cn extends me{constructor(e){super(),ce(this,e,mn,dn,be,{value:0,label:1,name:3,show_label:2})}}function bn(l){let e,i;return e=new cn({props:{show_label:l[8],value:l[11],name:l[11]?.name||"audio_file",label:l[7]}}),{c(){q(e.$$.fragment)},m(n,a){G(e,n,a),i=!0},p(n,a){const f={};a&256&&(f.show_label=n[8]),a&2048&&(f.value=n[11]),a&2048&&(f.name=n[11]?.name||"audio_file"),a&128&&(f.label=n[7]),e.$set(f)},i(n){i||(F(e.$$.fragment,n),i=!0)},o(n){T(e.$$.fragment,n),i=!1},d(n){Z(e,n)}}}function gn(l){let e,i;return e=new un({props:{label:l[7],show_label:l[8],value:l[11],name:l[5],source:l[6],pending:l[9],streaming:l[10],drop_text:l[13]("interface.drop_audio"),or_text:l[13]("or"),upload_text:l[13]("interface.click_to_upload")}}),e.$on("change",l[18]),e.$on("stream",l[19]),e.$on("drag",l[20]),e.$on("edit",l[21]),e.$on("play",l[22]),e.$on("pause",l[23]),e.$on("ended",l[24]),e.$on("upload",l[25]),e.$on("error",l[26]),{c(){q(e.$$.fragment)},m(n,a){G(e,n,a),i=!0},p(n,a){const 
f={};a&128&&(f.label=n[7]),a&256&&(f.show_label=n[8]),a&2048&&(f.value=n[11]),a&32&&(f.name=n[5]),a&64&&(f.source=n[6]),a&512&&(f.pending=n[9]),a&1024&&(f.streaming=n[10]),a&8192&&(f.drop_text=n[13]("interface.drop_audio")),a&8192&&(f.or_text=n[13]("or")),a&8192&&(f.upload_text=n[13]("interface.click_to_upload")),e.$set(f)},i(n){i||(F(e.$$.fragment,n),i=!0)},o(n){T(e.$$.fragment,n),i=!1},d(n){Z(e,n)}}}function hn(l){let e,i,n,a,f,t;const u=[l[1]];let _={};for(let m=0;m{o[h]=null}),he(),a=o[n],a?a.p(m,k):(a=o[n]=g[n](m),a.c()),F(a,1),a.m(f.parentNode,f))},i(m){t||(F(e.$$.fragment,m),F(a),t=!0)},o(m){T(e.$$.fragment,m),T(a),t=!1},d(m){Z(e,m),m&&S(i),o[n].d(m),m&&S(f)}}}function kn(l){let e,i;return e=new Bl({props:{variant:l[4]==="dynamic"&&l[0]===null&&l[6]==="upload"?"dashed":"solid",color:l[12]?"green":"grey",padding:!1,elem_id:l[2],visible:l[3],$$slots:{default:[hn]},$$scope:{ctx:l}}}),{c(){q(e.$$.fragment)},m(n,a){G(e,n,a),i=!0},p(n,[a]){const f={};a&81&&(f.variant=n[4]==="dynamic"&&n[0]===null&&n[6]==="upload"?"dashed":"solid"),a&4096&&(f.color=n[12]?"green":"grey"),a&4&&(f.elem_id=n[2]),a&8&&(f.visible=n[3]),a&134234099&&(f.$$scope={dirty:a,ctx:n}),e.$set(f)},i(n){i||(F(e.$$.fragment,n),i=!0)},o(n){T(e.$$.fragment,n),i=!1},d(n){Z(e,n)}}}function pn(l,e,i){let n;Dl(l,Ll,d=>i(13,n=d));let{style:a={}}=e;const f=He();let{elem_id:t=""}=e,{visible:u=!0}=e,{mode:_}=e,{value:g=null}=e,{name:o}=e,{source:s}=e,{label:m}=e,{root:k}=e,{show_label:p}=e,{pending:h}=e,{streaming:V}=e,{root_url:w}=e,{loading_status:I}=e,Q,D;const O=({detail:d})=>{i(0,g=d),f("change",g)},B=({detail:d})=>{i(0,g=d),f("stream",g)},J=({detail:d})=>i(12,D=d);function $(d){X.call(this,l,d)}function U(d){X.call(this,l,d)}function Y(d){X.call(this,l,d)}function ee(d){X.call(this,l,d)}function W(d){X.call(this,l,d)}const ae=({detail:d})=>{i(1,I=I||{}),i(1,I.status="error",I),i(1,I.message=d,I)};return l.$$set=d=>{"style"in d&&i(15,a=d.style),"elem_id"in d&&i(2,t=d.elem_id),"visible"in d&&i(3,u=d.visible),"mode"in d&&i(4,_=d.mode),"value"in d&&i(0,g=d.value),"name"in d&&i(5,o=d.name),"source"in d&&i(6,s=d.source),"label"in d&&i(7,m=d.label),"root"in d&&i(16,k=d.root),"show_label"in d&&i(8,p=d.show_label),"pending"in d&&i(9,h=d.pending),"streaming"in d&&i(10,V=d.streaming),"root_url"in d&&i(17,w=d.root_url),"loading_status"in d&&i(1,I=d.loading_status)},l.$$.update=()=>{l.$$.dirty&196609&&i(11,Q=zl(g,w??k))},[g,I,t,u,_,o,s,m,p,h,V,Q,D,n,f,a,k,w,O,B,J,$,U,Y,ee,W,ae]}class wn extends me{constructor(e){super(),ce(this,e,pn,kn,be,{style:15,elem_id:2,visible:3,mode:4,value:0,name:5,source:6,label:7,root:16,show_label:8,pending:9,streaming:10,root_url:17,loading_status:1})}}var En=wn;const Pn=["static","dynamic"],Rn=()=>({type:"{ name: string; data: string }",description:"audio data as base64 string",example_data:{name:"audio.wav",data:"data:audio/wav;base64,UklGRiQAAABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YQAAAAA="}});export{En as Component,Rn as document,Pn as modes};
-//# sourceMappingURL=index.64f1ca39.js.map
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.aa361089.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.aa361089.js
deleted file mode 100644
index 568d92c50ecf5e9df48f05e69a7ed801a9f87362..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.aa361089.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as g,i as h,s as C,e as D,c as k,b as _,Y as b,f as E,m as T,j as m,k as c,n as F,o as p,F as I,a2 as K,Q as M,ad as Q,aa as Y,p as S,u as q,q as w,r as j,K as z}from"./index.396f4a72.js";import{a as G}from"./Tabs.6b500f1a.js";import{C as H}from"./Column.06c172ac.js";function J(a){let n;const s=a[5].default,t=S(s,a,a[6],null);return{c(){t&&t.c()},m(e,l){t&&t.m(e,l),n=!0},p(e,l){t&&t.p&&(!n||l&64)&&q(t,s,e,e[6],n?j(s,e[6],l,null):w(e[6]),null)},i(e){n||(m(t,e),n=!0)},o(e){c(t,e),n=!1},d(e){t&&t.d(e)}}}function L(a){let n,s,t;return s=new H({props:{$$slots:{default:[J]},$$scope:{ctx:a}}}),{c(){n=D("div"),k(s.$$.fragment),_(n,"id",a[0]),_(n,"class","tabitem p-2 border-2 border-t-0 border-gray-200 relative flex"),b(n,"display",a[2]===a[1]?"block":"none",!1)},m(e,l){E(e,n,l),T(s,n,null),t=!0},p(e,[l]){const f={};l&64&&(f.$$scope={dirty:l,ctx:e}),s.$set(f),(!t||l&1)&&_(n,"id",e[0]),l&6&&b(n,"display",e[2]===e[1]?"block":"none",!1)},i(e){t||(m(s.$$.fragment,e),t=!0)},o(e){c(s.$$.fragment,e),t=!1},d(e){e&&F(n),p(s)}}}function N(a,n,s){let t,{$$slots:e={},$$scope:l}=n,{elem_id:f=""}=n,{name:r}=n,{id:u={}}=n;const i=I(),{register_tab:A,unregister_tab:B,selected_tab:d}=K(G);return M(a,d,o=>s(2,t=o)),A({name:r,id:u}),Q(()=>()=>B({name:r,id:u})),a.$$set=o=>{"elem_id"in o&&s(0,f=o.elem_id),"name"in o&&s(4,r=o.name),"id"in o&&s(1,u=o.id),"$$scope"in o&&s(6,l=o.$$scope)},a.$$.update=()=>{a.$$.dirty&6&&t===u&&Y().then(()=>i("select"))},[f,u,t,d,r,e,l]}class O extends g{constructor(n){super(),h(this,n,N,L,C,{elem_id:0,name:4,id:1})}}function P(a){let n;const s=a[3].default,t=S(s,a,a[5],null);return{c(){t&&t.c()},m(e,l){t&&t.m(e,l),n=!0},p(e,l){t&&t.p&&(!n||l&32)&&q(t,s,e,e[5],n?j(s,e[5],l,null):w(e[5]),null)},i(e){n||(m(t,e),n=!0)},o(e){c(t,e),n=!1},d(e){t&&t.d(e)}}}function R(a){let n,s;return n=new O({props:{elem_id:a[0],name:a[1],id:a[2],$$slots:{default:[P]},$$scope:{ctx:a}}}),n.$on("select",a[4]),{c(){k(n.$$.fragment)},m(t,e){T(n,t,e),s=!0},p(t,[e]){const l={};e&1&&(l.elem_id=t[0]),e&2&&(l.name=t[1]),e&4&&(l.id=t[2]),e&32&&(l.$$scope={dirty:e,ctx:t}),n.$set(l)},i(t){s||(m(n.$$.fragment,t),s=!0)},o(t){c(n.$$.fragment,t),s=!1},d(t){p(n,t)}}}function U(a,n,s){let{$$slots:t={},$$scope:e}=n,{elem_id:l=""}=n,{label:f}=n,{id:r}=n;function u(i){z.call(this,a,i)}return a.$$set=i=>{"elem_id"in i&&s(0,l=i.elem_id),"label"in i&&s(1,f=i.label),"id"in i&&s(2,r=i.id),"$$scope"in i&&s(5,e=i.$$scope)},[l,f,r,t,u,e]}class V extends g{constructor(n){super(),h(this,n,U,R,C,{elem_id:0,label:1,id:2})}}var v=V;const y=["static"];export{v as Component,y as modes};
-//# sourceMappingURL=index.aa361089.js.map
diff --git a/spaces/ICML2022/OFA/fairseq/examples/roberta/wsc/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/roberta/wsc/__init__.py
deleted file mode 100644
index 78afa4728eeed96142900118f6452730023466c9..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/roberta/wsc/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import wsc_criterion # noqa
-from . import wsc_task # noqa
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_asr.py b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_asr.py
deleted file mode 100644
index 005a11bfb34ca477ad9e133acd60f249e66cda47..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_asr.py
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import editdistance
-import re
-import shutil
-import soundfile as sf
-import subprocess
-from pathlib import Path
-
-from examples.speech_to_text.data_utils import load_tsv_to_dicts
-
-
-def preprocess_text(text):
- text = "|".join(re.sub(r"[^A-Z' ]", " ", text.upper()).split())
- text = " ".join(text)
- return text
-
-
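# Illustrative sketch (not part of the original file): letters are separated by
# spaces and words joined with "|"; everything outside A-Z and apostrophes is
# dropped first.
assert preprocess_text("Hello, world!") == "H E L L O | W O R L D"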
-def prepare_w2v_data(
- dict_dir, sample_rate, label, audio_paths, texts, split, data_dir
-):
- data_dir.mkdir(parents=True, exist_ok=True)
- shutil.copyfile(
- dict_dir / f"dict.{label}.txt",
- data_dir / f"dict.{label}.txt"
- )
- with open(data_dir / f"{split}.tsv", "w") as f:
- f.write("/\n")
- for audio_path in audio_paths:
- wav, sr = sf.read(audio_path)
- assert sr == sample_rate, f"{sr} != sample_rate"
- nsample = len(wav)
- f.write(f"{audio_path}\t{nsample}\n")
- with open(data_dir / f"{split}.{label}", "w") as f:
- for text in texts:
- text = preprocess_text(text)
- f.write(f"{text}\n")
-
-
-def run_asr(asr_dir, split, w2v_ckpt, w2v_label, res_dir):
- """
- results will be saved at
- {res_dir}/{ref,hypo}.word-{w2v_ckpt.filename}-{split}.txt
- """
- cmd = ["python", "-m", "examples.speech_recognition.infer"]
- cmd += [str(asr_dir.resolve())]
- cmd += ["--task", "audio_finetuning", "--nbest", "1", "--quiet"]
- cmd += ["--w2l-decoder", "viterbi", "--criterion", "ctc"]
- cmd += ["--post-process", "letter", "--max-tokens", "4000000"]
- cmd += ["--path", str(w2v_ckpt.resolve()), "--labels", w2v_label]
- cmd += ["--gen-subset", split, "--results-path", str(res_dir.resolve())]
-
- print(f"running cmd:\n{' '.join(cmd)}")
- subprocess.run(cmd, check=True)
-
-
-def compute_error_rate(hyp_wrd_path, ref_wrd_path, unit="word"):
- """each line is " (None-)" """
- """each line is "<text> (None-<index>)" """
- "word": lambda x: re.sub(r" \(.*\)$", "", x.rstrip()).split(),
- "char": lambda x: list(re.sub(r" \(.*\)$", "", x.rstrip()))
- }.get(unit)
- if tokenize_line is None:
- raise ValueError(f"{unit} not supported")
-
- inds = [int(re.sub(r"\D*(\d*)\D*", r"\1", line))
- for line in open(hyp_wrd_path)]
- hyps = [tokenize_line(line) for line in open(hyp_wrd_path)]
- refs = [tokenize_line(line) for line in open(ref_wrd_path)]
- assert(len(hyps) == len(refs))
- err_rates = [
- editdistance.eval(hyp, ref) / len(ref) for hyp, ref in zip(hyps, refs)
- ]
- ind_to_err_rates = {i: e for i, e in zip(inds, err_rates)}
- return ind_to_err_rates
-
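# Illustrative sketch (not part of the original file): the per-utterance rate
# is edit distance divided by reference length, here one substitution in four
# reference words.
assert editdistance.eval("a b c d".split(), "a b c x".split()) / 4 == 0.25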
-
-def main(args):
- samples = load_tsv_to_dicts(args.raw_manifest)
- ids = [
- sample[args.id_header] if args.id_header else "" for sample in samples
- ]
- audio_paths = [sample[args.audio_header] for sample in samples]
- texts = [sample[args.text_header] for sample in samples]
-
- prepare_w2v_data(
- args.w2v_dict_dir,
- args.w2v_sample_rate,
- args.w2v_label,
- audio_paths,
- texts,
- args.split,
- args.asr_dir
- )
- run_asr(args.asr_dir, args.split, args.w2v_ckpt, args.w2v_label, args.asr_dir)
- ind_to_err_rates = compute_error_rate(
- args.asr_dir / f"hypo.word-{args.w2v_ckpt.name}-{args.split}.txt",
- args.asr_dir / f"ref.word-{args.w2v_ckpt.name}-{args.split}.txt",
- args.err_unit,
- )
-
- uer_path = args.asr_dir / f"uer_{args.err_unit}.{args.split}.tsv"
- with open(uer_path, "w") as f:
- f.write("id\taudio\tuer\n")
- for ind, (id_, audio_path) in enumerate(zip(ids, audio_paths)):
- f.write(f"{id_}\t{audio_path}\t{ind_to_err_rates[ind]:.4f}\n")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--raw-manifest", required=True, type=Path)
- parser.add_argument("--asr-dir", required=True, type=Path)
- parser.add_argument("--id-header", default="id", type=str)
- parser.add_argument("--audio-header", default="audio", type=str)
- parser.add_argument("--text-header", default="src_text", type=str)
- parser.add_argument("--split", default="raw", type=str)
- parser.add_argument("--w2v-ckpt", required=True, type=Path)
- parser.add_argument("--w2v-dict-dir", required=True, type=Path)
- parser.add_argument("--w2v-sample-rate", default=16000, type=int)
- parser.add_argument("--w2v-label", default="ltr", type=str)
- parser.add_argument("--err-unit", default="word", type=str)
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/score.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/score.sh
deleted file mode 100644
index cb5bbb7277bfb9f2d5440da0514bf7b16da8140d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/score.sh
+++ /dev/null
@@ -1,63 +0,0 @@
-#!/usr/bin/env bash
-# Copyright 2012 Johns Hopkins University (Author: Daniel Povey)
-# 2014 Guoguo Chen
-# Apache 2.0
-
-[ -f ./path.sh ] && . ./path.sh
-
-# begin configuration section.
-cmd=run.pl
-stage=0
-decode_mbr=true
-word_ins_penalty=0.0,0.5,1.0
-min_lmwt=7
-max_lmwt=17
-iter=final
-#end configuration section.
-
-[ -f ./path.sh ] && . ./path.sh
-. parse_options.sh || exit 1;
-
-if [ $# -ne 3 ]; then
- echo "Usage: local/score.sh [--cmd (run.pl|queue.pl...)] <data-dir> <lang-dir|graph-dir> <decode-dir>"
- echo " Options:"
- echo " --cmd (run.pl|queue.pl...) # specify how to run the sub-processes."
- echo " --stage (0|1|2) # start scoring script from part-way through."
- echo " --decode_mbr (true/false) # maximum bayes risk decoding (confusion network)."
- echo " --min_lmwt # minumum LM-weight for lattice rescoring "
- echo " --max_lmwt # maximum LM-weight for lattice rescoring "
- exit 1;
-fi
-
-data=$1
-lang_or_graph=$2
-dir=$3
-
-symtab=$lang_or_graph/words.txt
-
-for f in $symtab $dir/lat.1.gz $data/text; do
- [ ! -f $f ] && echo "score.sh: no such file $f" && exit 1;
-done
-
-mkdir -p $dir/scoring/log
-
-cat $data/text | sed 's:::g' | sed 's:::g' > $dir/scoring/test_filt.txt
-
-for wip in $(echo $word_ins_penalty | sed 's/,/ /g'); do
- $cmd LMWT=$min_lmwt:$max_lmwt $dir/scoring/log/best_path.LMWT.$wip.log \
- lattice-scale --inv-acoustic-scale=LMWT "ark:gunzip -c $dir/lat.*.gz|" ark:- \| \
- lattice-add-penalty --word-ins-penalty=$wip ark:- ark:- \| \
- lattice-best-path --word-symbol-table=$symtab \
- ark:- ark,t:$dir/scoring/LMWT.$wip.tra || exit 1;
-done
-
-# Note: the double level of quoting for the sed command
-for wip in $(echo $word_ins_penalty | sed 's/,/ /g'); do
- $cmd LMWT=$min_lmwt:$max_lmwt $dir/scoring/log/score.LMWT.$wip.log \
- cat $dir/scoring/LMWT.$wip.tra \| \
- utils/int2sym.pl -f 2- $symtab \| sed 's:\::g' \| \
- compute-wer --text --mode=present \
- ark:$dir/scoring/test_filt.txt ark,p:- ">&" $dir/wer_LMWT_$wip || exit 1;
-done
-
-exit 0;
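In effect, the two loops above sweep a grid of LM weights (7 to 17) and word-insertion penalties (0.0, 0.5, 1.0), writing one `wer_LMWT_wip` file per combination; the best operating point is then picked from those files. A hedged Python sketch of that selection step, with a toy `wer_fn` standing in for reading Kaldi's `compute-wer` output (the function names here are illustrative, not part of the deleted script):

```python
from itertools import product

def best_operating_point(wer_fn, min_lmwt=7, max_lmwt=17, penalties=(0.0, 0.5, 1.0)):
    """Return the (lmwt, wip) pair with the lowest WER according to wer_fn."""
    grid = product(range(min_lmwt, max_lmwt + 1), penalties)
    return min(grid, key=lambda point: wer_fn(*point))

# Toy WER surface: best around LMWT=11 with no insertion penalty.
print(best_operating_point(lambda lmwt, wip: abs(lmwt - 11) * 0.3 + wip))  # (11, 0.0)
```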
diff --git a/spaces/Illumotion/Koboldcpp/convert-baichuan-hf-to-gguf.py b/spaces/Illumotion/Koboldcpp/convert-baichuan-hf-to-gguf.py
deleted file mode 100644
index 8bd34dc440769b3ec5cf837402bd5d3d0d229a8c..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/convert-baichuan-hf-to-gguf.py
+++ /dev/null
@@ -1,304 +0,0 @@
-#!/usr/bin/env python3
-# HF baichuan --> gguf conversion
-
-from __future__ import annotations
-
-import argparse
-import json
-import os
-import struct
-import sys
-from pathlib import Path
-from typing import TYPE_CHECKING, Any
-import itertools
-import gguf
-import numpy as np
-import torch
-from sentencepiece import SentencePieceProcessor # type: ignore[import]
-
-
-if TYPE_CHECKING:
- from typing import TypeAlias
-
-NDArray: TypeAlias = 'np.ndarray[Any, Any]'
-
-# reverse HF permute back to original pth layout
-
-
-def reverse_hf_permute(weights: NDArray, n_head: int, n_kv_head: int | None = None) -> NDArray:
- if n_kv_head is not None and n_head != n_kv_head:
- n_head //= n_kv_head
-
- return (weights.reshape(n_head, 2, weights.shape[0] // n_head // 2, *weights.shape[1:])
- .swapaxes(1, 2)
- .reshape(weights.shape))
-
-def reverse_hf_permute_part(weights: NDArray, n_part: int, n_head: int, n_head_kv: int| None = None) -> NDArray:
- r = weights.shape[0] // 3
- return (reverse_hf_permute(weights[r * n_part : r * n_part + r, ...], n_head, n_head_kv))
-
-def reverse_hf_part(weights: NDArray, n_part: int) -> NDArray:
- r = weights.shape[0] // 3
- return weights[r * n_part : r * n_part + r, ...]
-
-def count_model_parts(dir_model: str) -> int:
- num_parts = 0
-
- for filename in os.listdir(dir_model):
- if filename.startswith("pytorch_model-"):
- num_parts += 1
-
- if num_parts > 0:
- print("gguf: found " + str(num_parts) + " model parts")
-
- return num_parts
-
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser(description="Convert a HuggingFace LLaMA model to a GGML compatible file")
- parser.add_argument(
- "--vocab-only", action="store_true",
- help="extract only the vocab",
- )
- parser.add_argument(
- "--outfile", type=Path,
- help="path to write to; default: based on input",
- )
- parser.add_argument(
- "model", type=Path,
- help="directory containing model file, or model file itself (*.bin)",
- )
- parser.add_argument(
- "ftype", type=int, choices=[0, 1], default=1, nargs='?',
- help="output format - use 0 for float32, 1 for float16",
- )
- return parser.parse_args()
-
-args = parse_args()
-
-dir_model = args.model
-ftype = args.ftype
-if not dir_model.is_dir():
- print(f'Error: {args.model} is not a directory', file = sys.stderr)
- sys.exit(1)
-
-# possible tensor data types
-# ftype == 0 -> float32
-# ftype == 1 -> float16
-
-# map from ftype to string
-ftype_str = ["f32", "f16"]
-
-if args.outfile is not None:
- fname_out = args.outfile
-else:
- # output in the same directory as the model by default
- fname_out = dir_model / f'ggml-model-{ftype_str[ftype]}.gguf'
-
-print("gguf: loading model "+dir_model.name)
-
-with open(dir_model / "config.json", "r", encoding="utf-8") as f:
- hparams = json.load(f)
-print("hello print: ",hparams["architectures"][0])
-if hparams["architectures"][0] != "BaichuanForCausalLM":
- print("Model architecture not supported: " + hparams["architectures"][0])
-
- sys.exit()
-
-# get number of model parts
-num_parts = count_model_parts(dir_model)
-print(f"num_parts:{num_parts}\n")
-ARCH=gguf.MODEL_ARCH.BAICHUAN
-gguf_writer = gguf.GGUFWriter(fname_out, gguf.MODEL_ARCH_NAMES[ARCH])
-
-print("gguf: get model metadata")
-
-block_count = hparams["num_hidden_layers"]
-head_count = hparams["num_attention_heads"]
-
-if "num_key_value_heads" in hparams:
- head_count_kv = hparams["num_key_value_heads"]
-else:
- head_count_kv = head_count
-
-if "_name_or_path" in hparams:
- hf_repo = hparams["_name_or_path"]
-else:
- hf_repo = ""
-
-if "max_sequence_length" in hparams:
- ctx_length = hparams["max_sequence_length"]
-elif "max_position_embeddings" in hparams:
- ctx_length = hparams["max_position_embeddings"]
-elif "model_max_length" in hparams:
- ctx_length = hparams["model_max_length"]
-else:
- print("gguf: can not find ctx length parameter.")
-
- sys.exit()
-
-
-gguf_writer.add_name(dir_model.name)
-gguf_writer.add_source_hf_repo(hf_repo)
-gguf_writer.add_tensor_data_layout("Meta AI original pth")
-gguf_writer.add_context_length(ctx_length)
-gguf_writer.add_embedding_length(hparams["hidden_size"])
-gguf_writer.add_block_count(block_count)
-gguf_writer.add_feed_forward_length(hparams["intermediate_size"])
-gguf_writer.add_rope_dimension_count(hparams["hidden_size"] // hparams["num_attention_heads"])
-gguf_writer.add_head_count(head_count)
-gguf_writer.add_head_count_kv(head_count_kv)
-gguf_writer.add_layer_norm_rms_eps(hparams["rms_norm_eps"])
-
-if "rope_scaling" in hparams and hparams["rope_scaling"] != None and "factor" in hparams["rope_scaling"]:
- if "type" in hparams["rope_scaling"]:
- if hparams["rope_scaling"]["type"] == "linear":
- gguf_writer.add_rope_scale_linear(hparams["rope_scaling"]["factor"])
-
-
-# TOKENIZATION
-
-print("gguf: get tokenizer metadata")
-
-tokens: list[bytes] = []
-scores: list[float] = []
-toktypes: list[int] = []
-
-tokenizer_model_file = dir_model / 'tokenizer.model'
-if not tokenizer_model_file.is_file():
- print(f'Error: Missing {tokenizer_model_file}', file = sys.stderr)
- sys.exit(1)
-
-# vocab type sentencepiece
-print("gguf: get sentencepiece tokenizer vocab, scores and token types")
-
-tokenizer = SentencePieceProcessor(str(tokenizer_model_file))
-
-for i in range(tokenizer.vocab_size()):
- text: bytes
- score: float
-
- piece = tokenizer.id_to_piece(i)
- text = piece.encode("utf-8")
- score = tokenizer.get_score(i)
-
- toktype = 1 # default to normal token type
- if tokenizer.is_unknown(i):
- toktype = 2
- if tokenizer.is_control(i):
- toktype = 3
-
- # toktype = 4 is user-defined = tokens from added_tokens.json
-
- if tokenizer.is_unused(i):
- toktype = 5
- if tokenizer.is_byte(i):
- toktype = 6
-
- tokens.append(text)
- scores.append(score)
- toktypes.append(toktype)
-
-added_tokens_file = dir_model / 'added_tokens.json'
-if added_tokens_file.is_file():
- with open(added_tokens_file, "r", encoding="utf-8") as f:
- addtokens_json = json.load(f)
-
- print("gguf: get added tokens")
-
- for key in addtokens_json:
- tokens.append( key.encode("utf-8") )
- scores.append(-1000.0)
- toktypes.append(4) # user-defined token type
-
-
-gguf_writer.add_tokenizer_model("llama")
-gguf_writer.add_token_list(tokens)
-gguf_writer.add_token_scores(scores)
-gguf_writer.add_token_types(toktypes)
-
-special_vocab = gguf.SpecialVocab(dir_model)
-special_vocab.add_to_gguf(gguf_writer)
-
-# TENSORS
-
-tensor_map = gguf.get_tensor_name_map(ARCH,block_count)
-
-# tensor info
-print("gguf: get tensor metadata")
-
-if num_parts == 0:
- part_names = iter(("pytorch_model.bin",))
-else:
- part_names = (
- f"pytorch_model-{n:05}-of-{num_parts:05}.bin" for n in range(1, num_parts + 1)
- )
-
-
-for part_name in part_names:
- if args.vocab_only:
- break
- print("gguf: loading model part '" + part_name + "'")
- model_part = torch.load(f"{dir_model}/{part_name}", map_location="cpu")
-
- tmp=model_part
- for i in range(block_count):
- if f"model.layers.{i}.self_attn.W_pack.weight" in model_part:
- print(f"Unpacking and permuting layer {i}")
- tmp[f"model.layers.{i}.self_attn.q_proj.weight"]=reverse_hf_permute_part(model_part[f"model.layers.{i}.self_attn.W_pack.weight"],0,head_count,head_count)
- tmp[f"model.layers.{i}.self_attn.k_proj.weight"]=reverse_hf_permute_part(model_part[f"model.layers.{i}.self_attn.W_pack.weight"],1,head_count,head_count_kv)
- tmp[f"model.layers.{i}.self_attn.v_proj.weight"]=reverse_hf_part(model_part[f"model.layers.{i}.self_attn.W_pack.weight"],2)
- del tmp[f"model.layers.{i}.self_attn.W_pack.weight"]
-
- for name in model_part.keys():
- data = model_part[name]
- # we don't need these
- if name.endswith(".rotary_emb.inv_freq"):
- continue
-
- old_dtype = data.dtype
-
- # convert any unsupported data types to float32
- if data.dtype != torch.float16 and data.dtype != torch.float32:
- data = data.to(torch.float32)
-
- data = data.squeeze().numpy()
-
- # map tensor names
- new_name = tensor_map.get_name(name, try_suffixes = (".weight", ".bias"))
- if new_name is None:
- print("Can not map tensor '" + name + "'")
- sys.exit()
-
- n_dims = len(data.shape)
- data_dtype = data.dtype
-
- # if f32 desired, convert any float16 to float32
- if ftype == 0 and data_dtype == np.float16:
- data = data.astype(np.float32)
-
- # TODO: Why can't we use these float16 values as-is? There should be no reason to store float16 as float32
- if ftype == 1 and data_dtype == np.float16 and n_dims == 1:
- data = data.astype(np.float32)
-
- # if f16 desired, convert any float32 2-dim weight tensors to float16
- if ftype == 1 and data_dtype == np.float32 and name.endswith(".weight") and n_dims == 2:
- data = data.astype(np.float16)
-
- print(name + " -> " + new_name + ", n_dims = " + str(n_dims) + ", " + str(old_dtype) + " --> " + str(data.dtype))
- gguf_writer.add_tensor(new_name, data)
-
-
-print("gguf: write header")
-gguf_writer.write_header_to_file()
-print("gguf: write metadata")
-gguf_writer.write_kv_data_to_file()
-if not args.vocab_only:
- print("gguf: write tensors")
- gguf_writer.write_tensors_to_file()
-
-gguf_writer.close()
-
-print(f"gguf: model successfully exported to '{fname_out}'")
-print("")
diff --git a/spaces/Illumotion/Koboldcpp/include/CL/opencl.hpp b/spaces/Illumotion/Koboldcpp/include/CL/opencl.hpp
deleted file mode 100644
index 1e61d7890137572e74e34a84826d9cc9cda20155..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/include/CL/opencl.hpp
+++ /dev/null
@@ -1,10372 +0,0 @@
-//
-// Copyright (c) 2008-2020 The Khronos Group Inc.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-
-/*! \file
- *
- * \brief C++ bindings for OpenCL 1.0, OpenCL 1.1, OpenCL 1.2,
- * OpenCL 2.0, OpenCL 2.1, OpenCL 2.2, and OpenCL 3.0.
- * \author Lee Howes and Bruce Merry
- *
- * Derived from the OpenCL 1.x C++ bindings written by
- * Benedict R. Gaster, Laurent Morichetti and Lee Howes
- * With additions and fixes from:
- * Brian Cole, March 3rd 2010 and April 2012
- * Matt Gruenke, April 2012.
- * Bruce Merry, February 2013.
- * Tom Deakin and Simon McIntosh-Smith, July 2013
- * James Price, 2015-
- * \version 2.2.0
- * \date 2019-09-18
- *
- * Optional extension support
- *
- * cl_ext_device_fission
- * #define CL_HPP_USE_CL_DEVICE_FISSION
- * cl_khr_d3d10_sharing
- * #define CL_HPP_USE_DX_INTEROP
- * cl_khr_sub_groups
- * #define CL_HPP_USE_CL_SUB_GROUPS_KHR
- * cl_khr_image2d_from_buffer
- * #define CL_HPP_USE_CL_IMAGE2D_FROM_BUFFER_KHR
- *
- * Doxygen documentation for this header is available here:
- *
- * http://khronosgroup.github.io/OpenCL-CLHPP/
- *
- * The latest version of this header can be found on the GitHub releases page:
- *
- * https://github.com/KhronosGroup/OpenCL-CLHPP/releases
- *
- * Bugs and patches can be submitted to the GitHub repository:
- *
- * https://github.com/KhronosGroup/OpenCL-CLHPP
- */
-
-/*! \mainpage
- * \section intro Introduction
- * For many large applications C++ is the language of choice and so it seems
- * reasonable to define C++ bindings for OpenCL.
- *
- * The interface is contained within a single C++ header file \em opencl.hpp and all
- * definitions are contained within the namespace \em cl. There is no additional
- * requirement to include \em cl.h and to use either the C++ or original C
- * bindings; it is enough to simply include \em opencl.hpp.
- *
- * The bindings themselves are lightweight and correspond closely to the
- * underlying C API. Using the C++ bindings introduces no additional execution
- * overhead.
- *
- * There are numerous compatibility, portability and memory management
- * fixes in the new header as well as additional OpenCL 2.0 features.
- * As a result the header is not directly backward compatible and for this
- * reason we release it as opencl.hpp rather than a new version of cl.hpp.
- *
- *
- * \section compatibility Compatibility
- * Due to the evolution of the underlying OpenCL API the 2.0 C++ bindings
- * include an updated approach to defining supported feature versions
- * and the range of valid underlying OpenCL runtime versions supported.
- *
- * The combination of preprocessor macros CL_HPP_TARGET_OPENCL_VERSION and
- * CL_HPP_MINIMUM_OPENCL_VERSION control this range. These are three digit
- * decimal values representing OpenCL runtime versions. The default for
- * the target is 200, representing OpenCL 2.0 and the minimum is also
- * defined as 200. These settings would use 2.0 API calls only.
- * If backward compatibility with a 1.2 runtime is required, the minimum
- * version may be set to 120.
- *
- * Note that this is a compile-time setting, and so affects linking against
- * a particular SDK version rather than the versioning of the loaded runtime.
- *
- * The earlier versions of the header included basic vector and string
- * classes based loosely on STL versions. These were difficult to
- * maintain and very rarely used. For the 2.0 header we now assume
- * the presence of the standard library unless requested otherwise.
- * We use std::array, std::vector, std::shared_ptr and std::string
- * throughout to safely manage memory and reduce the chance of a
- * recurrence of earlier memory management bugs.
- *
- * These classes are used through typedefs in the cl namespace:
- * cl::array, cl::vector, cl::pointer and cl::string.
- * In addition cl::allocate_pointer forwards to std::allocate_shared
- * by default.
- * In all cases these standard library classes can be replaced with
- * custom interface-compatible versions using the CL_HPP_NO_STD_ARRAY,
- * CL_HPP_NO_STD_VECTOR, CL_HPP_NO_STD_UNIQUE_PTR and
- * CL_HPP_NO_STD_STRING macros.
- *
- * The OpenCL 1.x versions of the C++ bindings included a size_t wrapper
- * class to interface with kernel enqueue. This caused unpleasant interactions
- * with the standard size_t declaration and led to namespacing bugs.
- * In the 2.0 version we have replaced this with a std::array-based interface.
- * However, the old behaviour can be regained for backward compatibility
- * using the CL_HPP_ENABLE_SIZE_T_COMPATIBILITY macro.
- *
- * Finally, the program construction interface used a clumsy vector-of-pairs
- * design in the earlier versions. We have replaced that with a cleaner
- * vector-of-vectors and vector-of-strings design. However, for backward
- * compatibility old behaviour can be regained with the
- * CL_HPP_ENABLE_PROGRAM_CONSTRUCTION_FROM_ARRAY_COMPATIBILITY macro.
- *
- * In OpenCL 2.0, OpenCL C is not entirely backward compatible with
- * earlier versions. As a result a flag must be passed to the OpenCL C
- * compiler to request OpenCL 2.0 compilation of kernels, with 1.2 as
- * the default in the absence of the flag.
- * In some cases the C++ bindings automatically compile code for ease.
- * For those cases the compilation defaults to OpenCL C 2.0.
- * If this is not wanted, the CL_HPP_CL_1_2_DEFAULT_BUILD macro may
- * be specified to assume 1.2 compilation.
- * If more fine-grained decisions on a per-kernel basis are required
- * then explicit build operations that take the flag should be used.
- *
- *
- * \section parameterization Parameters
- * This header may be parameterized by a set of preprocessor macros.
- *
- * - CL_HPP_TARGET_OPENCL_VERSION
- *
- * Defines the target OpenCL runtime version to build the header
- * against. Defaults to 200, representing OpenCL 2.0.
- *
- * - CL_HPP_NO_STD_STRING
- *
- * Do not use the standard library string class. cl::string is not
- * defined and may be defined by the user before opencl.hpp is
- * included.
- *
- * - CL_HPP_NO_STD_VECTOR
- *
- * Do not use the standard library vector class. cl::vector is not
- * defined and may be defined by the user before opencl.hpp is
- * included.
- *
- * - CL_HPP_NO_STD_ARRAY
- *
- * Do not use the standard library array class. cl::array is not
- * defined and may be defined by the user before opencl.hpp is
- * included.
- *
- * - CL_HPP_NO_STD_UNIQUE_PTR
- *
- * Do not use the standard library unique_ptr class. cl::pointer and
- * the cl::allocate_pointer functions are not defined and may be
- * defined by the user before opencl.hpp is included.
- *
- * - CL_HPP_ENABLE_EXCEPTIONS
- *
- * Enable exceptions for use in the C++ bindings header. This is the
- * preferred error handling mechanism but is not required.
- *
- * - CL_HPP_ENABLE_SIZE_T_COMPATIBILITY
- *
- * Backward compatibility option to support cl.hpp-style size_t
- * class. Replaces the updated std::array derived version and
- * removal of size_t from the namespace. Note that in this case the
- * new size_t class is placed in the cl::compatibility namespace and
- * thus requires an additional using declaration for direct backward
- * compatibility.
- *
- * - CL_HPP_ENABLE_PROGRAM_CONSTRUCTION_FROM_ARRAY_COMPATIBILITY
- *
- * Enable older vector of pairs interface for construction of
- * programs.
- *
- * - CL_HPP_CL_1_2_DEFAULT_BUILD
- *
- * Default to OpenCL C 1.2 compilation rather than OpenCL C 2.0
- * applies to use of cl::Program construction and other program
- * build variants.
- *
- * - CL_HPP_USE_CL_DEVICE_FISSION
- *
- * Enable the cl_ext_device_fission extension.
- *
- * - CL_HPP_USE_CL_IMAGE2D_FROM_BUFFER_KHR
- *
- * Enable the cl_khr_image2d_from_buffer extension.
- *
- * - CL_HPP_USE_CL_SUB_GROUPS_KHR
- *
- * Enable the cl_khr_subgroups extension.
- *
- * - CL_HPP_USE_DX_INTEROP
- *
- * Enable the cl_khr_d3d10_sharing extension.
- *
- * - CL_HPP_USE_IL_KHR
- *
- * Enable the cl_khr_il_program extension.
- *
- *
- * \section example Example
- *
- * The following example shows a general use case for the C++
- * bindings, including support for the optional exception feature and
- * also the supplied vector and string classes, see following sections for
- * descriptions of these features.
- *
- * Note: the C++ bindings use std::call_once and therefore may need to be
- * compiled using special command-line options (such as "-pthread") on some
- * platforms!
- *
- * \code
- #define CL_HPP_ENABLE_EXCEPTIONS
- #define CL_HPP_TARGET_OPENCL_VERSION 200
-
- #include <CL/opencl.hpp>
- #include <iostream>
- #include <vector>
- #include <memory>
- #include <algorithm>
-
- const int numElements = 32;
-
- int main(void)
- {
- // Filter for a 2.0 or newer platform and set it as the default
- std::vector<cl::Platform> platforms;
- cl::Platform::get(&platforms);
- cl::Platform plat;
- for (auto &p : platforms) {
- std::string platver = p.getInfo<CL_PLATFORM_VERSION>();
- if (platver.find("OpenCL 2.") != std::string::npos ||
- platver.find("OpenCL 3.") != std::string::npos) {
- // Note: an OpenCL 3.x platform may not support all required features!
- plat = p;
- }
- }
- if (plat() == 0) {
- std::cout << "No OpenCL 2.0 or newer platform found.\n";
- return -1;
- }
-
- cl::Platform newP = cl::Platform::setDefault(plat);
- if (newP != plat) {
- std::cout << "Error setting default platform.\n";
- return -1;
- }
-
- // C++11 raw string literal for the first kernel
- std::string kernel1{R"CLC(
- global int globalA;
- kernel void updateGlobal()
- {
- globalA = 75;
- }
- )CLC"};
-
- // Raw string literal for the second kernel
- std::string kernel2{R"CLC(
- typedef struct { global int *bar; } Foo;
- kernel void vectorAdd(global const Foo* aNum, global const int *inputA, global const int *inputB,
- global int *output, int val, write_only pipe int outPipe, queue_t childQueue)
- {
- output[get_global_id(0)] = inputA[get_global_id(0)] + inputB[get_global_id(0)] + val + *(aNum->bar);
- write_pipe(outPipe, &val);
- queue_t default_queue = get_default_queue();
- ndrange_t ndrange = ndrange_1D(get_global_size(0)/2, get_global_size(0)/2);
-
- // Have a child kernel write into third quarter of output
- enqueue_kernel(default_queue, CLK_ENQUEUE_FLAGS_WAIT_KERNEL, ndrange,
- ^{
- output[get_global_size(0)*2 + get_global_id(0)] =
- inputA[get_global_size(0)*2 + get_global_id(0)] + inputB[get_global_size(0)*2 + get_global_id(0)] + globalA;
- });
-
- // Have a child kernel write into last quarter of output
- enqueue_kernel(childQueue, CLK_ENQUEUE_FLAGS_WAIT_KERNEL, ndrange,
- ^{
- output[get_global_size(0)*3 + get_global_id(0)] =
- inputA[get_global_size(0)*3 + get_global_id(0)] + inputB[get_global_size(0)*3 + get_global_id(0)] + globalA + 2;
- });
- }
- )CLC"};
-
- std::vector<std::string> programStrings;
- programStrings.push_back(kernel1);
- programStrings.push_back(kernel2);
-
- cl::Program vectorAddProgram(programStrings);
- try {
- vectorAddProgram.build("-cl-std=CL2.0");
- }
- catch (...) {
- // Print build info for all devices
- cl_int buildErr = CL_SUCCESS;
- auto buildInfo = vectorAddProgram.getBuildInfo<CL_PROGRAM_BUILD_LOG>(&buildErr);
- for (auto &pair : buildInfo) {
- std::cerr << pair.second << std::endl << std::endl;
- }
-
- return 1;
- }
-
- typedef struct { int *bar; } Foo;
-
- // Get and run kernel that initializes the program-scope global
- // A test for kernels that take no arguments
- auto program2Kernel =
- cl::KernelFunctor<>(vectorAddProgram, "updateGlobal");
- program2Kernel(
- cl::EnqueueArgs(
- cl::NDRange(1)));
-
- //////////////////
- // SVM allocations
-
- auto anSVMInt = cl::allocate_svm<int, cl::SVMTraitCoarse<>>();
- *anSVMInt = 5;
- cl::SVMAllocator>> svmAllocReadOnly;
- auto fooPointer = cl::allocate_pointer(svmAllocReadOnly);
- fooPointer->bar = anSVMInt.get();
- cl::SVMAllocator<int, cl::SVMTraitCoarse<>> svmAlloc;
- std::vector<int, cl::SVMAllocator<int, cl::SVMTraitCoarse<>>> inputA(numElements, 1, svmAlloc);
- cl::coarse_svm_vector<int> inputB(numElements, 2, svmAlloc);
-
- //////////////
- // Traditional cl_mem allocations
-
- std::vector<int> output(numElements, 0xdeadbeef);
- cl::Buffer outputBuffer(begin(output), end(output), false);
- cl::Pipe aPipe(sizeof(cl_int), numElements / 2);
-
- // Default command queue, also passed in as a parameter
- cl::DeviceCommandQueue defaultDeviceQueue = cl::DeviceCommandQueue::makeDefault(
- cl::Context::getDefault(), cl::Device::getDefault());
-
- auto vectorAddKernel =
- cl::KernelFunctor<
- decltype(fooPointer)&,
- int*,
- cl::coarse_svm_vector<int>&,
- cl::Buffer,
- int,
- cl::Pipe&,
- cl::DeviceCommandQueue
- >(vectorAddProgram, "vectorAdd");
-
- // Ensure that the additional SVM pointer is available to the kernel
- // This one was not passed as a parameter
- vectorAddKernel.setSVMPointers(anSVMInt);
-
- cl_int error;
- vectorAddKernel(
- cl::EnqueueArgs(
- cl::NDRange(numElements/2),
- cl::NDRange(numElements/2)),
- fooPointer,
- inputA.data(),
- inputB,
- outputBuffer,
- 3,
- aPipe,
- defaultDeviceQueue,
- error
- );
-
- cl::copy(outputBuffer, begin(output), end(output));
-
- cl::Device d = cl::Device::getDefault();
-
- std::cout << "Output:\n";
- for (int i = 1; i < numElements; ++i) {
- std::cout << "\t" << output[i] << "\n";
- }
- std::cout << "\n\n";
-
- return 0;
- }
- *
- * \endcode
- *
- */
-#ifndef CL_HPP_
-#define CL_HPP_
-
-/* Handle deprecated preprocessor definitions. In each case, we only check for
- * the old name if the new name is not defined, so that user code can define
- * both and hence work with either version of the bindings.
- */
-#if !defined(CL_HPP_USE_DX_INTEROP) && defined(USE_DX_INTEROP)
-# pragma message("opencl.hpp: USE_DX_INTEROP is deprecated. Define CL_HPP_USE_DX_INTEROP instead")
-# define CL_HPP_USE_DX_INTEROP
-#endif
-#if !defined(CL_HPP_USE_CL_DEVICE_FISSION) && defined(USE_CL_DEVICE_FISSION)
-# pragma message("opencl.hpp: USE_CL_DEVICE_FISSION is deprecated. Define CL_HPP_USE_CL_DEVICE_FISSION instead")
-# define CL_HPP_USE_CL_DEVICE_FISSION
-#endif
-#if !defined(CL_HPP_ENABLE_EXCEPTIONS) && defined(__CL_ENABLE_EXCEPTIONS)
-# pragma message("opencl.hpp: __CL_ENABLE_EXCEPTIONS is deprecated. Define CL_HPP_ENABLE_EXCEPTIONS instead")
-# define CL_HPP_ENABLE_EXCEPTIONS
-#endif
-#if !defined(CL_HPP_NO_STD_VECTOR) && defined(__NO_STD_VECTOR)
-# pragma message("opencl.hpp: __NO_STD_VECTOR is deprecated. Define CL_HPP_NO_STD_VECTOR instead")
-# define CL_HPP_NO_STD_VECTOR
-#endif
-#if !defined(CL_HPP_NO_STD_STRING) && defined(__NO_STD_STRING)
-# pragma message("opencl.hpp: __NO_STD_STRING is deprecated. Define CL_HPP_NO_STD_STRING instead")
-# define CL_HPP_NO_STD_STRING
-#endif
-#if defined(VECTOR_CLASS)
-# pragma message("opencl.hpp: VECTOR_CLASS is deprecated. Alias cl::vector instead")
-#endif
-#if defined(STRING_CLASS)
-# pragma message("opencl.hpp: STRING_CLASS is deprecated. Alias cl::string instead.")
-#endif
-#if !defined(CL_HPP_USER_OVERRIDE_ERROR_STRINGS) && defined(__CL_USER_OVERRIDE_ERROR_STRINGS)
-# pragma message("opencl.hpp: __CL_USER_OVERRIDE_ERROR_STRINGS is deprecated. Define CL_HPP_USER_OVERRIDE_ERROR_STRINGS instead")
-# define CL_HPP_USER_OVERRIDE_ERROR_STRINGS
-#endif
-
-/* Warn about features that are no longer supported
- */
-#if defined(__USE_DEV_VECTOR)
-# pragma message("opencl.hpp: __USE_DEV_VECTOR is no longer supported. Expect compilation errors")
-#endif
-#if defined(__USE_DEV_STRING)
-# pragma message("opencl.hpp: __USE_DEV_STRING is no longer supported. Expect compilation errors")
-#endif
-
-/* Detect which version to target */
-#if !defined(CL_HPP_TARGET_OPENCL_VERSION)
-# pragma message("opencl.hpp: CL_HPP_TARGET_OPENCL_VERSION is not defined. It will default to 300 (OpenCL 3.0)")
-# define CL_HPP_TARGET_OPENCL_VERSION 300
-#endif
-#if CL_HPP_TARGET_OPENCL_VERSION != 100 && \
- CL_HPP_TARGET_OPENCL_VERSION != 110 && \
- CL_HPP_TARGET_OPENCL_VERSION != 120 && \
- CL_HPP_TARGET_OPENCL_VERSION != 200 && \
- CL_HPP_TARGET_OPENCL_VERSION != 210 && \
- CL_HPP_TARGET_OPENCL_VERSION != 220 && \
- CL_HPP_TARGET_OPENCL_VERSION != 300
-# pragma message("opencl.hpp: CL_HPP_TARGET_OPENCL_VERSION is not a valid value (100, 110, 120, 200, 210, 220 or 300). It will be set to 300 (OpenCL 3.0).")
-# undef CL_HPP_TARGET_OPENCL_VERSION
-# define CL_HPP_TARGET_OPENCL_VERSION 300
-#endif
-
-/* Forward target OpenCL version to C headers if necessary */
-#if defined(CL_TARGET_OPENCL_VERSION)
-/* Warn if prior definition of CL_TARGET_OPENCL_VERSION is lower than
- * requested C++ bindings version */
-#if CL_TARGET_OPENCL_VERSION < CL_HPP_TARGET_OPENCL_VERSION
-# pragma message("CL_TARGET_OPENCL_VERSION is already defined as is lower than CL_HPP_TARGET_OPENCL_VERSION")
-#endif
-#else
-# define CL_TARGET_OPENCL_VERSION CL_HPP_TARGET_OPENCL_VERSION
-#endif
-
-#if !defined(CL_HPP_MINIMUM_OPENCL_VERSION)
-# define CL_HPP_MINIMUM_OPENCL_VERSION 200
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION != 100 && \
- CL_HPP_MINIMUM_OPENCL_VERSION != 110 && \
- CL_HPP_MINIMUM_OPENCL_VERSION != 120 && \
- CL_HPP_MINIMUM_OPENCL_VERSION != 200 && \
- CL_HPP_MINIMUM_OPENCL_VERSION != 210 && \
- CL_HPP_MINIMUM_OPENCL_VERSION != 220 && \
- CL_HPP_MINIMUM_OPENCL_VERSION != 300
-# pragma message("opencl.hpp: CL_HPP_MINIMUM_OPENCL_VERSION is not a valid value (100, 110, 120, 200, 210, 220 or 300). It will be set to 100")
-# undef CL_HPP_MINIMUM_OPENCL_VERSION
-# define CL_HPP_MINIMUM_OPENCL_VERSION 100
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION > CL_HPP_TARGET_OPENCL_VERSION
-# error "CL_HPP_MINIMUM_OPENCL_VERSION must not be greater than CL_HPP_TARGET_OPENCL_VERSION"
-#endif
-
-#if CL_HPP_MINIMUM_OPENCL_VERSION <= 100 && !defined(CL_USE_DEPRECATED_OPENCL_1_0_APIS)
-# define CL_USE_DEPRECATED_OPENCL_1_0_APIS
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION <= 110 && !defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
-# define CL_USE_DEPRECATED_OPENCL_1_1_APIS
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION <= 120 && !defined(CL_USE_DEPRECATED_OPENCL_1_2_APIS)
-# define CL_USE_DEPRECATED_OPENCL_1_2_APIS
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION <= 200 && !defined(CL_USE_DEPRECATED_OPENCL_2_0_APIS)
-# define CL_USE_DEPRECATED_OPENCL_2_0_APIS
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION <= 210 && !defined(CL_USE_DEPRECATED_OPENCL_2_1_APIS)
-# define CL_USE_DEPRECATED_OPENCL_2_1_APIS
-#endif
-#if CL_HPP_MINIMUM_OPENCL_VERSION <= 220 && !defined(CL_USE_DEPRECATED_OPENCL_2_2_APIS)
-# define CL_USE_DEPRECATED_OPENCL_2_2_APIS
-#endif
-
-#ifdef _WIN32
-
-#include
-
-#if defined(CL_HPP_USE_DX_INTEROP)
-#include
-#include
-#endif
-#endif // _WIN32
-
-#if defined(_MSC_VER)
-#include
-#endif // _MSC_VER
-
- // Check for a valid C++ version
-
-// Need to do both tests here because for some reason __cplusplus is not
-// updated in visual studio
-#if (!defined(_MSC_VER) && __cplusplus < 201103L) || (defined(_MSC_VER) && _MSC_VER < 1700)
-#error Visual Studio 2013 or another C++11-supporting compiler required
-#endif
-
-//
-#if defined(CL_HPP_USE_CL_DEVICE_FISSION) || defined(CL_HPP_USE_CL_SUB_GROUPS_KHR)
-#include
-#endif
-
-#if defined(__APPLE__) || defined(__MACOSX)
-#include <OpenCL/opencl.h>
-#else
-#include <CL/opencl.h>
-#endif // !__APPLE__
-
-#if (__cplusplus >= 201103L || _MSVC_LANG >= 201103L )
-#define CL_HPP_NOEXCEPT_ noexcept
-#else
-#define CL_HPP_NOEXCEPT_
-#endif
-
-#if __cplusplus >= 201703L
-# define CL_HPP_DEFINE_STATIC_MEMBER_ inline
-#elif defined(_MSC_VER)
-# define CL_HPP_DEFINE_STATIC_MEMBER_ __declspec(selectany)
-#elif defined(__MINGW32__)
-# define CL_HPP_DEFINE_STATIC_MEMBER_ __attribute__((selectany))
-#else
-# define CL_HPP_DEFINE_STATIC_MEMBER_ __attribute__((weak))
-#endif // !_MSC_VER
-
-// Define deprecated prefixes and suffixes to ensure compilation
-// in case they are not pre-defined
-#if !defined(CL_API_PREFIX__VERSION_1_1_DEPRECATED)
-#define CL_API_PREFIX__VERSION_1_1_DEPRECATED
-#endif // #if !defined(CL_API_PREFIX__VERSION_1_1_DEPRECATED)
-#if !defined(CL_API_SUFFIX__VERSION_1_1_DEPRECATED)
-#define CL_API_SUFFIX__VERSION_1_1_DEPRECATED
-#endif // #if !defined(CL_API_SUFFIX__VERSION_1_1_DEPRECATED)
-
-#if !defined(CL_API_PREFIX__VERSION_1_2_DEPRECATED)
-#define CL_API_PREFIX__VERSION_1_2_DEPRECATED
-#endif // #if !defined(CL_API_PREFIX__VERSION_1_2_DEPRECATED)
-#if !defined(CL_API_SUFFIX__VERSION_1_2_DEPRECATED)
-#define CL_API_SUFFIX__VERSION_1_2_DEPRECATED
-#endif // #if !defined(CL_API_SUFFIX__VERSION_1_2_DEPRECATED)
-
-#if !defined(CL_API_PREFIX__VERSION_2_2_DEPRECATED)
-#define CL_API_PREFIX__VERSION_2_2_DEPRECATED
-#endif // #if !defined(CL_API_PREFIX__VERSION_2_2_DEPRECATED)
-#if !defined(CL_API_SUFFIX__VERSION_2_2_DEPRECATED)
-#define CL_API_SUFFIX__VERSION_2_2_DEPRECATED
-#endif // #if !defined(CL_API_SUFFIX__VERSION_2_2_DEPRECATED)
-
-#if !defined(CL_CALLBACK)
-#define CL_CALLBACK
-#endif //CL_CALLBACK
-
-#include
-#include
-#include
-#include
-#include
-#include
-
-
-// Define a size_type to represent a correctly resolved size_t
-#if defined(CL_HPP_ENABLE_SIZE_T_COMPATIBILITY)
-namespace cl {
- using size_type = ::size_t;
-} // namespace cl
-#else // #if defined(CL_HPP_ENABLE_SIZE_T_COMPATIBILITY)
-namespace cl {
- using size_type = size_t;
-} // namespace cl
-#endif // #if defined(CL_HPP_ENABLE_SIZE_T_COMPATIBILITY)
-
-
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
-#include <exception>
-#endif // #if defined(CL_HPP_ENABLE_EXCEPTIONS)
-
-#if !defined(CL_HPP_NO_STD_VECTOR)
-#include <vector>
-namespace cl {
- template < class T, class Alloc = std::allocator<T> >
- using vector = std::vector<T, Alloc>;
-} // namespace cl
-#endif // #if !defined(CL_HPP_NO_STD_VECTOR)
-
-#if !defined(CL_HPP_NO_STD_STRING)
-#include <string>
-namespace cl {
- using string = std::string;
-} // namespace cl
-#endif // #if !defined(CL_HPP_NO_STD_STRING)
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 200
-
-#if !defined(CL_HPP_NO_STD_UNIQUE_PTR)
-#include <memory>
-namespace cl {
- // Replace unique_ptr and allocate_pointer for internal use
- // to allow user to replace them
- template<class T, class D>
- using pointer = std::unique_ptr<T, D>;
-} // namespace cl
-#endif
-#endif // #if CL_HPP_TARGET_OPENCL_VERSION >= 200
-#if !defined(CL_HPP_NO_STD_ARRAY)
-#include <array>
-namespace cl {
- template < class T, size_type N >
- using array = std::array<T, N>;
-} // namespace cl
-#endif // #if !defined(CL_HPP_NO_STD_ARRAY)
-
-// Define size_type appropriately to allow backward-compatibility
-// use of the old size_t interface class
-#if defined(CL_HPP_ENABLE_SIZE_T_COMPATIBILITY)
-namespace cl {
- namespace compatibility {
- /*! \brief class used to interface between C++ and
- * OpenCL C calls that require arrays of size_t values, whose
- * size is known statically.
- */
- template <int N>
- class size_t
- {
- private:
- size_type data_[N];
-
- public:
- //! \brief Initialize size_t to all 0s
- size_t()
- {
- for (int i = 0; i < N; ++i) {
- data_[i] = 0;
- }
- }
-
- size_t(const array<size_type, N> &rhs)
- {
- for (int i = 0; i < N; ++i) {
- data_[i] = rhs[i];
- }
- }
-
- size_type& operator[](int index)
- {
- return data_[index];
- }
-
- const size_type& operator[](int index) const
- {
- return data_[index];
- }
-
- //! \brief Conversion operator to T*.
- operator size_type* () { return data_; }
-
- //! \brief Conversion operator to const T*.
- operator const size_type* () const { return data_; }
-
- operator array<size_type, N>() const
- {
- array<size_type, N> ret;
-
- for (int i = 0; i < N; ++i) {
- ret[i] = data_[i];
- }
- return ret;
- }
- };
- } // namespace compatibility
-
- template <int N>
- using size_t = compatibility::size_t<N>;
-} // namespace cl
-#endif // #if defined(CL_HPP_ENABLE_SIZE_T_COMPATIBILITY)
-
-// Helper alias to avoid confusing the macros
-namespace cl {
- namespace detail {
- using size_t_array = array<size_type, 3>;
- } // namespace detail
-} // namespace cl
-
-
-/*! \namespace cl
- *
- * \brief The OpenCL C++ bindings are defined within this namespace.
- *
- */
-namespace cl {
- class Memory;
-
-#define CL_HPP_INIT_CL_EXT_FCN_PTR_(name) \
- if (!pfn_##name) { \
- pfn_##name = (PFN_##name) \
- clGetExtensionFunctionAddress(#name); \
- if (!pfn_##name) { \
- } \
- }
-
-#define CL_HPP_INIT_CL_EXT_FCN_PTR_PLATFORM_(platform, name) \
- if (!pfn_##name) { \
- pfn_##name = (PFN_##name) \
- clGetExtensionFunctionAddressForPlatform(platform, #name); \
- if (!pfn_##name) { \
- } \
- }
-
- class Program;
- class Device;
- class Context;
- class CommandQueue;
- class DeviceCommandQueue;
- class Memory;
- class Buffer;
- class Pipe;
-
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- /*! \brief Exception class
- *
- * This may be thrown by API functions when CL_HPP_ENABLE_EXCEPTIONS is defined.
- */
- class Error : public std::exception
- {
- private:
- cl_int err_;
- const char * errStr_;
- public:
- /*! \brief Create a new CL error exception for a given error code
- * and corresponding message.
- *
- * \param err error code value.
- *
- * \param errStr a descriptive string that must remain in scope until
- * handling of the exception has concluded. If set, it
- * will be returned by what().
- */
- Error(cl_int err, const char * errStr = NULL) : err_(err), errStr_(errStr)
- {}
-
- ~Error() throw() {}
-
- /*! \brief Get error string associated with exception
- *
- * \return A memory pointer to the error message string.
- */
- virtual const char * what() const throw ()
- {
- if (errStr_ == NULL) {
- return "empty";
- }
- else {
- return errStr_;
- }
- }
-
- /*! \brief Get error code associated with exception
- *
- * \return The error code.
- */
- cl_int err(void) const { return err_; }
- };
-#define CL_HPP_ERR_STR_(x) #x
-#else
-#define CL_HPP_ERR_STR_(x) NULL
-#endif // CL_HPP_ENABLE_EXCEPTIONS
-
-
-namespace detail
-{
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
-static inline cl_int errHandler (
- cl_int err,
- const char * errStr = NULL)
-{
- if (err != CL_SUCCESS) {
- throw Error(err, errStr);
- }
- return err;
-}
-#else
-static inline cl_int errHandler (cl_int err, const char * errStr = NULL)
-{
- (void) errStr; // suppress unused variable warning
- return err;
-}
-#endif // CL_HPP_ENABLE_EXCEPTIONS
-}
-
-
-
-//! \cond DOXYGEN_DETAIL
-#if !defined(CL_HPP_USER_OVERRIDE_ERROR_STRINGS)
-#define __GET_DEVICE_INFO_ERR CL_HPP_ERR_STR_(clGetDeviceInfo)
-#define __GET_PLATFORM_INFO_ERR CL_HPP_ERR_STR_(clGetPlatformInfo)
-#define __GET_DEVICE_IDS_ERR CL_HPP_ERR_STR_(clGetDeviceIDs)
-#define __GET_PLATFORM_IDS_ERR CL_HPP_ERR_STR_(clGetPlatformIDs)
-#define __GET_CONTEXT_INFO_ERR CL_HPP_ERR_STR_(clGetContextInfo)
-#define __GET_EVENT_INFO_ERR CL_HPP_ERR_STR_(clGetEventInfo)
-#define __GET_EVENT_PROFILE_INFO_ERR CL_HPP_ERR_STR_(clGetEventProfileInfo)
-#define __GET_MEM_OBJECT_INFO_ERR CL_HPP_ERR_STR_(clGetMemObjectInfo)
-#define __GET_IMAGE_INFO_ERR CL_HPP_ERR_STR_(clGetImageInfo)
-#define __GET_SAMPLER_INFO_ERR CL_HPP_ERR_STR_(clGetSamplerInfo)
-#define __GET_KERNEL_INFO_ERR CL_HPP_ERR_STR_(clGetKernelInfo)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __GET_KERNEL_ARG_INFO_ERR CL_HPP_ERR_STR_(clGetKernelArgInfo)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-#if CL_HPP_TARGET_OPENCL_VERSION >= 200
-#define __GET_KERNEL_SUB_GROUP_INFO_ERR CL_HPP_ERR_STR_(clGetKernelSubGroupInfo)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 200
-#define __GET_KERNEL_WORK_GROUP_INFO_ERR CL_HPP_ERR_STR_(clGetKernelWorkGroupInfo)
-#define __GET_PROGRAM_INFO_ERR CL_HPP_ERR_STR_(clGetProgramInfo)
-#define __GET_PROGRAM_BUILD_INFO_ERR CL_HPP_ERR_STR_(clGetProgramBuildInfo)
-#define __GET_COMMAND_QUEUE_INFO_ERR CL_HPP_ERR_STR_(clGetCommandQueueInfo)
-
-#define __CREATE_CONTEXT_ERR CL_HPP_ERR_STR_(clCreateContext)
-#define __CREATE_CONTEXT_FROM_TYPE_ERR CL_HPP_ERR_STR_(clCreateContextFromType)
-#define __GET_SUPPORTED_IMAGE_FORMATS_ERR CL_HPP_ERR_STR_(clGetSupportedImageFormats)
-
-#define __CREATE_BUFFER_ERR CL_HPP_ERR_STR_(clCreateBuffer)
-#define __COPY_ERR CL_HPP_ERR_STR_(cl::copy)
-#define __CREATE_SUBBUFFER_ERR CL_HPP_ERR_STR_(clCreateSubBuffer)
-#define __CREATE_GL_BUFFER_ERR CL_HPP_ERR_STR_(clCreateFromGLBuffer)
-#define __CREATE_GL_RENDER_BUFFER_ERR CL_HPP_ERR_STR_(clCreateFromGLBuffer)
-#define __GET_GL_OBJECT_INFO_ERR CL_HPP_ERR_STR_(clGetGLObjectInfo)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __CREATE_IMAGE_ERR CL_HPP_ERR_STR_(clCreateImage)
-#define __CREATE_GL_TEXTURE_ERR CL_HPP_ERR_STR_(clCreateFromGLTexture)
-#define __IMAGE_DIMENSION_ERR CL_HPP_ERR_STR_(Incorrect image dimensions)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __SET_MEM_OBJECT_DESTRUCTOR_CALLBACK_ERR CL_HPP_ERR_STR_(clSetMemObjectDestructorCallback)
-
-#define __CREATE_USER_EVENT_ERR CL_HPP_ERR_STR_(clCreateUserEvent)
-#define __SET_USER_EVENT_STATUS_ERR CL_HPP_ERR_STR_(clSetUserEventStatus)
-#define __SET_EVENT_CALLBACK_ERR CL_HPP_ERR_STR_(clSetEventCallback)
-#define __WAIT_FOR_EVENTS_ERR CL_HPP_ERR_STR_(clWaitForEvents)
-
-#define __CREATE_KERNEL_ERR CL_HPP_ERR_STR_(clCreateKernel)
-#define __SET_KERNEL_ARGS_ERR CL_HPP_ERR_STR_(clSetKernelArg)
-#define __CREATE_PROGRAM_WITH_SOURCE_ERR CL_HPP_ERR_STR_(clCreateProgramWithSource)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 200
-#define __CREATE_PROGRAM_WITH_IL_ERR CL_HPP_ERR_STR_(clCreateProgramWithIL)
-#endif // #if CL_HPP_TARGET_OPENCL_VERSION >= 200
-#define __CREATE_PROGRAM_WITH_BINARY_ERR CL_HPP_ERR_STR_(clCreateProgramWithBinary)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 210
-#define __CREATE_PROGRAM_WITH_IL_ERR CL_HPP_ERR_STR_(clCreateProgramWithIL)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 210
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __CREATE_PROGRAM_WITH_BUILT_IN_KERNELS_ERR CL_HPP_ERR_STR_(clCreateProgramWithBuiltInKernels)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __BUILD_PROGRAM_ERR CL_HPP_ERR_STR_(clBuildProgram)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __COMPILE_PROGRAM_ERR CL_HPP_ERR_STR_(clCompileProgram)
-#define __LINK_PROGRAM_ERR CL_HPP_ERR_STR_(clLinkProgram)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __CREATE_KERNELS_IN_PROGRAM_ERR CL_HPP_ERR_STR_(clCreateKernelsInProgram)
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 200
-#define __CREATE_COMMAND_QUEUE_WITH_PROPERTIES_ERR CL_HPP_ERR_STR_(clCreateCommandQueueWithProperties)
-#define __CREATE_SAMPLER_WITH_PROPERTIES_ERR CL_HPP_ERR_STR_(clCreateSamplerWithProperties)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 200
-#define __SET_COMMAND_QUEUE_PROPERTY_ERR CL_HPP_ERR_STR_(clSetCommandQueueProperty)
-#define __ENQUEUE_READ_BUFFER_ERR CL_HPP_ERR_STR_(clEnqueueReadBuffer)
-#define __ENQUEUE_READ_BUFFER_RECT_ERR CL_HPP_ERR_STR_(clEnqueueReadBufferRect)
-#define __ENQUEUE_WRITE_BUFFER_ERR CL_HPP_ERR_STR_(clEnqueueWriteBuffer)
-#define __ENQUEUE_WRITE_BUFFER_RECT_ERR CL_HPP_ERR_STR_(clEnqueueWriteBufferRect)
-#define __ENQEUE_COPY_BUFFER_ERR CL_HPP_ERR_STR_(clEnqueueCopyBuffer)
-#define __ENQEUE_COPY_BUFFER_RECT_ERR CL_HPP_ERR_STR_(clEnqueueCopyBufferRect)
-#define __ENQUEUE_FILL_BUFFER_ERR CL_HPP_ERR_STR_(clEnqueueFillBuffer)
-#define __ENQUEUE_READ_IMAGE_ERR CL_HPP_ERR_STR_(clEnqueueReadImage)
-#define __ENQUEUE_WRITE_IMAGE_ERR CL_HPP_ERR_STR_(clEnqueueWriteImage)
-#define __ENQUEUE_COPY_IMAGE_ERR CL_HPP_ERR_STR_(clEnqueueCopyImage)
-#define __ENQUEUE_FILL_IMAGE_ERR CL_HPP_ERR_STR_(clEnqueueFillImage)
-#define __ENQUEUE_COPY_IMAGE_TO_BUFFER_ERR CL_HPP_ERR_STR_(clEnqueueCopyImageToBuffer)
-#define __ENQUEUE_COPY_BUFFER_TO_IMAGE_ERR CL_HPP_ERR_STR_(clEnqueueCopyBufferToImage)
-#define __ENQUEUE_MAP_BUFFER_ERR CL_HPP_ERR_STR_(clEnqueueMapBuffer)
-#define __ENQUEUE_MAP_IMAGE_ERR CL_HPP_ERR_STR_(clEnqueueMapImage)
-#define __ENQUEUE_UNMAP_MEM_OBJECT_ERR CL_HPP_ERR_STR_(clEnqueueUnMapMemObject)
-#define __ENQUEUE_NDRANGE_KERNEL_ERR CL_HPP_ERR_STR_(clEnqueueNDRangeKernel)
-#define __ENQUEUE_NATIVE_KERNEL CL_HPP_ERR_STR_(clEnqueueNativeKernel)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __ENQUEUE_MIGRATE_MEM_OBJECTS_ERR CL_HPP_ERR_STR_(clEnqueueMigrateMemObjects)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-#if CL_HPP_TARGET_OPENCL_VERSION >= 210
-#define __ENQUEUE_MIGRATE_SVM_ERR CL_HPP_ERR_STR_(clEnqueueSVMMigrateMem)
-#define __SET_DEFAULT_DEVICE_COMMAND_QUEUE_ERR CL_HPP_ERR_STR_(clSetDefaultDeviceCommandQueue)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 210
-
-
-#define __ENQUEUE_ACQUIRE_GL_ERR CL_HPP_ERR_STR_(clEnqueueAcquireGLObjects)
-#define __ENQUEUE_RELEASE_GL_ERR CL_HPP_ERR_STR_(clEnqueueReleaseGLObjects)
-
-#define __CREATE_PIPE_ERR CL_HPP_ERR_STR_(clCreatePipe)
-#define __GET_PIPE_INFO_ERR CL_HPP_ERR_STR_(clGetPipeInfo)
-
-
-#define __RETAIN_ERR CL_HPP_ERR_STR_(Retain Object)
-#define __RELEASE_ERR CL_HPP_ERR_STR_(Release Object)
-#define __FLUSH_ERR CL_HPP_ERR_STR_(clFlush)
-#define __FINISH_ERR CL_HPP_ERR_STR_(clFinish)
-#define __VECTOR_CAPACITY_ERR CL_HPP_ERR_STR_(Vector capacity error)
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 210
-#define __GET_HOST_TIMER_ERR CL_HPP_ERR_STR_(clGetHostTimer)
-#define __GET_DEVICE_AND_HOST_TIMER_ERR CL_HPP_ERR_STR_(clGetDeviceAndHostTimer)
-#endif
-#if CL_HPP_TARGET_OPENCL_VERSION >= 220
-#define __SET_PROGRAM_RELEASE_CALLBACK_ERR CL_HPP_ERR_STR_(clSetProgramReleaseCallback)
-#define __SET_PROGRAM_SPECIALIZATION_CONSTANT_ERR CL_HPP_ERR_STR_(clSetProgramSpecializationConstant)
-#endif
-
-
-/**
- * CL 1.2 version that uses device fission.
- */
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __CREATE_SUB_DEVICES_ERR CL_HPP_ERR_STR_(clCreateSubDevices)
-#else
-#define __CREATE_SUB_DEVICES_ERR CL_HPP_ERR_STR_(clCreateSubDevicesEXT)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-
-/**
- * Deprecated APIs for 1.2
- */
-#if defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
-#define __ENQUEUE_MARKER_ERR CL_HPP_ERR_STR_(clEnqueueMarker)
-#define __ENQUEUE_WAIT_FOR_EVENTS_ERR CL_HPP_ERR_STR_(clEnqueueWaitForEvents)
-#define __ENQUEUE_BARRIER_ERR CL_HPP_ERR_STR_(clEnqueueBarrier)
-#define __UNLOAD_COMPILER_ERR CL_HPP_ERR_STR_(clUnloadCompiler)
-#define __CREATE_GL_TEXTURE_2D_ERR CL_HPP_ERR_STR_(clCreateFromGLTexture2D)
-#define __CREATE_GL_TEXTURE_3D_ERR CL_HPP_ERR_STR_(clCreateFromGLTexture3D)
-#define __CREATE_IMAGE2D_ERR CL_HPP_ERR_STR_(clCreateImage2D)
-#define __CREATE_IMAGE3D_ERR CL_HPP_ERR_STR_(clCreateImage3D)
-#endif // #if defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
-
-/**
- * Deprecated APIs for 2.0
- */
-#if defined(CL_USE_DEPRECATED_OPENCL_1_2_APIS)
-#define __CREATE_COMMAND_QUEUE_ERR CL_HPP_ERR_STR_(clCreateCommandQueue)
-#define __ENQUEUE_TASK_ERR CL_HPP_ERR_STR_(clEnqueueTask)
-#define __CREATE_SAMPLER_ERR CL_HPP_ERR_STR_(clCreateSampler)
-#endif // #if defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
-
-/**
- * CL 1.2 marker and barrier commands
- */
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#define __ENQUEUE_MARKER_WAIT_LIST_ERR CL_HPP_ERR_STR_(clEnqueueMarkerWithWaitList)
-#define __ENQUEUE_BARRIER_WAIT_LIST_ERR CL_HPP_ERR_STR_(clEnqueueBarrierWithWaitList)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 210
-#define __CLONE_KERNEL_ERR CL_HPP_ERR_STR_(clCloneKernel)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 210
-
-#endif // CL_HPP_USER_OVERRIDE_ERROR_STRINGS
-//! \endcond
-
-
-namespace detail {
-
-// Generic getInfoHelper. The final parameter is used to guide overload
-// resolution: the actual parameter passed is an int, which makes this
-// a worse conversion sequence than a specialization that declares the
-// parameter as an int.
-template<typename Functor, typename T>
-inline cl_int getInfoHelper(Functor f, cl_uint name, T* param, long)
-{
- return f(name, sizeof(T), param, NULL);
-}
-
-// Specialized for getInfo
-// Assumes that the output vector was correctly resized on the way in
-template <typename Func>
-inline cl_int getInfoHelper(Func f, cl_uint name, vector<vector<unsigned char>>* param, int)
-{
- if (name != CL_PROGRAM_BINARIES) {
- return CL_INVALID_VALUE;
- }
- if (param) {
- // Create array of pointers, calculate total size and pass pointer array in
- size_type numBinaries = param->size();
- vector<unsigned char*> binariesPointers(numBinaries);
-
- for (size_type i = 0; i < numBinaries; ++i)
- {
- binariesPointers[i] = (*param)[i].data();
- }
-
- cl_int err = f(name, numBinaries * sizeof(unsigned char*), binariesPointers.data(), NULL);
-
- if (err != CL_SUCCESS) {
- return err;
- }
- }
-
-
- return CL_SUCCESS;
-}
-
-// Specialized getInfoHelper for vector params
-template <typename Func, typename T>
-inline cl_int getInfoHelper(Func f, cl_uint name, vector<T>* param, long)
-{
- size_type required;
- cl_int err = f(name, 0, NULL, &required);
- if (err != CL_SUCCESS) {
- return err;
- }
- const size_type elements = required / sizeof(T);
-
- // Temporary to avoid changing param on an error
- vector<T> localData(elements);
- err = f(name, required, localData.data(), NULL);
- if (err != CL_SUCCESS) {
- return err;
- }
- if (param) {
- *param = std::move(localData);
- }
-
- return CL_SUCCESS;
-}
-
-/* Specialization for reference-counted types. This depends on the
- * existence of Wrapper<T>::cl_type, and none of the other types having the
- * cl_type member. Note that simply specifying the parameter as Wrapper<T>
- * does not work, because when using a derived type (e.g. Context) the generic
- * template will provide a better match.
- */
-template <typename Func, typename T>
-inline cl_int getInfoHelper(
- Func f, cl_uint name, vector<T>* param, int, typename T::cl_type = 0)
-{
- size_type required;
- cl_int err = f(name, 0, NULL, &required);
- if (err != CL_SUCCESS) {
- return err;
- }
-
- const size_type elements = required / sizeof(typename T::cl_type);
-
- vector<typename T::cl_type> value(elements);
- err = f(name, required, value.data(), NULL);
- if (err != CL_SUCCESS) {
- return err;
- }
-
- if (param) {
- // Assign to convert CL type to T for each element
- param->resize(elements);
-
- // Assign to param, constructing with retain behaviour
- // to correctly capture each underlying CL object
- for (size_type i = 0; i < elements; i++) {
- (*param)[i] = T(value[i], true);
- }
- }
- return CL_SUCCESS;
-}
-
-// Specialized GetInfoHelper for string params
-template <typename Func>
-inline cl_int getInfoHelper(Func f, cl_uint name, string* param, long)
-{
- size_type required;
- cl_int err = f(name, 0, NULL, &required);
- if (err != CL_SUCCESS) {
- return err;
- }
-
- // std::string has a constant data member
- // a char vector does not
- if (required > 0) {
- vector<char> value(required);
- err = f(name, required, value.data(), NULL);
- if (err != CL_SUCCESS) {
- return err;
- }
- if (param) {
- param->assign(begin(value), prev(end(value)));
- }
- }
- else if (param) {
- param->assign("");
- }
- return CL_SUCCESS;
-}
-
-// Specialized GetInfoHelper for clsize_t params
-template <typename Func, size_type N>
-inline cl_int getInfoHelper(Func f, cl_uint name, array<size_type, N>* param, long)
-{
- size_type required;
- cl_int err = f(name, 0, NULL, &required);
- if (err != CL_SUCCESS) {
- return err;
- }
-
- size_type elements = required / sizeof(size_type);
- vector<size_type> value(elements, 0);
-
- err = f(name, required, value.data(), NULL);
- if (err != CL_SUCCESS) {
- return err;
- }
-
- // Bound the copy with N to prevent overruns
- // if passed N > than the amount copied
- if (elements > N) {
- elements = N;
- }
- for (size_type i = 0; i < elements; ++i) {
- (*param)[i] = value[i];
- }
-
- return CL_SUCCESS;
-}
-
-template <typename T> struct ReferenceHandler;
-
-/* Specialization for reference-counted types. This depends on the
- * existence of Wrapper<T>::cl_type, and none of the other types having the
- * cl_type member. Note that simply specifying the parameter as Wrapper<T>
- * does not work, because when using a derived type (e.g. Context) the generic
- * template will provide a better match.
- */
-template <typename Func, typename T>
-inline cl_int getInfoHelper(Func f, cl_uint name, T* param, int, typename T::cl_type = 0)
-{
- typename T::cl_type value;
- cl_int err = f(name, sizeof(value), &value, NULL);
- if (err != CL_SUCCESS) {
- return err;
- }
- *param = value;
- if (value != NULL)
- {
- err = param->retain();
- if (err != CL_SUCCESS) {
- return err;
- }
- }
- return CL_SUCCESS;
-}
-
-#define CL_HPP_PARAM_NAME_INFO_1_0_(F) \
- F(cl_platform_info, CL_PLATFORM_PROFILE, string) \
- F(cl_platform_info, CL_PLATFORM_VERSION, string) \
- F(cl_platform_info, CL_PLATFORM_NAME, string) \
- F(cl_platform_info, CL_PLATFORM_VENDOR, string) \
- F(cl_platform_info, CL_PLATFORM_EXTENSIONS, string) \
- \
- F(cl_device_info, CL_DEVICE_TYPE, cl_device_type) \
- F(cl_device_info, CL_DEVICE_VENDOR_ID, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_COMPUTE_UNITS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_WORK_GROUP_SIZE, size_type) \
- F(cl_device_info, CL_DEVICE_MAX_WORK_ITEM_SIZES, cl::vector<size_type>) \
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_CHAR, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_INT, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_LONG, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_CLOCK_FREQUENCY, cl_uint) \
- F(cl_device_info, CL_DEVICE_ADDRESS_BITS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_READ_IMAGE_ARGS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_WRITE_IMAGE_ARGS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_MEM_ALLOC_SIZE, cl_ulong) \
- F(cl_device_info, CL_DEVICE_IMAGE2D_MAX_WIDTH, size_type) \
- F(cl_device_info, CL_DEVICE_IMAGE2D_MAX_HEIGHT, size_type) \
- F(cl_device_info, CL_DEVICE_IMAGE3D_MAX_WIDTH, size_type) \
- F(cl_device_info, CL_DEVICE_IMAGE3D_MAX_HEIGHT, size_type) \
- F(cl_device_info, CL_DEVICE_IMAGE3D_MAX_DEPTH, size_type) \
- F(cl_device_info, CL_DEVICE_IMAGE_SUPPORT, cl_bool) \
- F(cl_device_info, CL_DEVICE_MAX_PARAMETER_SIZE, size_type) \
- F(cl_device_info, CL_DEVICE_MAX_SAMPLERS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MEM_BASE_ADDR_ALIGN, cl_uint) \
- F(cl_device_info, CL_DEVICE_MIN_DATA_TYPE_ALIGN_SIZE, cl_uint) \
- F(cl_device_info, CL_DEVICE_SINGLE_FP_CONFIG, cl_device_fp_config) \
- F(cl_device_info, CL_DEVICE_DOUBLE_FP_CONFIG, cl_device_fp_config) \
- F(cl_device_info, CL_DEVICE_HALF_FP_CONFIG, cl_device_fp_config) \
- F(cl_device_info, CL_DEVICE_GLOBAL_MEM_CACHE_TYPE, cl_device_mem_cache_type) \
- F(cl_device_info, CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE, cl_uint)\
- F(cl_device_info, CL_DEVICE_GLOBAL_MEM_CACHE_SIZE, cl_ulong) \
- F(cl_device_info, CL_DEVICE_GLOBAL_MEM_SIZE, cl_ulong) \
- F(cl_device_info, CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE, cl_ulong) \
- F(cl_device_info, CL_DEVICE_MAX_CONSTANT_ARGS, cl_uint) \
- F(cl_device_info, CL_DEVICE_LOCAL_MEM_TYPE, cl_device_local_mem_type) \
- F(cl_device_info, CL_DEVICE_LOCAL_MEM_SIZE, cl_ulong) \
- F(cl_device_info, CL_DEVICE_ERROR_CORRECTION_SUPPORT, cl_bool) \
- F(cl_device_info, CL_DEVICE_PROFILING_TIMER_RESOLUTION, size_type) \
- F(cl_device_info, CL_DEVICE_ENDIAN_LITTLE, cl_bool) \
- F(cl_device_info, CL_DEVICE_AVAILABLE, cl_bool) \
- F(cl_device_info, CL_DEVICE_COMPILER_AVAILABLE, cl_bool) \
- F(cl_device_info, CL_DEVICE_EXECUTION_CAPABILITIES, cl_device_exec_capabilities) \
- F(cl_device_info, CL_DEVICE_PLATFORM, cl_platform_id) \
- F(cl_device_info, CL_DEVICE_NAME, string) \
- F(cl_device_info, CL_DEVICE_VENDOR, string) \
- F(cl_device_info, CL_DRIVER_VERSION, string) \
- F(cl_device_info, CL_DEVICE_PROFILE, string) \
- F(cl_device_info, CL_DEVICE_VERSION, string) \
- F(cl_device_info, CL_DEVICE_EXTENSIONS, string) \
- \
- F(cl_context_info, CL_CONTEXT_REFERENCE_COUNT, cl_uint) \
- F(cl_context_info, CL_CONTEXT_DEVICES, cl::vector<cl::Device>) \
- F(cl_context_info, CL_CONTEXT_PROPERTIES, cl::vector<cl_context_properties>) \
- \
- F(cl_event_info, CL_EVENT_COMMAND_QUEUE, cl::CommandQueue) \
- F(cl_event_info, CL_EVENT_COMMAND_TYPE, cl_command_type) \
- F(cl_event_info, CL_EVENT_REFERENCE_COUNT, cl_uint) \
- F(cl_event_info, CL_EVENT_COMMAND_EXECUTION_STATUS, cl_int) \
- \
- F(cl_profiling_info, CL_PROFILING_COMMAND_QUEUED, cl_ulong) \
- F(cl_profiling_info, CL_PROFILING_COMMAND_SUBMIT, cl_ulong) \
- F(cl_profiling_info, CL_PROFILING_COMMAND_START, cl_ulong) \
- F(cl_profiling_info, CL_PROFILING_COMMAND_END, cl_ulong) \
- \
- F(cl_mem_info, CL_MEM_TYPE, cl_mem_object_type) \
- F(cl_mem_info, CL_MEM_FLAGS, cl_mem_flags) \
- F(cl_mem_info, CL_MEM_SIZE, size_type) \
- F(cl_mem_info, CL_MEM_HOST_PTR, void*) \
- F(cl_mem_info, CL_MEM_MAP_COUNT, cl_uint) \
- F(cl_mem_info, CL_MEM_REFERENCE_COUNT, cl_uint) \
- F(cl_mem_info, CL_MEM_CONTEXT, cl::Context) \
- \
- F(cl_image_info, CL_IMAGE_FORMAT, cl_image_format) \
- F(cl_image_info, CL_IMAGE_ELEMENT_SIZE, size_type) \
- F(cl_image_info, CL_IMAGE_ROW_PITCH, size_type) \
- F(cl_image_info, CL_IMAGE_SLICE_PITCH, size_type) \
- F(cl_image_info, CL_IMAGE_WIDTH, size_type) \
- F(cl_image_info, CL_IMAGE_HEIGHT, size_type) \
- F(cl_image_info, CL_IMAGE_DEPTH, size_type) \
- \
- F(cl_sampler_info, CL_SAMPLER_REFERENCE_COUNT, cl_uint) \
- F(cl_sampler_info, CL_SAMPLER_CONTEXT, cl::Context) \
- F(cl_sampler_info, CL_SAMPLER_NORMALIZED_COORDS, cl_bool) \
- F(cl_sampler_info, CL_SAMPLER_ADDRESSING_MODE, cl_addressing_mode) \
- F(cl_sampler_info, CL_SAMPLER_FILTER_MODE, cl_filter_mode) \
- \
- F(cl_program_info, CL_PROGRAM_REFERENCE_COUNT, cl_uint) \
- F(cl_program_info, CL_PROGRAM_CONTEXT, cl::Context) \
- F(cl_program_info, CL_PROGRAM_NUM_DEVICES, cl_uint) \
- F(cl_program_info, CL_PROGRAM_DEVICES, cl::vector<cl::Device>) \
- F(cl_program_info, CL_PROGRAM_SOURCE, string) \
- F(cl_program_info, CL_PROGRAM_BINARY_SIZES, cl::vector<size_type>) \
- F(cl_program_info, CL_PROGRAM_BINARIES, cl::vector<cl::vector<unsigned char>>) \
- \
- F(cl_program_build_info, CL_PROGRAM_BUILD_STATUS, cl_build_status) \
- F(cl_program_build_info, CL_PROGRAM_BUILD_OPTIONS, string) \
- F(cl_program_build_info, CL_PROGRAM_BUILD_LOG, string) \
- \
- F(cl_kernel_info, CL_KERNEL_FUNCTION_NAME, string) \
- F(cl_kernel_info, CL_KERNEL_NUM_ARGS, cl_uint) \
- F(cl_kernel_info, CL_KERNEL_REFERENCE_COUNT, cl_uint) \
- F(cl_kernel_info, CL_KERNEL_CONTEXT, cl::Context) \
- F(cl_kernel_info, CL_KERNEL_PROGRAM, cl::Program) \
- \
- F(cl_kernel_work_group_info, CL_KERNEL_WORK_GROUP_SIZE, size_type) \
- F(cl_kernel_work_group_info, CL_KERNEL_COMPILE_WORK_GROUP_SIZE, cl::detail::size_t_array) \
- F(cl_kernel_work_group_info, CL_KERNEL_LOCAL_MEM_SIZE, cl_ulong) \
- \
- F(cl_command_queue_info, CL_QUEUE_CONTEXT, cl::Context) \
- F(cl_command_queue_info, CL_QUEUE_DEVICE, cl::Device) \
- F(cl_command_queue_info, CL_QUEUE_REFERENCE_COUNT, cl_uint) \
- F(cl_command_queue_info, CL_QUEUE_PROPERTIES, cl_command_queue_properties)
-
-
-#define CL_HPP_PARAM_NAME_INFO_1_1_(F) \
- F(cl_context_info, CL_CONTEXT_NUM_DEVICES, cl_uint)\
- F(cl_device_info, CL_DEVICE_PREFERRED_VECTOR_WIDTH_HALF, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_INT, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_FLOAT, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_DOUBLE, cl_uint) \
- F(cl_device_info, CL_DEVICE_NATIVE_VECTOR_WIDTH_HALF, cl_uint) \
- F(cl_device_info, CL_DEVICE_OPENCL_C_VERSION, string) \
- \
- F(cl_mem_info, CL_MEM_ASSOCIATED_MEMOBJECT, cl::Memory) \
- F(cl_mem_info, CL_MEM_OFFSET, size_type) \
- \
- F(cl_kernel_work_group_info, CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, size_type) \
- F(cl_kernel_work_group_info, CL_KERNEL_PRIVATE_MEM_SIZE, cl_ulong) \
- \
- F(cl_event_info, CL_EVENT_CONTEXT, cl::Context)
-
-#define CL_HPP_PARAM_NAME_INFO_1_2_(F) \
- F(cl_program_info, CL_PROGRAM_NUM_KERNELS, size_type) \
- F(cl_program_info, CL_PROGRAM_KERNEL_NAMES, string) \
- \
- F(cl_program_build_info, CL_PROGRAM_BINARY_TYPE, cl_program_binary_type) \
- \
- F(cl_kernel_info, CL_KERNEL_ATTRIBUTES, string) \
- \
- F(cl_kernel_arg_info, CL_KERNEL_ARG_ADDRESS_QUALIFIER, cl_kernel_arg_address_qualifier) \
- F(cl_kernel_arg_info, CL_KERNEL_ARG_ACCESS_QUALIFIER, cl_kernel_arg_access_qualifier) \
- F(cl_kernel_arg_info, CL_KERNEL_ARG_TYPE_NAME, string) \
- F(cl_kernel_arg_info, CL_KERNEL_ARG_NAME, string) \
- F(cl_kernel_arg_info, CL_KERNEL_ARG_TYPE_QUALIFIER, cl_kernel_arg_type_qualifier) \
- \
- F(cl_kernel_work_group_info, CL_KERNEL_GLOBAL_WORK_SIZE, cl::detail::size_t_array) \
- \
- F(cl_device_info, CL_DEVICE_LINKER_AVAILABLE, cl_bool) \
- F(cl_device_info, CL_DEVICE_IMAGE_MAX_BUFFER_SIZE, size_type) \
- F(cl_device_info, CL_DEVICE_IMAGE_MAX_ARRAY_SIZE, size_type) \
- F(cl_device_info, CL_DEVICE_PARENT_DEVICE, cl::Device) \
- F(cl_device_info, CL_DEVICE_PARTITION_MAX_SUB_DEVICES, cl_uint) \
- F(cl_device_info, CL_DEVICE_PARTITION_PROPERTIES, cl::vector<cl_device_partition_property>) \
- F(cl_device_info, CL_DEVICE_PARTITION_TYPE, cl::vector<cl_device_partition_property>) \
- F(cl_device_info, CL_DEVICE_REFERENCE_COUNT, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_INTEROP_USER_SYNC, cl_bool) \
- F(cl_device_info, CL_DEVICE_PARTITION_AFFINITY_DOMAIN, cl_device_affinity_domain) \
- F(cl_device_info, CL_DEVICE_BUILT_IN_KERNELS, string) \
- F(cl_device_info, CL_DEVICE_PRINTF_BUFFER_SIZE, size_type) \
- \
- F(cl_image_info, CL_IMAGE_ARRAY_SIZE, size_type) \
- F(cl_image_info, CL_IMAGE_NUM_MIP_LEVELS, cl_uint) \
- F(cl_image_info, CL_IMAGE_NUM_SAMPLES, cl_uint)
-
-#define CL_HPP_PARAM_NAME_INFO_2_0_(F) \
- F(cl_device_info, CL_DEVICE_QUEUE_ON_HOST_PROPERTIES, cl_command_queue_properties) \
- F(cl_device_info, CL_DEVICE_QUEUE_ON_DEVICE_PROPERTIES, cl_command_queue_properties) \
- F(cl_device_info, CL_DEVICE_QUEUE_ON_DEVICE_PREFERRED_SIZE, cl_uint) \
- F(cl_device_info, CL_DEVICE_QUEUE_ON_DEVICE_MAX_SIZE, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_ON_DEVICE_QUEUES, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_ON_DEVICE_EVENTS, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_PIPE_ARGS, cl_uint) \
- F(cl_device_info, CL_DEVICE_PIPE_MAX_ACTIVE_RESERVATIONS, cl_uint) \
- F(cl_device_info, CL_DEVICE_PIPE_MAX_PACKET_SIZE, cl_uint) \
- F(cl_device_info, CL_DEVICE_SVM_CAPABILITIES, cl_device_svm_capabilities) \
- F(cl_device_info, CL_DEVICE_PREFERRED_PLATFORM_ATOMIC_ALIGNMENT, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_GLOBAL_ATOMIC_ALIGNMENT, cl_uint) \
- F(cl_device_info, CL_DEVICE_PREFERRED_LOCAL_ATOMIC_ALIGNMENT, cl_uint) \
- F(cl_device_info, CL_DEVICE_IMAGE_PITCH_ALIGNMENT, cl_uint) \
- F(cl_device_info, CL_DEVICE_IMAGE_BASE_ADDRESS_ALIGNMENT, cl_uint) \
- F(cl_device_info, CL_DEVICE_MAX_READ_WRITE_IMAGE_ARGS, cl_uint ) \
- F(cl_device_info, CL_DEVICE_MAX_GLOBAL_VARIABLE_SIZE, size_type ) \
- F(cl_device_info, CL_DEVICE_GLOBAL_VARIABLE_PREFERRED_TOTAL_SIZE, size_type ) \
- F(cl_profiling_info, CL_PROFILING_COMMAND_COMPLETE, cl_ulong) \
- F(cl_kernel_exec_info, CL_KERNEL_EXEC_INFO_SVM_FINE_GRAIN_SYSTEM, cl_bool) \
- F(cl_kernel_exec_info, CL_KERNEL_EXEC_INFO_SVM_PTRS, void**) \
- F(cl_command_queue_info, CL_QUEUE_SIZE, cl_uint) \
- F(cl_mem_info, CL_MEM_USES_SVM_POINTER, cl_bool) \
- F(cl_program_build_info, CL_PROGRAM_BUILD_GLOBAL_VARIABLE_TOTAL_SIZE, size_type) \
- F(cl_pipe_info, CL_PIPE_PACKET_SIZE, cl_uint) \
- F(cl_pipe_info, CL_PIPE_MAX_PACKETS, cl_uint)
-
-#define CL_HPP_PARAM_NAME_INFO_SUBGROUP_KHR_(F) \
- F(cl_kernel_sub_group_info, CL_KERNEL_MAX_SUB_GROUP_SIZE_FOR_NDRANGE_KHR, size_type) \
- F(cl_kernel_sub_group_info, CL_KERNEL_SUB_GROUP_COUNT_FOR_NDRANGE_KHR, size_type)
-
-#define CL_HPP_PARAM_NAME_INFO_IL_KHR_(F) \
- F(cl_device_info, CL_DEVICE_IL_VERSION_KHR, string) \
- F(cl_program_info, CL_PROGRAM_IL_KHR, cl::vector<unsigned char>)
-
-#define CL_HPP_PARAM_NAME_INFO_2_1_(F) \
- F(cl_platform_info, CL_PLATFORM_HOST_TIMER_RESOLUTION, cl_ulong) \
- F(cl_program_info, CL_PROGRAM_IL, cl::vector<unsigned char>) \
- F(cl_device_info, CL_DEVICE_MAX_NUM_SUB_GROUPS, cl_uint) \
- F(cl_device_info, CL_DEVICE_IL_VERSION, string) \
- F(cl_device_info, CL_DEVICE_SUB_GROUP_INDEPENDENT_FORWARD_PROGRESS, cl_bool) \
- F(cl_command_queue_info, CL_QUEUE_DEVICE_DEFAULT, cl::DeviceCommandQueue) \
- F(cl_kernel_sub_group_info, CL_KERNEL_MAX_SUB_GROUP_SIZE_FOR_NDRANGE, size_type) \
- F(cl_kernel_sub_group_info, CL_KERNEL_SUB_GROUP_COUNT_FOR_NDRANGE, size_type) \
- F(cl_kernel_sub_group_info, CL_KERNEL_LOCAL_SIZE_FOR_SUB_GROUP_COUNT, cl::detail::size_t_array) \
- F(cl_kernel_sub_group_info, CL_KERNEL_MAX_NUM_SUB_GROUPS, size_type) \
- F(cl_kernel_sub_group_info, CL_KERNEL_COMPILE_NUM_SUB_GROUPS, size_type)
-
-#define CL_HPP_PARAM_NAME_INFO_2_2_(F) \
- F(cl_program_info, CL_PROGRAM_SCOPE_GLOBAL_CTORS_PRESENT, cl_bool) \
- F(cl_program_info, CL_PROGRAM_SCOPE_GLOBAL_DTORS_PRESENT, cl_bool)
-
-#define CL_HPP_PARAM_NAME_DEVICE_FISSION_(F) \
- F(cl_device_info, CL_DEVICE_PARENT_DEVICE_EXT, cl_device_id) \
- F(cl_device_info, CL_DEVICE_PARTITION_TYPES_EXT, cl::vector<cl_device_partition_property_ext>) \
- F(cl_device_info, CL_DEVICE_AFFINITY_DOMAINS_EXT, cl::vector<cl_device_partition_property_ext>) \
- F(cl_device_info, CL_DEVICE_REFERENCE_COUNT_EXT , cl_uint) \
- F(cl_device_info, CL_DEVICE_PARTITION_STYLE_EXT, cl::vector<cl_device_partition_property_ext>)
-
-#define CL_HPP_PARAM_NAME_CL_KHR_EXTENDED_VERSIONING_CL3_SHARED_(F) \
- F(cl_platform_info, CL_PLATFORM_NUMERIC_VERSION_KHR, cl_version_khr) \
- F(cl_platform_info, CL_PLATFORM_EXTENSIONS_WITH_VERSION_KHR, cl::vector<cl_name_version_khr>) \
- \
- F(cl_device_info, CL_DEVICE_NUMERIC_VERSION_KHR, cl_version_khr) \
- F(cl_device_info, CL_DEVICE_EXTENSIONS_WITH_VERSION_KHR, cl::vector<cl_name_version_khr>) \
- F(cl_device_info, CL_DEVICE_ILS_WITH_VERSION_KHR, cl::vector<cl_name_version_khr>) \
- F(cl_device_info, CL_DEVICE_BUILT_IN_KERNELS_WITH_VERSION_KHR, cl::vector<cl_name_version_khr>)
-
-#define CL_HPP_PARAM_NAME_CL_KHR_EXTENDED_VERSIONING_KHRONLY_(F) \
- F(cl_device_info, CL_DEVICE_OPENCL_C_NUMERIC_VERSION_KHR, cl_version_khr)
-
-#define CL_HPP_PARAM_NAME_INFO_3_0_(F) \
- F(cl_platform_info, CL_PLATFORM_NUMERIC_VERSION, cl_version) \
- F(cl_platform_info, CL_PLATFORM_EXTENSIONS_WITH_VERSION, cl::vector<cl_name_version>) \
- \
- F(cl_device_info, CL_DEVICE_NUMERIC_VERSION, cl_version) \
- F(cl_device_info, CL_DEVICE_EXTENSIONS_WITH_VERSION, cl::vector<cl_name_version>) \
- F(cl_device_info, CL_DEVICE_ILS_WITH_VERSION, cl::vector<cl_name_version>) \
- F(cl_device_info, CL_DEVICE_BUILT_IN_KERNELS_WITH_VERSION, cl::vector<cl_name_version>) \
- F(cl_device_info, CL_DEVICE_ATOMIC_MEMORY_CAPABILITIES, cl_device_atomic_capabilities) \
- F(cl_device_info, CL_DEVICE_ATOMIC_FENCE_CAPABILITIES, cl_device_atomic_capabilities) \
- F(cl_device_info, CL_DEVICE_NON_UNIFORM_WORK_GROUP_SUPPORT, cl_bool) \
- F(cl_device_info, CL_DEVICE_OPENCL_C_ALL_VERSIONS, cl::vector<cl_name_version>) \
- F(cl_device_info, CL_DEVICE_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, size_type) \
- F(cl_device_info, CL_DEVICE_WORK_GROUP_COLLECTIVE_FUNCTIONS_SUPPORT, cl_bool) \
- F(cl_device_info, CL_DEVICE_GENERIC_ADDRESS_SPACE_SUPPORT, cl_bool) \
- F(cl_device_info, CL_DEVICE_OPENCL_C_FEATURES, cl::vector<cl_name_version>) \
- F(cl_device_info, CL_DEVICE_DEVICE_ENQUEUE_CAPABILITIES, cl_device_device_enqueue_capabilities) \
- F(cl_device_info, CL_DEVICE_PIPE_SUPPORT, cl_bool) \
- F(cl_device_info, CL_DEVICE_LATEST_CONFORMANCE_VERSION_PASSED, string) \
- \
- F(cl_command_queue_info, CL_QUEUE_PROPERTIES_ARRAY, cl::vector<cl_queue_properties>) \
- F(cl_mem_info, CL_MEM_PROPERTIES, cl::vector<cl_mem_properties>) \
- F(cl_pipe_info, CL_PIPE_PROPERTIES, cl::vector<cl_pipe_properties>) \
- F(cl_sampler_info, CL_SAMPLER_PROPERTIES, cl::vector<cl_sampler_properties>)
-
-template <typename enum_type, cl_int Name>
-struct param_traits {};
-
-#define CL_HPP_DECLARE_PARAM_TRAITS_(token, param_name, T) \
-struct token; \
-template<> \
-struct param_traits<detail:: token,param_name> \
-{ \
- enum { value = param_name }; \
- typedef T param_type; \
-};
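-// Editorial illustration (not part of the original header): each F-table entry
-// above expands through CL_HPP_DECLARE_PARAM_TRAITS_ into a trait that maps a
-// query token to its result type. The CL_DEVICE_NAME entry, for example,
-// becomes roughly:
-//
-//   template<>
-//   struct param_traits<detail::cl_device_info, CL_DEVICE_NAME>
-//   {
-//       enum { value = CL_DEVICE_NAME };
-//       typedef string param_type;   // so getInfo<CL_DEVICE_NAME>() returns cl::string
-//   };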
-
-CL_HPP_PARAM_NAME_INFO_1_0_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#if CL_HPP_TARGET_OPENCL_VERSION >= 110
-CL_HPP_PARAM_NAME_INFO_1_1_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 110
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-CL_HPP_PARAM_NAME_INFO_1_2_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-#if CL_HPP_TARGET_OPENCL_VERSION >= 200
-CL_HPP_PARAM_NAME_INFO_2_0_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 200
-#if CL_HPP_TARGET_OPENCL_VERSION >= 210
-CL_HPP_PARAM_NAME_INFO_2_1_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 210
-#if CL_HPP_TARGET_OPENCL_VERSION >= 220
-CL_HPP_PARAM_NAME_INFO_2_2_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 220
-#if CL_HPP_TARGET_OPENCL_VERSION >= 300
-CL_HPP_PARAM_NAME_INFO_3_0_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 300
-
-#if defined(CL_HPP_USE_CL_SUB_GROUPS_KHR) && CL_HPP_TARGET_OPENCL_VERSION < 210
-CL_HPP_PARAM_NAME_INFO_SUBGROUP_KHR_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // #if defined(CL_HPP_USE_CL_SUB_GROUPS_KHR) && CL_HPP_TARGET_OPENCL_VERSION < 210
-
-#if defined(CL_HPP_USE_IL_KHR) && CL_HPP_TARGET_OPENCL_VERSION < 210
-CL_HPP_PARAM_NAME_INFO_IL_KHR_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // #if defined(CL_HPP_USE_IL_KHR)
-
-
-// Flags deprecated in OpenCL 2.0
-#define CL_HPP_PARAM_NAME_INFO_1_0_DEPRECATED_IN_2_0_(F) \
- F(cl_device_info, CL_DEVICE_QUEUE_PROPERTIES, cl_command_queue_properties)
-
-#define CL_HPP_PARAM_NAME_INFO_1_1_DEPRECATED_IN_2_0_(F) \
- F(cl_device_info, CL_DEVICE_HOST_UNIFIED_MEMORY, cl_bool)
-
-#define CL_HPP_PARAM_NAME_INFO_1_2_DEPRECATED_IN_2_0_(F) \
- F(cl_image_info, CL_IMAGE_BUFFER, cl::Buffer)
-
-// Include deprecated query flags based on versions
-// Only include deprecated 1.0 flags if 2.0 not active as there is an enum clash
-#if CL_HPP_TARGET_OPENCL_VERSION > 100 && CL_HPP_MINIMUM_OPENCL_VERSION < 200 && CL_HPP_TARGET_OPENCL_VERSION < 200
-CL_HPP_PARAM_NAME_INFO_1_0_DEPRECATED_IN_2_0_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_MINIMUM_OPENCL_VERSION < 110
-#if CL_HPP_TARGET_OPENCL_VERSION > 110 && CL_HPP_MINIMUM_OPENCL_VERSION < 200
-CL_HPP_PARAM_NAME_INFO_1_1_DEPRECATED_IN_2_0_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_MINIMUM_OPENCL_VERSION < 120
-#if CL_HPP_TARGET_OPENCL_VERSION > 120 && CL_HPP_MINIMUM_OPENCL_VERSION < 200
-CL_HPP_PARAM_NAME_INFO_1_2_DEPRECATED_IN_2_0_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_MINIMUM_OPENCL_VERSION < 200
-
-#if defined(CL_HPP_USE_CL_DEVICE_FISSION)
-CL_HPP_PARAM_NAME_DEVICE_FISSION_(CL_HPP_DECLARE_PARAM_TRAITS_);
-#endif // CL_HPP_USE_CL_DEVICE_FISSION
-
-#if defined(cl_khr_extended_versioning)
-#if CL_HPP_TARGET_OPENCL_VERSION < 300
-CL_HPP_PARAM_NAME_CL_KHR_EXTENDED_VERSIONING_CL3_SHARED_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // CL_HPP_TARGET_OPENCL_VERSION < 300
-CL_HPP_PARAM_NAME_CL_KHR_EXTENDED_VERSIONING_KHRONLY_(CL_HPP_DECLARE_PARAM_TRAITS_)
-#endif // cl_khr_extended_versioning
-
-#if defined(cl_khr_device_uuid)
-using uuid_array = array<cl_uchar, CL_UUID_SIZE_KHR>;
-using luid_array = array<cl_uchar, CL_LUID_SIZE_KHR>;
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_UUID_KHR, uuid_array)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DRIVER_UUID_KHR, uuid_array)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_LUID_VALID_KHR, cl_bool)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_LUID_KHR, luid_array)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_NODE_MASK_KHR, cl_uint)
-#endif
-
-#if defined(cl_khr_pci_bus_info)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_PCI_BUS_INFO_KHR, cl_device_pci_bus_info_khr)
-#endif
-
-#if defined(cl_khr_integer_dot_product)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_INTEGER_DOT_PRODUCT_CAPABILITIES_KHR, cl_device_integer_dot_product_capabilities_khr)
-#if defined(CL_DEVICE_INTEGER_DOT_PRODUCT_ACCELERATION_PROPERTIES_8BIT_KHR)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_INTEGER_DOT_PRODUCT_ACCELERATION_PROPERTIES_8BIT_KHR, cl_device_integer_dot_product_acceleration_properties_khr)
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_INTEGER_DOT_PRODUCT_ACCELERATION_PROPERTIES_4x8BIT_PACKED_KHR, cl_device_integer_dot_product_acceleration_properties_khr)
-#endif // defined(CL_DEVICE_INTEGER_DOT_PRODUCT_ACCELERATION_PROPERTIES_8BIT_KHR)
-#endif // defined(cl_khr_integer_dot_product)
-
-#ifdef CL_PLATFORM_ICD_SUFFIX_KHR
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_platform_info, CL_PLATFORM_ICD_SUFFIX_KHR, string)
-#endif
-
-#ifdef CL_DEVICE_PROFILING_TIMER_OFFSET_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_PROFILING_TIMER_OFFSET_AMD, cl_ulong)
-#endif
-#ifdef CL_DEVICE_GLOBAL_FREE_MEMORY_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_GLOBAL_FREE_MEMORY_AMD, vector<size_type>)
-#endif
-#ifdef CL_DEVICE_SIMD_PER_COMPUTE_UNIT_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_SIMD_PER_COMPUTE_UNIT_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_SIMD_WIDTH_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_SIMD_WIDTH_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_SIMD_INSTRUCTION_WIDTH_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_SIMD_INSTRUCTION_WIDTH_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_WAVEFRONT_WIDTH_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_WAVEFRONT_WIDTH_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_GLOBAL_MEM_CHANNELS_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_GLOBAL_MEM_CHANNELS_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_GLOBAL_MEM_CHANNEL_BANKS_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_GLOBAL_MEM_CHANNEL_BANKS_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_GLOBAL_MEM_CHANNEL_BANK_WIDTH_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_GLOBAL_MEM_CHANNEL_BANK_WIDTH_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_LOCAL_MEM_SIZE_PER_COMPUTE_UNIT_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_LOCAL_MEM_SIZE_PER_COMPUTE_UNIT_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_LOCAL_MEM_BANKS_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_LOCAL_MEM_BANKS_AMD, cl_uint)
-#endif
-#ifdef CL_DEVICE_BOARD_NAME_AMD
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_BOARD_NAME_AMD, string)
-#endif
-
-#ifdef CL_DEVICE_COMPUTE_UNITS_BITFIELD_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_COMPUTE_UNITS_BITFIELD_ARM, cl_ulong)
-#endif
-#ifdef CL_DEVICE_JOB_SLOTS_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_JOB_SLOTS_ARM, cl_uint)
-#endif
-#ifdef CL_DEVICE_SCHEDULING_CONTROLS_CAPABILITIES_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_SCHEDULING_CONTROLS_CAPABILITIES_ARM, cl_bitfield)
-#endif
-#ifdef CL_DEVICE_SUPPORTED_REGISTER_ALLOCATIONS_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_SUPPORTED_REGISTER_ALLOCATIONS_ARM, vector<cl_uint>)
-#endif
-#ifdef CL_DEVICE_MAX_WARP_COUNT_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_MAX_WARP_COUNT_ARM, cl_uint)
-#endif
-#ifdef CL_KERNEL_MAX_WARP_COUNT_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_kernel_info, CL_KERNEL_MAX_WARP_COUNT_ARM, cl_uint)
-#endif
-#ifdef CL_KERNEL_EXEC_INFO_WORKGROUP_BATCH_SIZE_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_kernel_exec_info, CL_KERNEL_EXEC_INFO_WORKGROUP_BATCH_SIZE_ARM, cl_uint)
-#endif
-#ifdef CL_KERNEL_EXEC_INFO_WORKGROUP_BATCH_SIZE_MODIFIER_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_kernel_exec_info, CL_KERNEL_EXEC_INFO_WORKGROUP_BATCH_SIZE_MODIFIER_ARM, cl_int)
-#endif
-#ifdef CL_KERNEL_EXEC_INFO_WARP_COUNT_LIMIT_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_kernel_exec_info, CL_KERNEL_EXEC_INFO_WARP_COUNT_LIMIT_ARM, cl_uint)
-#endif
-#ifdef CL_KERNEL_EXEC_INFO_COMPUTE_UNIT_MAX_QUEUED_BATCHES_ARM
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_kernel_exec_info, CL_KERNEL_EXEC_INFO_COMPUTE_UNIT_MAX_QUEUED_BATCHES_ARM, cl_uint)
-#endif
-
-#ifdef CL_DEVICE_COMPUTE_CAPABILITY_MAJOR_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_COMPUTE_CAPABILITY_MAJOR_NV, cl_uint)
-#endif
-#ifdef CL_DEVICE_COMPUTE_CAPABILITY_MINOR_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_COMPUTE_CAPABILITY_MINOR_NV, cl_uint)
-#endif
-#ifdef CL_DEVICE_REGISTERS_PER_BLOCK_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_REGISTERS_PER_BLOCK_NV, cl_uint)
-#endif
-#ifdef CL_DEVICE_WARP_SIZE_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_WARP_SIZE_NV, cl_uint)
-#endif
-#ifdef CL_DEVICE_GPU_OVERLAP_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_GPU_OVERLAP_NV, cl_bool)
-#endif
-#ifdef CL_DEVICE_KERNEL_EXEC_TIMEOUT_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_KERNEL_EXEC_TIMEOUT_NV, cl_bool)
-#endif
-#ifdef CL_DEVICE_INTEGRATED_MEMORY_NV
-CL_HPP_DECLARE_PARAM_TRAITS_(cl_device_info, CL_DEVICE_INTEGRATED_MEMORY_NV, cl_bool)
-#endif
-
-// Convenience functions
-
-template <typename Func, typename T>
-inline cl_int
-getInfo(Func f, cl_uint name, T* param)
-{
- return getInfoHelper(f, name, param, 0);
-}
-
-template <typename Func, typename Arg0>
-struct GetInfoFunctor0
-{
- Func f_; const Arg0& arg0_;
- cl_int operator ()(
- cl_uint param, size_type size, void* value, size_type* size_ret)
- { return f_(arg0_, param, size, value, size_ret); }
-};
-
-template <typename Func, typename Arg0, typename Arg1>
-struct GetInfoFunctor1
-{
- Func f_; const Arg0& arg0_; const Arg1& arg1_;
- cl_int operator ()(
- cl_uint param, size_type size, void* value, size_type* size_ret)
- { return f_(arg0_, arg1_, param, size, value, size_ret); }
-};
-
-template <typename Func, typename Arg0, typename T>
-inline cl_int
-getInfo(Func f, const Arg0& arg0, cl_uint name, T* param)
-{
- GetInfoFunctor0<Func, Arg0> f0 = { f, arg0 };
- return getInfoHelper(f0, name, param, 0);
-}
-
-template <typename Func, typename Arg0, typename Arg1, typename T>
-inline cl_int
-getInfo(Func f, const Arg0& arg0, const Arg1& arg1, cl_uint name, T* param)
-{
- GetInfoFunctor1<Func, Arg0, Arg1> f0 = { f, arg0, arg1 };
- return getInfoHelper(f0, name, param, 0);
-}
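-// Editorial illustration (not part of the original header): the functor layers
-// bind the leading arguments so getInfoHelper() always sees the same
-// four-parameter callable. A device-info query is routed roughly as:
-//
-//   cl::string name;
-//   detail::getInfo(&::clGetDeviceInfo, device_id, CL_DEVICE_NAME, &name);
-//   // -> GetInfoFunctor0{ &::clGetDeviceInfo, device_id } -> getInfoHelper(...)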
-
-
-template <typename T>
-struct ReferenceHandler
-{ };
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-/**
- * OpenCL 1.2 devices do have retain/release.
- */
-template <>
-struct ReferenceHandler<cl_device_id>
-{
- /**
- * Retain the device.
- * \param device A valid device created using createSubDevices
- * \return
- * CL_SUCCESS if the function executed successfully.
- * CL_INVALID_DEVICE if device was not a valid subdevice
- * CL_OUT_OF_RESOURCES
- * CL_OUT_OF_HOST_MEMORY
- */
- static cl_int retain(cl_device_id device)
- { return ::clRetainDevice(device); }
- /**
- * Release the device.
- * \param device A valid device created using createSubDevices
- * \return
- * CL_SUCCESS if the function executed successfully.
- * CL_INVALID_DEVICE if device was not a valid subdevice
- * CL_OUT_OF_RESOURCES
- * CL_OUT_OF_HOST_MEMORY
- */
- static cl_int release(cl_device_id device)
- { return ::clReleaseDevice(device); }
-};
-#else // CL_HPP_TARGET_OPENCL_VERSION >= 120
-/**
- * OpenCL 1.1 devices do not have retain/release.
- */
-template <>
-struct ReferenceHandler<cl_device_id>
-{
- // cl_device_id does not have retain().
- static cl_int retain(cl_device_id)
- { return CL_SUCCESS; }
- // cl_device_id does not have release().
- static cl_int release(cl_device_id)
- { return CL_SUCCESS; }
-};
-#endif // ! (CL_HPP_TARGET_OPENCL_VERSION >= 120)
-
-template <>
-struct ReferenceHandler<cl_platform_id>
-{
- // cl_platform_id does not have retain().
- static cl_int retain(cl_platform_id)
- { return CL_SUCCESS; }
- // cl_platform_id does not have release().
- static cl_int release(cl_platform_id)
- { return CL_SUCCESS; }
-};
-
-template <>
-struct ReferenceHandler<cl_context>
-{
- static cl_int retain(cl_context context)
- { return ::clRetainContext(context); }
- static cl_int release(cl_context context)
- { return ::clReleaseContext(context); }
-};
-
-template <>
-struct ReferenceHandler<cl_command_queue>
-{
- static cl_int retain(cl_command_queue queue)
- { return ::clRetainCommandQueue(queue); }
- static cl_int release(cl_command_queue queue)
- { return ::clReleaseCommandQueue(queue); }
-};
-
-template <>
-struct ReferenceHandler<cl_mem>
-{
- static cl_int retain(cl_mem memory)
- { return ::clRetainMemObject(memory); }
- static cl_int release(cl_mem memory)
- { return ::clReleaseMemObject(memory); }
-};
-
-template <>
-struct ReferenceHandler<cl_sampler>
-{
- static cl_int retain(cl_sampler sampler)
- { return ::clRetainSampler(sampler); }
- static cl_int release(cl_sampler sampler)
- { return ::clReleaseSampler(sampler); }
-};
-
-template <>
-struct ReferenceHandler<cl_program>
-{
- static cl_int retain(cl_program program)
- { return ::clRetainProgram(program); }
- static cl_int release(cl_program program)
- { return ::clReleaseProgram(program); }
-};
-
-template <>
-struct ReferenceHandler<cl_kernel>
-{
- static cl_int retain(cl_kernel kernel)
- { return ::clRetainKernel(kernel); }
- static cl_int release(cl_kernel kernel)
- { return ::clReleaseKernel(kernel); }
-};
-
-template <>
-struct ReferenceHandler<cl_event>
-{
- static cl_int retain(cl_event event)
- { return ::clRetainEvent(event); }
- static cl_int release(cl_event event)
- { return ::clReleaseEvent(event); }
-};
-
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120 && CL_HPP_MINIMUM_OPENCL_VERSION < 120
-// Extracts version number with major in the upper 16 bits, minor in the lower 16
-static cl_uint getVersion(const vector<char> &versionInfo)
-{
- int highVersion = 0;
- int lowVersion = 0;
- int index = 7;
- while(versionInfo[index] != '.' ) {
- highVersion *= 10;
- highVersion += versionInfo[index]-'0';
- ++index;
- }
- ++index;
- while(versionInfo[index] != ' ' && versionInfo[index] != '\0') {
- lowVersion *= 10;
- lowVersion += versionInfo[index]-'0';
- ++index;
- }
- return (highVersion << 16) | lowVersion;
-}
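-// Editorial worked example (not part of the original header): for the version
-// string "OpenCL 1.2 CUDA", parsing starts at index 7 (the character after
-// "OpenCL "), reads major = 1 and minor = 2, and returns
-// (1 << 16) | 2 == 0x00010002.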
-
-static cl_uint getPlatformVersion(cl_platform_id platform)
-{
- size_type size = 0;
- clGetPlatformInfo(platform, CL_PLATFORM_VERSION, 0, NULL, &size);
-
- vector<char> versionInfo(size);
- clGetPlatformInfo(platform, CL_PLATFORM_VERSION, size, versionInfo.data(), &size);
- return getVersion(versionInfo);
-}
-
-static cl_uint getDevicePlatformVersion(cl_device_id device)
-{
- cl_platform_id platform;
- clGetDeviceInfo(device, CL_DEVICE_PLATFORM, sizeof(platform), &platform, NULL);
- return getPlatformVersion(platform);
-}
-
-static cl_uint getContextPlatformVersion(cl_context context)
-{
- // The platform cannot be queried directly, so we first have to grab a
- // device and obtain its context
- size_type size = 0;
- clGetContextInfo(context, CL_CONTEXT_DEVICES, 0, NULL, &size);
- if (size == 0)
- return 0;
- vector<cl_device_id> devices(size/sizeof(cl_device_id));
- clGetContextInfo(context, CL_CONTEXT_DEVICES, size, devices.data(), NULL);
- return getDevicePlatformVersion(devices[0]);
-}
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120 && CL_HPP_MINIMUM_OPENCL_VERSION < 120
-
-template <typename T>
-class Wrapper
-{
-public:
- typedef T cl_type;
-
-protected:
- cl_type object_;
-
-public:
- Wrapper() : object_(NULL) { }
-
- Wrapper(const cl_type &obj, bool retainObject) : object_(obj)
- {
- if (retainObject) {
- detail::errHandler(retain(), __RETAIN_ERR);
- }
- }
-
- ~Wrapper()
- {
- if (object_ != NULL) { release(); }
- }
-
- Wrapper(const Wrapper& rhs)
- {
- object_ = rhs.object_;
- detail::errHandler(retain(), __RETAIN_ERR);
- }
-
- Wrapper(Wrapper&& rhs) CL_HPP_NOEXCEPT_
- {
- object_ = rhs.object_;
- rhs.object_ = NULL;
- }
-
- Wrapper& operator = (const Wrapper& rhs)
- {
- if (this != &rhs) {
- detail::errHandler(release(), __RELEASE_ERR);
- object_ = rhs.object_;
- detail::errHandler(retain(), __RETAIN_ERR);
- }
- return *this;
- }
-
- Wrapper& operator = (Wrapper&& rhs)
- {
- if (this != &rhs) {
- detail::errHandler(release(), __RELEASE_ERR);
- object_ = rhs.object_;
- rhs.object_ = NULL;
- }
- return *this;
- }
-
- Wrapper& operator = (const cl_type &rhs)
- {
- detail::errHandler(release(), __RELEASE_ERR);
- object_ = rhs;
- return *this;
- }
-
- const cl_type& operator ()() const { return object_; }
-
- cl_type& operator ()() { return object_; }
-
- cl_type get() const { return object_; }
-
-protected:
- template<typename Func, typename U>
- friend inline cl_int getInfoHelper(Func, cl_uint, U*, int, typename U::cl_type);
-
- cl_int retain() const
- {
- if (object_ != nullptr) {
- return ReferenceHandler<cl_type>::retain(object_);
- }
- else {
- return CL_SUCCESS;
- }
- }
-
- cl_int release() const
- {
- if (object_ != nullptr) {
- return ReferenceHandler<cl_type>::release(object_);
- }
- else {
- return CL_SUCCESS;
- }
- }
-};
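-// Editorial note (not part of the original header): Wrapper gives every cl::
-// object RAII reference-count semantics on top of ReferenceHandler<T>, e.g.:
-//
-//   {
-//       cl::Context b = a;   // copy constructor -> clRetainContext
-//   }                        // destructor of b  -> clReleaseContext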
-
-template <>
-class Wrapper<cl_device_id>
-{
-public:
- typedef cl_device_id cl_type;
-
-protected:
- cl_type object_;
- bool referenceCountable_;
-
- static bool isReferenceCountable(cl_device_id device)
- {
- bool retVal = false;
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
-#if CL_HPP_MINIMUM_OPENCL_VERSION < 120
- if (device != NULL) {
- int version = getDevicePlatformVersion(device);
- if(version > ((1 << 16) + 1)) {
- retVal = true;
- }
- }
-#else // CL_HPP_MINIMUM_OPENCL_VERSION < 120
- retVal = true;
-#endif // CL_HPP_MINIMUM_OPENCL_VERSION < 120
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
- (void)device;
- return retVal;
- }
-
-public:
- Wrapper() : object_(NULL), referenceCountable_(false)
- {
- }
-
- Wrapper(const cl_type &obj, bool retainObject) :
- object_(obj),
- referenceCountable_(false)
- {
- referenceCountable_ = isReferenceCountable(obj);
-
- if (retainObject) {
- detail::errHandler(retain(), __RETAIN_ERR);
- }
- }
-
- ~Wrapper()
- {
- release();
- }
-
- Wrapper(const Wrapper& rhs)
- {
- object_ = rhs.object_;
- referenceCountable_ = isReferenceCountable(object_);
- detail::errHandler(retain(), __RETAIN_ERR);
- }
-
- Wrapper(Wrapper&& rhs) CL_HPP_NOEXCEPT_
- {
- object_ = rhs.object_;
- referenceCountable_ = rhs.referenceCountable_;
- rhs.object_ = NULL;
- rhs.referenceCountable_ = false;
- }
-
- Wrapper& operator = (const Wrapper& rhs)
- {
- if (this != &rhs) {
- detail::errHandler(release(), __RELEASE_ERR);
- object_ = rhs.object_;
- referenceCountable_ = rhs.referenceCountable_;
- detail::errHandler(retain(), __RETAIN_ERR);
- }
- return *this;
- }
-
- Wrapper& operator = (Wrapper&& rhs)
- {
- if (this != &rhs) {
- detail::errHandler(release(), __RELEASE_ERR);
- object_ = rhs.object_;
- referenceCountable_ = rhs.referenceCountable_;
- rhs.object_ = NULL;
- rhs.referenceCountable_ = false;
- }
- return *this;
- }
-
- Wrapper& operator = (const cl_type &rhs)
- {
- detail::errHandler(release(), __RELEASE_ERR);
- object_ = rhs;
- referenceCountable_ = isReferenceCountable(object_);
- return *this;
- }
-
- const cl_type& operator ()() const { return object_; }
-
- cl_type& operator ()() { return object_; }
-
- cl_type get() const { return object_; }
-
-protected:
- template<typename Func, typename U>
- friend inline cl_int getInfoHelper(Func, cl_uint, U*, int, typename U::cl_type);
-
- template<typename Func, typename U>
- friend inline cl_int getInfoHelper(Func, cl_uint, vector<U>*, int, typename U::cl_type);
-
- cl_int retain() const
- {
- if( object_ != nullptr && referenceCountable_ ) {
- return ReferenceHandler<cl_type>::retain(object_);
- }
- else {
- return CL_SUCCESS;
- }
- }
-
- cl_int release() const
- {
- if (object_ != nullptr && referenceCountable_) {
- return ReferenceHandler<cl_type>::release(object_);
- }
- else {
- return CL_SUCCESS;
- }
- }
-};
-
-template <typename T>
-inline bool operator==(const Wrapper<T> &lhs, const Wrapper<T> &rhs)
-{
- return lhs() == rhs();
-}
-
-template <typename T>
-inline bool operator!=(const Wrapper<T> &lhs, const Wrapper<T> &rhs)
-{
- return !operator==(lhs, rhs);
-}
-
-} // namespace detail
-//! \endcond
-
-
-
-
-
-/*! \struct ImageFormat
- * \brief Adds constructors and member functions for cl_image_format.
- *
- * \see cl_image_format
- */
-struct ImageFormat : public cl_image_format
-{
- //! \brief Default constructor - performs no initialization.
- ImageFormat(){}
-
- //! \brief Initializing constructor.
- ImageFormat(cl_channel_order order, cl_channel_type type)
- {
- image_channel_order = order;
- image_channel_data_type = type;
- }
-
- //! \brief Copy constructor.
- ImageFormat(const ImageFormat &other) { *this = other; }
-
- //! \brief Assignment operator.
- ImageFormat& operator = (const ImageFormat& rhs)
- {
- if (this != &rhs) {
- this->image_channel_data_type = rhs.image_channel_data_type;
- this->image_channel_order = rhs.image_channel_order;
- }
- return *this;
- }
-};
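-// Editorial usage sketch (not part of the original header):
-//
-//   cl::ImageFormat fmt(CL_RGBA, CL_UNORM_INT8);   // 4 channels, 8-bit normalized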
-
-/*! \brief Class interface for cl_device_id.
- *
- * \note Copies of these objects are inexpensive, since they don't 'own'
- * any underlying resources or data structures.
- *
- * \see cl_device_id
- */
-class Device : public detail::Wrapper<cl_device_id>
-{
-private:
- static std::once_flag default_initialized_;
- static Device default_;
- static cl_int default_error_;
-
- /*! \brief Create the default device.
- *
- * This sets @c default_ and @c default_error_. It does not throw
- * @c cl::Error.
- */
- static void makeDefault();
-
- /*! \brief Create the default device from a provided device.
- *
- * This sets @c default_. It does not throw
- * @c cl::Error.
- */
- static void makeDefaultProvided(const Device &p) {
- default_ = p;
- }
-
-public:
-#ifdef CL_HPP_UNIT_TEST_ENABLE
- /*! \brief Reset the default.
- *
- * This sets @c default_ to an empty value to support cleanup in
- * the unit test framework.
- * This function is not thread safe.
- */
- static void unitTestClearDefault() {
- default_ = Device();
- }
-#endif // #ifdef CL_HPP_UNIT_TEST_ENABLE
-
- //! \brief Default constructor - initializes to NULL.
- Device() : detail::Wrapper<cl_type>() { }
-
- /*! \brief Constructor from cl_device_id.
- *
- * This simply copies the device ID value, which is an inexpensive operation.
- */
- explicit Device(const cl_device_id &device, bool retainObject = false) :
- detail::Wrapper<cl_type>(device, retainObject) { }
-
- /*! \brief Returns the first device on the default context.
- *
- * \see Context::getDefault()
- */
- static Device getDefault(
- cl_int *errResult = NULL)
- {
- std::call_once(default_initialized_, makeDefault);
- detail::errHandler(default_error_);
- if (errResult != NULL) {
- *errResult = default_error_;
- }
- return default_;
- }
-
- /**
- * Modify the default device to be used by
- * subsequent operations.
- * Will only set the default if no default was previously created.
- * @return updated default device.
- * Should be compared to the passed value to ensure that it was updated.
- */
- static Device setDefault(const Device &default_device)
- {
- std::call_once(default_initialized_, makeDefaultProvided, std::cref(default_device));
- detail::errHandler(default_error_);
- return default_;
- }
-
- /*! \brief Assignment operator from cl_device_id.
- *
- * This simply copies the device ID value, which is an inexpensive operation.
- */
- Device& operator = (const cl_device_id& rhs)
- {
- detail::Wrapper::operator=(rhs);
- return *this;
- }
-
- /*! \brief Copy constructor to forward copy to the superclass correctly.
- * Required for MSVC.
- */
- Device(const Device& dev) : detail::Wrapper<cl_type>(dev) {}
-
- /*! \brief Copy assignment to forward copy to the superclass correctly.
- * Required for MSVC.
- */
- Device& operator = (const Device &dev)
- {
- detail::Wrapper<cl_type>::operator=(dev);
- return *this;
- }
-
- /*! \brief Move constructor to forward move to the superclass correctly.
- * Required for MSVC.
- */
- Device(Device&& dev) CL_HPP_NOEXCEPT_ : detail::Wrapper<cl_type>(std::move(dev)) {}
-
- /*! \brief Move assignment to forward move to the superclass correctly.
- * Required for MSVC.
- */
- Device& operator = (Device &&dev)
- {
- detail::Wrapper<cl_type>::operator=(std::move(dev));
- return *this;
- }
-
- //! \brief Wrapper for clGetDeviceInfo().
- template <typename T>
- cl_int getInfo(cl_device_info name, T* param) const
- {
- return detail::errHandler(
- detail::getInfo(&::clGetDeviceInfo, object_, name, param),
- __GET_DEVICE_INFO_ERR);
- }
-
- //! \brief Wrapper for clGetDeviceInfo() that returns by value.
- template <cl_device_info name> typename
- detail::param_traits<detail::cl_device_info, name>::param_type
- getInfo(cl_int* err = NULL) const
- {
- typename detail::param_traits<
- detail::cl_device_info, name>::param_type param;
- cl_int result = getInfo(name, &param);
- if (err != NULL) {
- *err = result;
- }
- return param;
- }
-
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 210
- /**
- * Return the current value of the host clock as seen by the device.
- * The resolution of the device timer may be queried with the
- * CL_DEVICE_PROFILING_TIMER_RESOLUTION query.
- * @return The host timer value.
- */
- cl_ulong getHostTimer(cl_int *error = nullptr)
- {
- cl_ulong retVal = 0;
- cl_int err =
- clGetHostTimer(this->get(), &retVal);
- detail::errHandler(
- err,
- __GET_HOST_TIMER_ERR);
- if (error) {
- *error = err;
- }
- return retVal;
- }
-
- /**
- * Return a synchronized pair of host and device timestamps as seen by device.
- * Use to correlate the clocks and get the host timer only using getHostTimer
- * as a lower cost mechanism in between calls.
- * The resolution of the host timer may be queried with the
- * CL_PLATFORM_HOST_TIMER_RESOLUTION query.
- * The resolution of the device timer may be queried with the
- * CL_DEVICE_PROFILING_TIMER_RESOLUTION query.
- * @return A pair of (device timer, host timer) timer values.
- */
- std::pair<cl_ulong, cl_ulong> getDeviceAndHostTimer(cl_int *error = nullptr)
- {
- std::pair<cl_ulong, cl_ulong> retVal;
- cl_int err =
- clGetDeviceAndHostTimer(this->get(), &(retVal.first), &(retVal.second));
- detail::errHandler(
- err,
- __GET_DEVICE_AND_HOST_TIMER_ERR);
- if (error) {
- *error = err;
- }
- return retVal;
- }
-#endif // #if CL_HPP_TARGET_OPENCL_VERSION >= 210
-
- /**
- * CL 1.2 version
- */
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
- //! \brief Wrapper for clCreateSubDevices().
- cl_int createSubDevices(
- const cl_device_partition_property * properties,
- vector<Device>* devices)
- {
- cl_uint n = 0;
- cl_int err = clCreateSubDevices(object_, properties, 0, NULL, &n);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __CREATE_SUB_DEVICES_ERR);
- }
-
- vector<cl_device_id> ids(n);
- err = clCreateSubDevices(object_, properties, n, ids.data(), NULL);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __CREATE_SUB_DEVICES_ERR);
- }
-
- // Cannot trivially assign because we need to capture intermediates
- // with safe construction
- if (devices) {
- devices->resize(ids.size());
-
- // Assign to param, constructing with retain behaviour
- // to correctly capture each underlying CL object
- for (size_type i = 0; i < ids.size(); i++) {
- // We do not need to retain because this device is being created
- // by the runtime
- (*devices)[i] = Device(ids[i], false);
- }
- }
-
- return CL_SUCCESS;
- }
-#elif defined(CL_HPP_USE_CL_DEVICE_FISSION)
-
-/**
- * CL 1.1 version that uses device fission extension.
- */
- cl_int createSubDevices(
- const cl_device_partition_property_ext * properties,
- vector<Device>* devices)
- {
- typedef CL_API_ENTRY cl_int
- ( CL_API_CALL * PFN_clCreateSubDevicesEXT)(
- cl_device_id /*in_device*/,
- const cl_device_partition_property_ext * /* properties */,
- cl_uint /*num_entries*/,
- cl_device_id * /*out_devices*/,
- cl_uint * /*num_devices*/ ) CL_API_SUFFIX__VERSION_1_1;
-
- static PFN_clCreateSubDevicesEXT pfn_clCreateSubDevicesEXT = NULL;
- CL_HPP_INIT_CL_EXT_FCN_PTR_(clCreateSubDevicesEXT);
-
- cl_uint n = 0;
- cl_int err = pfn_clCreateSubDevicesEXT(object_, properties, 0, NULL, &n);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __CREATE_SUB_DEVICES_ERR);
- }
-
- vector<cl_device_id> ids(n);
- err = pfn_clCreateSubDevicesEXT(object_, properties, n, ids.data(), NULL);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __CREATE_SUB_DEVICES_ERR);
- }
- // Cannot trivially assign because we need to capture intermediates
- // with safe construction
- if (devices) {
- devices->resize(ids.size());
-
- // Assign to param, constructing with retain behaviour
- // to correctly capture each underlying CL object
- for (size_type i = 0; i < ids.size(); i++) {
- // We do not need to retain because this device is being created
- // by the runtime
- (*devices)[i] = Device(ids[i], false);
- }
- }
- return CL_SUCCESS;
- }
-#endif // defined(CL_HPP_USE_CL_DEVICE_FISSION)
-};
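-// Editorial usage sketch (not part of the original header), assuming a default
-// platform and device can be created on this system:
-//
-//   cl::Device dev = cl::Device::getDefault();
-//   cl::string name = dev.getInfo<CL_DEVICE_NAME>();
-//   cl_ulong localMem = dev.getInfo<CL_DEVICE_LOCAL_MEM_SIZE>();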
-
-using BuildLogType = vector<std::pair<cl::Device, typename detail::param_traits<detail::cl_program_build_info, CL_PROGRAM_BUILD_LOG>::param_type>>;
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
-/**
-* Exception class for build errors to carry build info
-*/
-class BuildError : public Error
-{
-private:
- BuildLogType buildLogs;
-public:
- BuildError(cl_int err, const char * errStr, const BuildLogType &vec) : Error(err, errStr), buildLogs(vec)
- {
- }
-
- BuildLogType getBuildLog() const
- {
- return buildLogs;
- }
-};
-namespace detail {
- static inline cl_int buildErrHandler(
- cl_int err,
- const char * errStr,
- const BuildLogType &buildLogs)
- {
- if (err != CL_SUCCESS) {
- throw BuildError(err, errStr, buildLogs);
- }
- return err;
- }
-} // namespace detail
-
-#else
-namespace detail {
- static inline cl_int buildErrHandler(
- cl_int err,
- const char * errStr,
- const BuildLogType &buildLogs)
- {
- (void)buildLogs; // suppress unused variable warning
- (void)errStr;
- return err;
- }
-} // namespace detail
-#endif // #if defined(CL_HPP_ENABLE_EXCEPTIONS)
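-// Editorial usage sketch (not part of the original header): with exceptions
-// enabled, a failed cl::Program::build() surfaces as BuildError, and the
-// per-device logs can be inspected:
-//
-//   try {
-//       program.build();
-//   } catch (const cl::BuildError &e) {
-//       for (auto &log : e.getBuildLog())
-//           std::cerr << log.second << "\n";   // log.first is the cl::Device
-//   }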
-
-CL_HPP_DEFINE_STATIC_MEMBER_ std::once_flag Device::default_initialized_;
-CL_HPP_DEFINE_STATIC_MEMBER_ Device Device::default_;
-CL_HPP_DEFINE_STATIC_MEMBER_ cl_int Device::default_error_ = CL_SUCCESS;
-
-/*! \brief Class interface for cl_platform_id.
- *
- * \note Copies of these objects are inexpensive, since they don't 'own'
- * any underlying resources or data structures.
- *
- * \see cl_platform_id
- */
-class Platform : public detail::Wrapper<cl_platform_id>
-{
-private:
- static std::once_flag default_initialized_;
- static Platform default_;
- static cl_int default_error_;
-
- /*! \brief Create the default platform.
- *
- * This sets @c default_ and @c default_error_. It does not throw
- * @c cl::Error.
- */
- static void makeDefault() {
- /* Throwing an exception from a call_once invocation does not do
- * what we wish, so we catch it and save the error.
- */
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- try
-#endif
- {
- // If a default wasn't passed, generate one
- // Otherwise set it
- cl_uint n = 0;
-
- cl_int err = ::clGetPlatformIDs(0, NULL, &n);
- if (err != CL_SUCCESS) {
- default_error_ = err;
- return;
- }
- if (n == 0) {
- default_error_ = CL_INVALID_PLATFORM;
- return;
- }
-
- vector<cl_platform_id> ids(n);
- err = ::clGetPlatformIDs(n, ids.data(), NULL);
- if (err != CL_SUCCESS) {
- default_error_ = err;
- return;
- }
-
- default_ = Platform(ids[0]);
- }
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- catch (cl::Error &e) {
- default_error_ = e.err();
- }
-#endif
- }
-
- /*! \brief Create the default platform from a provided platform.
- *
- * This sets @c default_. It does not throw
- * @c cl::Error.
- */
- static void makeDefaultProvided(const Platform &p) {
- default_ = p;
- }
-
-public:
-#ifdef CL_HPP_UNIT_TEST_ENABLE
- /*! \brief Reset the default.
- *
- * This sets @c default_ to an empty value to support cleanup in
- * the unit test framework.
- * This function is not thread safe.
- */
- static void unitTestClearDefault() {
- default_ = Platform();
- }
-#endif // #ifdef CL_HPP_UNIT_TEST_ENABLE
-
- //! \brief Default constructor - initializes to NULL.
- Platform() : detail::Wrapper<cl_type>() { }
-
- /*! \brief Constructor from cl_platform_id.
- *
- * \param retainObject will cause the constructor to retain its cl object.
- * Defaults to false to maintain compatibility with
- * earlier versions.
- * This simply copies the platform ID value, which is an inexpensive operation.
- */
- explicit Platform(const cl_platform_id &platform, bool retainObject = false) :
- detail::Wrapper<cl_type>(platform, retainObject) { }
-
- /*! \brief Assignment operator from cl_platform_id.
- *
- * This simply copies the platform ID value, which is an inexpensive operation.
- */
- Platform& operator = (const cl_platform_id& rhs)
- {
- detail::Wrapper::operator=(rhs);
- return *this;
- }
-
- static Platform getDefault(
- cl_int *errResult = NULL)
- {
- std::call_once(default_initialized_, makeDefault);
- detail::errHandler(default_error_);
- if (errResult != NULL) {
- *errResult = default_error_;
- }
- return default_;
- }
-
- /**
- * Modify the default platform to be used by
- * subsequent operations.
- * Will only set the default if no default was previously created.
- * @return updated default platform.
- * Should be compared to the passed value to ensure that it was updated.
- */
- static Platform setDefault(const Platform &default_platform)
- {
- std::call_once(default_initialized_, makeDefaultProvided, std::cref(default_platform));
- detail::errHandler(default_error_);
- return default_;
- }
-
- //! \brief Wrapper for clGetPlatformInfo().
- template <typename T>
- cl_int getInfo(cl_platform_info name, T* param) const
- {
- return detail::errHandler(
- detail::getInfo(&::clGetPlatformInfo, object_, name, param),
- __GET_PLATFORM_INFO_ERR);
- }
-
- //! \brief Wrapper for clGetPlatformInfo() that returns by value.
- template <cl_platform_info name> typename
- detail::param_traits<detail::cl_platform_info, name>::param_type
- getInfo(cl_int* err = NULL) const
- {
- typename detail::param_traits<
- detail::cl_platform_info, name>::param_type param;
- cl_int result = getInfo(name, &param);
- if (err != NULL) {
- *err = result;
- }
- return param;
- }
-
- /*! \brief Gets a list of devices for this platform.
- *
- * Wraps clGetDeviceIDs().
- */
- cl_int getDevices(
- cl_device_type type,
- vector<Device>* devices) const
- {
- cl_uint n = 0;
- if( devices == NULL ) {
- return detail::errHandler(CL_INVALID_ARG_VALUE, __GET_DEVICE_IDS_ERR);
- }
- cl_int err = ::clGetDeviceIDs(object_, type, 0, NULL, &n);
- if (err != CL_SUCCESS && err != CL_DEVICE_NOT_FOUND) {
- return detail::errHandler(err, __GET_DEVICE_IDS_ERR);
- }
-
- vector<cl_device_id> ids(n);
- if (n>0) {
- err = ::clGetDeviceIDs(object_, type, n, ids.data(), NULL);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __GET_DEVICE_IDS_ERR);
- }
- }
-
- // Cannot trivially assign because we need to capture intermediates
- // with safe construction
- // We must retain things we obtain from the API to avoid releasing
- // API-owned objects.
- if (devices) {
- devices->resize(ids.size());
-
- // Assign to param, constructing with retain behaviour
- // to correctly capture each underlying CL object
- for (size_type i = 0; i < ids.size(); i++) {
- (*devices)[i] = Device(ids[i], true);
- }
- }
- return CL_SUCCESS;
- }
-
-#if defined(CL_HPP_USE_DX_INTEROP)
- /*! \brief Get the list of available D3D10 devices.
- *
- * \param d3d_device_source.
- *
- * \param d3d_object.
- *
- * \param d3d_device_set.
- *
- * \param devices returns a vector of OpenCL D3D10 devices found. The cl::Device
- * values returned in devices can be used to identify a specific OpenCL
- * device. If \a devices argument is NULL, this argument is ignored.
- *
- * \return One of the following values:
- * - CL_SUCCESS if the function is executed successfully.
- *
- * The application can query specific capabilities of the OpenCL device(s)
- * returned by cl::getDevices. This can be used by the application to
- * determine which device(s) to use.
- *
- * \note In the case that exceptions are enabled and a return value
- * other than CL_SUCCESS is generated, then cl::Error exception is
- * generated.
- */
- cl_int getDevices(
- cl_d3d10_device_source_khr d3d_device_source,
- void * d3d_object,
- cl_d3d10_device_set_khr d3d_device_set,
- vector<Device>* devices) const
- {
- typedef CL_API_ENTRY cl_int (CL_API_CALL *PFN_clGetDeviceIDsFromD3D10KHR)(
- cl_platform_id platform,
- cl_d3d10_device_source_khr d3d_device_source,
- void * d3d_object,
- cl_d3d10_device_set_khr d3d_device_set,
- cl_uint num_entries,
- cl_device_id * devices,
- cl_uint* num_devices);
-
- if( devices == NULL ) {
- return detail::errHandler(CL_INVALID_ARG_VALUE, __GET_DEVICE_IDS_ERR);
- }
-
- static PFN_clGetDeviceIDsFromD3D10KHR pfn_clGetDeviceIDsFromD3D10KHR = NULL;
- CL_HPP_INIT_CL_EXT_FCN_PTR_PLATFORM_(object_, clGetDeviceIDsFromD3D10KHR);
-
- cl_uint n = 0;
- cl_int err = pfn_clGetDeviceIDsFromD3D10KHR(
- object_,
- d3d_device_source,
- d3d_object,
- d3d_device_set,
- 0,
- NULL,
- &n);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __GET_DEVICE_IDS_ERR);
- }
-
- vector<cl_device_id> ids(n);
- err = pfn_clGetDeviceIDsFromD3D10KHR(
- object_,
- d3d_device_source,
- d3d_object,
- d3d_device_set,
- n,
- ids.data(),
- NULL);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __GET_DEVICE_IDS_ERR);
- }
-
- // Cannot trivially assign because we need to capture intermediates
- // with safe construction
- // We must retain things we obtain from the API to avoid releasing
- // API-owned objects.
- if (devices) {
- devices->resize(ids.size());
-
- // Assign to param, constructing with retain behaviour
- // to correctly capture each underlying CL object
- for (size_type i = 0; i < ids.size(); i++) {
- (*devices)[i] = Device(ids[i], true);
- }
- }
- return CL_SUCCESS;
- }
-#endif
-
- /*! \brief Gets a list of available platforms.
- *
- * Wraps clGetPlatformIDs().
- */
- static cl_int get(
- vector<Platform>* platforms)
- {
- cl_uint n = 0;
-
- if( platforms == NULL ) {
- return detail::errHandler(CL_INVALID_ARG_VALUE, __GET_PLATFORM_IDS_ERR);
- }
-
- cl_int err = ::clGetPlatformIDs(0, NULL, &n);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __GET_PLATFORM_IDS_ERR);
- }
-
- vector<cl_platform_id> ids(n);
- err = ::clGetPlatformIDs(n, ids.data(), NULL);
- if (err != CL_SUCCESS) {
- return detail::errHandler(err, __GET_PLATFORM_IDS_ERR);
- }
-
- if (platforms) {
- platforms->resize(ids.size());
-
- // Platforms don't reference count
- for (size_type i = 0; i < ids.size(); i++) {
- (*platforms)[i] = Platform(ids[i]);
- }
- }
- return CL_SUCCESS;
- }
-
- /*! \brief Gets the first available platform.
- *
- * Wraps clGetPlatformIDs(), returning the first result.
- */
- static cl_int get(
- Platform * platform)
- {
- cl_int err;
- Platform default_platform = Platform::getDefault(&err);
- if (platform) {
- *platform = default_platform;
- }
- return err;
- }
-
- /*! \brief Gets the first available platform, returning it by value.
- *
- * \return Returns a valid platform if one is available.
- * If no platform is available will return a null platform.
- * Throws an exception if no platforms are available
- * or an error condition occurs.
- * Wraps clGetPlatformIDs(), returning the first result.
- */
- static Platform get(
- cl_int * errResult = NULL)
- {
- cl_int err;
- Platform default_platform = Platform::getDefault(&err);
- if (errResult) {
- *errResult = err;
- }
- return default_platform;
- }
-
-#if CL_HPP_TARGET_OPENCL_VERSION >= 120
- //! \brief Wrapper for clUnloadCompiler().
- cl_int
- unloadCompiler()
- {
- return ::clUnloadPlatformCompiler(object_);
- }
-#endif // CL_HPP_TARGET_OPENCL_VERSION >= 120
-}; // class Platform
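-// Editorial usage sketch (not part of the original header):
-//
-//   cl::vector<cl::Platform> platforms;
-//   cl::Platform::get(&platforms);
-//   for (auto &p : platforms) {
-//       cl::vector<cl::Device> gpus;
-//       if (p.getDevices(CL_DEVICE_TYPE_GPU, &gpus) == CL_SUCCESS)
-//           std::cout << p.getInfo<CL_PLATFORM_NAME>() << ": "
-//                     << gpus.size() << " GPU(s)\n";
-//   }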
-
-CL_HPP_DEFINE_STATIC_MEMBER_ std::once_flag Platform::default_initialized_;
-CL_HPP_DEFINE_STATIC_MEMBER_ Platform Platform::default_;
-CL_HPP_DEFINE_STATIC_MEMBER_ cl_int Platform::default_error_ = CL_SUCCESS;
-
-
-/**
- * Deprecated APIs for 1.2
- */
-#if defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
-/**
- * Unload the OpenCL compiler.
- * \note Deprecated for OpenCL 1.2. Use Platform::unloadCompiler instead.
- */
-inline CL_API_PREFIX__VERSION_1_1_DEPRECATED cl_int
-UnloadCompiler() CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-inline cl_int
-UnloadCompiler()
-{
- return ::clUnloadCompiler();
-}
-#endif // #if defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
-
-/*! \brief Class interface for cl_context.
- *
- * \note Copies of these objects are shallow, meaning that the copy will refer
- * to the same underlying cl_context as the original. For details, see
- * clRetainContext() and clReleaseContext().
- *
- * \see cl_context
- */
-class Context
- : public detail::Wrapper<cl_context>
-{
-private:
- static std::once_flag default_initialized_;
- static Context default_;
- static cl_int default_error_;
-
- /*! \brief Create the default context from the default device type in the default platform.
- *
- * This sets @c default_ and @c default_error_. It does not throw
- * @c cl::Error.
- */
- static void makeDefault() {
- /* Throwing an exception from a call_once invocation does not do
- * what we wish, so we catch it and save the error.
- */
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- try
-#endif
- {
-#if !defined(__APPLE__) && !defined(__MACOS)
- const Platform &p = Platform::getDefault();
- cl_platform_id defaultPlatform = p();
- cl_context_properties properties[3] = {
- CL_CONTEXT_PLATFORM, (cl_context_properties)defaultPlatform, 0
- };
-#else // #if !defined(__APPLE__) && !defined(__MACOS)
- cl_context_properties *properties = nullptr;
-#endif // #if !defined(__APPLE__) && !defined(__MACOS)
-
- default_ = Context(
- CL_DEVICE_TYPE_DEFAULT,
- properties,
- NULL,
- NULL,
- &default_error_);
- }
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- catch (cl::Error &e) {
- default_error_ = e.err();
- }
-#endif
- }
-
-
- /*! \brief Create the default context from a provided Context.
- *
- * This sets @c default_. It does not throw
- * @c cl::Error.
- */
- static void makeDefaultProvided(const Context &c) {
- default_ = c;
- }
-
-public:
-#ifdef CL_HPP_UNIT_TEST_ENABLE
- /*! \brief Reset the default.
- *
- * This sets @c default_ to an empty value to support cleanup in
- * the unit test framework.
- * This function is not thread safe.
- */
- static void unitTestClearDefault() {
- default_ = Context();
- }
-#endif // #ifdef CL_HPP_UNIT_TEST_ENABLE
-
- /*! \brief Constructs a context including a list of specified devices.
- *
- * Wraps clCreateContext().
- */
- Context(
- const vector<Device>& devices,
- const cl_context_properties* properties = NULL,
- void (CL_CALLBACK * notifyFptr)(
- const char *,
- const void *,
- size_type,
- void *) = NULL,
- void* data = NULL,
- cl_int* err = NULL)
- {
- cl_int error;
-
- size_type numDevices = devices.size();
- vector<cl_device_id> deviceIDs(numDevices);
-
- for( size_type deviceIndex = 0; deviceIndex < numDevices; ++deviceIndex ) {
- deviceIDs[deviceIndex] = (devices[deviceIndex])();
- }
-
- object_ = ::clCreateContext(
- properties, (cl_uint) numDevices,
- deviceIDs.data(),
- notifyFptr, data, &error);
-
- detail::errHandler(error, __CREATE_CONTEXT_ERR);
- if (err != NULL) {
- *err = error;
- }
- }
-
- /*! \brief Constructs a context including a specific device.
- *
- * Wraps clCreateContext().
- */
- Context(
- const Device& device,
- const cl_context_properties* properties = NULL,
- void (CL_CALLBACK * notifyFptr)(
- const char *,
- const void *,
- size_type,
- void *) = NULL,
- void* data = NULL,
- cl_int* err = NULL)
- {
- cl_int error;
-
- cl_device_id deviceID = device();
-
- object_ = ::clCreateContext(
- properties, 1,
- &deviceID,
- notifyFptr, data, &error);
-
- detail::errHandler(error, __CREATE_CONTEXT_ERR);
- if (err != NULL) {
- *err = error;
- }
- }
-
- /*! \brief Constructs a context including all or a subset of devices of a specified type.
- *
- * Wraps clCreateContextFromType().
- */
- Context(
- cl_device_type type,
- const cl_context_properties* properties = NULL,
- void (CL_CALLBACK * notifyFptr)(
- const char *,
- const void *,
- size_type,
- void *) = NULL,
- void* data = NULL,
- cl_int* err = NULL)
- {
- cl_int error;
-
-#if !defined(__APPLE__) && !defined(__MACOS)
- cl_context_properties prop[4] = {CL_CONTEXT_PLATFORM, 0, 0, 0 };
-
- if (properties == NULL) {
- // Get a valid platform ID as we cannot send in a blank one
- vector<Platform> platforms;
- error = Platform::get(&platforms);
- if (error != CL_SUCCESS) {
- detail::errHandler(error, __CREATE_CONTEXT_FROM_TYPE_ERR);
- if (err != NULL) {
- *err = error;
- }
- return;
- }
-
- // Check the platforms we found for a device of our specified type
- cl_context_properties platform_id = 0;
- for (unsigned int i = 0; i < platforms.size(); i++) {
-
- vector<Device> devices;
-
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- try {
-#endif
-
- error = platforms[i].getDevices(type, &devices);
-
-#if defined(CL_HPP_ENABLE_EXCEPTIONS)
- } catch (cl::Error& e) {
- error = e.err();
- }
- // Catch if exceptions are enabled as we don't want to exit if first platform has no devices of type
- // We do error checking next anyway, and can throw there if needed
-#endif
-
- // Only squash CL_SUCCESS and CL_DEVICE_NOT_FOUND
- if (error != CL_SUCCESS && error != CL_DEVICE_NOT_FOUND) {
- detail::errHandler(error, __CREATE_CONTEXT_FROM_TYPE_ERR);
- if (err != NULL) {
- *err = error;
- }
- }
-
- if (devices.size() > 0) {
- platform_id = (cl_context_properties)platforms[i]();
- break;
- }
- }
-
- if (platform_id == 0) {
- detail::errHandler(CL_DEVICE_NOT_FOUND, __CREATE_CONTEXT_FROM_TYPE_ERR);
- if (err != NULL) {
- *err = CL_DEVICE_NOT_FOUND;
- }
- return;
- }
-
- prop[1] = platform_id;
- properties = &prop[0];
- }
-#endif
- object_ = ::clCreateContextFromType(
- properties, type, notifyFptr, data, &error);
-
- detail::errHandler(error, __CREATE_CONTEXT_FROM_TYPE_ERR);
- if (err != NULL) {
- *err = error;
- }
- }
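- // Editorial usage sketch (not part of the original header): the type-based
- // constructor above picks the first platform that exposes a matching device:
- //
- //   cl_int err = CL_SUCCESS;
- //   cl::Context ctx(CL_DEVICE_TYPE_GPU, nullptr, nullptr, nullptr, &err);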
-
- /*! \brief Copy constructor to forward copy to the superclass correctly.
- * Required for MSVC.
- */
- Context(const Context& ctx) : detail::Wrapper<cl_type>(ctx) {}
-
- /*! \brief Copy assignment to forward copy to the superclass correctly.
- * Required for MSVC.
- */
- Context& operator = (const Context &ctx)
- {
- detail::Wrapper<cl_type>::operator=(ctx);
- return *this;
- }
-
- /*! \brief Move constructor to forward move to the superclass correctly.
- * Required for MSVC.
- */
- Context(Context&& ctx) CL_HPP_NOEXCEPT_ : detail::Wrapper<cl_type>(std::move(ctx)) {}
-
- /*! \brief Move assignment to forward move to the superclass correctly.
- * Required for MSVC.
- */
- Context& operator = (Context &&ctx)
- {
- detail::Wrapper<cl_type>::operator=(std::move(ctx));
- return *this;
- }
-
-
- /*! \brief Returns a singleton context including all devices of CL_DEVICE_TYPE_DEFAULT.
- *
- * \note All calls to this function return the same cl_context as the first.
- */
- static Context getDefault(cl_int * err = NULL)
- {
- std::call_once(default_initialized_, makeDefault);
- detail::errHandler(default_error_);
- if (err != NULL) {
- *err = default_error_;
- }
- return default_;
- }
-
- /**
- * Modify the default context to be used by
- * subsequent operations.
- * Will only set the default if no default was previously created.
- * @return updated default context.
- * Should be compared to the passed value to ensure that it was updated.
- */
- static Context setDefault(const Context &default_context)
- {
- std::call_once(default_initialized_, makeDefaultProvided, std::cref(default_context));
- detail::errHandler(default_error_);
- return default_;
- }
-
- //! \brief Default constructor - initializes to NULL.
- Context() : detail::Wrapper<cl_type>() { }
-
- /*! \brief Constructor from cl_context - takes ownership.
- *
- * This effectively transfers ownership of a refcount on the cl_context
- * into the new Context object.
- */
- explicit Context(const cl_context& context, bool retainObject = false) :
- detail::Wrapper