diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandicam 4.4 Crack Full Version [32-bit 64-bit] [NEW].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandicam 4.4 Crack Full Version [32-bit 64-bit] [NEW].md deleted file mode 100644 index 6c3e721ec0a28df7ab3a2598ec5140cfbbb7bca5..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandicam 4.4 Crack Full Version [32-bit 64-bit] [NEW].md +++ /dev/null @@ -1,153 +0,0 @@ - -
Do you want to record your screen activities with high quality and low file size? Do you want to capture your gaming sessions, video chats, webinars, or tutorials with ease? Do you want to edit and share your videos without any hassle? If you answered yes to any of these questions, then you need Bandicam 4.4 Crack Full Version, a powerful and versatile screen recorder for Windows.
-Download File ->->->-> https://byltly.com/2uKAd3
Bandicam is a lightweight screen recorder that allows you to record your screen activities to a video file. It has three recording modes: game recording, screen recording, and device recording. You can use Bandicam to record anything on your PC, such as games, videos, webcams, desktops, HDMI devices, and more.
-One of the main advantages of Bandicam is that it is very light on your system resources. It uses much lower CPU/GPU/RAM usage than other similar software, which means it causes less lag and does not affect your PC performance. Bandicam also compresses the video while recording, which results in smaller file sizes and faster upload speeds.
-Another advantage of Bandicam is that it can record various types of content on your PC. You can use the game recording mode to capture your gameplay with high FPS and HD quality. You can use the screen recording mode to record any area of your screen, such as web browsers, PowerPoint presentations, Skype calls, etc. You can also use the device recording mode to record external devices connected to your PC, such as webcams, smartphones, game consoles, etc.
-Bandicam also has many features and benefits that make it a great choice for users who want to record their screen activities. Some of these features are:
-Bandicam 4.4 full version free download with crack
-How to activate Bandicam 4.4 with crack for lifetime
-Bandicam 4.4 screen recorder crack download for Windows
-Bandicam 4.4 crack serial key generator
-Bandicam 4.4 crack patch keygen
-Bandicam 4.4 crack license key activation code
-Bandicam 4.4 crack registration key product key
-Bandicam 4.4 crack no watermark no lag
-Bandicam 4.4 crack full features unlocked
-Bandicam 4.4 crack latest version updated
-Bandicam 4.4 crack for Mac OS X
-Bandicam 4.4 crack for Linux Ubuntu
-Bandicam 4.4 crack portable edition
-Bandicam 4.4 crack offline installer setup
-Bandicam 4.4 crack online activation tool
-Bandicam 4.4 crack working method 2023
-Bandicam 4.4 crack review and tutorial
-Bandicam 4.4 crack comparison with other screen recorders
-Bandicam 4.4 crack best settings for high quality recording
-Bandicam 4.4 crack tips and tricks to improve performance
-Bandicam 4.4 crack alternatives and competitors
-Bandicam 4.4 crack pros and cons advantages and disadvantages
-Bandicam 4.4 crack system requirements and compatibility
-Bandicam 4.4 crack technical support and customer service
-Bandicam 4.4 crack refund policy and money back guarantee
-Bandicam 4.4 crack discount coupon code and promo offer
-Bandicam 4.4 crack testimonials and user feedback
-Bandicam 4.4 crack FAQs and solutions to common problems
-Bandicam 4.4 crack download link and installation guide
-Bandicam 4.4 crack virus scan and malware check
-Bandicam 4.4 crack safe and secure download source
-Bandicam 4.4 crack legal and ethical issues
-Bandicam 4.4 crack risks and consequences of using cracked software
-Bandicam 4.4 crack benefits and advantages of using original software
-Bandicam 4.
If you want to enjoy all the features and benefits of Bandicam without any limitations or watermarks, you need to download and install Bandicam 4.4 Crack Full Version on your PC. Here are the steps you need to follow:
-The first step is to download Bandicam 4.4 Crack from a reliable source on the internet. You can find many websites that offer Bandicam 4.4 Crack for free download, but be careful not to download any malware or viruses along with it. One of the trusted sources you can use is Tech Idea, which provides a safe and secure download link for Bandicam 4.4 Crack. You can also check out other sources like FileHorse or Bandicam official website if you have a 32-bit Windows system.
-The next step is to install Bandicam 4.4 Crack on your PC. To do this, you need to follow these simple steps:
The final step is to activate Bandicam 4.4 Crack with the serial number that comes with the crack file. To do this, you need to follow these simple steps:
-Now that you have downloaded and installed Bandicam 4.4 Crack Full Version on your PC, you are ready to use it to record your screen activities. Here are some tips on how to use Bandicam 4.4 Crack Full Version effectively:
-The first thing you need to do is select the recording mode that suits your needs. You can choose between game recording mode (for capturing games), screen recording mode (for capturing any area of your screen), or device recording mode (for capturing external devices). To select a mode, click on one of the icons at the top of the main window of Bandicam.
-The next thing you need to do is select the area that you want to record. You can either choose a full-screen window (for DirectX/OpenGL games) or a user-defined area (for other applications). To select an area, click on one of the buttons at the top-left corner of the main window of Bandicam.
-The next thing you need to do is adjust the settings and options that affect the quality and performance of your recording. You can access these settings by clicking on one of the buttons at the top-right corner of the main window of Bandicam.
-Some of the settings and options that you can adjust are:
-The next thing you need to do is start and stop the recording. To do this, you need to follow these simple steps:
-The last thing you need to do is edit and save the recorded file. To do this, you need to follow these simple steps:
-Bandicam 4.4 Crack Full Version is a powerful and versatile screen recorder that can help you record your screen activities with high quality and low file size. However, there are some tips and tricks that can help you get even better results with Bandicam 4.4 Crack Full Version. Here are some of them:
-One of the tips that can help you improve the performance of Bandicam 4.4 Crack Full Version is to use hardware acceleration. Hardware acceleration is a feature that allows Bandicam to use your GPU (graphics card) instead of your CPU (processor) to encode your video. This can reduce the CPU usage and increase the FPS of your recording. To use hardware acceleration, you need to follow these simple steps:
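Bandicam only exposes hardware acceleration through its own settings dialog, so there is no Bandicam API to show here. As a rough, hypothetical illustration of the same idea with a different tool, the Python sketch below asks ffmpeg to capture the Windows desktop and encode it on an NVIDIA GPU (h264_nvenc) instead of the CPU (libx264); ffmpeg with NVENC support, the gdigrab input, and the file name are assumptions, and this is not how Bandicam works internally.

```python
# Hypothetical analogy using ffmpeg, not Bandicam's internals: "h264_nvenc"
# hands the encoding work to the NVIDIA GPU, which keeps CPU usage low,
# while "libx264" would do the same job on the CPU.
# Assumes Windows (gdigrab screen capture) and ffmpeg with NVENC on PATH.
import subprocess

def record_screen(output="capture.mp4", seconds=10, fps=30, use_gpu=True):
    encoder = "h264_nvenc" if use_gpu else "libx264"
    cmd = [
        "ffmpeg", "-y",
        "-f", "gdigrab", "-framerate", str(fps), "-i", "desktop",
        "-t", str(seconds),
        "-c:v", encoder,
        output,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    record_screen()  # try use_gpu=False to compare CPU load while recording
```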
-Another tip that can help you enhance your video with Bandicam 4.4 Crack Full Version is to use real-time drawing and mouse effects. Real-time drawing and mouse effects are features that allow you to draw lines, boxes, highlights, cursor effects, or text overlays on your screen while recording. This can help you emphasize important points, add annotations, or create tutorials with Bandicam. To use real-time drawing and mouse effects, you need to follow these simple steps:
-The last tip that can help you record for a long time with Bandicam 4.4 Crack Full Version is to use the auto-complete recording function. Auto-complete recording is a feature that allows Bandicam to automatically stop or split your recording after a certain time or file size. This can help you avoid recording overly long videos that are hard to edit or upload. To use auto-complete recording, you need to follow these simple steps:
-In conclusion, Bandicam 4.4 Crack Full Version is a powerful and versatile screen recorder that can help you record your screen activities with high quality and low file size. It has many features and benefits for users who want to capture their gameplay, video chats, webinars, tutorials, or anything else on their PC. It also has some tips and tricks that can help you get even better results with Bandicam 4.4 Crack Full Version. If you want to enjoy all these features and benefits without any limitations or watermarks, you need to download and install Bandicam 4.4 Crack Full Version from a reliable source on the internet. You also need to activate it with the serial number that comes with the crack file. Then, you can use it to record your screen activities with ease and share them with others without any hassle.
-Here are some frequently asked questions about Bandicam 4.4 Crack Full Version:
-Bandicam 4.4 Crack Full Version is safe if you download it from a reliable source on the internet. However, be careful not to download any malware or viruses along with it. You should also scan your PC with an antivirus program after installing it.
-Bandicam 4.4 Crack Full Version is not legal because it violates the terms and conditions of Bandicam Company, which owns the rights to the Bandicam software. You should buy a license from the official Bandicam website if you want to support the developers and use their software legally.
-If you don't want to use Bandicam 4.4 Crack Full Version for any reason, there are some alternatives that you can try instead. Some of them are OBS Studio (free), Camtasia Studio (paid), Fraps (paid), ScreenFlow (paid), etc.
-If you have any questions or issues related to Bandicam software, you can contact Bandisoft support by email or forum. They will try their best to help you solve your problems.
-If you want to learn more about Bandicam software, you can visit Bandisoft website where you can find more information about their products, features, tutorials, reviews, etc.
-Hard Sentinel is a tool that helps you check and monitor the health of your hard drive. It can detect and report potential problems by tracking indicators such as bad sectors, temperature, performance, and SMART attributes. It can also alert you if your hard drive is failing or needs to be replaced.
-Download ››››› https://byltly.com/2uKwjc
If you want to download Hard Sentinel and use it to keep an eye on your hard drive health, here are the steps you need to follow:
-By downloading Hard Sentinel and using it regularly, you can ensure that your hard drive is in good condition and prevent any data loss or damage. You can also improve the performance and lifespan of your hard drive by following some simple tips, such as defragmenting your disk, cleaning up your files, and updating your drivers.
- -Your hard drive is one of the most important components of your computer. It stores all your data, such as your documents, photos, videos, music, and programs. However, your hard drive is also prone to various problems and failures that can cause data loss or corruption. Some of the common causes of hard drive problems are:
-These problems can affect the performance and reliability of your hard drive. They can also lead to data loss or corruption, which can be devastating and costly. That's why you need Hard Sentinel to monitor your hard drive health and prevent any potential disasters.
-Hard Sentinel is a tool that uses the SMART (Self-Monitoring, Analysis, and Reporting Technology) feature of your hard drive to monitor its health. SMART is a built-in function that tracks various parameters and attributes of your hard drive, such as:
- -These parameters and attributes can indicate the current and future status of your hard drive. They can also help you to identify any potential problems or failures before they become serious. Hard Sentinel analyzes the SMART data and displays it in an easy-to-understand way. It also assigns a health percentage and a performance percentage to your hard drive based on the SMART data. It can alert you if your hard drive is in danger or needs to be replaced.
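Hard Sentinel's own scoring is proprietary, so the snippet below is only a hypothetical Python sketch of the general idea: read the raw SMART attributes with smartmontools' smartctl and turn a few critical counters into a rough percentage. The device path, the attribute names, and the penalty rule are illustrative assumptions, not the app's real algorithm.

```python
# Illustrative sketch only -- not Hard Sentinel's code. Assumes smartmontools'
# `smartctl` is installed and the script has permission to query the drive.
import subprocess

# Raw counters that usually signal trouble when they grow (names vary by vendor).
CRITICAL = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def read_smart_attributes(device="/dev/sda"):
    """Return {attribute_name: first raw-value token} parsed from `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows look like: ID# NAME FLAG VALUE WORST THRESH TYPE ... RAW_VALUE
        if len(parts) >= 10 and parts[0].isdigit():
            attrs[parts[1]] = parts[9]
    return attrs

def toy_health_percent(attrs):
    """Made-up heuristic for demonstration: 100 minus the critical raw counts."""
    penalty = sum(int(attrs[name]) for name in CRITICAL
                  if attrs.get(name, "").isdigit())
    return max(0, 100 - penalty)

if __name__ == "__main__":
    attrs = read_smart_attributes()
    print("Estimated health: %d%%" % toy_health_percent(attrs))
```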
- -By using Hard Sentinel to monitor your hard drive health, you can enjoy the following benefits:
-Hard Sentinel is a must-have software for anyone who cares about their hard drive and their data. It is easy to use, reliable, and affordable. You can download Hard Sentinel today and start monitoring your hard drive health in minutes.
If you own a PlayStation 3 (PS3) console, you may have heard of bios or system software. But what is bios and why do you need it? And how can you download bios folder for ps3? In this article, we will answer these questions and provide you with a step-by-step guide on how to download bios folder for ps3 using two different methods: using the internet or using a computer. We will also show you how to reinstall the system software if you ever need to.
-Download File ✸ https://byltly.com/2uKyEH
Bios stands for Basic Input/Output System. It is a firmware that controls the hardware and software of your PS3 console. It is stored in a chip on the motherboard of your console and it is loaded into memory when you turn on your console.
-Bios is responsible for initializing and testing the hardware components of your console, such as the CPU, GPU, RAM, hard disk drive, optical drive, etc. It also provides an interface between the hardware and the operating system (OS) of your console, which is stored on the hard disk drive. The OS allows you to run games, apps, media, and other features on your console.
-Without bios, your PS3 console would not be able to start up or function properly. Bios checks if all the hardware components are working correctly and if there are any errors or problems. If everything is OK, bios loads the OS from the hard disk drive into memory and transfers control to it. If there are any issues, bios displays an error message on the screen or flashes a red light on your console.
-Sony Interactive Entertainment (SIE) regularly releases updates for the bios or system software of your PS3 console. These updates can improve the quality, stability, performance, and security of your console. They can also add new features, settings, options, and compatibility with new games and devices.
-How to download bios folder for ps3 emulator
-Download bios folder for ps3 games on pc
-Where to download bios folder for ps3 rpcs3
-Download bios folder for ps3 iso files
-Download bios folder for ps3 free and easy
-Download bios folder for ps3 windows 10
-Download bios folder for ps3 mac os
-Download bios folder for ps3 linux
-Download bios folder for ps3 android
-Download bios folder for ps3 online
-Download bios folder for ps3 rar
-Download bios folder for ps3 zip
-Download bios folder for ps3 utorrent
-Download bios folder for ps3 mega
-Download bios folder for ps3 mediafire
-Download bios folder for ps3 google drive
-Download bios folder for ps3 dropbox
-Download bios folder for ps3 no survey
-Download bios folder for ps3 no password
-Download bios folder for ps3 no virus
-Download bios folder for ps3 legit
-Download bios folder for ps3 working
-Download bios folder for ps3 updated
-Download bios folder for ps3 latest version
-Download bios folder for ps3 2021
-Download bios folder for ps3 2022
-Download bios folder for ps3 2023
-Download bios folder for ps3 4k resolution
-Download bios folder for ps3 60 fps
-Download bios folder for ps3 best settings
-Download bios folder for ps3 full speed
-Download bios folder for ps3 high compatibility
-Download bios folder for ps3 low end pc
-Download bios folder for ps3 high end pc
-Download bios folder for ps3 laptop
-Download bios folder for ps3 desktop
-Download bios folder for ps3 tutorial
-Download bios folder for ps3 guide
-Download bios folder for ps3 step by step
-Download bios folder for ps3 video
-Download bios folder for ps3 youtube
-Download bios folder for ps3 reddit
-Download bios folder for ps3 quora
-Download bios folder for ps3 forum
-Download bios folder for ps3 blog
-Download bios folder for ps3 website
-Download bios folder for ps3 link
-Download bios folder for ps3 file size
-Download bios folder for ps3 checksum
It is recommended that you always update your PS3 console to the latest version of the system software. By updating, you can enjoy additional benefits, improved usability, and enhanced security. You can also renew the Blu-ray player encryption key, which is required to play Blu-ray discs on your console.
-One of the easiest ways to download bios folder for ps3 is using the internet. This method requires a USB drive formatted as FAT32 and a PC or Mac with an internet connection. Here are the steps you need to follow:
-The first thing you need is a USB drive that has at least 200MB of free space and that is formatted as FAT32. FAT32 is a file system that allows your USB drive to be compatible with both Windows and Mac computers. To format your USB drive as FAT32, you can use tools such as Disk Utility on Mac or Disk Management on Windows.
-You also need a PC or Mac that has an internet connection and that can access the official SIE website. You can use any web browser such as Chrome, Firefox, Safari, etc.
-The next thing you need to do is create two folders on your USB drive: one named "PS3" and another one named "UPDATE". These folders are necessary for storing the update file that you will download from the SIE website.
-To create these folders, you can use any file manager such as Finder on Mac or File Explorer on Windows. Simply right-click on your USB drive icon and select New Folder. Name the first folder "PS3" (without quotation marks) and then open it. Inside it, create another folder named "UPDATE" (without quotation marks).
-The final thing you need to do is download the latest PS3 system software update file from the SIE website and save it in the "UPDATE" folder that you created on your USB drive. The update file has a name like "PS3UPDAT.PUP" (without quotation marks) and has a size of about 200MB.
-To download this file, you can use any web browser such as Chrome, Firefox, Safari, etc. Go to this link: https://www.playstation.com/en-us/support/hardware/ps3/system-software/ . This is the official SIE website that provides information and downloads for PS3 system software updates.
-On this website, scroll down until you see a section titled "Update using a computer". Click on this section to expand it. Then click on "Download now". This will start downloading the update file to your computer.
-Once the download is complete, locate the update file on your computer. It should be in your Downloads folder by default. Then copy or drag-and-drop this file into the "UPDATE" folder that you created on your USB drive. Make sure that you rename this file as "PS3UPDAT.PUP" (without quotation marks) if it has a different name.
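If you prefer to script this step, the hypothetical Python sketch below creates the required "PS3/UPDATE" folder structure on the drive and copies the downloaded file into it under the name the console expects. The USB mount path and the download location are placeholders for your own system, and the drive still has to be FAT32-formatted as described above.

```python
# Hypothetical helper: adjust the placeholder paths for your own system.
# The PS3/UPDATE folder names and the PS3UPDAT.PUP filename are what the
# console looks for on a FAT32-formatted USB drive.
import os
import shutil

def stage_ps3_update(usb_root, downloaded_file):
    update_dir = os.path.join(usb_root, "PS3", "UPDATE")
    os.makedirs(update_dir, exist_ok=True)              # creates PS3/ and PS3/UPDATE/
    target = os.path.join(update_dir, "PS3UPDAT.PUP")
    shutil.copyfile(downloaded_file, target)            # copy and rename in one step
    return target

if __name__ == "__main__":
    # Example values only: "E:\\" might be the USB drive on Windows,
    # "/Volumes/PS3USB" on macOS.
    print(stage_ps3_update("E:\\", os.path.expanduser("~/Downloads/PS3UPDAT.PUP")))
```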
-The last thing you need to do is plug your USB device into one of the USB ports of your PS3 console and follow the on-screen instructions to install the update.
-To do this, turn on your PS3 console and go to [Settings] > [System Update] > [Update via Storage Media]. The system automatically searches for and finds the update data saved on the storage media or USB device. Press the X button to start the update. Follow the on-screen instructions to complete the update.
-Please note, during an update, do not turn off the system or remove the storage media or USB device. Doing so may cause damage to your system. Also, do not use the network features of your system until all of the update data has been installed.
-Another way to download bios folder for ps3 is using a computer. This method requires a USB drive formatted as FAT32 and a PC or Mac with a USB cable. Here are the steps you need to follow:
-The first thing you need is a USB drive that has at least 200MB of free space and that is formatted as FAT32. FAT32 is a file system that allows your USB drive to be compatible with both Windows and Mac computers. To format your USB drive as FAT32, you can use tools such as Disk Utility on Mac or Disk Management on Windows.
-You also need a PC or Mac that has a USB cable that can connect to your PS3 console. You can use any USB cable that has a Type A connector on one end and a Mini-B connector on the other end.
-The next thing you need to do is download the latest PS3 system software update file from the SIE website and save it on your computer. The update file has a name like "PS3UPDAT.PUP" (without quotation marks) and has a size of about 200MB.
-To download this file, you can use any web browser such as Chrome, Firefox, Safari, etc. Go to this link: https://www.playstation.com/en-us/support/hardware/ps3/system-software/ . This is the official SIE website that provides information and downloads for PS3 system software updates.
-On this website, scroll down until you see a section titled "Update using a computer". Click on this section to expand it. Then click on "Download now". This will start downloading the update file to your computer.
-Once the download is complete, locate the update file on your computer. It should be in your Downloads folder by default.
-The next thing you need to do is connect your PS3 console to your computer using a USB cable. Make sure that both your console and your computer are turned off before you do this.
-Plug one end of the USB cable into one of the USB ports of your PS3 console. Plug the other end of the USB cable into one of the USB ports of your computer.
-The final thing you need to do is start the PS3 system in Safe Mode and select [6] System Update to install the update from your computer.
-To do this, turn on your PS3 console by pressing and holding the power button until you hear three beeps. The first beep tells you that the PS3 is powering on. Keep holding. After about 5 seconds, the second beep signifies the video reset. After another 5 seconds, the third beep will be a double-beep; you should see this screen:
-Connect your controller to the PS3 and press the PS button. The PS3 will proceed to the next screen.
-On this screen, select [6] System Update. The system will search for and find the update data saved on your computer. Press the X button to start the update. Follow the on-screen instructions to complete the update.
-Please note, during an update, do not turn off the system or disconnect the USB cable. Doing so may cause damage to your system. Also, do not use the network features of your system until all of the update data has been installed.
-In some cases, such as after initializing your console, or encountering an error, you may need to reinstall the system software. This is a complete restoration of your system, back to the state it was in when you bought it. You will lose all data if you use this option.
-To reinstall the system software, you need to follow the same steps as downloading bios folder for ps3 using a computer. However, instead of selecting [6] System Update, you need to select [5] Restore PS3 System. This will erase everything on your hard disk drive and install a new copy of the system software. Follow the on-screen instructions to complete the process.
-Downloading bios folder for ps3 is easy and beneficial. It can help you improve your system performance and security, as well as fix any issues that may prevent your console from starting up properly. You can choose between two methods: using the internet or using a computer. You can also reinstall the system software if needed. However, follow the instructions carefully so that you do not lose any data or damage your system. We hope this article was helpful and informative. Happy gaming!
-Download ····· https://imgfil.com/2uxZWo
Anyone who has seen the Crocodile films (or has at least liked the concept) shouldn't miss this one. What you get is a very long, very creepy, and very funny ride through the countryside of Arkansas. Crocodile all the way, baby!
-Download ··· https://imgfil.com/2uy0Sp
-Finally, students also studied how crocodilians use the behaviour of other animals to their advantage, such as the adult crocodiles who patrol the area in order to deter hunting. The adult male who defends his home from intruders is a familiar sight around the lakes that have crocodiles. There are lots of patterns that the young crocodiles also learn from the adults, such as how to make the warning call during mating.
-The week really got my students thinking about the environment and the natural world. We saw the direct and indirect impact of deforestation on crocodiles, and discussed the huge amount of resources it takes to raise crocodiles for the meat trade. We highlighted the predatory nature of crocodiles, and the importance of conserving the environment in which these animals live. We explained the value of conserving animal and plant species, their use to people, and how the meat trade is unsustainable in the long term.
-This week was our first introduction to Crocodile Physics, and it was a phenomenal success. We found it incredibly useful, and all of my students who participated were very impressed. They highlighted a few issues that we can address in future versions. It is currently very difficult to take the device out of the water, so it may not make it into the next release.
- -Thousands of years of civilisation, gone. Never to return. All that remains of these latest civilisations is a dense cloud of dust in the dim recesses of outer space. A rogue molecule, that defies all known laws of physics, breaks the bonds that lock together the elements of matter, and all that is left in the shattered remnants of the dead worlds are billions of cosmic point particles. Will these strange new mutations slowly learn to acclimatise to their new environments, or will they become extinct just as they arrived?
If you are looking for a way to use multiple accounts of the same app on your Android device, you might have heard of clone messenger apk. This is a utility app that allows you to clone your personal WhatsApp account into another phone. But what exactly is clone messenger apk, how does it work, and what are the pros and cons of using it? In this article, we will answer these questions and show you how to download, install, and use clone messenger apk on your Android device. We will also introduce you to another app called App Cloner, which can help you clone other apps besides WhatsApp.
-Clone messenger apk is an app developed by BlueSoft Digital that lets you create a duplicate version of your WhatsApp account on another phone. This way, you can have a single account on two different devices, without logging out from one or the other. This can be useful if you want to separate your personal and professional chats, or if you want to have a backup account in case of emergencies.
-Download ••• https://jinyurl.com/2uNPSe
However, clone messenger apk is not an official app from WhatsApp, and it may not work properly with some features or updates. It also requires you to grant some permissions and access settings that may compromise your privacy or security. Moreover, it only works with WhatsApp, so if you want to clone other apps, you will need another tool.
-Since clone messenger apk is not available in the Google Play Store, you will need to download it from a third-party source and sideload it on your device. Here are the steps to do so:
-After installing clone messenger apk, you can start cloning your WhatsApp account by following these steps:
-One of the advantages of clone messenger apk is that it also comes with some extra features that are not available in the official WhatsApp app. For example, You can use the direct chat and story saver features of the cloned app. The direct chat feature allows you to chat with any WhatsApp user without saving their number in your contact list. You just have to enter their number in the direct chat tab and start your conversation. The story saver feature allows you to save WhatsApp stories to your device to view them offline or re-share them with your friends and family . These features are not available in the official WhatsApp app, so they can make your communication more convenient and fun.
-clone whatsapp messenger apk
-cloneapp messenger pro apk
-clone messenger for web and status saver apk
-clone app messenger dual account apk
-clone messenger apk free download
-cloneapp messenger latest version apk
-clone messenger apk mod
-cloneapp messenger premium apk
-clone messenger apk for android
-cloneapp messenger app cloner apk
-clone messenger apk old version
-cloneapp messenger story saver apk
-clone messenger apk no ads
-cloneapp messenger direct chat apk
-clone messenger apk 2021
-cloneapp messenger whatsapp web apk
-clone messenger apk offline
-cloneapp messenger online apk
-clone messenger apk update
-cloneapp messenger backup apk
-clone messenger apk 2020
-cloneapp messenger new update apk
-clone messenger apk 2019
-cloneapp messenger original apk
-clone messenger apk 2018
-cloneapp messenger beta apk
-clone messenger apk 2017
-cloneapp messenger cracked apk
-clone messenger apk 2016
-cloneapp messenger hack apk
-clone messenger apk 2015
-cloneapp messenger full version apk
-clone messenger lite apk
-cloneapp messenger plus apk
-superclone app cloner for multiple accounts - dual space & parallel app - whatsapp, facebook, instagram, snapchat, twitter, telegram, line, wechat, imo, viber, zalo, kakaotalk, hike, signal, skype, gmail, youtube, tiktok, likee, bigo live, vmate, helo and more social media apps - support 64bit - support android 10 - support dark mode - support app lock - support custom icon and label - support notification badge - support multiple accounts and dual space - support incognito installation and private cloning - support speed mode and power saving mode - support task manager and app uninstaller - support cloning game apps such as pubg mobile lite, free fire and more - support cloning vpn apps such as turbo vpn and more - support cloning browser apps such as chrome and more - support cloning video player apps such as mx player and more - support cloning photo editor apps such as picsart and more - support cloning music player apps such as spotify and more - support cloning file manager apps such as es file explorer and more - support cloning launcher apps such as nova launcher and more - support cloning keyboard apps such as gboard and more - support cloning utility apps such as flashlight and more - support cloning productivity apps such as evernote and more - support cloning education apps such as duolingo and more
If you want to clone other apps besides WhatsApp, you will need another tool called App Cloner. App Cloner is an app that lets you create and install multiple copies of any Android app. App Cloner is different from Clone Messenger APK because it does not require you to scan a QR code or use the same account on two devices. Instead, it creates independent and customizable clones that can have different names, icons, settings, and features .
-Here is how you can use App Cloner to clone other apps on your Android device:
-Cloning apps on Android can be a useful way to use multiple accounts, backup your data, or customize your apps. Clone Messenger APK and App Cloner are two tools that can help you clone WhatsApp and other apps on your Android device. However, you should be aware of the potential risks and limitations of using cloned apps, such as compatibility issues, privacy concerns, or legal implications. You should also respect the terms and conditions of the original apps and use cloned apps responsibly and ethically.
-Some common issues or errors when cloning apps on Android are:
-No, you cannot clone any app on Android using Clone Messenger APK or App Cloner. Clone Messenger APK only works with WhatsApp, while App Cloner may not work with some apps that have anti-cloning measures or special requirements. Some examples of apps that cannot be cloned are Google Play Services, Google Play Store, Gmail, YouTube, Facebook Messenger, Snapchat, TikTok, etc.
-Cloning apps on Android may not be legal or safe depending on how you use them and what apps you clone. Some apps may have terms and conditions that prohibit cloning or modifying their apps without their permission. Some apps may also have security features that prevent cloning or detect cloned apps and block them. Cloning apps may also expose your personal information or data to third parties or hackers. Therefore, you should always check the legality and safety of cloning apps before doing so and use them at your own risk.
-You can switch between cloned apps and original apps on Android by using the app switcher button on your device or by tapping on the app icons on your home screen or app drawer. The cloned apps and original apps have different icons, names, and colors, so you can easily distinguish them. You can also rename or change the icons of the cloned apps using App Cloner to make them more recognizable.
-You can delete or uninstall cloned apps on Android by following the same steps as deleting or uninstalling any other app on your device. You can either long-press on the app icon and drag it to the trash bin, or go to Settings > Apps and select the app you want to delete or uninstall. You may also need to clear the cache and data of the app before deleting or uninstalling it.
If you are a fan of strategy games, you might have heard of Conquest 2, a thrilling sci-fi game that features large-scale fleet battles, intelligent admirals, and deep space exploration. Conquest 2 is the sequel to Conquest: Frontier Wars, a classic RTS game that was released in 2001. In this article, we will show you how to download Conquest 2 apk for your Android device, and how to play it on your PC using an emulator.
-Download ✏ ✏ ✏ https://jinyurl.com/2uNSJc
Conquest 2 is a real-time strategy game that takes place in a vast galaxy where three races compete for resources and territory: the humans, the insectoid Mantis, and the energy-based Celaerans. Each race has its own strengths, weaknesses, and unique units. You can choose to play as any of them in the single-player campaign mode, or challenge other players online in the multiplayer mode.
-Conquest 2 has a lot of features that make it stand out from other strategy games. For example, you can manage your supply lines while waging war in multiple maps simultaneously using wormholes. You can also command up to six highly intelligent fleet admirals who serve as hero units and have their own personalities and abilities. Moreover, you can customize your ships and research new technologies to gain an edge over your enemies.
-If you want to play Conquest 2 on your Android device, you will need to download the apk file from a reliable source. Here are the steps to follow:
-Some tips and warnings before you download Conquest 2 apk:
-If you prefer to play Conquest 2 on a bigger screen with better graphics and controls, you can use an emulator to run it on your PC. An emulator is a software that simulates an Android device on your computer, allowing you to play Android games and apps on your PC. Here are some benefits of playing Conquest 2 on PC using an emulator:
-To play Conquest 2 on PC using an emulator, you will need to follow these steps:
-Conquest 2 is an amazing strategy game that will keep you hooked for hours with its immersive gameplay, stunning graphics, and challenging missions. Whether you want to play it on your Android device or your PC, you can easily download Conquest 2 apk from the links we provided and follow our simple guide. Don't miss this opportunity to experience one of the best sci-fi games ever made!
-If you liked this article, please share it with your friends and leave a comment below. Also, don't forget to check out our other articles on gaming, technology, and more. Thanks for reading!
-Epic Conquest 2 Android game free download
-Epic Conquest 2 latest version XAPK download
-Epic Conquest 2 open world adventure game APK
-How to install Epic Conquest 2 on Android device
-Epic Conquest 2 by Gaco Games APK for Android
-Epic Conquest 2 character customization and skills APK
-Epic Conquest 2 offline RPG game APK download
-Epic Conquest 2 mod APK unlimited money and gems
-Epic Conquest 2 review and gameplay APK download
-Epic Conquest 2 APK download for PC Windows 10
-Download Epic Conquest 2 from APKCombo website
-Download Epic Conquest 2 from Softonic website
-Download Epic Conquest 2 from Google Play Store
-Epic Conquest 2 APK file size and requirements
-Epic Conquest 2 APK update and patch notes
-Epic Conquest 2 tips and tricks APK download
-Epic Conquest 2 best characters and builds APK
-Epic Conquest 2 cheats and hacks APK download
-Epic Conquest 2 story and lore APK download
-Epic Conquest 2 multiplayer and co-op mode APK
-Epic Conquest 2 graphics and sound quality APK
-Epic Conquest 2 achievements and rewards APK
-Epic Conquest 2 bugs and issues APK download
-Epic Conquest 2 fan art and community APK
-Epic Conquest 2 alternatives and similar games APK
Here are some of the most common questions that people ask about Conquest 2 apk download:
-Yes, Conquest 2 is free to play, but it may contain some in-app purchases and ads.
-Yes, Conquest 2 is safe to download as long as you use a trusted source like the ones we recommended. However, you should always scan any apk file before installing it on your device or PC.
-Conquest 2 requires Android 4.1 or higher to run on your device. You can check your device's Android version by going to Settings > About Phone > Software Information. If your device meets the minimum requirements, you should be able to play Conquest 2 without any issues.
-You can update Conquest 2 by going to the Google Play Store and tapping on the Update button. Alternatively, you can download the latest apk file from the links we provided and install it over the existing one.
-You can contact the developers of Conquest 2 by visiting their official website at [Conquest Games] or by sending them an email at support@conquestgames.com. You can also follow them on social media platforms like Facebook, Twitter, and Instagram for news and updates.
Have you ever wondered what it would be like to unleash your inner villain and destroy planets with various weapons and disasters? If so, then you might want to check out Solar Smash, a game that lets you do just that. Solar Smash is a planet destruction simulator that allows you to use a variety of different weapons to destroy the planet. These include nuclear missiles, lasers, asteroids, aliens, black holes, and more. You can also customize your own planet or choose from a list of preset ones, such as Earth, Mars, Jupiter, or even a giant pumpkin. The game has stunning graphics, realistic physics, and satisfying sound effects that make you feel like a powerful cosmic force.
-DOWNLOAD ✅ https://jinyurl.com/2uNOTw
In this article, we will show you how to download and play Solar Smash on your Android device or PC, as well as some tips and tricks to help you have the best destruction experience possible. We will also introduce you to some alternatives to Solar Smash that you might enjoy if you are looking for more games like this one. So, without further ado, let's get started!
-Solar Smash is a game developed by Paradyme Games, an indie studio based in Australia. The game was released in 2020 and has since gained over 100 million downloads on Google Play Store. The game is rated 4.6 out of 5 stars by more than 1.4 million users who praise its graphics, gameplay, and variety of weapons.
-The game has two main modes: Planet Smash and System Smash. In Planet Smash mode, you can choose a single planet to destroy with different weapons and scenarios. You can also customize your own planet by drawing on it or changing its size, color, atmosphere, and gravity. In System Smash mode, you can destroy an entire solar system with multiple planets and stars. You can also create your own system by adding or removing planets and stars.
-The game has a wide range of weapons and disasters that you can use to destroy planets. Some of them are realistic, such as nuclear missiles, lasers, asteroids, comets, volcanoes, earthquakes, tsunamis, etc. Some of them are fictional or fantastical, such as aliens, UFOs, black holes, wormholes, antimatter bombs, giant balls, etc. Each weapon has its own effect and damage level on the planet. You can also combine different weapons to create more devastating effects.
-How to download solar smash apk for free
-Solar smash apk mod unlimited money
-Solar smash planet destruction simulator apk
-Solar smash apk latest version download
-Solar smash apk for pc windows 10
-Solar smash apk online play
-Solar smash apk no ads
-Solar smash apk hack download
-Solar smash apk game review
-Solar smash apk offline mode
-Solar smash apk cheats and tips
-Solar smash apk best weapons
-Solar smash apk custom planets
-Solar smash apk multiplayer mode
-Solar smash apk fun and addictive
-Solar smash apk realistic physics
-Solar smash apk graphics settings
-Solar smash apk file size
-Solar smash apk requirements and compatibility
-Solar smash apk update and new features
-Solar smash apk alternatives and similar games
-Solar smash apk download error and fix
-Solar smash apk safe and secure
-Solar smash apk ratings and feedback
-Solar smash apk developer contact and support
The game also has a list of achievements that you can complete by destroying planets in certain ways or using certain weapons. Some of them are easy, such as destroying Earth with a nuclear missile or destroying Mars with an asteroid. Some of them are hard, such as destroying Jupiter with a black hole or destroying Saturn with a ring breaker. Completing achievements will give you a sense of accomplishment and challenge.
-If you want to play Solar Smash on your Android device, you can download it for free from Google Play Store. However, if for some reason you cannot access the Play Store or want to get the latest version of the game before it is officially released, you can also download the APK file from other sources online. APK stands for Android Package Kit and it is a file format that contains all the necessary files for installing an app on an Android device.
-One of the websites that offer the APK file for Solar Smash is APKPure. To download and install the APK file from this website, follow these steps:
-Note: Downloading and installing APK files from unknown sources may pose some risks to your device and data. Make sure you trust the source and scan the file for viruses before installing it. We are not responsible for any damage or loss caused by using APK files.
-If you want to play Solar Smash on your PC, you will need an emulator that can run Android apps on your computer. An emulator is a software that mimics the functions of another device or system. There are many emulators available online, but one of the most popular and reliable ones is BlueStacks. BlueStacks is a free emulator that allows you to play Android games and apps on your PC with ease. To play Solar Smash on PC with BlueStacks, follow these steps:
-Note: Playing Solar Smash on PC may require more resources than playing it on your mobile device. Make sure you have enough RAM, CPU, and disk space to run BlueStacks smoothly. You can also adjust the settings of BlueStacks to optimize its performance and compatibility with Solar Smash.
-Solar Smash has a list of achievements that you can complete by destroying planets in certain ways or using certain weapons. Completing achievements will give you a sense of accomplishment and challenge. Some of them are easy, such as destroying Earth with a nuclear missile or destroying Mars with an asteroid. Some of them are hard, such as destroying Jupiter with a black hole or destroying Saturn with a ring breaker. Here are some tips and tricks for completing some of the achievements in the game:
-You can check your progress on the achievements by clicking on the trophy icon on the top right corner of the screen. You can also see how many times you have used each weapon by clicking on the weapon icon on the top left corner of the screen.
-Solar Smash is a game that requires some skill and strategy to destroy planets efficiently. You can't just spam the fire button and hope for the best. You have to aim at the right spots to cause the most damage and destruction. Here are some tips and tricks for hitting the right spots to destroy planets faster:
-Some of the weapons have specific spots that can cause more damage than others. For example, if you use the nuclear missile, you can aim at the major cities or landmarks on Earth, such as New York, London, Paris, Tokyo, etc. If you use the laser, you can aim at the poles or the equator of the planet, where the temperature difference is higher. If you use the asteroid, you can aim at the oceans or the continents, depending on whether you want to cause more water or land damage. If you use the black hole, you can aim at the center of the planet, where the gravity is stronger.
-You can also experiment with different combinations of weapons and scenarios to see what happens. For example, you can use the alien invasion scenario and then use the antimatter bomb to destroy both the aliens and the planet. Or you can use the giant ball scenario and then use the ring breaker to destroy both the ball and Saturn's rings. The possibilities are endless!
-Solar Smash has a list of preset planets that you can choose from in Planet Smash mode. These include Earth, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto, Mercury, Venus, Moon, Sun, and Pumpkin. However, there are also some secret planets that are not shown on the list. These are hidden planets that you can unlock by completing certain tasks or using certain weapons in the game. Here are some tips and tricks for unlocking all the secret planets in Solar Smash:
-You can check your progress on the secret planets by clicking on the planet icon on the top right corner of the screen. You can also see how many times you have destroyed each planet by clicking on the planet icon on the top left corner of the screen.
-If you enjoy playing Solar Smash, you might also like some other games that let you destroy planets or simulate space scenarios. Here are some of the best alternatives to Solar Smash that you can try:
Game | Description
---|---
Universe Sandbox | Universe Sandbox is a physics-based space simulator that allows you to create, destroy, and interact with anything in the universe. You can explore the solar system, collide planets, create black holes, simulate gravity, and more. The game has realistic graphics, sound effects, and data that make you feel like a true cosmic explorer.
Solar 2 | Solar 2 is a sandbox game that lets you play as an asteroid, a planet, a star, or a black hole. You can grow, evolve, and interact with other objects in the universe. You can also complete missions, challenges, and achievements that test your skills and creativity. The game has simple but beautiful graphics, relaxing music, and humorous narration.
Planet Bomber | Planet Bomber is a casual game that lets you bomb planets with different weapons and upgrades. You can choose from various types of bombs, such as cluster bombs, nuclear bombs, plasma bombs, etc. You can also upgrade your bomber's speed, power, accuracy, and more. The game has colorful graphics, addictive gameplay, and satisfying explosions.
Solar Smash 2 | Solar Smash 2 is the sequel to Solar Smash that adds more features and improvements to the original game. You can enjoy new weapons, scenarios, planets, systems, modes, and more. You can also play online with other players or offline with bots. The game has enhanced graphics, physics, and sound effects that make it more realistic and fun.
Solar Smash is a great game for anyone who likes to destroy planets or simulate space scenarios. However, it is not perfect and it has some pros and cons compared to other games in the same genre. Here are some of the pros and cons of Solar Smash:
-Of course, these pros and cons are subjective and may vary depending on your personal preferences and expectations. You can always try the game for yourself and see if you like it or not. After all, the best way to judge a game is to play it!
-Solar Smash is a planet destruction simulator that allows you to use a variety of different weapons to destroy the planet. You can also customize your own planet or choose from a list of preset ones, such as Earth, Mars, Jupiter, or even a giant pumpkin. The game has stunning graphics, realistic physics, and satisfying sound effects that make you feel like a powerful cosmic force.
-In this article, we have shown you how to download and play Solar Smash on your Android device or PC, as well as some tips and tricks to help you have the best destruction experience possible. We have also introduced you to some alternatives to Solar Smash that you might enjoy if you are looking for more games like this one.
-If you are interested in playing Solar Smash, you can download it for free from Google Play Store or from other sources online. You can also visit the official website or the Facebook page of Paradyme Games, the developer of Solar Smash, to learn more about the game and its updates.
-We hope you have enjoyed reading this article and found it useful and informative. If you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you!
-Now, go ahead and unleash your inner villain and destroy some planets with Solar Smash! Have fun!
-Thank you for reading this article and I hope you have learned something new and useful about Solar Smash. If you liked this article, please share it with your friends and family who might also enjoy playing Solar Smash. You can also leave a comment below and let me know what you think about the game or the article. I would love to hear your feedback and suggestions.
-Until next time, happy smashing!
401be4b1e0" + "
\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "
{plaintext_to_html(str(key))}
-{plaintext_to_html(str(text))}
-{message}
Afrodreams.AI
', unsafe_allow_html=True) -st.subheader("This app takes in your image and styles it with a unique african art.") - -#Create two columns with different width -col1, col2 = st.columns( [0.8, 0.2]) -import time - - - -with col1: # To display the header text using css style - st.markdown(""" """, unsafe_allow_html=True) - st.markdown('Upload your photo here...
', unsafe_allow_html=True) - - - - - -#Add file uploader to allow users to upload photos -uploaded_file = st.file_uploader("", type=['jpg','png','jpeg']) - -# add slider to side bar -style_weight = st.slider("Select Style Weight", min_value=10, max_value=100, value=12) -img_size_slider= st.select_slider(label= 'Seleet Output Quality Level', - options = ['Very Low', 'Low', 'Normal', 'High', 'Very High'], - value='Normal') -img_size_mapping = {'Very Low':128, 'Low':300, 'Normal':400, 'High':500, 'Very High':600} - - -def get_random_subset(list_, num_imgs): - return random.sample(list_, num_imgs) - - -def display_random_images(five_rand_imgs, display_type, size= (15, 6)): - fig = plt.figure(figsize=size) - fig.subplots_adjust(wspace=0.2) - for i in range(1, len(five_rand_imgs)+1): - ith_image = Image.open(five_rand_imgs[i-1]) - - ax = fig.add_subplot(1, 5, i) - ax.imshow(ith_image) - ax.set_title(f'{display_type} {i}') - plt.axis('off') - - st.pyplot(fig) - - - -path = 'stylesv2' - - -#expander for style selection -with st.expander("Expand to select style type"): - img_names = [os.path.join(path, img) for img in os.listdir(path)] - five_rand_imgs0 = get_random_subset(img_names, 5) - if 'selected_image' not in st.session_state: - st.session_state.selected_image = five_rand_imgs0 - five_rand_imgs = st.session_state.selected_image - display_random_images(five_rand_imgs, 'Style') - chosen_style = st.selectbox( - 'Select the style you want to use', - options = five_rand_imgs, format_func = lambda x: "Style " + str(five_rand_imgs.index(x) + 1), - key= 'expander1' - ) - - - -#put notificaation -#with st.empty(): - #for seconds in range(5): - #st.info('Please note that by using this app, you agree that your image be will be showcased on this app.') - #time.sleep(1) - #st.empty() - -#Add 'before' and 'after' columns -if uploaded_file is not None: - image = Image.open(uploaded_file) - - col1, col2 = st.columns( [0.5, 0.5]) - with col1: - st.markdown('Before
',unsafe_allow_html=True) - st.image(image,width=300) - - with col2: - st.markdown('After
',unsafe_allow_html=True) - - # add a button - run = st.button('Generate Art') - my_bar = st.progress(0) - params = neural_style.TransferParams() - params.gpu = "c" #0 - params.backend = "mkl" - - - params.image_size = img_size_mapping[img_size_slider] - - params.content_image = uploaded_file - params.style_weight = style_weight - - - - keep_style = False - if run==True: - # run image selection if keep style is false - if keep_style==False: - - styles = os.listdir(path) - #params.style_image = path + '/' + random.choice(styles) - params.style_image = chosen_style - - st.session_state.submitted = True - with st.spinner('Wait for it...'): - neural_style.transfer(params) - - #display image when done. - with col2: - if 'submitted' in st.session_state: - result = Image.open('out.png') - st.image(result, width=300) - buf = BytesIO() - result.save(buf, format="png") - - img_file_name = f"generated_samples/{str(len(os.listdir('generated_samples')))}.png" - - _ = upload_file(path_or_fileobj = 'out.png', - path_in_repo = img_file_name, - repo_id='AfrodreamsAI/afrodreams', - repo_type='space', - token=HF_TOKEN - ) - - byte_im = buf.getvalue() - run = ste.download_button("Download Image", data=byte_im, file_name="afrodreams.png") - - - #if run==True: -# selectiuing random iamges to be displayed -img_names = [os.path.join('generated_samples', img) for img in os.listdir('generated_samples')] -five_rand_imgs1 = get_random_subset(img_names, 5) -st.subheader('\n\n\n\n\n\n\n\n\n Examples of some Generate Images') -display_random_images(five_rand_imgs1, 'Generate image', size=(20, 15)) - - - - - - diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/QuestMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/QuestMethods.js deleted file mode 100644 index 90feacdc52cdbceb93f5f2e75d738be3e7a2e3f2..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/QuestMethods.js +++ /dev/null @@ -1,18 +0,0 @@ -export default { - start(key) { - this.questionManager - .restartQuest() - .getNextQuestion(key); - return this; - }, - - next(key) { - this.questionManager - .getNextQuestion(key); - return this; - }, - - isLast() { - return this.questionManager.isLastQuestion(); - }, -}; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Factory.d.ts deleted file mode 100644 index fc386f2124c9223046bfe201e34db29368e80bcd..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Factory.d.ts +++ /dev/null @@ -1,19 +0,0 @@ -import Checkbox from './Checkbox'; - -export default function ( - x: number, y: number, - width: number, height: number, - color?: number, - config?: Checkbox.IConfig -): Checkbox; - -export default function ( - x: number, y: number, - width: number, height: number, - config?: Checkbox.IConfig -): Checkbox; - - -export default function ( - config?: Checkbox.IConfig -): Checkbox; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/skew/Skew.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/skew/Skew.js deleted file mode 100644 index eda2f23aa50a1d398ced45d51b61768e1ae8a7b0..0000000000000000000000000000000000000000 --- 
a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/skew/Skew.js +++ /dev/null @@ -1,2 +0,0 @@ -import { ContainerSkew } from '../../../plugins/quadimage.js'; -export default ContainerSkew; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenSpeechRecogntion/README.md b/spaces/Akmyradov/TurkmenSpeechRecogntion/README.md deleted file mode 100644 index 83b40678810357c51191df2978afa26e828f9418..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenSpeechRecogntion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TurkmenSpeechRecognition -emoji: ⚡ -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/run.py b/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/run.py deleted file mode 100644 index 6120213fe79c670212b2fc79e0ddb105fb178c45..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/run.py +++ /dev/null @@ -1,89 +0,0 @@ -import matplotlib -matplotlib.use('Agg') - -import os, sys -import yaml -from argparse import ArgumentParser -from time import gmtime, strftime -from shutil import copy -from frames_dataset import FramesDataset - -from modules.inpainting_network import InpaintingNetwork -from modules.keypoint_detector import KPDetector -from modules.bg_motion_predictor import BGMotionPredictor -from modules.dense_motion import DenseMotionNetwork -from modules.avd_network import AVDNetwork -import torch -from train import train -from train_avd import train_avd -from reconstruction import reconstruction -import os - - -if __name__ == "__main__": - - if sys.version_info[0] < 3: - raise Exception("You must use Python 3 or higher. 
Recommended version is Python 3.9") - - parser = ArgumentParser() - parser.add_argument("--config", default="config/vox-256.yaml", help="path to config") - parser.add_argument("--mode", default="train", choices=["train", "reconstruction", "train_avd"]) - parser.add_argument("--log_dir", default='log', help="path to log into") - parser.add_argument("--checkpoint", default=None, help="path to checkpoint to restore") - parser.add_argument("--device_ids", default="0,1", type=lambda x: list(map(int, x.split(','))), - help="Names of the devices comma separated.") - - opt = parser.parse_args() - with open(opt.config) as f: - config = yaml.load(f) - - if opt.checkpoint is not None: - log_dir = os.path.join(*os.path.split(opt.checkpoint)[:-1]) - else: - log_dir = os.path.join(opt.log_dir, os.path.basename(opt.config).split('.')[0]) - log_dir += ' ' + strftime("%d_%m_%y_%H.%M.%S", gmtime()) - - inpainting = InpaintingNetwork(**config['model_params']['generator_params'], - **config['model_params']['common_params']) - - if torch.cuda.is_available(): - cuda_device = torch.device('cuda:'+str(opt.device_ids[0])) - inpainting.to(cuda_device) - - kp_detector = KPDetector(**config['model_params']['common_params']) - dense_motion_network = DenseMotionNetwork(**config['model_params']['common_params'], - **config['model_params']['dense_motion_params']) - - if torch.cuda.is_available(): - kp_detector.to(opt.device_ids[0]) - dense_motion_network.to(opt.device_ids[0]) - - bg_predictor = None - if (config['model_params']['common_params']['bg']): - bg_predictor = BGMotionPredictor() - if torch.cuda.is_available(): - bg_predictor.to(opt.device_ids[0]) - - avd_network = None - if opt.mode == "train_avd": - avd_network = AVDNetwork(num_tps=config['model_params']['common_params']['num_tps'], - **config['model_params']['avd_network_params']) - if torch.cuda.is_available(): - avd_network.to(opt.device_ids[0]) - - dataset = FramesDataset(is_train=(opt.mode.startswith('train')), **config['dataset_params']) - - if not os.path.exists(log_dir): - os.makedirs(log_dir) - if not os.path.exists(os.path.join(log_dir, os.path.basename(opt.config))): - copy(opt.config, log_dir) - - if opt.mode == 'train': - print("Training...") - train(config, inpainting, kp_detector, bg_predictor, dense_motion_network, opt.checkpoint, log_dir, dataset) - elif opt.mode == 'train_avd': - print("Training Animation via Disentaglement...") - train_avd(config, inpainting, kp_detector, bg_predictor, dense_motion_network, avd_network, opt.checkpoint, log_dir, dataset) - elif opt.mode == 'reconstruction': - print("Reconstruction...") - reconstruction(config, inpainting, kp_detector, bg_predictor, dense_motion_network, opt.checkpoint, log_dir, dataset) diff --git a/spaces/Aloento/9Nine-PITS/data_utils.py b/spaces/Aloento/9Nine-PITS/data_utils.py deleted file mode 100644 index 99d641733f7b89a6f1e4dbfb5cd982e881c4f9ad..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/data_utils.py +++ /dev/null @@ -1,358 +0,0 @@ -# modified from https://github.com/jaywalnut310/vits -import os -import random - -import torch -import torch.utils.data - -import commons -from analysis import Pitch -from mel_processing import spectrogram_torch -from text import cleaned_text_to_sequence -from utils import load_wav_to_torch, load_filepaths_and_text - -""" Modified from Multi speaker version of VITS""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of 
integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams, pt_run=False): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - - self.add_blank = hparams.add_blank - self.min_text_len = 1 - self.max_text_len = 190 - - self.speaker_dict = { - speaker: idx - for idx, speaker in enumerate(hparams.speakers) - } - self.data_path = hparams.data_path - - self.pitch = Pitch(sr=hparams.sampling_rate, - W=hparams.tau_max, - tau_max=hparams.tau_max, - midi_start=hparams.midi_start, - midi_end=hparams.midi_end, - octave_range=hparams.octave_range) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - if pt_run: - for _audiopaths_sid_text in self.audiopaths_sid_text: - _ = self.get_audio_text_speaker_pair(_audiopaths_sid_text, - True) - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, spk, text, lang in self.audiopaths_sid_text: - if self.min_text_len <= len(text) and len( - text) <= self.max_text_len: - audiopath = os.path.join(self.data_path, audiopath) - if not os.path.exists(audiopath): - print(audiopath, "not exist!") - continue - try: - audio, sampling_rate = load_wav_to_torch(audiopath) - except: - print(audiopath, "load error!") - continue - audiopaths_sid_text_new.append([audiopath, spk, text, lang]) - lengths.append( - os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text, pt_run=False): - # separate filename, speaker_id and text - audiopath, spk, text, lang = audiopath_sid_text - text, lang = self.get_text(text, lang) - spec, ying, wav = self.get_audio(audiopath, pt_run) - sid = self.get_sid(self.speaker_dict[spk]) - return (text, spec, ying, wav, sid, lang) - - def get_audio(self, filename, pt_run=False): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - ying_filename = filename.replace(".wav", ".ying.pt") - if os.path.exists(spec_filename) and not pt_run: - spec = torch.load(spec_filename, map_location='cpu') - else: - spec = spectrogram_torch(audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - if os.path.exists(ying_filename) and not pt_run: - ying = torch.load(ying_filename, map_location='cpu') - else: - wav = torch.nn.functional.pad( - audio_norm.unsqueeze(0), - (self.filter_length - self.hop_length, - self.filter_length - self.hop_length + - (-audio_norm.shape[1]) % self.hop_length + self.hop_length * (audio_norm.shape[1] % self.hop_length == 0)), - mode='constant').squeeze(0) - ying = self.pitch.yingram(wav)[0] - torch.save(ying, ying_filename) - return spec, ying, audio_norm - - def get_text(self, text, lang): - text_norm = cleaned_text_to_sequence(text) - lang = [int(i) 
for i in lang.split(" ")] - if self.add_blank: - text_norm, lang = commons.intersperse_with_language_id(text_norm, lang, 0) - text_norm = torch.LongTensor(text_norm) - lang = torch.LongTensor(lang) - return text_norm, lang - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair( - self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort(torch.LongTensor( - [x[1].size(1) for x in batch]), - dim=0, - descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_ying_len = max([x[2].size(1) for x in batch]) - max_wav_len = max([x[3].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - ying_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), - max_spec_len) - ying_padded = torch.FloatTensor(len(batch), batch[0][2].size(0), - max_ying_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - spec_padded.zero_() - ying_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - ying = row[2] - ying_padded[i, :, :ying.size(1)] = ying - ying_lengths[i] = ying.size(1) - - wav = row[3] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - tone = row[5] - tone_padded[i, :text.size(0)] = tone - - sid[i] = row[4] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, ying_padded, ying_lengths, wav_padded, wav_lengths, sid, tone_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler - ): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
- """ - - def __init__(self, - dataset, - batch_size, - boundaries, - num_replicas=None, - rank=None, - shuffle=True): - super().__init__(dataset, - num_replicas=num_replicas, - rank=rank, - shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, -1, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append( - torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * \ - (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [ - bucket[idx] - for idx in ids_bucket[j * self.batch_size:(j + 1) * - self.batch_size] - ] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size - - -def create_spec(audiopaths_sid_text, hparams): - audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - for audiopath, _, _, _ in audiopaths_sid_text: - audiopath = os.path.join(hparams.data_path, audiopath) - if not os.path.exists(audiopath): - print(audiopath, "not exist!") - continue - try: - audio, sampling_rate = load_wav_to_torch(audiopath) - except: - print(audiopath, "load error!") - continue - if sampling_rate != hparams.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, hparams.sampling_rate)) - audio_norm = audio.unsqueeze(0) - specpath = audiopath.replace(".wav", ".spec.pt") - - if not os.path.exists(specpath): - spec = spectrogram_torch(audio_norm, - 
hparams.filter_length, - hparams.sampling_rate, - hparams.hop_length, - hparams.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, specpath) diff --git a/spaces/Amrrs/hubble-jwst-compare/README.md b/spaces/Amrrs/hubble-jwst-compare/README.md deleted file mode 100644 index 949fe8a98af180de2e9cb83db905e1514903da7c..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/hubble-jwst-compare/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hubble Jwst Compare -emoji: 😻 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/conceptual/philosophy.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/conceptual/philosophy.md deleted file mode 100644 index 733f741b2b873435440177381ea964c1367a0603..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/conceptual/philosophy.md +++ /dev/null @@ -1,110 +0,0 @@ - - -# Philosophy - -🧨 Diffusers provides **state-of-the-art** pretrained diffusion models across multiple modalities. -Its purpose is to serve as a **modular toolbox** for both inference and training. - -We aim at building a library that stands the test of time and therefore take API design very seriously. - -In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on [PyTorch's Design Principles](https://pytorch.org/docs/stable/community/design.html#pytorch-design-philosophy). Let's go over the most important ones: - -## Usability over Performance - -- While Diffusers has many built-in performance-enhancing features (see [Memory and Speed](https://huggingface.co/docs/diffusers/optimization/fp16)), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. -- Diffusers aim at being a **light-weight** package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as `accelerate`, `safetensors`, `onnx`, etc...). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. -- Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. - -## Simple over easy - -As PyTorch states, **explicit is better than implicit** and **simple is better than complex**. This design philosophy is reflected in multiple parts of the library: -- We follow PyTorch's API with methods like [`DiffusionPipeline.to`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.to) to let the user handle device management. -- Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. -- Complex model vs. scheduler logic is exposed instead of magically handled inside. 
Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. -- Separately trained components of the diffusion pipeline, *e.g.* the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. Dreambooth or textual inversion training -is very simple thanks to diffusers' ability to separate single components of the diffusion pipeline. - -## Tweakable, contributor-friendly over abstraction - -For large parts of the library, Diffusers adopts an important design principle of the [Transformers library](https://github.com/huggingface/transformers), which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as [Don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself). -In short, just like Transformers does for modeling files, diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. -Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. -**However**, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: -- Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. -- Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. -- Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. - -At Hugging Face, we call this design the **single-file policy** which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look -at [this blog post](https://huggingface.co/blog/transformers-design-philosophy). - -In diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. 
The reason we don't follow this design fully for diffusion models is because almost all diffusion pipelines, such -as [DDPM](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/ddpm), [Stable Diffusion](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/stable_diffusion/overview#stable-diffusion-pipelines), [UnCLIP (Dalle-2)](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/unclip#overview) and [Imagen](https://imagen.research.google/) all rely on the same diffusion model, the [UNet](https://huggingface.co/docs/diffusers/api/models#diffusers.UNet2DConditionModel). - -Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. -We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it [directly on GitHub](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=). - -## Design Philosophy in Details - -Now, let's look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consist of three major classes, [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines), [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models), and [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers). -Let's walk through more in-detail design decisions for each class. - -### Pipelines - -Pipelines are designed to be easy to use (therefore do not follow [*Simple over easy*](#simple-over-easy) 100%), are not feature complete, and should loosely be seen as examples of how to use [models](#models) and [schedulers](#schedulers) for inference. - -The following design principles are followed: -- Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [#Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251). -- Pipelines all inherit from [`DiffusionPipeline`]. -- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function. -- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function. -- Pipelines should be used **only** for inference. -- Pipelines should be very readable, self-explanatory, and easy to tweak. -- Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. 
-- Pipelines are **not** intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at [InvokeAI](https://github.com/invoke-ai/InvokeAI), [Diffuzers](https://github.com/abhishekkrthakur/diffuzers), and [lama-cleaner](https://github.com/Sanster/lama-cleaner). -- Every pipeline should have one and only one way to run it via a `__call__` method. The naming of the `__call__` arguments should be shared across all pipelines. -- Pipelines should be named after the task they are intended to solve. -- In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. - -### Models - -Models are designed as configurable toolboxes that are natural extensions of [PyTorch's Module class](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). They only partly follow the **single-file policy**. - -The following design principles are followed: -- Models correspond to **a type of model architecture**. *E.g.* the [`UNet2DConditionModel`] class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. -- All models can be found in [`src/diffusers/models`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and every model architecture shall be defined in its file, e.g. [`unet_2d_condition.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_condition.py), [`transformer_2d.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformer_2d.py), etc... -- Models **do not** follow the single-file policy and should make use of smaller model building blocks, such as [`attention.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py), [`resnet.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py), [`embeddings.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py), etc... **Note**: This is in stark contrast to Transformers' modeling files and shows that models do not really follow the single-file policy. -- Models intend to expose complexity, just like PyTorch's module does, and give clear error messages. -- Models all inherit from `ModelMixin` and `ConfigMixin`. -- Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. -- Models should by default have the highest precision and lowest performance setting. -- To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. -- Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and "foreseeing" future changes, *e.g.* it is usually better to add `string` "...type" arguments that can easily be extended to new future types instead of boolean `is_..._type` arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. -- The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. 
For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and -readable longterm, such as [UNet blocks](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py) and [Attention processors](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - -### Schedulers - -Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the **single-file policy**. - -The following design principles are followed: -- All schedulers are found in [`src/diffusers/schedulers`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers). -- Schedulers are **not** allowed to import from large utils files and shall be kept very self-contained. -- One scheduler python file corresponds to one scheduler algorithm (as might be defined in a paper). -- If schedulers share similar functionalities, we can make use of the `#Copied from` mechanism. -- Schedulers all inherit from `SchedulerMixin` and `ConfigMixin`. -- Schedulers can be easily swapped out with the [`ConfigMixin.from_config`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method as explained in detail [here](./using-diffusers/schedulers.md). -- Every scheduler has to have a `set_num_inference_steps`, and a `step` function. `set_num_inference_steps(...)` has to be called before every denoising process, *i.e.* before `step(...)` is called. -- Every scheduler exposes the timesteps to be "looped over" via a `timesteps` attribute, which is an array of timesteps the model will be called upon. -- The `step(...)` function takes a predicted model output and the "current" sample (x_t) and returns the "previous", slightly more denoised sample (x_t-1). -- Given the complexity of diffusion schedulers, the `step` function does not expose all the complexity and can be a bit of a "black box". -- In almost all cases, novel schedulers shall be implemented in a new scheduling file. 
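To make the scheduler contract above concrete, here is a minimal, self-contained denoising loop. It is a sketch rather than code taken from this documentation: the model size, step count, and random weights are illustrative assumptions, and the setup call is named `set_timesteps` in the released scheduler API.

```python
# Minimal sketch of the scheduler interface described above; the model size,
# number of inference steps, and random weights are illustrative assumptions.
import torch
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)
# Schedulers sharing this interface can be swapped via ConfigMixin, e.g.
# scheduler = DPMSolverMultistepScheduler.from_config(scheduler.config)

scheduler.set_timesteps(50)          # configure the inference schedule before stepping
sample = torch.randn(1, 3, 32, 32)   # start from pure noise

for t in scheduler.timesteps:        # the exposed array of timesteps to loop over
    with torch.no_grad():
        noise_pred = model(sample, t).sample                    # predicted model output
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # x_t -> x_(t-1)
```

Because every scheduler exposes the same `set_timesteps` / `timesteps` / `step` surface, the loop above stays unchanged when a different scheduler is swapped in.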
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_pndm.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_pndm.py deleted file mode 100644 index c1519f7c7e8e113aca61c8749c3a08f6f390309f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_pndm.py +++ /dev/null @@ -1,242 +0,0 @@ -import tempfile - -import torch - -from diffusers import PNDMScheduler - -from .test_schedulers import SchedulerCommonTest - - -class PNDMSchedulerTest(SchedulerCommonTest): - scheduler_classes = (PNDMScheduler,) - forward_default_kwargs = (("num_inference_steps", 50),) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05] - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(num_inference_steps) - # copy over dummy past residuals - scheduler.ets = dummy_past_residuals[:] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - new_scheduler.set_timesteps(num_inference_steps) - # copy over dummy past residuals - new_scheduler.ets = dummy_past_residuals[:] - - output = scheduler.step_prk(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step_prk(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output = scheduler.step_plms(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step_plms(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_from_save_pretrained(self): - pass - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05] - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(num_inference_steps) - - # copy over dummy past residuals (must be after setting timesteps) - scheduler.ets = dummy_past_residuals[:] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - # copy over dummy past residuals - new_scheduler.set_timesteps(num_inference_steps) - - # copy over dummy past residual (must be after setting timesteps) - new_scheduler.ets = dummy_past_residuals[:] - - output = scheduler.step_prk(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step_prk(residual, time_step, sample, 
**kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output = scheduler.step_plms(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step_plms(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def full_loop(self, **config): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - - num_inference_steps = 10 - model = self.dummy_model() - sample = self.dummy_sample_deter - scheduler.set_timesteps(num_inference_steps) - - for i, t in enumerate(scheduler.prk_timesteps): - residual = model(sample, t) - sample = scheduler.step_prk(residual, t, sample).prev_sample - - for i, t in enumerate(scheduler.plms_timesteps): - residual = model(sample, t) - sample = scheduler.step_plms(residual, t, sample).prev_sample - - return sample - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - sample = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - scheduler.set_timesteps(num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - # copy over dummy past residuals (must be done after set_timesteps) - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05] - scheduler.ets = dummy_past_residuals[:] - - output_0 = scheduler.step_prk(residual, 0, sample, **kwargs).prev_sample - output_1 = scheduler.step_prk(residual, 1, sample, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - output_0 = scheduler.step_plms(residual, 0, sample, **kwargs).prev_sample - output_1 = scheduler.step_plms(residual, 1, sample, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - def test_timesteps(self): - for timesteps in [100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_steps_offset(self): - for steps_offset in [0, 1]: - self.check_over_configs(steps_offset=steps_offset) - - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(steps_offset=1) - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(10) - assert torch.equal( - scheduler.timesteps, - torch.LongTensor( - [901, 851, 851, 801, 801, 751, 751, 701, 701, 651, 651, 601, 601, 501, 401, 301, 201, 101, 1] - ), - ) - - def test_betas(self): - for beta_start, beta_end in zip([0.0001, 0.001], [0.002, 0.02]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "squaredcos_cap_v2"]: - self.check_over_configs(beta_schedule=schedule) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_time_indices(self): - for t in [1, 5, 10]: - self.check_over_forward(time_step=t) - - def test_inference_steps(self): - for t, 
num_inference_steps in zip([1, 5, 10], [10, 50, 100]): - self.check_over_forward(num_inference_steps=num_inference_steps) - - def test_pow_of_3_inference_steps(self): - # earlier version of set_timesteps() caused an error indexing alpha's with inference steps as power of 3 - num_inference_steps = 27 - - for scheduler_class in self.scheduler_classes: - sample = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(num_inference_steps) - - # before power of 3 fix, would error on first step, so we only need to do two - for i, t in enumerate(scheduler.prk_timesteps[:2]): - sample = scheduler.step_prk(residual, t, sample).prev_sample - - def test_inference_plms_no_past_residuals(self): - with self.assertRaises(ValueError): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.step_plms(self.dummy_sample, 1, self.dummy_sample).prev_sample - - def test_full_loop_no_noise(self): - sample = self.full_loop() - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 198.1318) < 1e-2 - assert abs(result_mean.item() - 0.2580) < 1e-3 - - def test_full_loop_with_v_prediction(self): - sample = self.full_loop(prediction_type="v_prediction") - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 67.3986) < 1e-2 - assert abs(result_mean.item() - 0.0878) < 1e-3 - - def test_full_loop_with_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=True, beta_start=0.01) - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 230.0399) < 1e-2 - assert abs(result_mean.item() - 0.2995) < 1e-3 - - def test_full_loop_with_no_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=False, beta_start=0.01) - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 186.9482) < 1e-2 - assert abs(result_mean.item() - 0.2434) < 1e-3 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index cc40f26020731817dd3c3ff702427280760e67d1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './retinanet_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/mean_ap.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/mean_ap.py deleted file mode 100644 index 1d653a35497f6a0135c4374a09eb7c11399e3244..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/mean_ap.py +++ /dev/null @@ -1,469 +0,0 @@ -from multiprocessing 
import Pool - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps -from .class_names import get_classes - - -def average_precision(recalls, precisions, mode='area'): - """Calculate average precision (for single or multiple scales). - - Args: - recalls (ndarray): shape (num_scales, num_dets) or (num_dets, ) - precisions (ndarray): shape (num_scales, num_dets) or (num_dets, ) - mode (str): 'area' or '11points', 'area' means calculating the area - under precision-recall curve, '11points' means calculating - the average precision of recalls at [0, 0.1, ..., 1] - - Returns: - float or ndarray: calculated average precision - """ - no_scale = False - if recalls.ndim == 1: - no_scale = True - recalls = recalls[np.newaxis, :] - precisions = precisions[np.newaxis, :] - assert recalls.shape == precisions.shape and recalls.ndim == 2 - num_scales = recalls.shape[0] - ap = np.zeros(num_scales, dtype=np.float32) - if mode == 'area': - zeros = np.zeros((num_scales, 1), dtype=recalls.dtype) - ones = np.ones((num_scales, 1), dtype=recalls.dtype) - mrec = np.hstack((zeros, recalls, ones)) - mpre = np.hstack((zeros, precisions, zeros)) - for i in range(mpre.shape[1] - 1, 0, -1): - mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i]) - for i in range(num_scales): - ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0] - ap[i] = np.sum( - (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1]) - elif mode == '11points': - for i in range(num_scales): - for thr in np.arange(0, 1 + 1e-3, 0.1): - precs = precisions[i, recalls[i, :] >= thr] - prec = precs.max() if precs.size > 0 else 0 - ap[i] += prec - ap /= 11 - else: - raise ValueError( - 'Unrecognized mode, only "area" and "11points" are supported') - if no_scale: - ap = ap[0] - return ap - - -def tpfp_imagenet(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - default_iou_thr=0.5, - area_ranges=None): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - default_iou_thr (float): IoU threshold to be considered as matched for - medium and large bboxes (small ones have special rules). - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. Default: None. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp - # of a certain scale. - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] 
= 1 - else: - det_areas = (det_bboxes[:, 2] - det_bboxes[:, 0]) * ( - det_bboxes[:, 3] - det_bboxes[:, 1]) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - ious = bbox_overlaps(det_bboxes, gt_bboxes - 1) - gt_w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - gt_h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - iou_thrs = np.minimum((gt_w * gt_h) / ((gt_w + 10.0) * (gt_h + 10.0)), - default_iou_thr) - # sort all detections by scores in descending order - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = gt_w * gt_h - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - max_iou = -1 - matched_gt = -1 - # find best overlapped available gt - for j in range(num_gts): - # different from PASCAL VOC: allow finding other gts if the - # best overlapped ones are already matched by other det bboxes - if gt_covered[j]: - continue - elif ious[i, j] >= iou_thrs[j] and ious[i, j] > max_iou: - max_iou = ious[i, j] - matched_gt = j - # there are 4 cases for a det bbox: - # 1. it matches a gt, tp = 1, fp = 0 - # 2. it matches an ignored gt, tp = 0, fp = 0 - # 3. it matches no gt and within area range, tp = 0, fp = 1 - # 4. it matches no gt but is beyond area range, tp = 0, fp = 0 - if matched_gt >= 0: - gt_covered[matched_gt] = 1 - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - tp[k, i] = 1 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1]) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_default(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. Default: None. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] 
= 1 - else: - det_areas = (det_bboxes[:, 2] - det_bboxes[:, 0]) * ( - det_bboxes[:, 3] - det_bboxes[:, 1]) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - - ious = bbox_overlaps(det_bboxes, gt_bboxes) - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1]) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def get_cls_results(det_results, annotations, class_id): - """Get det results and gt information of a certain class. - - Args: - det_results (list[list]): Same as `eval_map()`. - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - tuple[list[np.ndarray]]: detected bboxes, gt bboxes, ignored gt bboxes - """ - cls_dets = [img_res[class_id] for img_res in det_results] - cls_gts = [] - cls_gts_ignore = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - cls_gts.append(ann['bboxes'][gt_inds, :]) - - if ann.get('labels_ignore', None) is not None: - ignore_inds = ann['labels_ignore'] == class_id - cls_gts_ignore.append(ann['bboxes_ignore'][ignore_inds, :]) - else: - cls_gts_ignore.append(np.empty((0, 4), dtype=np.float32)) - - return cls_dets, cls_gts, cls_gts_ignore - - -def eval_map(det_results, - annotations, - scale_ranges=None, - iou_thr=0.5, - dataset=None, - logger=None, - tpfp_fn=None, - nproc=4): - """Evaluate mAP of a dataset. - - Args: - det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. - The outer list indicates images, and the inner list indicates - per-class detected bboxes. - annotations (list[dict]): Ground truth annotations where each item of - the list indicates an image. Keys of annotations are: - - - `bboxes`: numpy array of shape (n, 4) - - `labels`: numpy array of shape (n, ) - - `bboxes_ignore` (optional): numpy array of shape (k, 4) - - `labels_ignore` (optional): numpy array of shape (k, ) - scale_ranges (list[tuple] | None): Range of scales to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. A range of - (32, 64) means the area range between (32**2, 64**2). - Default: None. - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - dataset (list[str] | str | None): Dataset name or dataset classes, - there are minor differences in metrics for different datsets, e.g. - "voc07", "imagenet_det", etc. Default: None. - logger (logging.Logger | str | None): The way to print the mAP - summary. 
See `mmcv.utils.print_log()` for details. Default: None. - tpfp_fn (callable | None): The function used to determine true/ - false positives. If None, :func:`tpfp_default` is used as default - unless dataset is 'det' or 'vid' (:func:`tpfp_imagenet` in this - case). If it is given as a function, then this function is used - to evaluate tp & fp. Default None. - nproc (int): Processes used for computing TP and FP. - Default: 4. - - Returns: - tuple: (mAP, [dict, dict, ...]) - """ - assert len(det_results) == len(annotations) - - num_imgs = len(det_results) - num_scales = len(scale_ranges) if scale_ranges is not None else 1 - num_classes = len(det_results[0]) # positive class num - area_ranges = ([(rg[0]**2, rg[1]**2) for rg in scale_ranges] - if scale_ranges is not None else None) - - pool = Pool(nproc) - eval_results = [] - for i in range(num_classes): - # get gt and det bboxes of this class - cls_dets, cls_gts, cls_gts_ignore = get_cls_results( - det_results, annotations, i) - # choose proper function according to datasets to compute tp and fp - if tpfp_fn is None: - if dataset in ['det', 'vid']: - tpfp_fn = tpfp_imagenet - else: - tpfp_fn = tpfp_default - if not callable(tpfp_fn): - raise ValueError( - f'tpfp_fn has to be a function or None, but got {tpfp_fn}') - - # compute tp and fp for each image with multiple processes - tpfp = pool.starmap( - tpfp_fn, - zip(cls_dets, cls_gts, cls_gts_ignore, - [iou_thr for _ in range(num_imgs)], - [area_ranges for _ in range(num_imgs)])) - tp, fp = tuple(zip(*tpfp)) - # calculate gt number of each scale - # ignored gts or gts beyond the specific scale are not counted - num_gts = np.zeros(num_scales, dtype=int) - for j, bbox in enumerate(cls_gts): - if area_ranges is None: - num_gts[0] += bbox.shape[0] - else: - gt_areas = (bbox[:, 2] - bbox[:, 0]) * ( - bbox[:, 3] - bbox[:, 1]) - for k, (min_area, max_area) in enumerate(area_ranges): - num_gts[k] += np.sum((gt_areas >= min_area) - & (gt_areas < max_area)) - # sort all det bboxes by score, also sort tp and fp - cls_dets = np.vstack(cls_dets) - num_dets = cls_dets.shape[0] - sort_inds = np.argsort(-cls_dets[:, -1]) - tp = np.hstack(tp)[:, sort_inds] - fp = np.hstack(fp)[:, sort_inds] - # calculate recall and precision with tp and fp - tp = np.cumsum(tp, axis=1) - fp = np.cumsum(fp, axis=1) - eps = np.finfo(np.float32).eps - recalls = tp / np.maximum(num_gts[:, np.newaxis], eps) - precisions = tp / np.maximum((tp + fp), eps) - # calculate AP - if scale_ranges is None: - recalls = recalls[0, :] - precisions = precisions[0, :] - num_gts = num_gts.item() - mode = 'area' if dataset != 'voc07' else '11points' - ap = average_precision(recalls, precisions, mode) - eval_results.append({ - 'num_gts': num_gts, - 'num_dets': num_dets, - 'recall': recalls, - 'precision': precisions, - 'ap': ap - }) - pool.close() - if scale_ranges is not None: - # shape (num_classes, num_scales) - all_ap = np.vstack([cls_result['ap'] for cls_result in eval_results]) - all_num_gts = np.vstack( - [cls_result['num_gts'] for cls_result in eval_results]) - mean_ap = [] - for i in range(num_scales): - if np.any(all_num_gts[:, i] > 0): - mean_ap.append(all_ap[all_num_gts[:, i] > 0, i].mean()) - else: - mean_ap.append(0.0) - else: - aps = [] - for cls_result in eval_results: - if cls_result['num_gts'] > 0: - aps.append(cls_result['ap']) - mean_ap = np.array(aps).mean().item() if aps else 0.0 - - print_map_summary( - mean_ap, eval_results, dataset, area_ranges, logger=logger) - - return mean_ap, eval_results - - -def 
print_map_summary(mean_ap, - results, - dataset=None, - scale_ranges=None, - logger=None): - """Print mAP and results of each class. - - A table will be printed to show the gts/dets/recall/AP of each class and - the mAP. - - Args: - mean_ap (float): Calculated from `eval_map()`. - results (list[dict]): Calculated from `eval_map()`. - dataset (list[str] | str | None): Dataset name or dataset classes. - scale_ranges (list[tuple] | None): Range of scales to be evaluated. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - - if logger == 'silent': - return - - if isinstance(results[0]['ap'], np.ndarray): - num_scales = len(results[0]['ap']) - else: - num_scales = 1 - - if scale_ranges is not None: - assert len(scale_ranges) == num_scales - - num_classes = len(results) - - recalls = np.zeros((num_scales, num_classes), dtype=np.float32) - aps = np.zeros((num_scales, num_classes), dtype=np.float32) - num_gts = np.zeros((num_scales, num_classes), dtype=int) - for i, cls_result in enumerate(results): - if cls_result['recall'].size > 0: - recalls[:, i] = np.array(cls_result['recall'], ndmin=2)[:, -1] - aps[:, i] = cls_result['ap'] - num_gts[:, i] = cls_result['num_gts'] - - if dataset is None: - label_names = [str(i) for i in range(num_classes)] - elif mmcv.is_str(dataset): - label_names = get_classes(dataset) - else: - label_names = dataset - - if not isinstance(mean_ap, list): - mean_ap = [mean_ap] - - header = ['class', 'gts', 'dets', 'recall', 'ap'] - for i in range(num_scales): - if scale_ranges is not None: - print_log(f'Scale range {scale_ranges[i]}', logger=logger) - table_data = [header] - for j in range(num_classes): - row_data = [ - label_names[j], num_gts[i, j], results[j]['num_dets'], - f'{recalls[i, j]:.3f}', f'{aps[i, j]:.3f}' - ] - table_data.append(row_data) - table_data.append(['mAP', '', '', '', f'{mean_ap[i]:.3f}']) - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/_distutils_hack/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/_distutils_hack/__init__.py deleted file mode 100644 index f987a5367fdfaa4f17cd4bf700d56f4b50992368..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/_distutils_hack/__init__.py +++ /dev/null @@ -1,222 +0,0 @@ -# don't import any costly modules -import sys -import os - - -is_pypy = '__pypy__' in sys.builtin_module_names - - -def warn_distutils_present(): - if 'distutils' not in sys.modules: - return - if is_pypy and sys.version_info < (3, 7): - # PyPy for 3.6 unconditionally imports distutils, so bypass the warning - # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250 - return - import warnings - - warnings.warn( - "Distutils was imported before Setuptools, but importing Setuptools " - "also replaces the `distutils` module in `sys.modules`. This may lead " - "to undesirable behaviors or errors. To avoid these issues, avoid " - "using distutils directly, ensure that setuptools is installed in the " - "traditional way (e.g. not an editable install), and/or make sure " - "that setuptools is always imported before distutils." 
- ) - - -def clear_distutils(): - if 'distutils' not in sys.modules: - return - import warnings - - warnings.warn("Setuptools is replacing distutils.") - mods = [ - name - for name in sys.modules - if name == "distutils" or name.startswith("distutils.") - ] - for name in mods: - del sys.modules[name] - - -def enabled(): - """ - Allow selection of distutils by environment variable. - """ - which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'local') - return which == 'local' - - -def ensure_local_distutils(): - import importlib - - clear_distutils() - - # With the DistutilsMetaFinder in place, - # perform an import to cause distutils to be - # loaded from setuptools._distutils. Ref #2906. - with shim(): - importlib.import_module('distutils') - - # check that submodules load as expected - core = importlib.import_module('distutils.core') - assert '_distutils' in core.__file__, core.__file__ - assert 'setuptools._distutils.log' not in sys.modules - - -def do_override(): - """ - Ensure that the local copy of distutils is preferred over stdlib. - - See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401 - for more motivation. - """ - if enabled(): - warn_distutils_present() - ensure_local_distutils() - - -class _TrivialRe: - def __init__(self, *patterns): - self._patterns = patterns - - def match(self, string): - return all(pat in string for pat in self._patterns) - - -class DistutilsMetaFinder: - def find_spec(self, fullname, path, target=None): - # optimization: only consider top level modules and those - # found in the CPython test suite. - if path is not None and not fullname.startswith('test.'): - return - - method_name = 'spec_for_{fullname}'.format(**locals()) - method = getattr(self, method_name, lambda: None) - return method() - - def spec_for_distutils(self): - if self.is_cpython(): - return - - import importlib - import importlib.abc - import importlib.util - - try: - mod = importlib.import_module('setuptools._distutils') - except Exception: - # There are a couple of cases where setuptools._distutils - # may not be present: - # - An older Setuptools without a local distutils is - # taking precedence. Ref #2957. - # - Path manipulation during sitecustomize removes - # setuptools from the path but only after the hook - # has been loaded. Ref #2980. - # In either case, fall back to stdlib behavior. - return - - class DistutilsLoader(importlib.abc.Loader): - def create_module(self, spec): - mod.__name__ = 'distutils' - return mod - - def exec_module(self, module): - pass - - return importlib.util.spec_from_loader( - 'distutils', DistutilsLoader(), origin=mod.__file__ - ) - - @staticmethod - def is_cpython(): - """ - Suppress supplying distutils for CPython (build and tests). - Ref #2965 and #3007. - """ - return os.path.isfile('pybuilddir.txt') - - def spec_for_pip(self): - """ - Ensure stdlib distutils when running under pip. - See pypa/pip#8761 for rationale. - """ - if self.pip_imported_during_build(): - return - clear_distutils() - self.spec_for_distutils = lambda: None - - @classmethod - def pip_imported_during_build(cls): - """ - Detect if pip is being imported in a build script. Ref #2355. - """ - import traceback - - return any( - cls.frame_file_is_setup(frame) for frame, line in traceback.walk_stack(None) - ) - - @staticmethod - def frame_file_is_setup(frame): - """ - Return True if the indicated frame suggests a setup.py file. 
- """ - # some frames may not have __file__ (#2940) - return frame.f_globals.get('__file__', '').endswith('setup.py') - - def spec_for_sensitive_tests(self): - """ - Ensure stdlib distutils when running select tests under CPython. - - python/cpython#91169 - """ - clear_distutils() - self.spec_for_distutils = lambda: None - - sensitive_tests = ( - [ - 'test.test_distutils', - 'test.test_peg_generator', - 'test.test_importlib', - ] - if sys.version_info < (3, 10) - else [ - 'test.test_distutils', - ] - ) - - -for name in DistutilsMetaFinder.sensitive_tests: - setattr( - DistutilsMetaFinder, - f'spec_for_{name}', - DistutilsMetaFinder.spec_for_sensitive_tests, - ) - - -DISTUTILS_FINDER = DistutilsMetaFinder() - - -def add_shim(): - DISTUTILS_FINDER in sys.meta_path or insert_shim() - - -class shim: - def __enter__(self): - insert_shim() - - def __exit__(self, exc, value, tb): - remove_shim() - - -def insert_shim(): - sys.meta_path.insert(0, DISTUTILS_FINDER) - - -def remove_shim(): - try: - sys.meta_path.remove(DISTUTILS_FINDER) - except ValueError: - pass diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/socks.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/socks.py deleted file mode 100644 index c326e80dd117458ff6e71741ca57359629b05ae4..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/socks.py +++ /dev/null @@ -1,216 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module contains provisional support for SOCKS proxies from within -urllib3. This module supports SOCKS4, SOCKS4A (an extension of SOCKS4), and -SOCKS5. To enable its functionality, either install PySocks or install this -module with the ``socks`` extra. - -The SOCKS implementation supports the full range of urllib3 features. It also -supports the following SOCKS features: - -- SOCKS4A (``proxy_url='socks4a://...``) -- SOCKS4 (``proxy_url='socks4://...``) -- SOCKS5 with remote DNS (``proxy_url='socks5h://...``) -- SOCKS5 with local DNS (``proxy_url='socks5://...``) -- Usernames and passwords for the SOCKS proxy - -.. note:: - It is recommended to use ``socks5h://`` or ``socks4a://`` schemes in - your ``proxy_url`` to ensure that DNS resolution is done from the remote - server instead of client-side when connecting to a domain name. - -SOCKS4 supports IPv4 and domain names with the SOCKS4A extension. SOCKS5 -supports IPv4, IPv6, and domain names. - -When connecting to a SOCKS4 proxy the ``username`` portion of the ``proxy_url`` -will be sent as the ``userid`` section of the SOCKS request: - -.. code-block:: python - - proxy_url="socks4a://Hay Day es uno de los juegos de simulación de agricultura más populares en dispositivos Android e iOS. En este juego, puedes crear tu propia granja, cultivar, criar animales, comerciar con otros jugadores y más. Sin embargo, para disfrutar de todas las características y beneficios del juego, necesitas monedas y diamantes, que son las principales monedas en Hay Day. Las monedas se utilizan para comprar objetos, mejorar edificios, ampliar tu terreno y más. Los diamantes se utilizan para acelerar los procesos, desbloquear objetos especiales y mucho más. Sin embargo, ganar monedas y diamantes en el juego puede ser lento y desafiante, especialmente si quieres progresar más rápido y divertirte más. 
That is why many players are looking for a way to get unlimited coins and diamonds in Hay Day without spending real money.
-If you are one of them, then you are in luck. In this article, we will show you how to download and install Hay Day Hack APK, a modified version of the original game that gives you unlimited coins and diamonds for free. We will also show you how to use it, what features and benefits it offers, and some tips and tricks for playing Hay Day with Hay Day Hack APK. So, without further ado, let's get started.
-DOWNLOAD ->->->-> https://bltlly.com/2v6JfM
-Downloading and installing Hay Day Hack APK is very easy and simple. You just have to follow these steps:
-Using Hay Day Hack APK is just as easy and simple. You just have to follow these steps:
-Hay Day Hack APK is not just a simple mod that gives you unlimited coins and diamonds. It also offers many other features and benefits that make it one of the best hacks for Hay Day. Here are some of them:
-Now that you have unlimited coins and diamonds in Hay Day, you may be wondering how to get the most out of them. Here are some tips and tricks for playing Hay Day with Hay Day Hack APK:
-In conclusion, Hay Day Hack APK is a great way to enjoy Hay Day with unlimited coins and diamonds. You can download and install it easily and safely, and use it to buy items, upgrade buildings, expand your land, speed up processes, unlock special items, and more. You can also use it to earn more money by selling your products and by buying and reselling items from other players. Hay Day Hack APK is compatible with all devices and versions of the game, and requires no root or jailbreak. It also has an anti-ban system that protects your account from being detected or banned by the game's servers.
-If you are a fan of Hay Day and want more fun and freedom in the game, then you should definitely try Hay Day Hack APK. It will make your farming experience more enjoyable and rewarding. However, you should also be careful and responsible when using it, as it can affect the balance and fairness of the game. You should also respect other players and not abuse or harass them with your unlimited resources. Remember, Hay Day is a game for entertainment and relaxation, not for cheating or bullying.
- If you have any questions or concerns about Hay Day Hack APK, you can check the frequently asked questions below:
- { using type = Default; };
-
-template class Predicate, typename Default, typename... Ts>
-using exactly_one_t = typename exactly_one
-
-"""
- )
- with gr.Row():
- with gr.Column():
- with gr.Row():
- instruction = gr.Textbox(placeholder="Enter your question here", label="Question", elem_id="q-input")
- with gr.Row():
- with gr.Column():
- with gr.Row():
- temperature = gr.Slider(
- label="Temperature",
- value=0.5,
- minimum=0.0,
- maximum=2.0,
- step=0.1,
- interactive=True,
- info="Higher values produce more diverse outputs",
- )
- with gr.Column():
- with gr.Row():
- top_p = gr.Slider(
- label="Top-p (nucleus sampling)",
- value=0.95,
- minimum=0.0,
- maximum=1,
- step=0.05,
- interactive=True,
-                        info="Higher values sample more low-probability tokens",
- )
- with gr.Column():
- with gr.Row():
- top_k = gr.Slider(
- label="Top-k",
- value=50,
- minimum=0.0,
- maximum=100,
- step=1,
- interactive=True,
- info="Sample from a shortlist of top-k tokens",
- )
- with gr.Column():
- with gr.Row():
- max_new_tokens = gr.Slider(
- label="Maximum new tokens",
- value=256,
- minimum=0,
- maximum=2048,
- step=5,
- interactive=True,
- info="The maximum number of new tokens to generate",
- )
- with gr.Row():
- submit = gr.Button("Generate Answers")
- with gr.Row():
- with gr.Column():
- with gr.Box():
- gr.Markdown("**Falcon 7B instruct**")
- output_falcon = gr.Markdown()
- with gr.Column():
- with gr.Box():
- gr.Markdown("**LLaMA 7B instruct**")
- output_llama = gr.Markdown()
- with gr.Row():
- gr.Examples(
- examples=examples,
- inputs=[instruction],
- cache_examples=False,
- fn=process_example,
- outputs=[output_falcon, output_llama],
- )
- submit.click(generate, inputs=[instruction, temperature, top_p, top_k, max_new_tokens], outputs=[output_falcon, output_llama ])
- instruction.submit(generate, inputs=[instruction, temperature, top_p, top_k, max_new_tokens ], outputs=[output_falcon, output_llama])
-
-demo.queue(concurrency_count=16).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/scripts/run.sh b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/scripts/run.sh
deleted file mode 100644
index ee9d16cf59621eeb762e4a9f3f46be17db934637..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/scripts/run.sh
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/bin/env bash
-
-
-python3 run_data_measurements.py --dataset="hate_speech18" --config="default" --split="train" --label_field="label" --feature="text"
-python3 run_data_measurements.py --dataset="hate_speech_offensive" --config="default" --split="train" --label_field="label" --feature="tweet"
-
-
-python3 run_data_measurements.py --dataset="imdb" --config="plain_text" --split="train" --label_field="label" --feature="text"
-python3 run_data_measurements.py --dataset="imdb" --config="plain_text" --split="unsupervised" --label_field="label" --feature="text"
-
-
-python3 run_data_measurements.py --dataset="glue" --config="cola" --split="train" --label_field="label" --feature="sentence"
-python3 run_data_measurements.py --dataset="glue" --config="cola" --split="validation" --label_field="label" --feature="sentence"
-
-python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="train" --label_field="label" --feature="hypothesis"
-python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="train" --label_field="label" --feature="premise"
-
-python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_matched" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_matched" --label_field="label" --feature="hypothesis"
-python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_mismatched" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_mismatched" --label_field="label" --feature="hypothesis"
-
-
-python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="train" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="train" --label_field="label" --feature="sentence2"
-python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="validation" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="validation" --label_field="label" --feature="sentence2"
-
-
-python3 run_data_measurements.py --dataset="glue" --config="rte" --split="train" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="rte" --split="train" --label_field="label" --feature="sentence2"
-python3 run_data_measurements.py --dataset="glue" --config="rte" --split="validation" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="rte" --split="validation" --label_field="label" --feature="sentence2"
-
-
-python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="train" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="train" --label_field="label" --feature="sentence2"
-python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="validation" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="validation" --label_field="label" --feature="sentence2"
-
-python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="train" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="train" --label_field="label" --feature="sentence2"
-python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="validation" --label_field="label" --feature="sentence1"
-python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="validation" --label_field="label" --feature="sentence2"
-
-python3 run_data_measurements.py --dataset="glue" --config="sst2" --split="train" --label_field="label" --feature="sentence"
-python3 run_data_measurements.py --dataset="glue" --config="sst2" --split="validation" --label_field="label" --feature="sentence"
-
-
-python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="train" --label_field="label" --feature="question"
-python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="train" --label_field="label" --feature="sentence"
-python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="validation" --label_field="label" --feature="question"
-python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="validation" --label_field="label" --feature="sentence"
-
-
-python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="train" --label_field="label" --feature="question1"
-python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="train" --label_field="label" --feature="question2"
-python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="validation" --label_field="label" --feature="question1"
-python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="validation" --label_field="label" --feature="question2"
-
-python3 run_data_measurements.py --dataset="glue" --config="mnli_matched" --split="validation" --label_field="label" --feature="hypothesis"
-python3 run_data_measurements.py --dataset="glue" --config="mnli_matched" --split="validation" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="glue" --config="mnli_mismatched" --split="validation" --label_field="label" --feature="hypothesis"
-python3 run_data_measurements.py --dataset="glue" --config="mnli_mismatched" --split="validation" --label_field="label" --feature="premise"
-
-
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-v1" --split="train" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-raw-v1" --split="train" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-v1" --split="train" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-raw-v1" --split="train" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-v1" --split="validation" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-raw-v1" --split="validation" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-v1" --split="validation" --feature="text"
-python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-raw-v1" --split="validation" --feature="text"
-
-
-# Superglue wsc? wic? rte? record? multirc?
-
-python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="train" --label_field="label" --feature="question"
-python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="validation" --label_field="label" --feature="question"
-python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="train" --label_field="label" --feature="passage"
-python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="validation" --label_field="label" --feature="passage"
-
-python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="train" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="validation" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="train" --label_field="label" --feature="hypothesis"
-python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="validation" --label_field="label" --feature="hypothesis"
-
-
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="premise"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="choice1"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="choice1"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="choice2"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="choice2"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="question"
-python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="question"
-
-python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="train" --feature="context"
-python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="train" --feature="question"
-python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="train" --feature="title"
-python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="validation" --feature="context"
-python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="validation" --feature="question"
-python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="validation" --feature="title"
-
-
-python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="train" --feature="context"
-python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="train" --feature="question"
-python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="train" --feature="title"
-python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="validation" --feature="context"
-python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="validation" --feature="question"
-python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="validation" --feature="title"
diff --git a/spaces/Iceclear/StableSR/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py b/spaces/Iceclear/StableSR/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py
deleted file mode 100644
index 2c184bf85cd3cf32c6619c7ed0b7649cfdf62b84..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py
+++ /dev/null
@@ -1,318 +0,0 @@
-"""make variations of input image"""
-
-import argparse, os, sys, glob
-import PIL
-import torch
-import numpy as np
-import torchvision
-from omegaconf import OmegaConf
-from PIL import Image
-from tqdm import tqdm, trange
-from itertools import islice
-from einops import rearrange, repeat
-from torchvision.utils import make_grid
-from torch import autocast
-from contextlib import nullcontext
-import time
-from pytorch_lightning import seed_everything
-
-from ldm.util import instantiate_from_config
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.models.diffusion.plms import PLMSSampler
-import math
-import copy
-from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization
-
-def space_timesteps(num_timesteps, section_counts):
- """
- Create a list of timesteps to use from an original diffusion process,
- given the number of timesteps we want to take from equally-sized portions
- of the original process.
-    For example, if there are 300 timesteps and the section counts are [10,15,20]
- then the first 100 timesteps are strided to be 10 timesteps, the second 100
- are strided to be 15 timesteps, and the final 100 are strided to be 20.
- If the stride is a string starting with "ddim", then the fixed striding
- from the DDIM paper is used, and only one section is allowed.
- :param num_timesteps: the number of diffusion steps in the original
- process to divide up.
- :param section_counts: either a list of numbers, or a string containing
- comma-separated numbers, indicating the step count
- per section. As a special case, use "ddimN" where N
- is a number of steps to use the striding from the
- DDIM paper.
- :return: a set of diffusion steps from the original process to use.
- """
- if isinstance(section_counts, str):
- if section_counts.startswith("ddim"):
- desired_count = int(section_counts[len("ddim"):])
- for i in range(1, num_timesteps):
- if len(range(0, num_timesteps, i)) == desired_count:
- return set(range(0, num_timesteps, i))
- raise ValueError(
-                    f"cannot create exactly {desired_count} steps with an integer stride"
- )
- section_counts = [int(x) for x in section_counts.split(",")] #[250,]
- size_per = num_timesteps // len(section_counts)
- extra = num_timesteps % len(section_counts)
- start_idx = 0
- all_steps = []
- for i, section_count in enumerate(section_counts):
- size = size_per + (1 if i < extra else 0)
- if size < section_count:
- raise ValueError(
- f"cannot divide section of {size} steps into {section_count}"
- )
- if section_count <= 1:
- frac_stride = 1
- else:
- frac_stride = (size - 1) / (section_count - 1)
- cur_idx = 0.0
- taken_steps = []
- for _ in range(section_count):
- taken_steps.append(start_idx + round(cur_idx))
- cur_idx += frac_stride
- all_steps += taken_steps
- start_idx += size
- return set(all_steps)
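-
-# A quick sanity check of space_timesteps (illustrative calls, matching the docstring):
-#   space_timesteps(300, [10, 15, 20]) -> 45 timesteps, with 10, 15 and 20 drawn from
-#   the three equal thirds of the 300-step schedule.
-#   space_timesteps(1000, "ddim250")   -> {0, 4, 8, ..., 996}, i.e. a fixed stride of 4.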
-
-def chunk(it, size):
- it = iter(it)
- return iter(lambda: tuple(islice(it, size)), ())
-
-def load_model_from_config(config, ckpt, verbose=False):
- print(f"Loading model from {ckpt}")
- pl_sd = torch.load(ckpt, map_location="cpu")
- if "global_step" in pl_sd:
- print(f"Global Step: {pl_sd['global_step']}")
- sd = pl_sd["state_dict"]
- model = instantiate_from_config(config.model)
- m, u = model.load_state_dict(sd, strict=False)
- if len(m) > 0 and verbose:
- print("missing keys:")
- print(m)
- if len(u) > 0 and verbose:
- print("unexpected keys:")
- print(u)
-
- model.cuda()
- model.eval()
- return model
-
-def load_img(path):
- image = Image.open(path).convert("RGB")
- w, h = image.size
- print(f"loaded input image of size ({w}, {h}) from {path}")
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL.Image.LANCZOS)
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = torch.from_numpy(image)
- return 2.*image - 1.
-
-
-def main():
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--init-img",
- type=str,
- nargs="?",
- help="path to the input image",
- default="inputs/user_upload",
- )
- parser.add_argument(
- "--outdir",
- type=str,
- nargs="?",
- help="dir to write results to",
- default="outputs/user_upload",
- )
- parser.add_argument(
- "--ddpm_steps",
- type=int,
- default=1000,
- help="number of ddpm sampling steps",
- )
- parser.add_argument(
- "--C",
- type=int,
- default=4,
- help="latent channels",
- )
- parser.add_argument(
- "--f",
- type=int,
- default=8,
- help="downsampling factor, most often 8 or 16",
- )
- parser.add_argument(
- "--n_samples",
- type=int,
- default=2,
- help="how many samples to produce for each given prompt. A.k.a batch size",
- )
- parser.add_argument(
- "--config",
- type=str,
- default="configs/stableSRNew/v2-finetune_text_T_512.yaml",
- help="path to config which constructs model",
- )
- parser.add_argument(
- "--ckpt",
- type=str,
- default="models/ldm/stable-diffusion-v1/model.ckpt",
- help="path to checkpoint of model",
- )
- parser.add_argument(
- "--vqgan_ckpt",
- type=str,
- default="models/ldm/stable-diffusion-v1/epoch=000011.ckpt",
- help="path to checkpoint of VQGAN model",
- )
- parser.add_argument(
- "--seed",
- type=int,
- default=42,
- help="the seed (for reproducible sampling)",
- )
- parser.add_argument(
- "--precision",
- type=str,
- help="evaluate at this precision",
- choices=["full", "autocast"],
- default="autocast"
- )
- parser.add_argument(
- "--input_size",
- type=int,
- default=512,
- help="input size",
- )
- parser.add_argument(
- "--dec_w",
- type=float,
- default=0.5,
- help="weight for combining VQGAN and Diffusion",
- )
- parser.add_argument(
- "--colorfix_type",
- type=str,
- default="nofix",
- help="Color fix type to adjust the color of HR result according to LR input: adain (used in paper); wavelet; nofix",
- )
-
- opt = parser.parse_args()
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
-
- print('>>>>>>>>>>color correction>>>>>>>>>>>')
- if opt.colorfix_type == 'adain':
- print('Use adain color correction')
- elif opt.colorfix_type == 'wavelet':
- print('Use wavelet color correction')
- else:
- print('No color correction')
- print('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
-
- vqgan_config = OmegaConf.load("configs/autoencoder/autoencoder_kl_64x64x4_resi.yaml")
- vq_model = load_model_from_config(vqgan_config, opt.vqgan_ckpt)
- vq_model = vq_model.to(device)
- vq_model.decoder.fusion_w = opt.dec_w
-
- seed_everything(opt.seed)
-
- transform = torchvision.transforms.Compose([
- torchvision.transforms.Resize(opt.input_size),
- torchvision.transforms.CenterCrop(opt.input_size),
- ])
-
- config = OmegaConf.load(f"{opt.config}")
- model = load_model_from_config(config, f"{opt.ckpt}")
- model = model.to(device)
-
- os.makedirs(opt.outdir, exist_ok=True)
- outpath = opt.outdir
-
- batch_size = opt.n_samples
-
- img_list_ori = os.listdir(opt.init_img)
- img_list = copy.deepcopy(img_list_ori)
- init_image_list = []
- for item in img_list_ori:
- if os.path.exists(os.path.join(outpath, item)):
- img_list.remove(item)
- continue
- cur_image = load_img(os.path.join(opt.init_img, item)).to(device)
- cur_image = transform(cur_image)
- cur_image = cur_image.clamp(-1, 1)
- init_image_list.append(cur_image)
- init_image_list = torch.cat(init_image_list, dim=0)
- niters = math.ceil(init_image_list.size(0) / batch_size)
- init_image_list = init_image_list.chunk(niters)
-
- model.register_schedule(given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=0.00085, linear_end=0.0120, cosine_s=8e-3)
- model.num_timesteps = 1000
-
- sqrt_alphas_cumprod = copy.deepcopy(model.sqrt_alphas_cumprod)
- sqrt_one_minus_alphas_cumprod = copy.deepcopy(model.sqrt_one_minus_alphas_cumprod)
-
- use_timesteps = set(space_timesteps(1000, [opt.ddpm_steps]))
- last_alpha_cumprod = 1.0
- new_betas = []
- timestep_map = []
- for i, alpha_cumprod in enumerate(model.alphas_cumprod):
- if i in use_timesteps:
- new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
- last_alpha_cumprod = alpha_cumprod
- timestep_map.append(i)
- new_betas = [beta.data.cpu().numpy() for beta in new_betas]
- model.register_schedule(given_betas=np.array(new_betas), timesteps=len(new_betas))
- model.num_timesteps = 1000
- model.ori_timesteps = list(use_timesteps)
- model.ori_timesteps.sort()
- model = model.to(device)
-
- precision_scope = autocast if opt.precision == "autocast" else nullcontext
- niqe_list = []
- with torch.no_grad():
- with precision_scope("cuda"):
- with model.ema_scope():
- tic = time.time()
- all_samples = list()
- for n in trange(niters, desc="Sampling"):
- init_image = init_image_list[n]
- init_latent_generator, enc_fea_lq = vq_model.encode(init_image)
- init_latent = model.get_first_stage_encoding(init_latent_generator)
- text_init = ['']*init_image.size(0)
- semantic_c = model.cond_stage_model(text_init)
-
- noise = torch.randn_like(init_latent)
-                            # If you would like to start from an intermediate step, add noise to the LR latent up to that step (as in the q_sample_respace call below) and pass the result as x_T.
- t = repeat(torch.tensor([999]), '1 -> b', b=init_image.size(0))
- t = t.to(device).long()
- x_T = model.q_sample_respace(x_start=init_latent, t=t, sqrt_alphas_cumprod=sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod=sqrt_one_minus_alphas_cumprod, noise=noise)
- x_T = None
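-                            # Note: x_T is overridden to None here, so model.sample() below starts from pure noise rather than from the noised LR latent computed above.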
-
- samples, _ = model.sample(cond=semantic_c, struct_cond=init_latent, batch_size=init_image.size(0), timesteps=opt.ddpm_steps, time_replace=opt.ddpm_steps, x_T=x_T, return_intermediates=True)
- x_samples = vq_model.decode(samples * 1. / model.scale_factor, enc_fea_lq)
- if opt.colorfix_type == 'adain':
- x_samples = adaptive_instance_normalization(x_samples, init_image)
- elif opt.colorfix_type == 'wavelet':
- x_samples = wavelet_reconstruction(x_samples, init_image)
- x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0)
-
- for i in range(init_image.size(0)):
- img_name = img_list.pop(0)
- basename = os.path.splitext(os.path.basename(img_name))[0]
- x_sample = 255. * rearrange(x_samples[i].cpu().numpy(), 'c h w -> h w c')
- Image.fromarray(x_sample.astype(np.uint8)).save(
- os.path.join(outpath, basename+'.png'))
-
- toc = time.time()
-
- print(f"Your samples are ready and waiting for you here: \n{outpath} \n"
- f" \nEnjoy.")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/IvaElen/find_my_pic/README.md b/spaces/IvaElen/find_my_pic/README.md
deleted file mode 100644
index 967d02d647278b95f04c47d6bd6ab06eaad262a0..0000000000000000000000000000000000000000
--- a/spaces/IvaElen/find_my_pic/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Find My Pic
-emoji: 🏢
-colorFrom: indigo
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/IvaElen/nlp_proj/README.md b/spaces/IvaElen/nlp_proj/README.md
deleted file mode 100644
index 421893aeafff31ad4dcacff674385b38851c8612..0000000000000000000000000000000000000000
--- a/spaces/IvaElen/nlp_proj/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Nlp Proj
-emoji: 🚀
-colorFrom: pink
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/training_parameter_plugin_input.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/training_parameter_plugin_input.py
deleted file mode 100644
index a11dad10315358f2d95cf77e12d6644ca6b6bd64..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/training_parameter_plugin_input.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from __future__ import annotations
-
-from typing import Dict, Optional
-
-from steamship.base.model import CamelModel
-from steamship.plugin.inputs.export_plugin_input import ExportPluginInput
-
-
-class TrainingParameterPluginInput(CamelModel):
- # The plugin instance handle that should perform the training.
- plugin_instance: Optional[str] = None
- # An export request to produce the training data file, if training data is required.
- export_plugin_input: Optional[ExportPluginInput] = None
-
- # How many epochs to train (if supported by the supplied `pluginInstance`)
- training_epochs: Optional[int] = None
-
- # How much of the data to hold out for testing (if supported by the supplied `pluginInstance`)
- testing_holdout_percent: Optional[float] = None
-
- # Random seed for performing the train/test split (if supported by the supplied `pluginInstance`)
- test_split_seed: Optional[int] = None
-
- # Custom training-time parameters, specific to the pluginInstance
- training_params: Optional[Dict] = None
-
- # Custom inference-time parameters, specific to the pluginInstance
- inference_params: Optional[Dict] = None
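-
-# A hypothetical construction sketch (the field values below are purely illustrative):
-#
-# params = TrainingParameterPluginInput(
-#     plugin_instance="my-trainer-instance",
-#     training_epochs=3,
-#     testing_holdout_percent=0.2,
-# )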
diff --git a/spaces/JoPmt/Txt-to-video/index.html b/spaces/JoPmt/Txt-to-video/index.html
deleted file mode 100644
index 57ef4fa2711d16c5c9e5155b93070b5a6c9b50f1..0000000000000000000000000000000000000000
--- a/spaces/JoPmt/Txt-to-video/index.html
+++ /dev/null
@@ -1,369 +0,0 @@
-
-
-
-HuggingFace text-to-Stable-Diffusion-to-canvas video?
-StabilityAI OpenJourney Runwayml Stable-Diffusion AI Models API text-to-image,text-to-video Demo
-
-
-
-
-[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://www.timothybrooks.com/instruct-pix2pix/)
- [Tim Brooks](https://www.timothybrooks.com/)\*,
- [Aleksander Holynski](https://holynski.org/)\*,
- [Alexei A. Efros](https://people.eecs.berkeley.edu/~efros/)
- UC Berkeley
- \*denotes equal contribution
-
-
-
-## TL;DR: quickstart
-
-Set up a conda environment, and download a pretrained model:
-```
-conda env create -f environment.yaml
-conda activate ip2p
-bash scripts/download_checkpoints.sh
-```
-
-Edit a single image:
-```
-python edit_cli.py --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
-
-# Optionally, you can specify parameters to tune your result:
-# python edit_cli.py --steps 100 --resolution 512 --seed 1371 --cfg-text 7.5 --cfg-image 1.2 --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
-```
-
-Or launch your own interactive editing Gradio app:
-```
-python edit_app.py
-```
-
-
-_(For advice on how to get the best results by tuning parameters, see the [Tips](https://github.com/timothybrooks/instruct-pix2pix#tips) section)._
-
-## Setup
-
-Install all dependencies with:
-```
-conda env create -f environment.yaml
-```
-
-Download the pretrained models by running:
-```
-bash scripts/download_checkpoints.sh
-```
-
-## Generated Dataset
-
-Our image editing model is trained on a generated dataset consisting of 454,445 examples. Each example contains (1) an input image, (2) an editing instruction, and (3) an output edited image. We provide two versions of the dataset, one in which each pair of edited images is generated 100 times, and the best examples are chosen based on CLIP metrics (Section 3.1.2 in the paper) (`clip-filtered-dataset`), and one in which examples are randomly chosen (`random-sample-dataset`).
-
-For the released version of this dataset, we've additionally filtered prompts and images for NSFW content. After NSFW filtering, the GPT-3 generated dataset contains 451,990 examples. The final image-pair datasets contain:
-
-| | # of image editing examples | Dataset size |
-|--|-----------------------|----------------------- |
-| `random-sample-dataset` |451990|727GB|
-| `clip-filtered-dataset` |313010|436GB|
-
-To download one of these datasets, along with the entire NSFW-filtered text data, run the following command with the appropriate dataset name:
-
-```
-bash scripts/download_data.sh clip-filtered-dataset
-```
-
-
-## Training InstructPix2Pix
-
-InstructPix2Pix is trained by fine-tuning from an initial StableDiffusion checkpoint. The first step is to download a Stable Diffusion checkpoint. For our trained models, we used the v1.5 checkpoint as the starting point. To download the same ones we used, you can run the following script:
-```
-bash scripts/download_pretrained_sd.sh
-```
-If you'd like to use a different checkpoint, point to it in the config file `configs/train.yaml`, on line 8, after `ckpt_path:`.
-
-Next, we need to change the config to point to our downloaded (or generated) dataset. If you're using the `clip-filtered-dataset` from above, you can skip this. Otherwise, you may need to edit lines 85 and 94 of the config (`data.params.train.params.path`, `data.params.validation.params.path`).
-
-Finally, start a training job with the following command:
-
-```
-python main.py --name default --base configs/train.yaml --train --gpus 0,1,2,3,4,5,6,7
-```
-
-
-## Creating your own dataset
-
-Our generated dataset of paired images and editing instructions is made in two phases: First, we use GPT-3 to generate text triplets: (a) a caption describing an image, (b) an edit instruction, (c) a caption describing the image after the edit. Then, we turn pairs of captions (before/after the edit) into pairs of images using Stable Diffusion and Prompt-to-Prompt.
-
-### (1) Generate a dataset of captions and instructions
-
-We provide our generated dataset of captions and edit instructions [here](https://instruct-pix2pix.eecs.berkeley.edu/gpt-generated-prompts.jsonl). If you plan to use our captions+instructions, skip to step (2). Otherwise, if you would like to create your own text dataset, please follow steps (1.1-1.3) below. Note that generating very large datasets using GPT-3 can be expensive.
-
-#### (1.1) Manually write a dataset of instructions and captions
-
-The first step of the process is fine-tuning GPT-3. To do this, we made a dataset of 700 examples broadly covering the kinds of edits that we might want our model to be able to perform. Our examples are available [here](https://instruct-pix2pix.eecs.berkeley.edu/human-written-prompts.jsonl). These should be diverse and cover a wide range of possible captions and types of edits. Ideally, they should avoid duplication or significant overlap of captions and instructions. It is also important to be mindful of the limitations of Stable Diffusion and Prompt-to-Prompt when writing these examples, such as the inability to perform large spatial transformations (e.g., moving the camera, zooming in, swapping object locations).
-
-Input prompts should closely match the distribution of input prompts used to generate the larger dataset. We sampled the 700 input prompts from the _LAION Improved Aesthetics 6.5+_ dataset and also use this dataset for generating examples. We found this dataset is quite noisy (many of the captions are overly long and contain irrelevant text). For this reason, we also considered MSCOCO and LAION-COCO datasets, but ultimately chose _LAION Improved Aesthetics 6.5+_ due to its diversity of content, proper nouns, and artistic mediums. If you choose to use another dataset or combination of datasets as input to GPT-3 when generating examples, we recommend you sample the input prompts from the same distribution when manually writing training examples.
-
-#### (1.2) Finetune GPT-3
-
-The next step is to finetune a large language model on the manually written instructions/outputs to generate edit instructions and edited caption from a new input caption. For this, we finetune GPT-3's Davinci model via the OpenAI API, although other language models could be used.
-
-To prepare training data for GPT-3, one must first create an OpenAI developer account to access the needed APIs, and [set up the API keys on your local device](https://beta.openai.com/docs/api-reference/introduction). Also, run the `prompts/prepare_for_gpt.py` script, which forms the prompts into the correct format by concatenating instructions and captions and adding delimiters and stop sequences.
-
-```bash
-python dataset_creation/prepare_for_gpt.py --input-path data/human-written-prompts.jsonl --output-path data/human-written-prompts-for-gpt.jsonl
-```
-
-Next, finetune GPT-3 via the OpenAI CLI. We provide an example below, although please refer to OpenAI's official documentation for this, as best practices may change. We trained the Davinci model for a single epoch. You can experiment with smaller, less expensive GPT-3 variants or with open-source language models, although this may negatively affect performance.
-
-```bash
-openai api fine_tunes.create -t data/human-written-prompts-for-gpt.jsonl -m davinci --n_epochs 1 --suffix "instruct-pix2pix"
-```
-
-You can test out the finetuned GPT-3 model by launching the provided Gradio app:
-
-```bash
-python prompt_app.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME
-```
-
-
-
-#### (1.3) Generate a large dataset of captions and instructions
-
-We now use the finetuned GPT-3 model to generate a large dataset. Our dataset cost thousands of dollars to create. See `prompts/gen_instructions_and_captions.py` for the script which generates these examples. We recommend first generating a small number of examples (by setting a low value of `--num-samples`) and gradually increasing the scale to ensure the results are working as desired before increasing scale.
-
-```bash
-python dataset_creation/generate_txt_dataset.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME
-```
-
-If you are generating at a very large scale (e.g., 100K+), it will be noticeably faster to generate the dataset with multiple processes running in parallel. This can be accomplished by setting `--partitions=N` to a higher number and running multiple processes, setting each `--partition` to the corresponding value.
-
-```bash
-python dataset_creation/generate_txt_dataset.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME --partitions=10 --partition=0
-```
-
-### (2) Turn paired captions into paired images
-
-The next step is to turn pairs of text captions into pairs of images. For this, we need to copy some pre-trained Stable Diffusion checkpoints to `stable_diffusion/models/ldm/stable-diffusion-v1/`. You may have already done this if you followed the instructions above for training with our provided data, but if not, you can do this by running:
-
-```bash
-bash scripts/download_pretrained_sd.sh
-```
-
-For our model, we used [checkpoint v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt), and the [new autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt), but other models may work as well. If you choose to use other models, make sure to point to the corresponding checkpoints by passing in the `--ckpt` and `--vae-ckpt` arguments. Once all checkpoints have been downloaded, we can generate the dataset with the following command:
-
-```
-python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl
-```
-
-This command operates on a single GPU (typically a V100 or A100). To parallelize over many GPUs/machines, set `--n-partitions` to the total number of parallel jobs and `--partition` to the index of each job.
-
-```
-python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl --n-partitions 100 --partition 0
-```
-
-The default parameters match those of our dataset, although in practice you can use a smaller number of steps (e.g., `--steps=25`) to generate high-quality data faster. By default, we generate 100 samples per prompt and use CLIP filtering to keep a max of 4 per prompt. You can experiment with fewer samples by setting `--n-samples`. The command below turns off CLIP filtering entirely and is therefore faster:
-
-```
-python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl --n-samples 4 --clip-threshold 0 --clip-dir-threshold 0 --clip-img-threshold 0 --n-partitions 100 --partition 0
-```
-
-After generating all of the dataset examples, run the command below to create a list of the examples. This is needed so that the dataset object can efficiently sample examples without iterating over the entire dataset directory at the start of each training run.
-
-```
-python dataset_creation/prepare_dataset.py data/instruct-pix2pix-dataset-000
-```
-
-## Evaluation
-
-To generate plots like the ones in Figures 8 and 10 in the paper, run the following command:
-
-```
-python metrics/compute_metrics.py --ckpt /path/to/your/model.ckpt
-```
-
-## Tips
-
-If you're not getting the quality result you want, there may be a few reasons:
-1. **Is the image not changing enough?** Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try:
- * Decreasing the Image CFG weight, or
- * Increasing the Text CFG weight, or
-2. Conversely, **is the image changing too much**, such that the details in the original image aren't preserved? Try:
- * Increasing the Image CFG weight, or
- * Decreasing the Text CFG weight (see the example commands after this list)
-3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
-4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
-5. Increasing the number of steps sometimes improves results.
-6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try cropping the image so the face takes up a larger portion of the frame.
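-
-If the edits are too weak or too strong, a minimal sketch of re-running with adjusted CFG weights looks like the following (these are the same flags shown in the quickstart; the exact values here are only illustrative starting points):
-
-```
-# Edit not strong enough? Raise Text CFG and/or lower Image CFG.
-python edit_cli.py --cfg-text 10.0 --cfg-image 1.2 --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
-
-# Edit too strong? Raise Image CFG and/or lower Text CFG.
-python edit_cli.py --cfg-text 6.0 --cfg-image 1.8 --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
-```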
-
-## Comments
-
-- Our codebase is based on the [Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion).
-
-## BibTeX
-
-```
-@article{brooks2022instructpix2pix,
- title={InstructPix2Pix: Learning to Follow Image Editing Instructions},
- author={Brooks, Tim and Holynski, Aleksander and Efros, Alexei A},
- journal={arXiv preprint arXiv:2211.09800},
- year={2022}
-}
-```
-
-
-
diff --git a/spaces/Lianjd/stock_dashboard/stockchart.py b/spaces/Lianjd/stock_dashboard/stockchart.py
deleted file mode 100644
index 4ce5ab064e9d3cfb63cc6e63f7a0fa125efd3c1a..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/stockchart.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import pandas as pd
-import numpy as np
-import streamlit as st
-from yahoofinancials import YahooFinancials
-from datetime import datetime, timedelta
-import matplotlib.pyplot as pt
-import RSI
-import file
-
-
-def basic_data(ticker):
-
- yahoo_financials = YahooFinancials(ticker)
- ps = yahoo_financials.get_price_to_sales()
- if ps is None:
- ps = np.nan
- pe = yahoo_financials.get_pe_ratio()
- if pe is None:
- pe = np.nan
-    mktcap = yahoo_financials.get_market_cap()
-    if mktcap is None:
-        mktcap = np.nan
- divd = yahoo_financials.get_dividend_yield()
- if divd is None:
- divd = np.nan
- high = yahoo_financials.get_yearly_high()
- low = yahoo_financials.get_yearly_low()
- beta = yahoo_financials.get_beta()
- if beta is None:
- beta = np.nan
- df = {'P/S': [ps], 'P/E': [pe], 'Beta': [beta],
- 'Mktcap(M)': [mktcap/1000000], 'Dividend yield %': [divd],
- 'Yearly High': [high],
- 'Yearly Low': [low]
-
- }
- index = ['Data']
- df = pd.DataFrame(data=df,index=index)
- st.write("General Market Data")
- st.table(df.style.format("{:.2f}"))
-
-
-
-def display_stock(period_view, data, rsi_period, stock_ticker, mv_fast, mv_slow):
-
- if(period_view==True):
- period_options = ['1y', '3mo', '6mo','ytd','2y', '5y', '10y', 'max']
- period = st.sidebar.selectbox("period", period_options)
-
-        if period in ('3mo', '6mo', '1y', 'ytd', '2y', '5y', '10y', 'max'):
- interval_options = ['1d', '5d', '1wk', '1mo', '3mo']
- interval = st.sidebar.selectbox("interval", interval_options)
- plot_price_volume(period,interval,data,rsi_period, stock_ticker,mv_fast, mv_slow)
-
- elif (period_view==False):
- start = st.sidebar.text_input("start", datetime.strftime(datetime.today()-timedelta(365),"%Y-%m-%d"))
- end = st.sidebar.text_input("end", datetime.strftime(datetime.today(),"%Y-%m-%d"))
- display_price_volume(data,start,end,rsi_period,stock_ticker, mv_fast, mv_slow)
-
-
-def display_price_volume(data,start,end,rsi_period,stock_ticker, mv_fast, mv_slow):
- interval_options = ['1d', '5d', '1wk', '1mo']
- interval = st.sidebar.selectbox("interval", interval_options)
- if(end == ''):
- plot_price_volume_3(start,interval,data,rsi_period,stock_ticker, mv_fast, mv_slow)
- else:
- plot_price_volume_2(start,end,interval,data,rsi_period,stock_ticker, mv_fast, mv_slow)
-
-
-
-def plot_price_volume(period, interval, data, rsi_period, stock_ticker, mv_fast, mv_slow):
-
- stock_data = data.history(period = period, interval = interval)
- st.subheader('Close Price')
- st.line_chart(stock_data.Close)
-
- basic_data(stock_ticker)
-
- st.subheader('Volume')
- st.line_chart(stock_data.Volume)
-
-
- stock_data = RSI.RSI_function(stock_data, rsi_period)
- st.subheader("RSI Data")
- st.line_chart(stock_data.RSI)
-
- fig = pt.figure(figsize=(8, 5))
- fast = stock_data.Close.rolling(window = int(mv_fast)).mean()
- slow = stock_data.Close.rolling(window = int (mv_slow)).mean()
-
- pt.plot(stock_data.Close, label='Close Price')
- pt.plot(fast,label = 'mvag ' + mv_fast + ' days')
- pt.plot(slow, label ='mvag ' + mv_slow + ' days')
- pt.legend()
- st.subheader("Moving Averages")
- st.pyplot(fig)
-
- stock_data = stock_data.reset_index()
- stock_data.Date = convert_datetime(stock_data)
- stock_data = stock_data.iloc[::-1]
- file.download_interface(stock_data,stock_ticker)
- st.write(stock_data)
-
-
-
-
-def plot_price_volume_2(start, end, interval, data, rsi_period, stock_ticker, mv_fast, mv_slow):
- stock_data = data.history(start = start, end = end, interval = interval)
- st.subheader('Close Price')
- st.line_chart(stock_data.Close)
- basic_data(stock_ticker)
-
-
- st.subheader('Volume')
- st.line_chart(stock_data.Volume)
-
- stock_data = RSI.RSI_function(stock_data, rsi_period)
- st.subheader("RSI Data")
- st.line_chart(stock_data.RSI)
-
-
- fig = pt.figure(figsize=(8, 5))
- fast = stock_data.Close.rolling(window = int(mv_fast)).mean()
- slow = stock_data.Close.rolling(window = int (mv_slow)).mean()
- pt.plot(stock_data.Close, label='Close Price')
- pt.plot(fast,label = 'mvag ' + mv_fast + ' days')
- pt.plot(slow, label ='mvag ' + mv_slow + ' days')
- pt.legend()
- st.subheader("Moving Averages")
- st.pyplot(fig)
-
-
- stock_data = stock_data.reset_index()
- stock_data.Date = convert_datetime(stock_data)
- st.subheader("Stock Data")
- stock_data = stock_data.iloc[::-1]
- st.write(stock_data)
- file.download_interface(stock_data,stock_ticker)
-
-
-
-def plot_price_volume_3(start,interval,data, rsi_period, stock_ticker, mv_fast, mv_slow):
- stock_data = data.history(start = start, interval = interval)
-
- st.subheader('Close Price')
- st.line_chart(stock_data.Close)
-
- st.subheader("Basic Data")
- basic_data(stock_ticker)
-
- st.subheader('Volume')
- st.line_chart(stock_data.Volume)
-
- stock_data = RSI.RSI_function(stock_data, rsi_period)
- st.subheader("RSI Data")
- st.line_chart(stock_data.RSI)
-
-
- fig = pt.figure(figsize=(8, 5))
- fast = stock_data.Close.rolling(window = int(mv_fast)).mean()
- slow = stock_data.Close.rolling(window = int (mv_slow)).mean()
- pt.plot(stock_data.Close, label='Close Price')
- pt.plot(fast,label = 'mvag ' + mv_fast + ' days')
- pt.plot(slow, label ='mvag ' + mv_slow + ' days')
- pt.legend()
- st.pyplot(fig)
-
-
- stock_data = stock_data.reset_index()
- stock_data.Date = convert_datetime(stock_data)
- stock_data = stock_data.iloc[::-1]
- st.subheader("Stock Data")
- st.write(stock_data)
- file.download_interface(stock_data,stock_ticker)
-
-
-
-def convert_datetime(stock_data):
- dates = []
- for date in stock_data.Date:
- date_obj = date.to_pydatetime()
- dt = date_obj.strftime("%Y-%m-%d")
- dates.append(dt)
- return dates
diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\344\272\244\344\272\222\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\344\272\244\344\272\222\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py"
deleted file mode 100644
index d57fc2b0f0fb604be1dc19f789815eb7833bef7f..0000000000000000000000000000000000000000
--- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\344\272\244\344\272\222\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py"
+++ /dev/null
@@ -1,63 +0,0 @@
-from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
-
-@CatchException
-def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- """
-    txt             Text entered by the user in the input box, e.g. a passage to be translated, or a path to files awaiting processing
-    llm_kwargs      Parameters of the GPT model, such as temperature and top_p; usually just passed through as-is
-    plugin_kwargs   Parameters of the plugin, such as temperature and top_p; usually just passed through as-is
-    chatbot         Handle of the chat display box, used to show output to the user
-    history         Chat history, i.e. the context so far
-    system_prompt   Silent reminder given to GPT
-    web_port        Port number the software is currently running on
- """
-    history = []    # clear the history to avoid overflowing the input
- chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    state = chatbot._cookies.get('plugin_state_0001', None) # initialize the plugin state
-
- if state is None:
-        chatbot._cookies['lock_plugin'] = 'crazy_functions.交互功能函数模板->交互功能模板函数' # lock the plugin: register the callback path so the next user submission goes straight to this function
-        chatbot._cookies['plugin_state_0001'] = 'wait_user_keyword' # set the plugin state
-
- chatbot.append(("第一次调用:", "请输入关键词, 我将为您查找相关壁纸, 建议使用英文单词, 插件锁定中,请直接提交即可。"))
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- if state == 'wait_user_keyword':
-        chatbot._cookies['lock_plugin'] = None # release the plugin lock, to avoid a deadlock if it is forgotten
-        chatbot._cookies['plugin_state_0001'] = None # clear the plugin state, to avoid a deadlock if it is forgotten
-
-        # the plugin lock has been released
- chatbot.append((f"获取关键词:{txt}", ""))
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- page_return = get_image_page_by_keyword(txt)
- inputs=inputs_show_user=f"Extract all image urls in this html page, pick the first 5 images and show them with markdown format: \n\n {page_return}"
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=inputs, inputs_show_user=inputs_show_user,
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
- sys_prompt="When you want to show an image, use markdown format. e.g. . If there are no image url provided, answer 'no image url provided'"
- )
- chatbot[-1] = [chatbot[-1][0], gpt_say]
-        yield from update_ui(chatbot=chatbot, history=history)    # refresh the UI
- return
-
-
-
-# ---------------------------------------------------------------------------------
-
-def get_image_page_by_keyword(keyword):
- import requests
- from bs4 import BeautifulSoup
- response = requests.get(f'https://wallhaven.cc/search?q={keyword}', timeout=2)
- res = "image urls: \n"
- for image_element in BeautifulSoup(response.content, 'html.parser').findAll("img"):
- try:
- res += image_element["data-src"]
- res += "\n"
-        except KeyError:
-            # skip <img> tags that have no data-src attribute
-            pass
- return res
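
A hedged usage sketch for `get_image_page_by_keyword` (the keyword below is illustrative): the wallhaven.cc request uses a 2-second timeout, so network failures surface as `requests` exceptions and are worth catching at the call site.

```python
import requests

try:
    urls_text = get_image_page_by_keyword("mountain")
    print(urls_text)
except requests.RequestException as exc:
    print(f"wallpaper search failed: {exc}")
```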
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py
deleted file mode 100644
index 44bbfcd55a2efc29f441e06fb33079a48de61905..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_sgd_1500e.py',
- '../../_base_/det_models/fcenet_r50dcnv2_fpn.py',
- '../../_base_/det_datasets/ctw1500.py',
- '../../_base_/det_pipelines/fcenet_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline_ctw1500 = {{_base_.train_pipeline_ctw1500}}
-test_pipeline_ctw1500 = {{_base_.test_pipeline_ctw1500}}
-
-data = dict(
- samples_per_gpu=6,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline_ctw1500),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_ctw1500),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_ctw1500))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/train/train_mem.py b/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/train/train_mem.py
deleted file mode 100644
index 51070c121a8d5b616cf8e9659a733762522ab394..0000000000000000000000000000000000000000
--- a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/train/train_mem.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Adopted from https://github.com/lm-sys/FastChat. Below is the original copyright:
-# Adopted from tatsu-lab@stanford_alpaca. Below is the original copyright:
-# Make it more memory efficient by monkey patching the LLaMA model with FlashAttn.
-
-# Need to call this before importing transformers.
-from mplug_owl2.train.llama_flash_attn_monkey_patch import replace_llama_attn_with_flash_attn
-
-replace_llama_attn_with_flash_attn()
-
-from mplug_owl2.train.train import train
-
-if __name__ == "__main__":
- train()
\ No newline at end of file
diff --git a/spaces/Marshalls/testmtd/script_generate_dev.sh b/spaces/Marshalls/testmtd/script_generate_dev.sh
deleted file mode 100644
index b107c4667d634ffad3fadb0b41be5c7106c85b5e..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/script_generate_dev.sh
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-#if using XLA
-export XRT_WORKERS="localservice:0;grpc://localhost:40934"
-export XRT_DEVICE_MAP="CPU:0;/job:localservice/replica:0/task:0/device:XLA_CPU:0|GPU:0;/job:localservice/replica:0/task:0/device:XLA_GPU:0"
-
-py=python3
-
-#exp=transglower_residual_aistpp_expmap
-#exp=moglow_aistpp_expmap
-exp=testing
-#exp=transflower_expmap_old
-#exp=mowgli_expmap_stage2_newdata
-#exp=$1
-#exp=mowgli_aistpp_expmap_future3
-#exp=aistpp_residual
-#seq_id=gKR_sFM_cAll_d28_mKR5_ch06
-#seq_id=gLH_sFM_cAll_d16_mLH3_ch04
-#seq_id=gPO_sFM_cAll_d12_mPO4_ch19
-#seq_id=aistpp_gMH_sFM_cAll_d22_mMH3_ch04
-seq_id=groovenet_2
-echo $exp $seq_id
-
-mkdir -p inference/generated/
-mkdir -p inference/generated/${exp}
-mkdir -p inference/generated/${exp}/predicted_mods
-mkdir -p inference/generated/${exp}/videos
-fps=20
-#fps=60
-#data_dir=data/aistpp_20hz
-data_dir=data/dance_combined2
-#data_dir=data/aistpp_60hz
-
-# if we don't pass seq_id it will choose a random one from the test set
-$py inference/generate.py --data_dir=$data_dir --output_folder=inference/generated --experiment_name=$exp \
- --generate_video \
- --seq_id $seq_id \
- --max_length 300
-
-
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/wav_upload.py b/spaces/MashiroSA/sovits-emu-voice-transform/wav_upload.py
deleted file mode 100644
index cac679de78634e638e9a998615406b1c36374fb5..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/wav_upload.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from google.colab import files
-import shutil
-import os
-import argparse
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--type", type=str, required=True, help="type of file to upload")
- args = parser.parse_args()
- file_type = args.type
-
- basepath = os.getcwd()
-    uploaded = files.upload()  # open the Colab file-upload dialog
- assert(file_type in ['zip', 'audio'])
- if file_type == "zip":
- upload_path = "./upload/"
- for filename in uploaded.keys():
-            # move the uploaded file to the target location
- shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, "userzip.zip"))
- elif file_type == "audio":
- upload_path = "./raw/"
- for filename in uploaded.keys():
-            # move the uploaded file to the target location
- shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, filename))
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/version.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/version.py
deleted file mode 100644
index 1cce4e50bd692d4002e3cac3c545a3fb2efe95d0..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/version.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-__version__ = '1.3.17'
-
-
-def parse_version_info(version_str: str, length: int = 4) -> tuple:
- """Parse a version string into a tuple.
-
- Args:
- version_str (str): The version string.
- length (int): The maximum number of version levels. Default: 4.
-
- Returns:
- tuple[int | str]: The version info, e.g., "1.3.0" is parsed into
- (1, 3, 0, 0, 0, 0), and "2.0.0rc1" is parsed into
- (2, 0, 0, 0, 'rc', 1) (when length is set to 4).
- """
- from packaging.version import parse
- version = parse(version_str)
- assert version.release, f'failed to parse version {version_str}'
- release = list(version.release)
- release = release[:length]
- if len(release) < length:
- release = release + [0] * (length - len(release))
- if version.is_prerelease:
- release.extend(list(version.pre))
- elif version.is_postrelease:
- release.extend(list(version.post))
- else:
- release.extend([0, 0])
- return tuple(release)
-
-
-version_info = tuple(int(x) for x in __version__.split('.')[:3])
-
-__all__ = ['__version__', 'version_info', 'parse_version_info']
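
Worked examples for the parser above; the expected tuples follow directly from the docstring and the module-level `__version__`:

```python
print(parse_version_info('1.3.17'))    # (1, 3, 17, 0, 0, 0)
print(parse_version_info('2.0.0rc1'))  # (2, 0, 0, 0, 'rc', 1)
print(version_info)                    # (1, 3, 17), the release components of __version__
```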
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/sdf.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/sdf.py
deleted file mode 100644
index e87e639eb94993c3e4068d6bd4d21f902aee7694..0000000000000000000000000000000000000000
--- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/sdf.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import numpy as np
-
-
-def create_grid(resX, resY, resZ, b_min=np.array([0, 0, 0]), b_max=np.array([1, 1, 1]), transform=None):
- '''
- Create a dense grid of given resolution and bounding box
- :param resX: resolution along X axis
- :param resY: resolution along Y axis
- :param resZ: resolution along Z axis
- :param b_min: vec3 (x_min, y_min, z_min) bounding box corner
- :param b_max: vec3 (x_max, y_max, z_max) bounding box corner
- :return: [3, resX, resY, resZ] coordinates of the grid, and transform matrix from mesh index
- '''
- coords = np.mgrid[:resX, :resY, :resZ]
- coords = coords.reshape(3, -1)
- coords_matrix = np.eye(4)
- length = b_max - b_min
- coords_matrix[0, 0] = length[0] / resX
- coords_matrix[1, 1] = length[1] / resY
- coords_matrix[2, 2] = length[2] / resZ
- coords_matrix[0:3, 3] = b_min
- coords = np.matmul(coords_matrix[:3, :3], coords) + coords_matrix[:3, 3:4]
- if transform is not None:
- coords = np.matmul(transform[:3, :3], coords) + transform[:3, 3:4]
- coords_matrix = np.matmul(transform, coords_matrix)
- coords = coords.reshape(3, resX, resY, resZ)
- return coords, coords_matrix
-
-
-def batch_eval(points, eval_func, num_samples=512 * 512 * 512):
- num_pts = points.shape[1]
- sdf = np.zeros(num_pts)
-
- num_batches = num_pts // num_samples
- for i in range(num_batches):
- sdf[i * num_samples:i * num_samples + num_samples] = eval_func(
- points[:, i * num_samples:i * num_samples + num_samples])
- if num_pts % num_samples:
- sdf[num_batches * num_samples:] = eval_func(points[:, num_batches * num_samples:])
-
- return sdf
-
-
-def eval_grid(coords, eval_func, num_samples=512 * 512 * 512):
- resolution = coords.shape[1:4]
- coords = coords.reshape([3, -1])
- sdf = batch_eval(coords, eval_func, num_samples=num_samples)
- return sdf.reshape(resolution)
-
-
-def eval_grid_octree(coords, eval_func,
- init_resolution=64, threshold=0.01,
- num_samples=512 * 512 * 512):
- resolution = coords.shape[1:4]
-
- sdf = np.zeros(resolution)
-
-    dirty = np.ones(resolution, dtype=bool)
-    grid_mask = np.zeros(resolution, dtype=bool)
-
- reso = resolution[0] // init_resolution
-
- while reso > 0:
- # subdivide the grid
- grid_mask[0:resolution[0]:reso, 0:resolution[1]:reso, 0:resolution[2]:reso] = True
- # test samples in this iteration
- test_mask = np.logical_and(grid_mask, dirty)
- #print('step size:', reso, 'test sample size:', test_mask.sum())
- points = coords[:, test_mask]
-
- sdf[test_mask] = batch_eval(points, eval_func, num_samples=num_samples)
- dirty[test_mask] = False
-
- # do interpolation
- if reso <= 1:
- break
- for x in range(0, resolution[0] - reso, reso):
- for y in range(0, resolution[1] - reso, reso):
- for z in range(0, resolution[2] - reso, reso):
- # if center marked, return
- if not dirty[x + reso // 2, y + reso // 2, z + reso // 2]:
- continue
- v0 = sdf[x, y, z]
- v1 = sdf[x, y, z + reso]
- v2 = sdf[x, y + reso, z]
- v3 = sdf[x, y + reso, z + reso]
- v4 = sdf[x + reso, y, z]
- v5 = sdf[x + reso, y, z + reso]
- v6 = sdf[x + reso, y + reso, z]
- v7 = sdf[x + reso, y + reso, z + reso]
- v = np.array([v0, v1, v2, v3, v4, v5, v6, v7])
- v_min = v.min()
- v_max = v.max()
- # this cell is all the same
- if (v_max - v_min) < threshold:
- sdf[x:x + reso, y:y + reso, z:z + reso] = (v_max + v_min) / 2
- dirty[x:x + reso, y:y + reso, z:z + reso] = False
- reso //= 2
-
- return sdf.reshape(resolution)
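
A minimal usage sketch for the grid helpers above, assuming the default unit bounding box sampled at 64^3. `eval_func` receives points of shape `[3, N]` and must return an array of length `N`; here it is a simple sphere SDF used purely for illustration.

```python
import numpy as np

def sphere_sdf(points):
    # points: [3, N] grid coordinates; signed distance to a sphere of radius 0.25 at (0.5, 0.5, 0.5)
    return np.linalg.norm(points - 0.5, axis=0) - 0.25

coords, transform = create_grid(64, 64, 64)
dense_sdf = eval_grid(coords, sphere_sdf)                            # dense evaluation
octree_sdf = eval_grid_octree(coords, sphere_sdf, init_resolution=16)  # coarse-to-fine evaluation
```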
diff --git a/spaces/MoonQiu/LongerCrafter/lvdm/distributions.py b/spaces/MoonQiu/LongerCrafter/lvdm/distributions.py
deleted file mode 100644
index 0b69b6984880ec24279b658384ed8031335e3474..0000000000000000000000000000000000000000
--- a/spaces/MoonQiu/LongerCrafter/lvdm/distributions.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import torch
-import numpy as np
-
-
-class AbstractDistribution:
- def sample(self):
- raise NotImplementedError()
-
- def mode(self):
- raise NotImplementedError()
-
-
-class DiracDistribution(AbstractDistribution):
- def __init__(self, value):
- self.value = value
-
- def sample(self):
- return self.value
-
- def mode(self):
- return self.value
-
-
-class DiagonalGaussianDistribution(object):
- def __init__(self, parameters, deterministic=False):
- self.parameters = parameters
- self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
- self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
- self.deterministic = deterministic
- self.std = torch.exp(0.5 * self.logvar)
- self.var = torch.exp(self.logvar)
- if self.deterministic:
- self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
-
- def sample(self, noise=None):
- if noise is None:
- noise = torch.randn(self.mean.shape)
-
- x = self.mean + self.std * noise.to(device=self.parameters.device)
- return x
-
- def kl(self, other=None):
- if self.deterministic:
- return torch.Tensor([0.])
- else:
- if other is None:
- return 0.5 * torch.sum(torch.pow(self.mean, 2)
- + self.var - 1.0 - self.logvar,
- dim=[1, 2, 3])
- else:
- return 0.5 * torch.sum(
- torch.pow(self.mean - other.mean, 2) / other.var
- + self.var / other.var - 1.0 - self.logvar + other.logvar,
- dim=[1, 2, 3])
-
- def nll(self, sample, dims=[1,2,3]):
- if self.deterministic:
- return torch.Tensor([0.])
- logtwopi = np.log(2.0 * np.pi)
- return 0.5 * torch.sum(
- logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
- dim=dims)
-
- def mode(self):
- return self.mean
-
-
-def normal_kl(mean1, logvar1, mean2, logvar2):
- """
- source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
- Compute the KL divergence between two gaussians.
- Shapes are automatically broadcasted, so batches can be compared to
- scalars, among other use cases.
- """
- tensor = None
- for obj in (mean1, logvar1, mean2, logvar2):
- if isinstance(obj, torch.Tensor):
- tensor = obj
- break
- assert tensor is not None, "at least one argument must be a Tensor"
-
- # Force variances to be Tensors. Broadcasting helps convert scalars to
- # Tensors, but it does not work for torch.exp().
- logvar1, logvar2 = [
- x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
- for x in (logvar1, logvar2)
- ]
-
- return 0.5 * (
- -1.0
- + logvar2
- - logvar1
- + torch.exp(logvar1 - logvar2)
- + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
- )
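
A hedged usage sketch for `DiagonalGaussianDistribution`: the constructor chunks the parameter tensor along `dim=1` into mean and log-variance, so a 4-channel input yields a 2-channel latent.

```python
import torch

params = torch.randn(2, 4, 8, 8)        # (batch, 2 * latent_channels, H, W)
posterior = DiagonalGaussianDistribution(params)
z = posterior.sample()                  # reparameterised sample, shape (2, 2, 8, 8)
kl = posterior.kl()                     # KL against a standard normal, one value per batch item
nll = posterior.nll(z)                  # negative log-likelihood of the sample, one value per batch item
```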
diff --git a/spaces/MrVicente/RA-BART/custom_bart/bart_for_conditional_generation.py b/spaces/MrVicente/RA-BART/custom_bart/bart_for_conditional_generation.py
deleted file mode 100644
index eb2a9034a5cefa1745843c96072947467846fecb..0000000000000000000000000000000000000000
--- a/spaces/MrVicente/RA-BART/custom_bart/bart_for_conditional_generation.py
+++ /dev/null
@@ -1,205 +0,0 @@
-#############################
-# Imports
-#############################
-
-# Python modules
-from typing import (
- Optional,
- Tuple,
- Union,
- List,
-)
-
-# Remote modules
-import torch
-from torch import nn
-from torch.nn import CrossEntropyLoss
-from transformers import (
- BartConfig,
- BartPretrainedModel,
-)
-from transformers.modeling_outputs import Seq2SeqLMOutput
-from transformers.models.bart.modeling_bart import shift_tokens_right
-
-from transformers.utils import (
- add_end_docstrings,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- logging,
- replace_return_docstrings,
-)
-
-from .bart_model import BartCustomModel
-from .config import BartCustomConfig
-from .custom_constants import BartConstants
-from .bart_generation_mixin import GenerationMixin
-from .custom_outputs import CustomSeq2SeqLMOutput
-
-logger = logging.get_logger(__name__)
-
-@add_start_docstrings(
- "The BART Model with a language modeling head. Can be used for summarization.", BartConstants.BART_START_DOCSTRING
-)
-class BartCustomForConditionalGeneration(BartPretrainedModel, GenerationMixin):
- base_model_prefix = "model"
- _keys_to_ignore_on_load_missing = [r"final_logits_bias", r"lm_head\.weight"]
-
- def __init__(self, config: BartCustomConfig):
- super().__init__(config)
- self.model = BartCustomModel(config)
- self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
- self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_encoder(self):
- return self.model.get_encoder()
-
- def get_decoder(self):
- return self.model.get_decoder()
-
- def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding:
- new_embeddings = super().resize_token_embeddings(new_num_tokens)
- self._resize_final_logits_bias(new_num_tokens)
- return new_embeddings
-
- def _resize_final_logits_bias(self, new_num_tokens: int) -> None:
- old_num_tokens = self.final_logits_bias.shape[-1]
- if new_num_tokens <= old_num_tokens:
- new_bias = self.final_logits_bias[:, :new_num_tokens]
- else:
- extra_bias = torch.zeros((1, new_num_tokens - old_num_tokens), device=self.final_logits_bias.device)
- new_bias = torch.cat([self.final_logits_bias, extra_bias], dim=1)
- self.register_buffer("final_logits_bias", new_bias)
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- @add_start_docstrings_to_model_forward(BartConstants.BART_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=BartConstants.CONFIG_FOR_DOC)
- @add_end_docstrings(BartConstants.BART_GENERATION_EXAMPLE)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- decoder_head_mask: Optional[torch.Tensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- encoder_outputs: Optional[List[torch.FloatTensor]] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- input_commonsense_relations: Optional[torch.Tensor] = None,
- reduce_ce=True,
- ) -> Union[Tuple, CustomSeq2SeqLMOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
- config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
-
- Returns:
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if labels is not None:
- if use_cache:
- logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.")
- use_cache = False
- if decoder_input_ids is None and decoder_inputs_embeds is None:
- decoder_input_ids = shift_tokens_right(
- labels, self.config.pad_token_id, self.config.decoder_start_token_id
- )
- outputs = self.model(
- input_ids,
- attention_mask=attention_mask,
- decoder_input_ids=decoder_input_ids,
- encoder_outputs=encoder_outputs,
- decoder_attention_mask=decoder_attention_mask,
- head_mask=head_mask,
- decoder_head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- decoder_inputs_embeds=decoder_inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- relation_inputs=input_commonsense_relations
- )
- lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias
-
- masked_lm_loss = None
- if labels is not None:
-            loss_fct = CrossEntropyLoss(reduction='mean' if reduce_ce else 'none', ignore_index=self.config.pad_token_id)  # ignore padding tokens; 'reduction' replaces the deprecated 'reduce' flag with the same behaviour
- masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- output = (lm_logits,) + outputs[1:]
- return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
-
- return CustomSeq2SeqLMOutput(
- loss=masked_lm_loss,
- logits=lm_logits,
- past_key_values=outputs.past_key_values,
- decoder_hidden_states=outputs.decoder_hidden_states,
- decoder_attentions=outputs.decoder_attentions,
- cross_attentions=outputs.cross_attentions,
- encoder_last_hidden_state=outputs.encoder_last_hidden_state,
- encoder_hidden_states=outputs.encoder_hidden_states,
- encoder_attentions=outputs.encoder_attentions,
- head_mask=outputs.encoder_head_mask
- )
-
- def prepare_inputs_for_generation(
- self,
- decoder_input_ids,
- past=None,
- attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- use_cache=None,
- encoder_outputs=None,
- **kwargs
- ):
- # cut decoder_input_ids if past is used
- if past is not None:
- decoder_input_ids = decoder_input_ids[:, -1:]
-
- return {
- "input_ids": None, # encoder_outputs is defined. input_ids not needed
- "encoder_outputs": encoder_outputs,
- "past_key_values": past,
- "decoder_input_ids": decoder_input_ids,
- "attention_mask": attention_mask,
- "head_mask": head_mask,
- "decoder_head_mask": decoder_head_mask,
- "cross_attn_head_mask": cross_attn_head_mask,
- "use_cache": use_cache, # change this to avoid caching (presumably for debugging)
- }
-
- def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
- return shift_tokens_right(labels, self.config.pad_token_id, self.config.decoder_start_token_id)
-
- @staticmethod
- def _reorder_cache(past, beam_idx):
- reordered_past = ()
- for layer_past in past:
- # cached cross_attention states don't have to be reordered -> they are always the same
- reordered_past += (
- tuple(past_state.index_select(0, beam_idx) for past_state in layer_past[:2]) + layer_past[2:],
- )
- return reordered_past
diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/misc/tpu_lib.py b/spaces/NCTCMumbai/NCTC/models/official/utils/misc/tpu_lib.py
deleted file mode 100644
index 4d4cddb1c6b015091ed2da57df49277e3008c252..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/utils/misc/tpu_lib.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Initializes TPU system for TF 2.0."""
-
-import tensorflow as tf
-
-
-def tpu_initialize(tpu_address):
- """Initializes TPU for TF 2.0 training.
-
- Args:
- tpu_address: string, bns address of master TPU worker.
-
- Returns:
- A TPUClusterResolver.
- """
- cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
- tpu=tpu_address)
- if tpu_address not in ('', 'local'):
- tf.config.experimental_connect_to_cluster(cluster_resolver)
- tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
- return cluster_resolver
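
A hedged usage sketch for `tpu_initialize`: pass `'local'` for an attached TPU VM or a grpc/bns address for a remote worker, then build a distribution strategy from the returned resolver (TF >= 2.3 exposes `tf.distribute.TPUStrategy`; older 2.x releases use `tf.distribute.experimental.TPUStrategy`).

```python
import tensorflow as tf

resolver = tpu_initialize('local')
strategy = tf.distribute.TPUStrategy(resolver)   # tf.distribute.experimental.TPUStrategy on TF < 2.3
print('TPU replicas:', strategy.num_replicas_in_sync)
```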
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/factory.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/factory.py
deleted file mode 100644
index b140416dfdba90420f99a8bcb3b07cc04a63cc3e..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/factory.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Factory to build detection model."""
-
-
-from official.vision.detection.modeling import maskrcnn_model
-from official.vision.detection.modeling import retinanet_model
-from official.vision.detection.modeling import shapemask_model
-
-
-def model_generator(params):
- """Model function generator."""
- if params.type == 'retinanet':
- model_fn = retinanet_model.RetinanetModel(params)
- elif params.type == 'mask_rcnn':
- model_fn = maskrcnn_model.MaskrcnnModel(params)
- elif params.type == 'shapemask':
- model_fn = shapemask_model.ShapeMaskModel(params)
- else:
-    raise ValueError('Model %s is not supported.' % params.type)
-
- return model_fn
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/common_modules.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/common_modules.py
deleted file mode 100644
index 9c9c2097d2398ec78cae5e1265478f804860f944..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/common_modules.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Common modeling utilities."""
-from __future__ import absolute_import
-from __future__ import division
-# from __future__ import google_type_annotations
-from __future__ import print_function
-
-import numpy as np
-import tensorflow as tf
-import tensorflow.compat.v1 as tf1
-from typing import Text, Optional
-
-from tensorflow.python.tpu import tpu_function
-
-
-@tf.keras.utils.register_keras_serializable(package='Vision')
-class TpuBatchNormalization(tf.keras.layers.BatchNormalization):
- """Cross replica batch normalization."""
-
- def __init__(self, fused: Optional[bool] = False, **kwargs):
- if fused in (True, None):
- raise ValueError('TpuBatchNormalization does not support fused=True.')
- super(TpuBatchNormalization, self).__init__(fused=fused, **kwargs)
-
- def _cross_replica_average(self, t: tf.Tensor, num_shards_per_group: int):
- """Calculates the average value of input tensor across TPU replicas."""
- num_shards = tpu_function.get_tpu_context().number_of_shards
- group_assignment = None
- if num_shards_per_group > 1:
- if num_shards % num_shards_per_group != 0:
- raise ValueError(
- 'num_shards: %d mod shards_per_group: %d, should be 0' %
- (num_shards, num_shards_per_group))
- num_groups = num_shards // num_shards_per_group
- group_assignment = [[
- x for x in range(num_shards) if x // num_shards_per_group == y
- ] for y in range(num_groups)]
- return tf1.tpu.cross_replica_sum(t, group_assignment) / tf.cast(
- num_shards_per_group, t.dtype)
-
-  def _moments(self, inputs: tf.Tensor, reduction_axes, keep_dims: bool):
- """Compute the mean and variance: it overrides the original _moments."""
- shard_mean, shard_variance = super(TpuBatchNormalization, self)._moments(
- inputs, reduction_axes, keep_dims=keep_dims)
-
- num_shards = tpu_function.get_tpu_context().number_of_shards or 1
- if num_shards <= 8: # Skip cross_replica for 2x2 or smaller slices.
- num_shards_per_group = 1
- else:
- num_shards_per_group = max(8, num_shards // 8)
- if num_shards_per_group > 1:
- # Compute variance using: Var[X]= E[X^2] - E[X]^2.
- shard_square_of_mean = tf.math.square(shard_mean)
- shard_mean_of_square = shard_variance + shard_square_of_mean
- group_mean = self._cross_replica_average(shard_mean, num_shards_per_group)
- group_mean_of_square = self._cross_replica_average(
- shard_mean_of_square, num_shards_per_group)
- group_variance = group_mean_of_square - tf.math.square(group_mean)
- return (group_mean, group_variance)
- else:
- return (shard_mean, shard_variance)
-
-
-def get_batch_norm(batch_norm_type: Text) -> tf.keras.layers.BatchNormalization:
- """A helper to create a batch normalization getter.
-
- Args:
- batch_norm_type: The type of batch normalization layer implementation. `tpu`
- will use `TpuBatchNormalization`.
-
- Returns:
-    The batch normalization layer class to instantiate: `TpuBatchNormalization` when
-    `batch_norm_type` is 'tpu', otherwise `tf.keras.layers.BatchNormalization`.
- """
- if batch_norm_type == 'tpu':
- return TpuBatchNormalization
-
- return tf.keras.layers.BatchNormalization
-
-
-def count_params(model, trainable_only=True):
- """Returns the count of all model parameters, or just trainable ones."""
- if not trainable_only:
- return model.count_params()
- else:
- return int(np.sum([tf.keras.backend.count_params(p)
- for p in model.trainable_weights]))
-
-
-def load_weights(model: tf.keras.Model,
- model_weights_path: Text,
- weights_format: Text = 'saved_model'):
- """Load model weights from the given file path.
-
- Args:
- model: the model to load weights into
- model_weights_path: the path of the model weights
- weights_format: the model weights format. One of 'saved_model', 'h5',
- or 'checkpoint'.
- """
- if weights_format == 'saved_model':
- loaded_model = tf.keras.models.load_model(model_weights_path)
- model.set_weights(loaded_model.get_weights())
- else:
- model.load_weights(model_weights_path)
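
A hedged usage sketch for `get_batch_norm` and `count_params` with a throwaway Keras model; `get_batch_norm` returns a layer class, which is instantiated like any other Keras layer.

```python
import tensorflow as tf

batch_norm_cls = get_batch_norm('tpu')   # TpuBatchNormalization; any other string falls back to BatchNormalization
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16),
    batch_norm_cls(),
    tf.keras.layers.Dense(1),
])
print('trainable params:', count_params(model))
print('all params:', count_params(model, trainable_only=False))
```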
diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/fsns.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/fsns.py
deleted file mode 100644
index c7203ffcff972207795b4ef5b1e755d35559033a..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/fsns.py
+++ /dev/null
@@ -1,183 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2017 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Configuration to read FSNS dataset https://goo.gl/3Ldm8v."""
-
-import os
-import re
-import tensorflow as tf
-from tensorflow.contrib import slim
-import logging
-
-DEFAULT_DATASET_DIR = os.path.join(os.path.dirname(__file__), 'data', 'fsns')
-
-# The dataset configuration, should be used only as a default value.
-DEFAULT_CONFIG = {
- 'name': 'FSNS',
- 'splits': {
- 'train': {
- 'size': 1044868,
- 'pattern': 'train/train*'
- },
- 'test': {
- 'size': 20404,
- 'pattern': 'test/test*'
- },
- 'validation': {
- 'size': 16150,
- 'pattern': 'validation/validation*'
- }
- },
- 'charset_filename': 'charset_size=134.txt',
- 'image_shape': (150, 600, 3),
- 'num_of_views': 4,
- 'max_sequence_length': 37,
- 'null_code': 133,
- 'items_to_descriptions': {
- 'image': 'A [150 x 600 x 3] color image.',
- 'label': 'Characters codes.',
- 'text': 'A unicode string.',
- 'length': 'A length of the encoded text.',
- 'num_of_views': 'A number of different views stored within the image.'
- }
-}
-
-
-def read_charset(filename, null_character=u'\u2591'):
- """Reads a charset definition from a tab separated text file.
-
- charset file has to have format compatible with the FSNS dataset.
-
- Args:
- filename: a path to the charset file.
- null_character: a unicode character used to replace '