diff --git a/spaces/0x7194633/mbrat-ru-sum/README.md b/spaces/0x7194633/mbrat-ru-sum/README.md deleted file mode 100644 index 3be7e55dd99d461d88df8763f1af8a1fcaa40155..0000000000000000000000000000000000000000 --- a/spaces/0x7194633/mbrat-ru-sum/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mbrat Ru Sum -emoji: 🦀 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1 The Best Bluetooth Software for Windows and Mobile Devices.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1 The Best Bluetooth Software for Windows and Mobile Devices.md deleted file mode 100644 index 85c0afb5decf993193749684c506dda38699cbec..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1 The Best Bluetooth Software for Windows and Mobile Devices.md +++ /dev/null @@ -1,156 +0,0 @@ - -

What is BlueSoleil 6.4.275.0WithMobile?

-

BlueSoleil is a Bluetooth driver and software suite that allows you to easily connect to your Bluetooth devices, such as headsets, mobile phones, mice, and GPS receivers.

-

BlueSoleil 6.4.275.0WithMobile is a special version of BlueSoleil that comes with a mobile phone management software called Mobile Phone Tool.

-

BlueSoleil 6.4.275.0WithMobile Serial Number - 24 1


Download File: https://byltly.com/2uKxmj



-

With this version, you can not only connect your Bluetooth devices, but also manage your mobile phone data, such as contacts, messages, photos, music and videos.

-

You can also use your mobile phone as a remote control for your computer, or transfer files between your phone and computer via Bluetooth.

-

In this article, we will show you how to download, install, use and activate BlueSoleil 6.4.275.0WithMobile, as well as some tips and tricks for troubleshooting common problems.

-

How to download and install BlueSoleil 6.4.275.0WithMobile?

-

To download and install BlueSoleil 6.4.275.0WithMobile, you need to follow these steps:

-
  1. Go to https://www.bluesoleil.com/products/S0001201005190001.html and click on the "Download" button.
  2. Save the file "BlueSoleil_6_4_275_0_with_Mobile.zip" on your computer.
  3. Extract the file using a zip extractor program, such as WinZip or WinRAR.
  4. Open the folder "BlueSoleil_6_4_275_0_with_Mobile" and double-click the "setup.exe" file.
  5. Follow the instructions on the screen to complete the installation process.
  6. Restart your computer after the installation is finished.
-
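If you prefer scripting, the extraction step above can also be done with Python's standard zipfile module. This is a minimal sketch; the archive name is the one from the download step, so adjust the path to wherever you saved the file:

```python
import zipfile
from pathlib import Path

def extract_zip(archive_path, dest_dir=None):
    """Unpack a .zip archive into dest_dir (defaults to a folder named after the archive)."""
    archive = Path(archive_path)
    dest = Path(dest_dir) if dest_dir else archive.with_suffix("")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)  # creates the destination folder if needed
    return dest

# Example (archive name from the steps above; adjust the path as needed):
# extract_zip("BlueSoleil_6_4_275_0_with_Mobile.zip")
```

This does the same thing as "Extract Here" in WinZip or WinRAR, just without the GUI.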

What are the benefits of using BlueSoleil 6.4.275.0WithMobile?

-

Using BlueSoleil 6.4.275.0WithMobile has many benefits, such as:

- -

What are the drawbacks of using BlueSoleil 6.4.275.0WithMobile?

-

Using BlueSoleil 6.4.275.0WithMobile also has some drawbacks, such as:

-


- -

However, these drawbacks can be overcome by following some tips and tricks that we will share in the next sections.

-

How to get a serial number for BlueSoleil 6.4.275.0WithMobile?

-

To get a serial number for BlueSoleil 6.4.275.0WithMobile, you need to follow these steps:

-
  1. Go to https://www.bluesoleil.com/products/S0001201005190001.html and click on the "Buy Now" button.
  2. Select your payment method and fill in the required information.
  3. Confirm your order and complete the payment process.
  4. You will receive an email with your serial number and a download link for the software.
  5. Copy the serial number and paste it into the activation window of the software.
-

Congratulations! You have successfully activated BlueSoleil 6.4.275.0WithMobile and unlocked all the features.

-

Why do you need a serial number for BlueSoleil 6.4.275.0WithMobile?

-

You need a serial number for BlueSoleil 6.4.275.0WithMobile because:

- -

Where can you find a serial number for BlueSoleil 6.4.275.0WithMobile?

-

You can find a serial number for BlueSoleil 6.4.275.0WithMobile in these places:

- -

However, we recommend that you only use the first option, as it is the safest and most reliable way of getting a serial number for BlueSoleil 6.4.275.0WithMobile.

-

The second option may not work if you have lost or damaged your disc or cover, or if you have bought a pirated copy of the software.

-

The third option may not work if the serial number is invalid, expired, blocked or already used by someone else, or if the keygen program contains viruses or malware that can harm your computer.

-

How to enter a serial number for BlueSoleil 6.4.275.0WithMobile?

-

To enter a serial number for BlueSoleil 6.4.275.0WithMobile, you need to follow these steps:

-
  1. Launch BlueSoleil from your desktop or Start menu.
  2. Click on the "Help" menu and select "Activate BlueSoleil".
  3. A new window will open asking you to enter your serial number.
  4. Copy and paste your serial number into the text box and click on the "Activate" button.
  5. Wait for the activation process to complete.
  6. A message will appear confirming that the activation was successful.
-

Congratulations! You have successfully entered your serial number for BlueSoleil 6.4.275.0WithMobile and activated the software.

-

How to troubleshoot common problems with BlueSoleil 6.4.275.0WithMobile?

-

Sometimes, you may encounter some problems with BlueSoleil 6.4.275.0WithMobile, such as:

- -

Don't worry, these problems can be fixed by following some tips and tricks, such as:

- -

If these tips and tricks do not work, you can also contact customer support for BlueSoleil 6.4.275.0WithMobile for further assistance.

-

How to contact customer support for BlueSoleil 6.4.275.0WithMobile?

-

If you have any questions or feedback about BlueSoleil 6.4.275.0WithMobile, you can contact customer support in these ways:

- -

The customer support team of BlueSoleil is friendly and professional, and will try to help you as soon as possible.

-

Conclusion

-

BlueSoleil 6.4.275.0WithMobile is a powerful and versatile Bluetooth driver and software that allows you to connect and manage your Bluetooth devices with ease.

-

With this software, you can enjoy wireless audio, file transfer, mobile phone management and remote control functions with your Bluetooth devices.

-

You can also activate the software with a serial number and unlock all the features and functions.

-

If you encounter any problems with the software, you can follow some tips and tricks or contact customer support for help.

-

If you are looking for a Bluetooth solution that is easy to use and has a lot of features, BlueSoleil 6.4.275.0WithMobile is a great choice for you.

-

So what are you waiting for? Download and install BlueSoleil 6.4.275.0WithMobile today and enjoy the wireless freedom!

-

FAQs

-

Q: What are the system requirements for BlueSoleil 6.4.275.0WithMobile?

-

A: The system requirements for BlueSoleil 6.4.275.0WithMobile are:

- -

Q: How many Bluetooth devices can I connect with BlueSoleil 6.4.275.0WithMobile?

-

A: You can connect up to 17 Bluetooth devices at the same time with BlueSoleil 6.4.275.0WithMobile.

-

Q: How long is the trial period for BlueSoleil 6.4.275.0WithMobile?

-

A: The trial period for BlueSoleil 6.4.275.0WithMobile is 30 days. During the trial period, you can use all the features and functions of the software, but you will see a watermark on the screen and hear a voice reminder every few minutes.

-

Q: How much does BlueSoleil 6.4.275.0WithMobile cost?

-

A: BlueSoleil 6.4.275.0WithMobile costs $27.99 USD for a single license. You can buy it online from the official website of BlueSoleil or from other authorized resellers.

-

Q: Is BlueSoleil 6.4.275.0WithMobile safe and reliable?

-

A: Yes, BlueSoleil 6.4.275.0WithMobile is safe and reliable. It has been tested and certified by various organizations, such as Microsoft, Intel, Broadcom and IVT Corporation. It has also received positive reviews and ratings from many users and experts.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Darkstalkers Collection (PC) Download Everything You Need to Know About the Legendary Fighting Game.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Darkstalkers Collection (PC) Download Everything You Need to Know About the Legendary Fighting Game.md deleted file mode 100644 index 10f484eadd5177aa55da270c00e10a9f9566228c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Darkstalkers Collection (PC) Download Everything You Need to Know About the Legendary Fighting Game.md +++ /dev/null @@ -1,142 +0,0 @@ -
-

Darkstalkers Collection (PC) Download: How to Play the Classic Capcom Fighting Games on Your Computer

-

If you are a fan of 2D fighting games, you have probably heard of Darkstalkers, the iconic series by Capcom that features a cast of monstrous and supernatural characters. From vampires and werewolves to zombies and mummies, Darkstalkers has something for everyone who loves dark fantasy and horror themes.

-

Darkstalkers was first released in arcades in 1994, and since then it has spawned several sequels, spin-offs, comics, anime, and merchandise. However, despite its popularity and cult status, Darkstalkers has not seen a new game in over a decade. The last official release was Darkstalkers Resurrection, a compilation of two classic titles that came out in 2013 for PlayStation 3 and Xbox 360.

-

Darkstalkers Collection (PC) download


Download File: https://byltly.com/2uKyGt



-

But don't despair, because there is still a way to enjoy Darkstalkers on your PC. In fact, there are two options that you can choose from depending on your preference and budget. In this article, we will show you how to download Darkstalkers Collection on PC and how to play it like a pro.

-

How to Download Darkstalkers Collection on PC

-

Darkstalkers Collection is not an official name, but rather a term that we use to refer to any compilation of Darkstalkers games that you can play on your PC. There are two main options that you can choose from:

-

Option 1: Buy Capcom Fighting Collection on Steam

-

If you want the most convenient and legal way to play Darkstalkers on your PC, you can buy Capcom Fighting Collection on Steam. This is a bundle of ten arcade games by Capcom that includes four titles from the Darkstalkers series:

- Darkstalkers: The Night Warriors
- Night Warriors: Darkstalkers' Revenge
- Vampire Savior: The Lord of Vampire
- Vampire Hunter 2: Darkstalkers' Revenge / Vampire Savior 2: The Lord of Vampire

To buy and install Capcom Fighting Collection on Steam, you need to follow these steps:

-
  1. Create a Steam account if you don't have one already.
  2. Go to the official page of Capcom Fighting Collection on Steam.
  3. Click on "Add to Cart" and proceed to checkout.
  4. Pay $39.99 using your preferred payment method.
  5. Download and install Capcom Fighting Collection on your PC.
  6. Launch Capcom Fighting Collection from your Steam library.
-

To switch between different games in Capcom Fighting Collection, you need to follow these steps:

-

-
  1. Select "Game Select" from the main menu.
  2. Select the game that you want to play from the list.
  3. Select "Play Game" or "Online Play" depending on whether you want to play offline or online.
  4. Select your character and mode from the game menu.
  5. Enjoy playing Darkstalkers!
-

To play online and access the museum mode in Capcom Fighting Collection, you need to follow these steps:

-
  1. Select "Online Play" from the main menu or the game select menu.
  2. Select "Ranked Match" or "Lobby Match" depending on whether you want to play competitively or casually with other players.
  3. Select your region, game title, character, mode, and other settings.
  4. Wait for an opponent or join an existing lobby.
  5. Have fun playing online!
  6. Select "Museum" from the main menu or the game select menu.
  7. Select "Gallery" or "Sound Player" depending on whether you want to view illustrations or listen to music from the games.
  8. Browse through hundreds of artworks and tracks from the arcade versions of each title.
-

Option 2: Download Darkstalkers Resurrection from Internet Archive

-

If you don't want to spend money or if you prefer a more retro experience, you can download Darkstalkers Resurrection from Internet Archive. This is a compilation of two classic titles that was released in 2013 for PlayStation 3 and Xbox 360:

- Night Warriors: Darkstalkers' Revenge
- Darkstalkers 3 (Vampire Savior)

To download and extract Darkstalkers Resurrection from Internet Archive, you need to follow these steps:

-
  1. Create an Internet Archive account if you don't have one already.
  2. Go to the official page of Darkstalkers Resurrection on Internet Archive.
  3. Click on "DOWNLOAD OPTIONS" and select "RAR".
  4. Download darkstalkers-ressurection.rar (4.7 GB) using your preferred download manager.
  5. Extract darkstalkers-ressurection.rar using WinRAR or any other software that can handle RAR files.
  6. You will get two files: DARKSTALKERS_RESSURECTION.iso (4.7 GB) and DARKSTALKERS_RESSURECTION.dvd (4 KB).
-

To run Darkstalkers Resurrection on your PC using an emulator, you need to follow these steps:

-
  1. Download Xenia, an emulator that can run Xbox 360 games on PC.
  2. Extract xenia-master.zip (14 MB) using WinRAR or any other software that can handle ZIP files.
  3. You will get a folder called xenia-master with several files inside it.
  4. Run Xenia and load the DARKSTALKERS_RESSURECTION.iso file, then wait until you reach the game select menu.
  5. Select "Play Game" or "Online Play" from the game select menu.
  6. Select "Arcade Mode" or "Versus Mode" from the game menu.
  7. Select your character from the character select screen. You can also select a different color scheme by pressing different buttons.
  8. Notice that some characters are different from their original versions in Night Warriors or Vampire Savior. For example, Morrigan has a new move called Soul Eraser, and Jedah has a new move called Prova di Servo.
  9. Try out their new moves and see how they affect their gameplay and strategies.
-

Conclusion

-

Darkstalkers is one of the most beloved and influential 2D fighting games of all time. It has a unique and diverse roster of characters, a fast and fluid gameplay system, and a dark and stylish aesthetic. If you want to experience this classic series on your PC, you have two options: buy Capcom Fighting Collection on Steam or download Darkstalkers Resurrection from Internet Archive.

-

Both options have their pros and cons, but they both allow you to play four titles from the Darkstalkers series: Darkstalkers: The Night Warriors, Night Warriors: Darkstalkers' Revenge, Vampire Savior: The Lord of Vampire, and Vampire Hunter 2: Darkstalkers' Revenge/Vampire Savior 2: The Lord of Vampire. You can also play online with other players and access the museum mode with hundreds of artworks and tracks from the games.

-

Whether you are a beginner or an expert, you can enjoy Darkstalkers on your PC by learning the basics and the advanced techniques of the gameplay. You can also master the unique abilities of each character and their variants by reading their profiles and trying out their moves. Darkstalkers is a game that rewards skill, creativity, and experimentation.

-

If you are ready to enter the world of Darkstalkers, don't hesitate to download Darkstalkers Collection on PC today. You won't regret it!

-

FAQs

- -

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Update Kostum Pes 6 Menjadi Pes 13 Langkah-Langkah Instalasi dan Konfigurasi Update Jersey Terbaru untuk PES 6.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Update Kostum Pes 6 Menjadi Pes 13 Langkah-Langkah Instalasi dan Konfigurasi Update Jersey Terbaru untuk PES 6.md deleted file mode 100644 index b487f3702cf80aa66cc86e4ae971325b8e420a78..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Update Kostum Pes 6 Menjadi Pes 13 Langkah-Langkah Instalasi dan Konfigurasi Update Jersey Terbaru untuk PES 6.md +++ /dev/null @@ -1,120 +0,0 @@ - -

How to Download and Install the Latest Costume Update for PES 6 to PES 13

-

Introduction

-

Pro Evolution Soccer (PES) is a popular soccer video game series that has been around since 2001. The game features realistic graphics, gameplay, and physics, as well as licensed teams, players, and stadiums from various leagues and competitions around the world.

-

One of the aspects that makes PES stand out from other soccer games is its customization options. You can edit and create your own teams, players, stadiums, logos, balls, boots, and more. You can also download and install updates and mods from other users that enhance or change various aspects of the game.

-

download update kostum pes 6 menjadi pes 13


Download File - https://byltly.com/2uKz1l



-

One of the most common updates that PES fans look for is costume updates. Costumes are the outfits that players wear on the field, such as jerseys, shorts, socks, gloves, etc. Costume updates change the appearance of these outfits to match the latest designs and trends of real-life soccer teams.

-

Updating costumes can make your game look more realistic and up-to-date. It can also make your game more fun and enjoyable by adding variety and diversity to your teams and players. You can choose from different styles, colors, patterns, logos, sponsors, etc.

-

In this article, we will show you how to download and install the latest costume update for PES 6 to PES 13. This update will transform your old PES 6 costumes into new PES 13 costumes. You will be able to play with updated costumes for over 200 teams from various leagues and competitions around the world.

-

Before we start, you will need some requirements for updating costumes. You will need:

- -

How to Download the Update File

-

The first step is to download the update file that contains the new costumes for PES 6. The update file is fairly large, at about 1 GB. You can find it on various websites that offer PES 6 updates and mods.

-

One of these websites is tribe54.com. Tribe54.com is a community platform that allows users to share their passion for soccer games. You can find many updates and mods for different versions of PES on this website.

-


-

To download the update file from tribe54.com, follow these steps:

-
  1. Go to this link.
  2. Click on the "Download" button at the bottom of the page.
  3. Wait a few seconds until a new page opens.
  4. Click on the "Download" button again at the top right corner of the page.
  5. Choose a location on your PC where you want to save the file.
  6. Wait for the download to finish.
-

Alternatively, you can also download the update file from other websites such as pes-patch.com or pesnewupdate.com. Just make sure you download the correct file that matches your version of PES 6.

-

After downloading the file, you should check its size and integrity. The file size should be around 1 GB. The file name should be "Update Kostum Pes 6 Menjadi Pes 13.rar". The file type should be RAR.

-

To check these details, you can right-click on the file icon and select "Properties". A window will pop up that shows you this information.

-

If everything looks fine, you can proceed to extract the file. If not, you may have downloaded a corrupted or incomplete file. In that case, you should delete it and try downloading it again from another source.

-
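If you would rather check the download from a script than from the Properties dialog, a short Python sketch can report the size along with a SHA-256 checksum. Note the checksum is only useful if the site you downloaded from publishes one to compare against, which is an assumption here; the file name is the one given earlier:

```python
import hashlib
from pathlib import Path

def describe_file(path):
    """Return (size_in_mb, sha256_hex) for a downloaded file."""
    p = Path(path)
    size_mb = p.stat().st_size / (1024 * 1024)
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MB chunks
            digest.update(chunk)
    return size_mb, digest.hexdigest()

# Example (file name from the article; a healthy download should be roughly 1000 MB):
# size_mb, sha = describe_file("Update Kostum Pes 6 Menjadi Pes 13.rar")
# print(f"{size_mb:.0f} MB  sha256={sha}")
```

Reading in chunks keeps memory use low even for a 1 GB archive.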

How to Extract the File

-

The next step is to extract the file that you have downloaded. The file is compressed in RAR format. This means that it contains multiple files inside it that are packed together to reduce its size.

-

To extract these files, you will need a file extractor program such as WinRAR or 7-Zip. These programs allow you to open and decompress RAR files easily.

-

To extract the file using WinRAR, follow these steps:

-
  1. Right-click on the file icon and select "Extract Here".
  2. Wait for WinRAR to extract all files into a new folder named "Update Kostum Pes 6 Menjadi Pes 13".
  3. Open this folder and check its contents. You should see several subfolders named "0_text", "e_text", "0_sound", etc., as well as some files named "PES6.exe", "settings.exe", etc.
-

To extract using 7-Zip instead of WinRAR follow these steps:

-
    -
  1. Right-click on file icon then select "7-Zip" then select "Extract Here".
  2. -
  3. Wait for 7-Zip extract all files into new folder named "Update Kostum Pes 6 Menjadi Pes 13".
  4. -
  5. Open this folder then check its contents same way as above.
  6. -
-

How to Install the Update File

-

The final step is to install the update file that you have extracted into your PES 6 folder. This will overwrite the original files with new ones that contain the updated costumes.

-

Before you do this, though, make sure to back up your original files in case something goes wrong or you want to revert to the old costumes later.

-

To back up your original files, follow these steps:

-
  1. Navigate to your PES 6 folder, where the game is installed on your PC, usually located at C:\Program Files\KONAMI\Pro Evolution Soccer 6\.
  2. Select all the files and folders inside, then copy them to another location on your PC, such as the Desktop or Documents.
  3. Rename the copied folder to something like "PES 6 Backup" so you know what it is later.
-

To install the update file, follow these steps:

-
  1. Navigate to the folder where you extracted the update file earlier, named "Update Kostum Pes 6 Menjadi Pes 13".
  2. Select all the files and folders inside, then copy them to the same location where the game is installed, overwriting the existing ones; confirm the replacement when prompted.
  3. Wait for the copy process to finish.
  4. Run the game and enjoy the new costumes.
-
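The backup and install procedures above amount to two folder copies, which can be scripted if you prefer. This is a sketch only, under the assumption that the paths match the defaults mentioned in the article; adjust them to your own setup:

```python
import shutil
from pathlib import Path

def backup_then_install(game_dir, update_dir, backup_dir):
    """Back up the game folder, then copy the extracted update over it."""
    game_dir, update_dir, backup_dir = Path(game_dir), Path(update_dir), Path(backup_dir)
    if backup_dir.exists():
        raise FileExistsError(f"Backup already exists: {backup_dir}")
    shutil.copytree(game_dir, backup_dir)                      # the backup step
    # dirs_exist_ok (Python 3.8+) lets copytree overwrite files that already exist.
    shutil.copytree(update_dir, game_dir, dirs_exist_ok=True)  # the overwrite step

# Hypothetical paths; adjust to your setup, and run elevated if the game
# lives under Program Files:
# backup_then_install(r"C:\Program Files\KONAMI\Pro Evolution Soccer 6",
#                     r"C:\Users\you\Downloads\Update Kostum Pes 6 Menjadi Pes 13",
#                     r"C:\Users\you\Desktop\PES 6 Backup")
```

To revert to the old costumes later, copy the backup folder back over the game folder the same way.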

Conclusion

-

Congratulations! You have successfully downloaded and installed the latest costume update for PES 6 to PES 13. You can now play with updated costumes for over 200 teams from various leagues and competitions around the world.

-

Updating costumes can make your game look more realistic and up-to-date. It can also make your game more fun and enjoyable by adding variety and diversity to your teams and players. You can choose from different styles, colors, patterns, logos, sponsors, etc.

-

Here are some tips and tricks for using the update:

- -

We hope you enjoyed this article and found it helpful. If you have any feedback or questions, please feel free to leave a comment below. We would love to hear from you!

-

FAQs

-

Q: Can I use this update for other versions of PES?

-

A: No, this update is only compatible with PES 6. If you try to use it for other versions of PES, you may encounter errors or bugs that may damage your game or PC.

-

Q: Will this update affect my saved games or online play?

-

A: No, this update only changes the appearance of the costumes, not the gameplay or data. Your saved games and online play will not be affected by this update.

-

Q: What if I encounter any errors or bugs after installing the update?

-

A: You can try to reinstall the update or restore your original files from the backup. If that does not work, you can contact the creator of the update or visit some websites and forums that offer support and solutions for PES 6 issues.

-

Q: Where can I find more updates and mods for PES 6?

-

A: You can visit some popular websites and forums that offer PES 6 updates and mods, such as pes-patch.com, pesnewupdate.com, or evo-web.co.uk. You can find updates and mods for various aspects of the game, such as teams, players, stadiums, balls, boots, logos, etc.

-

Q: How can I create my own costumes for PES 6?

-

A: You can use some tools and software that allow you to edit and create costumes for PES 6, such as Kitserver, GDB Manager, or Photoshop. You can find tutorials and guides on how to use these tools and software on some websites and forums that offer PES 6 updates and mods.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download manley massive passive eq plugin.rar 16 and master your tracks with the synthesis of the best passive equalizers of the last 70 years.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download manley massive passive eq plugin.rar 16 and master your tracks with the synthesis of the best passive equalizers of the last 70 years.md deleted file mode 100644 index 0ab4442ec55dd33328803f1c6cb4acb4a02e16e3..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download manley massive passive eq plugin.rar 16 and master your tracks with the synthesis of the best passive equalizers of the last 70 years.md +++ /dev/null @@ -1,195 +0,0 @@ -
-

Download manley massive passive eq plugin.rar 16: How to Get the Best Tube EQ for Mixing and Mastering

-

If you are looking for a high-end tube EQ that can shape your tracks and masters with musical curves and unparalleled clarity, you might want to consider downloading manley massive passive eq plugin.rar 16. This is a plugin emulation of one of the most popular and sought-after passive EQs in the audio industry, the Manley Massive Passive EQ. In this article, we will explain what this EQ is, why you should download it, and how to use it effectively.

-

What is Manley Massive Passive EQ?

-

The Manley Massive Passive EQ is a two-channel, four-band tube EQ that was designed by Manley Labs in 1998. It is based on the design strengths of various classic EQs, such as console, parametric, graphic, and Pultec EQs. It uses only passive components, such as resistors, inductors, and capacitors, to create all frequency changes. This gives it a natural and organic sound that is different from active or digital EQs.

-

Download manley massive passive eq plugin.rar 16


Download: https://byltly.com/2uKxF8



-

The history and features of the hardware EQ

-

The Manley Massive Passive EQ was created by EveAnna Manley and Hutch Hutchison, who wanted a versatile, musical EQ that could handle any source material. They combined elements from different types of EQs, such as shelving filters, bell curves, resonant filters, and cut filters, and added some unique features of their own.

All these elements allow the user to create complex, musical EQ shapes that can enhance or transform any sound source.

-

The benefits and drawbacks of the hardware EQ

-

The Manley Massive Passive EQ has been praised by many engineers and producers for its sound quality, flexibility, and character.

However, like any hardware device, it also has some drawbacks that might limit its usability or availability.

These drawbacks might make it difficult or impractical for some users to own or use this hardware EQ in their studios or projects.

-

The official UAD plugin emulation of the hardware EQ

-

To address these drawbacks and make this hardware EQ more accessible and convenient for users, Universal Audio (UAD) developed an official plugin emulation of the Manley Massive Passive EQ in 2010. The plugin was modeled by UAD engineers with the help of Manley Labs, who provided schematics, measurements, samples, and feedback. It captures every aspect of the hardware's behavior, from its unique filter curves to its multiple band interdependencies, right down to the tube amplifier distortion and the all-important transformer/inductor hysteresis.

Why download manley massive passive eq plugin.rar 16?

-

Downloading manley massive passive eq plugin.rar 16 is a great way to get the best of both worlds: the sound of the hardware EQ and the convenience of the plugin format. Here are some reasons why you should download this plugin:

-

The advantages of using the plugin version over the hardware version

-

While the hardware version of the Manley Massive Passive EQ is undoubtedly a masterpiece of audio engineering, it also has some limitations that might make it less suitable for some users or situations. The plugin version, on the other hand, overcomes these limitations: it costs far less than the hardware, takes up no rack space, requires no maintenance, and lets you run multiple instances and save and recall settings with your session.

These advantages make the plugin version more suitable for users who want to use the Manley Massive Passive EQ in different settings, such as home studios, mobile rigs, or live performances.

-

The compatibility and requirements of the plugin version

-

The plugin version of the Manley Massive Passive EQ is available for both Windows and Mac operating systems. It supports the VST, AU, AAX, and RTAS plugin formats and can be used in any DAW that supports them, such as Pro Tools, Logic Pro, Cubase, Ableton Live, FL Studio, or Reaper. However, the plugin runs on UAD DSP hardware, so you need a UAD device, enough spare DSP power, and sufficient disk space to use it.

These requirements might make it difficult or impossible for some users to use this plugin if they don't have a UAD device or enough DSP power or disk space.

-

The best sources and methods to download the plugin version

-

If you meet the requirements and want to download manley massive passive eq plugin.rar 16, you have a few options to choose from. Here are some of the best sources and methods to download this plugin:

-

The official UAD website

-

The most reliable and secure way to download manley massive passive eq plugin.rar 16 is to get it from the official UAD website. This way, you can be sure that you are getting the latest and most authentic version of the plugin, as well as the best customer support and updates. To download the plugin from the UAD website, you need to follow these steps:

-
1. Log into your UAD account, or create one if you don't have one already.
2. Go to the Manley Massive Passive EQ product page and click the "Add to Cart" button.
3. Proceed to checkout and complete your payment. The plugin costs $299, but you might be able to get it for less during a promotion or with a coupon.
4. After your payment is confirmed, go to the "My Products" section of your account and click the "Download" button for the plugin.
5. Save the file to your computer and open it to start the installation process.
6. Follow the on-screen instructions to install the plugin and authorize it with your UAD device.

Note: You can also download a 14-day free trial of the plugin from the UAD website if you want to test it before buying it.

-

The torrent websites

-

Another way to download manley massive passive eq plugin.rar 16 is to use torrent websites, which let users share files with each other over a peer-to-peer network. Torrent websites can offer some advantages over the official UAD website, such as getting the file without paying.

However, torrent websites also have serious disadvantages and risks that you should be aware of: downloads may be fake or infected with malware, cracked plugins receive no updates or support, and downloading copyrighted software without paying is illegal in most countries.

If you decide to use torrent websites to download manley massive passive eq plugin.rar 16, you need to follow these steps:

-
1. Find a reputable and trustworthy torrent website that has the file you are looking for. Some of the most popular torrent websites are The Pirate Bay, 1337x, RARBG, etc.
2. Search for "manley massive passive eq plugin.rar 16" on the website and look for a file with a high number of seeders (sources) and leechers (downloaders), as well as positive comments and ratings from other users.
3. Download a torrent client that can open and manage torrent files, such as uTorrent, BitTorrent, or qBittorrent.
4. Open the torrent file with your client and choose a location to save the file on your computer.
5. Wait for the download to finish, then open the file to start the installation process.
6. Follow the instructions on the screen to install the plugin and crack it if necessary.

Note: You might need a VPN service or a proxy server to access some torrent websites or files if they are blocked or restricted in your location.

-

How to use manley massive passive eq plugin.rar 16 effectively?

-

Now that you have downloaded manley massive passive eq plugin.rar 16, you might be wondering how to use it effectively. The Manley Massive Passive EQ plugin is a powerful and versatile tool that can help you shape your sounds in various ways. However, it also requires some knowledge and skill to use properly. Here are some tips and tricks on how to use manley massive passive eq plugin.rar 16 effectively:

-

The basic controls and functions of the plugin

-

The plugin interface of the Manley Massive Passive EQ is very similar to the hardware version, except for some minor differences. The plugin has two channels: left and right. Each channel has four bands: low, low-mid, high-mid, and high. Each band has four controls: frequency, gain, bandwidth, and filter type. There are also some global controls: input level, output level, phase invert, link mode, and bypass.

-

The frequency control allows you to select the center or corner frequency of each band. You can choose from 11 fixed frequencies for each band, ranging from 22 Hz to 27 kHz. The gain control allows you to boost or cut the selected frequency by up to 20 dB. The bandwidth control allows you to adjust the width or slope of the filter curve. You can choose from 5 fixed values for each band, ranging from narrow to wide. The filter type control allows you to switch between two modes: normal and bandpass. The normal mode offers a conventional boost or cut response, while the bandpass mode offers a narrower and steeper response that can create resonant peaks or notches.
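
To make the interaction of the frequency, gain, and bandwidth controls concrete, here is a minimal sketch of a generic parametric bell filter. It uses the standard RBJ audio-EQ-cookbook peaking biquad with an assumed 48 kHz sample rate; it illustrates how these controls behave in any parametric EQ, not Manley's or UAD's actual algorithm:

```python
import math

def peaking_biquad(f0, gain_db, q, fs=48000.0):
    """Peaking ("bell") biquad coefficients per the RBJ audio-EQ cookbook."""
    a_lin = 10 ** (gain_db / 40)            # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin
    a0, a1, a2 = 1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def gain_at(coeffs, f, fs=48000.0):
    """Magnitude response of the biquad in dB at frequency f."""
    b0, b1, b2, a1, a2 = coeffs
    z = complex(math.cos(2 * math.pi * f / fs), math.sin(2 * math.pi * f / fs))
    h = (b0 + b1 / z + b2 / z ** 2) / (1 + a1 / z + a2 / z ** 2)
    return 20 * math.log10(abs(h))

# A +6 dB boost at 3.9 kHz with a fairly narrow bandwidth (Q = 4):
coeffs = peaking_biquad(3900, 6.0, q=4.0)
print(round(gain_at(coeffs, 3900), 1))      # → 6.0 (full boost at the center)
```

At the center frequency the filter delivers the full requested boost, and a narrower bandwidth (higher Q) makes the curve fall back to 0 dB more steeply on either side.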

-

The input level control adjusts the level of the incoming signal before it reaches the EQ section; you can boost or attenuate it by up to 12 dB. The output level control adjusts the level of the outgoing signal after it passes through the EQ section, again by up to 12 dB.

The phase invert control flips the polarity of the signal for each channel, which can help you correct phase issues or create interesting effects. The link mode control ties the left and right channels together for stereo operation and offers three modes: off, L/R, and M/S. Off lets you adjust each channel independently; L/R adjusts both channels simultaneously with the same settings; M/S lets you process the mid and side signals separately with different settings.

The bypass control bypasses the EQ section for each channel, so you can compare the processed and unprocessed signals.
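
The M/S link mode is built on a simple mid/side transform. As a quick, generic illustration (ordinary stereo math, nothing specific to this plugin), encoding turns a left/right pair into sum and difference signals, and decoding restores it:

```python
def ms_encode(left, right):
    """Split an L/R sample pair into mid (sum) and side (difference) signals."""
    return (left + right) / 2, (left - right) / 2

def ms_decode(mid, side):
    """Recover the L/R pair; any EQ on mid or side happens between the two calls."""
    return mid + side, mid - side

# A stereo pair survives an encode/decode round trip:
mid, side = ms_encode(0.8, 0.2)
left, right = ms_decode(mid, side)
print(abs(left - 0.8) < 1e-12, abs(right - 0.2) < 1e-12)  # → True True
```

EQing only the mid leaves the stereo sides untouched, while boosting the side signal widens the image, which is why M/S processing is popular in mastering.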

-

The tips and tricks to get the most out of the plugin

-

The Manley Massive Passive EQ plugin is a very flexible and musical EQ that can be used for various purposes and genres. However, it also has some quirks that you need to be aware of and take advantage of, most notably the way its bands interact with one another, so adjust one band at a time and trust your ears rather than the knob positions.

The examples and presets of using the plugin on different sources

-

The Manley Massive Passive EQ plugin can be used on different sources, such as vocals, drums, guitars, bass, keyboards, synths, etc. Depending on the source and the desired result, you can use different settings and techniques to achieve various effects. Here are some examples and presets of using the plugin on different sources:

-

Vocals

-

The Manley Massive Passive EQ plugin can be used to enhance or correct vocals in various ways: to add warmth, presence, brightness, airiness, or smoothness, or to remove harshness, sibilance, muddiness, or boominess.

\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forex Hacked Pro Free Download ((LINK)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forex Hacked Pro Free Download ((LINK)).md deleted file mode 100644 index a999532803e4f067e8f7c6fca0af42c9fd49e69d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forex Hacked Pro Free Download ((LINK)).md +++ /dev/null @@ -1,30 +0,0 @@ -
-

Forex Hacked Pro Free Download: A Powerful EA for Scalping and Hedging

-

Forex Hacked Pro is an expert advisor (EA) that trades the forex market automatically and can help you make money from forex trading. It is designed for the MetaTrader 4 platform and works with any broker that supports it. In this article, we will show you how to download Forex Hacked Pro for free and how to use it to boost your forex profits.

-

forex hacked pro free download


Download: https://byltly.com/2uKyu4



-

What is Forex Hacked Pro?

-

Forex Hacked Pro is a modified version of the original Forex Hacked EA that was released in 2009. It has more features and options than the basic version, such as the ability to trade on multiple currency pairs, use different strategies, and optimize the settings for each pair. Forex Hacked Pro uses a combination of martingale and hedging techniques to increase the chances of winning trades. It also has a built-in news filter that avoids trading during high-impact news events. Forex Hacked Pro is a very profitable EA, but it also comes with a high risk of losing your account if not used properly. Therefore, it is recommended to use it with caution and withdraw your profits regularly.
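
To see why a martingale-based EA carries such a high risk, it helps to look at how position size grows during a losing streak. The numbers below (0.01 starting lots, a 2x multiplier) are assumptions for illustration, not Forex Hacked Pro's actual money-management settings:

```python
def martingale_lots(base_lot, multiplier, losses):
    """Lot size after a given number of consecutive losing trades."""
    return base_lot * multiplier ** losses

# With a 2x multiplier, exposure doubles on every consecutive loss:
for losses in range(7):
    print(losses, martingale_lots(0.01, 2, losses))
# After six straight losses the position is 64x the starting size (0.64 lots),
# so a long losing streak can wipe out the account's margin.
```

This compounding of exposure is why the article recommends using the EA with caution and withdrawing profits regularly.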

-

How to Download Forex Hacked Pro for Free?

-

You can download Forex Hacked Pro for free from various websites that offer cracked versions of the EA. However, these versions may not be reliable or safe to use, as they may contain viruses, malware, or hidden codes that can harm your computer or forex account. Therefore, it is better to download Forex Hacked Pro from the official website of Forex Hacked, where you can get the latest and updated version of the EA for a one-time fee of $329.99. This fee includes both the basic and pro versions of Forex Hacked, as well as lifetime support and updates. You can also get access to the members area where you can find detailed guides, tutorials, and optimized settings for each currency pair.

-

How to Use Forex Hacked Pro?

-

To use Forex Hacked Pro, you need to follow these steps:

-

-
1. Install MetaTrader 4 on your computer and create an account with a broker that supports MT4.
2. Download Forex Hacked Pro from the official website or another source that you trust.
3. Extract the zip file and copy the Forex Hacked Pro.ex4 file to the Experts folder of your MT4 installation directory.
4. Copy the .set files for each currency pair you want to trade to the Presets folder of your MT4 installation directory.
5. Connect your MT4 account to your broker and make sure you have enough balance to trade.
6. Open MT4 and go to Tools > Options > Expert Advisors. Check the boxes that allow automated trading and DLL imports.
7. In the Navigator window, drag the Forex Hacked Pro EA onto the chart of the currency pair you want to trade.
8. In the pop-up window with the EA's input parameters, either use the default settings or load the .set file for that pair from the Presets folder.
9. Click OK and make sure there is a smiley face in the top right corner of the chart, which means the EA is activated and ready to trade.

You have successfully installed and activated Forex Hacked Pro on your MT4 account. Now you can sit back and watch it trade for you.

-

Tips and Warnings

-
| Preset Name | Description | Settings |
| --- | --- | --- |
| Vocal Warmth | Adds warmth and body to vocals by boosting some low-mid frequencies. | Low band: 220 Hz / +6 dB / wide / normal / bell; Low-mid band: 390 Hz / +3 dB / wide / normal / bell; High-mid band: off; High band: off |
| Vocal Presence | Adds presence and clarity to vocals by boosting some high-mid frequencies. | Low band: off; Low-mid band: off; High-mid band: 3.9 kHz / +6 dB / narrow / normal / bell; High band: off |
| Vocal Brightness | Adds brightness and sparkle to vocals by boosting some high frequencies. | Low band: off; Low-mid band: off; High-mid band: off; High band: 16 kHz / +6 dB / wide / normal / shelf |
| Vocal Airiness | Adds airiness and openness to vocals by boosting some very high frequencies. | Low band: off; Low-mid band: off; High-mid band: off; High band: 27 kHz / +6 dB / wide / normal / shelf |
| Vocal Smoothness | Adds smoothness and silkiness to vocals by cutting some harsh frequencies. | Low band: off; Low-mid band: 1.5 kHz / -6 dB / narrow / normal / bell; High-mid band: 6.8 kHz / -6 dB / narrow / normal / bell; High band: off |
| Vocal De-Esser | Reduces sibilance and harshness by cutting some high frequencies with a bandpass filter. | Low band: off; Low-mid band: off; High-mid band: 8.2 kHz / -12 dB / narrow / bandpass / bell; High band: off |
| Vocal De-Mud | Removes muddiness and boominess by cutting some low frequencies with a shelf filter. | Low band: 82 Hz / -12 dB / wide / normal / shelf; Low-mid band: off; High-mid band: off; High band: off |
| Vocal De-Boom | Removes boominess and plosives by cutting some low frequencies with a bell filter. | |
| Edition | Price | Bonuses |
| --- | --- | --- |
| Standard Edition | $59.99 | 5,000 Virtual Currency; 5,000 MyTEAM Points; 5 MyCAREER Skill Boosts; MyPLAYER Clothing Capsule; 10 MyTEAM League packs (delivered one a week); 5 Heat Check packs (delivered one a week beginning at the start of the NBA season) |
| Digital Deluxe Edition | $79.99 | 35,000 Virtual Currency; 10,000 MyTEAM Points; 10 MyCAREER Skill Boosts; MyPLAYER Clothing Capsule; 10 MyTEAM League Packs (delivered one a week); 10 MyTEAM Heat Check packs (delivered one a week beginning at the start of the NBA season); 1 Sapphire MyTEAM Cover Athlete Card |
| Legend Edition | $99.99 | 100,000 Virtual Currency; 50,000 MyTEAM Points; 20 MyCAREER Skill Boosts; MyPLAYER Clothing Capsule; MyPLAYER Apparel Collection; MyPLAYER Shoe Collection; 20 MyTEAM League Packs (delivered one a week); 20 MyTEAM Heat Check Packs (delivered one a week beginning at the start of the NBA season); 5 MyTEAM Theme Packs (one per theme release across the first five releases); 2 Sapphire MyTEAM Cover Athlete Cards |

You can pre-order any of these editions from the Xbox Store or from other retailers. Pre-ordering can also earn you additional bonuses on top of those listed above.

How to download NBA 2K20 digitally on Xbox One?

-

The steps to pre-order and pre-download the game from the Xbox Store

-

If you want to download NBA 2K20 digitally on your Xbox One, you can follow these simple steps:

-
1. Go to the Xbox Store on your console or in your web browser.
2. Search for NBA 2K20 and select the edition you want to buy.
3. Add the game to your cart and proceed to checkout.
4. Enter your payment details and confirm your purchase.
5. Once the purchase is complete, you can start downloading the game on your console.
6. To pre-download the game, go to My Games & Apps and select NBA 2K20.
7. Select Manage Game and then Ready to Install.
8. Select Install All and wait for the download to finish.
9. You can check the progress of the download by going back to My Games & Apps and selecting Queue.
10. Once the download is complete, launch the game from My Games & Apps or from your Home screen.

The benefits of digital download over physical disc

-

There are some advantages of downloading NBA 2K20 digitally over buying a physical disc: you don't need to keep a disc in the drive, the game can't be lost or scratched, and you can pre-download it so it is ready to play the moment it unlocks.

What are the system requirements and file size of NBA 2K20 on Xbox One?

-

The minimum and recommended specifications for running the game smoothly

-

NBA 2K20 is a demanding game that requires a powerful console to run smoothly. Here are the minimum and recommended specifications for playing NBA 2K20 on Xbox One:

| Specification | Minimum | Recommended |
| --- | --- | --- |
| Xbox One Model | Xbox One S | Xbox One X |
| CPU | Jaguar Evolved @ 1.75 GHz (8 cores) | Jaguar Evolved @ 2.3 GHz (8 cores) |
| GPU | Radeon GCN @ 914 MHz (12 CUs) | Radeon GCN @ 1172 MHz (40 CUs) |
| RAM | 8 GB DDR3 + 32 MB ESRAM @ 68 GB/s | 12 GB GDDR5 @ 326 GB/s |
| HDD Space | 80 GB free space | 80 GB free space + external SSD for faster loading times |
| Internet Connection | Broadband, at least 3 Mbps download / 1 Mbps upload | Broadband, at least 10 Mbps download / 5 Mbps upload |
| Xbox Live Gold Membership | Required for online multiplayer modes | Required for online multiplayer modes |

The storage space needed for installing and updating the game

-

NBA 2K20 is a large game that takes up a lot of storage space on your Xbox One. The initial file size of the game is about 80GB, but it can increase with updates and patches. The latest update for NBA 2K20, which was released on June 15, 2023, added another 10GB to the file size, bringing the total to 90GB. Therefore, you need to make sure you have enough free space on your console's hard drive or on an external storage device before downloading and installing the game. You can check your available storage space by going to Settings > System > Storage on your Xbox One.

-

How to play NBA 2K20 online with friends and other players?

-

The online modes and features of NBA 2K20

-

NBA 2K20 offers a variety of online modes and features that let you play with and against other players from around the world, including MyPLAYER Nation, Neighborhood, Play Now Online, MyTEAM Unlimited, MyTEAM Triple Threat Online, MyTEAM Online Tournament, MyLEAGUE Online, Pro-Am, The Rec, and Park.

The tips and tricks to improve your online performance and experience

-

Playing NBA 2K20 online can be challenging and rewarding, but also frustrating and disappointing at times. To improve your online performance and experience, make sure your connection meets or exceeds the recommended speeds above (at least 10 Mbps download and 5 Mbps upload), keep your Xbox Live Gold membership active, and practice in the offline modes before jumping into competitive matches.

Conclusion

-

A summary of the main points and a call to action

-

NBA 2K20 is a basketball simulation game that offers a realistic and immersive experience for basketball and video game fans. You can download NBA 2K20 digitally on your Xbox One from the Xbox Store or from other retailers. You can choose from different editions and bonuses that suit your preferences and budget. You can also enjoy various online modes and features that let you play with and against other players from around the world. To improve your online performance and experience, you can follow some tips and tricks that we have shared in this article.

-

If you are ready to join the NBA 2K20 community and have some fun, you can pre-order or purchase the game today and start downloading it on your Xbox One. You can also check out the official website, social media pages, and YouTube channel of NBA 2K20 for more information, updates, and news. We hope you found this article helpful and informative. Thank you for reading and happy gaming!

-

FAQs

-

Five unique questions and answers related to NBA 2K20 Xbox One digital download

-
1. Q: Can I play NBA 2K20 offline?
   A: Yes, you can play NBA 2K20 offline in some modes, such as MyCAREER, MyLEAGUE, MyGM, Play Now, and Blacktop. However, you will need an internet connection and an Xbox Live Gold membership to play online modes, such as MyPLAYER Nation, Neighborhood, Play Now Online, MyTEAM Unlimited, MyTEAM Triple Threat Online, MyTEAM Online Tournament, MyLEAGUE Online, Pro-Am, The Rec, and Park.
2. Q: Can I transfer my progress and data from NBA 2K19 to NBA 2K20?
   A: No, you cannot. You will have to start from scratch in NBA 2K20.
3. Q: Can I play NBA 2K20 with a keyboard and mouse on Xbox One?
   A: No, you will need a compatible controller to play the game.
4. Q: Can I get a refund for NBA 2K20 if I don't like it?
   A: It depends on where you bought the game and its refund policy. If you bought it from the Xbox Store, you can request a refund within 14 days of purchase if you have not downloaded or launched the game. If you bought it from another retailer, contact them directly and follow their refund policy.
5. Q: How can I get free Virtual Currency (VC) in NBA 2K20?
   A: There are several ways to get free VC, such as playing games, completing challenges, watching ads, using the mobile app, entering locker codes, and participating in events.

\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlimited UC with PUBG Mobile Mod Apk Download Now and Play Like a Pro.md b/spaces/congsaPfin/Manga-OCR/logs/Unlimited UC with PUBG Mobile Mod Apk Download Now and Play Like a Pro.md deleted file mode 100644 index 0d1b5dcddfb33de2a475d36cc925d30075db8e59..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Unlimited UC with PUBG Mobile Mod Apk Download Now and Play Like a Pro.md +++ /dev/null @@ -1,151 +0,0 @@ -
-

Download PUBG Mobile Mod UC APK: How to Get Unlimited UC, Aimbot and Hack for Free

-

If you are a fan of online multiplayer battle royale games, you must have heard of PUBG Mobile. It is one of the most popular and addictive games in the world, with millions of players competing for the ultimate survival. But what if you want to get an edge over your opponents and enjoy some extra features that are not available in the official version? In this article, we will tell you everything you need to know about PUBG Mobile Mod UC APK, a modified version of the game that gives you unlimited UC, aimbot, hack and more. We will also show you how to download and use it safely and effectively.

-

download pubg mobile mod uc apk


Download: https://urlca.com/2uO9C8



-

What is PUBG Mobile?

-

PUBG Mobile is a mobile version of PlayerUnknown's Battlegrounds, a game that was originally released for PC in 2017. It is a multiplayer online battle royale game, where up to 100 players parachute onto an island and fight each other until only one remains. The game has various modes, maps, weapons, vehicles and items that make each match unique and exciting. You can also customize your character, join a clan, chat with friends and participate in events and tournaments.

-

Features of PUBG Mobile

-

Some of the features that make PUBG Mobile stand out from other similar games are its variety of modes and maps, its range of weapons, vehicles, and items, character customization, clans, in-game chat with friends, and regular events and tournaments.

How to play PUBG Mobile

-

To play PUBG Mobile, you need to have a compatible device and a stable internet connection. You can download the game for free from the Google Play Store or the Apple App Store. Once you install the game, you need to create an account and choose a server. Then you can select a game mode and start a match. You can either play solo or team up with other players. You can also invite your friends to join your team or join a random team. The game will match you with other players who have similar skills and preferences. Once the match starts, you will be on a plane that flies over an island. You can choose where to land by tapping on the map. You need to find weapons, items and vehicles as soon as possible and avoid the enemies. You also need to stay inside the safe zone that shrinks over time. The last player or team alive wins the match.

-

What is

What is PUBG Mobile Mod UC APK?

-

PUBG Mobile Mod UC APK is a modified version of PUBG Mobile that gives you access to some features that are not available in the official version. These features include unlimited UC, aimbot, hack and more. UC stands for Unknown Cash, which is the in-game currency that you can use to buy skins, outfits, emotes and other items. Aimbot is a feature that helps you aim and shoot your enemies automatically. Hack is a feature that gives you various advantages, such as wallhack, speedhack, no recoil and more.

-

Benefits of PUBG Mobile Mod UC APK

-

Some of the benefits you can enjoy by using PUBG Mobile Mod UC APK are unlimited UC for buying skins, outfits, emotes, and other items without spending real money; an aimbot that aims and shoots at enemies for you; and hacks such as wallhack, speedhack, and no recoil.

Risks of PUBG Mobile Mod UC APK

-

However, using PUBG Mobile Mod UC APK also comes with some risks that you should be aware of. Modded APKs come from unofficial sources and may contain viruses or malware, and using unlimited UC, aimbot, or other hacks violates the game's terms of service, which can get your account permanently banned.

How to download PUBG Mobile Mod UC APK?

-

If you want to download PUBG Mobile Mod UC APK, you need to follow some steps carefully. Here are the steps that you need to follow:

-

Steps to download PUBG Mobile Mod UC APK

-
  1. First, uninstall the official version of PUBG Mobile from your device. You can do this by going to Settings > Apps > PUBG Mobile > Uninstall.
  2. Next, find a reliable source that provides the PUBG Mobile Mod UC APK file. You can search online for websites or blogs that offer the file. Make sure that the source is trustworthy and has positive reviews from other users.
  3. Then, download the PUBG Mobile Mod UC APK file by clicking on the download link or button provided by the source. The file size may vary depending on the features and updates of the mod.
  4. After that, enable the installation of unknown sources on your device by going to Settings > Security > Unknown Sources > Enable.
  5. Finally, install the PUBG Mobile Mod UC APK file by locating it in your downloads folder or notification bar and tapping on it. The installation may take a few minutes depending on your device and internet speed.
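Before tapping the file in step 5, it can be worth confirming that what you downloaded is at least a well-formed APK rather than a renamed junk file. APK files are ZIP archives that contain an AndroidManifest.xml entry, so a quick local check is possible. This is only a sanity-check sketch in Python; the filename shown is hypothetical, and passing this check does not mean the file is safe:

```python
import zipfile

def looks_like_apk(path: str) -> bool:
    # APK files are ZIP archives containing an AndroidManifest.xml entry.
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()

# Example (hypothetical filename):
# looks_like_apk("pubg_mobile_mod_uc.apk")
```

If this returns False, the download is corrupted or is not an APK at all, and you should not install it.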

Tips to avoid malware and viruses

-

To avoid malware and viruses on your device, you should follow some tips when downloading PUBG Mobile Mod UC APK. These tips are:

- Download the APK only from a source with a good reputation and positive user reviews.
- Scan the downloaded file with an antivirus app before installing it.
- Compare the file's checksum with the one published by the source, if one is available.
- Avoid entering your account credentials on unknown websites.
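One concrete precaution, sketched in Python: if the source publishes a SHA-256 checksum for the APK, compare it against the file you actually downloaded before installing. The filename and checksum below are placeholders, not real values from any mod site:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large APKs don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def checksum_matches(path: str, published: str) -> bool:
    # Case- and whitespace-insensitive comparison against the published digest.
    return sha256_of(path).lower() == published.strip().lower()

# Example (placeholder values):
# checksum_matches("pubg_mobile_mod_uc.apk", "3a7bd3e2360a3d...")
```

A mismatch means the file was altered or corrupted in transit, so it should be deleted rather than installed.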

How to use PUBG Mobile Mod UC APK?

-

Once you have downloaded and installed PUBG Mobile Mod UC APK on your device, you can start using it to play PUBG Mobile with unlimited UC, aimbot, hack and more. Here are some tips on how to use PUBG Mobile Mod UC APK:

-

How to get unlimited UC with PUBG Mobile Mod UC APK

-

To get unlimited UC with PUBG Mobile Mod UC APK, you need to follow these steps:

-
  1. Open the PUBG Mobile Mod UC APK app on your device and log in with your account.
  2. Go to the store section and select the UC option.
  3. Choose the amount of UC that you want to buy and tap on the buy button.
  4. You will see a confirmation message saying that you have successfully purchased the UC.
  5. Check your UC balance in your account and use it to buy anything you want in the game.

How to use aimbot and hack with PUBG Mobile Mod UC APK

-

To use aimbot and hack with PUBG Mobile Mod UC APK, you need to follow these steps:

-
  1. Open the PUBG Mobile Mod UC APK app on your device and log in with your account.
  2. Go to the settings section and select the mod menu option.
  3. You will see a list of features that you can enable or disable, such as aimbot, wallhack, speedhack, no recoil and more.
  4. Select the features that you want to use and adjust the settings according to your preference.
  5. Start a match and enjoy the advantages that the modded app gives you.

Conclusion

-

PUBG Mobile Mod UC APK is a modified version of PUBG Mobile that gives you unlimited UC, aimbot, hack and more. It can help you enhance your game experience and have more fun. However, it also comes with some risks, such as getting banned, getting malware or viruses, losing your data or information, facing legal issues or ruining the game for others. Therefore, you should be careful when downloading and using PUBG Mobile Mod UC APK. You should also respect the rules and ethics of the game and the platform. We hope that this article has given you some useful information and tips on how to download and use PUBG Mobile Mod UC APK safely and effectively.

-

FAQs

-

Here are some frequently asked questions about PUBG Mobile Mod UC APK:

- -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Why Master League is the best game mode on eFootball 2022.md b/spaces/congsaPfin/Manga-OCR/logs/Why Master League is the best game mode on eFootball 2022.md deleted file mode 100644 index 112014690f47ca033b775a00a4a41c4fb4073395..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Why Master League is the best game mode on eFootball 2022.md +++ /dev/null @@ -1,125 +0,0 @@ - -

Master League eFootball 2022: Everything You Need to Know

-

If you are a fan of football simulation games, you have probably heard of Master League, one of the most popular game modes in the PES series. But what is Master League exactly, and how can you play it in eFootball 2022, the successor of PES? In this article, we will answer these questions and more, so you can enjoy the ultimate football management experience in eFootball 2022.

-

-

What is Master League?

-

A popular game mode that lets you manage your own team

-

Master League is a game mode that allows you to create and manage your own football team, from signing players and staff, to setting tactics and strategies, to competing in various leagues and tournaments. You can choose from hundreds of real-life teams and players, or create your own custom ones. You can also select your manager avatar from a list of legendary football icons, such as Johan Cruyff, Diego Maradona, or Pep Guardiola.

-

The features and challenges of Master League

-

Master League is not just about playing matches; it is also about managing every aspect of your team. You have to deal with transfers, contracts, budgets, injuries, morale, chemistry and more, while balancing short-term and long-term goals and keeping expectations in line with reality. You face opponents with different styles and strengths, in varying weather conditions and stadiums, and must adapt to dynamic gameplay and AI that change according to your performance and situation.

-

The benefits and drawbacks of Master League

-

Master League is a game mode that offers a lot of benefits for football fans who want to immerse themselves in the world of football management. You can enjoy the thrill of building your own team from scratch, or taking over an existing one and leading it to glory. You can experience the realistic simulation of football matches, with stunning graphics and animations, realistic physics and ball movement, and authentic commentary and crowd noise. You can also customize your game settings, such as difficulty level, match length, camera angle, etc.

-

However, Master League also has some drawbacks that you should be aware of before playing it. For one thing, Master League is not free to play in eFootball 2022, but rather a paid DLC that you have to purchase separately. For another thing, Master League can be very time-consuming and challenging, especially for beginners who are not familiar with the game mechanics and features. You may also encounter some bugs and glitches that may affect your gameplay experience.

-

How to play Master League in eFootball 2022?

-

The steps to start your Master League journey

-

If you want to play Master League in eFootball 2022, here are the steps you need to follow:

-
  1. Download eFootball 2022 for free from the official website or your preferred platform (PS4, PS5, Xbox One, Xbox Series X/S, PC).
  2. Purchase the Master League DLC from the in-game store or your preferred platform.
  3. Select Master League from the main menu.
  4. Choose your team from the available leagues or create your own custom team.
  5. Select your manager avatar from the available options or create your own custom avatar.
  6. Set up your game settings, such as difficulty level, match length, camera angle, etc.
  7. Start your Master League journey and enjoy the game!

The tips and tricks to succeed in Master League

-

Master League can be a very rewarding game mode, but also a very challenging one. Here are some tips and tricks that can help you succeed in Master League:

-


- -

The best teams and players to use in Master League

-

Master League offers a lot of variety and diversity when it comes to choosing your team and players. You can choose from hundreds of real-life teams and players, or create your own custom ones. However, some teams and players may have an edge over others, depending on their ratings, attributes, skills, etc. Here are some of the best teams and players to use in Master League:

Team / Reason:
- Barcelona: One of the most popular and successful teams in the world, with star players such as Lionel Messi, Antoine Griezmann and Sergio Busquets.
- Juventus: The exclusive partner of eFootball 2022, with quality players such as Cristiano Ronaldo, Paulo Dybala and Giorgio Chiellini.
- Liverpool: The reigning champions of Europe, with balanced and versatile players such as Mohamed Salah, Sadio Mane and Virgil van Dijk.
- Borussia Dortmund: A young and exciting team with plenty of potential and talent, such as Erling Haaland, Jadon Sancho and Marco Reus.
- Ajax: A classic and historic team with promising and talented players such as Dusan Tadic, Hakim Ziyech and Matthijs de Ligt.
- - - - - - - -
Player / Reason:
- Lionel Messi: The best player in the world according to eFootball 2022 ratings (94), with amazing dribbling, finishing, passing and shooting.
- Cristiano Ronaldo: The second best player in the world (93), with incredible speed, strength, heading and finishing.
- Neymar: The third best player in the world (92), with superb dribbling, flair, creativity and agility.
- Kylian Mbappe: The best young player in the world (90), with outstanding pace, acceleration, dribbling and finishing.
- Virgil van Dijk: The best defender in the world (91), with excellent tackling, marking, positioning and aerial ability.
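If you want to compare these ratings programmatically, the figures quoted above fit naturally in a simple mapping. This is just an illustrative sketch using the numbers from this article, not data pulled from the game:

```python
# Top-rated players in eFootball 2022, per the ratings quoted above.
ratings = {
    "Lionel Messi": 94,
    "Cristiano Ronaldo": 93,
    "Neymar": 92,
    "Virgil van Dijk": 91,
    "Kylian Mbappe": 90,
}

# Highest-rated player overall.
best = max(ratings, key=ratings.get)
print(best)  # Lionel Messi
```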
-

What's new in Master League in eFootball 2022?

-

The availability and cost of Master League as a DLC

-

One of the biggest changes in eFootball 2022 is that Master League is no longer a part of the base game, but rather a downloadable content (DLC) that you have to purchase separately. This means that you can download eFootball 2022 for free and play other game modes such as Matchday and Online Divisions, but you have to pay extra to access Master League. The price of the Master League DLC is not yet confirmed, but it is expected to be around $10-$15.

-

The addition of Master League on mobile devices

-

Another big change in eFootball 2022 is that Master League is now available on mobile devices, such as smartphones and tablets. This means that you can play Master League on the go, using your Android or iOS device. You can also sync your progress and data between your mobile device and your console or PC, using your Konami ID. However, you have to purchase the Master League DLC separately for each platform you want to play on.

-

The new managers and options to customize your avatar

-

One of the most exciting features of Master League in eFootball 2022 is the addition of new managers and options to customize your avatar. You can now choose from 18 legendary football icons as your manager avatar, such as Zinedine Zidane, Thierry Henry, Roberto Carlos, etc. You can also create your own custom avatar, using a variety of options such as face shape, hair style, skin tone, clothing, accessories, etc. You can also change your avatar's name, nationality, age, and personality.

-

The improvements and changes in gameplay and graphics

-

One of the most noticeable improvements of Master League in eFootball 2022 is the enhanced gameplay and graphics. The game runs on Unreal Engine 4, which allows for more realistic and immersive play. It also features new animations and movements for players and managers, new camera angles and perspectives for matches and cutscenes, new lighting and weather effects for stadiums and environments, and new sound effects and music for atmosphere and mood.

-

Conclusion

-

Master League is a game mode that lets you create and manage your own football team in eFootball 2022. It offers a lot of features and challenges that can appeal to football fans who want to immerse themselves in the world of football management. However, it also has some drawbacks that you should be aware of before playing it. Master League is not free to play in eFootball 2022, but rather a paid DLC that you have to purchase separately. Master League can also be very time-consuming and challenging, especially for beginners who are not familiar with the game mechanics and features. You may also encounter some bugs and glitches that may affect your gameplay experience.

-

However, if you are willing to pay the price and overcome the difficulties, Master League can offer you a lot of fun and satisfaction. You can enjoy the thrill of building your own team from scratch, or taking over an existing one and leading it to glory. You can experience the realistic simulation of football matches, with stunning graphics and animations, realistic physics and ball movement, and authentic commentary and crowd noise. You can also customize your game settings, such as difficulty level, match length, camera angle, etc.

-

So, what are you waiting for? Download eFootball 2022 for free today and purchase the Master League DLC to start your Master League journey. You won't regret it!

-

FAQs

-

What is eFootball 2022?

-

eFootball 2022 is the successor of PES, a football simulation game developed by Konami. It is a free-to-play game that offers various game modes such as Matchday, Online Divisions, and Master League (as a DLC). It is available on PS4, PS5, Xbox One, Xbox Series X/S, PC, Android, and iOS.

-

What is the difference between eFootball 2022 and PES?

-

eFootball 2022 is a rebranding and a reboot of PES, with a new name, a new engine, and a new business model. eFootball 2022 aims to be more accessible and inclusive for all football fans, by offering a free-to-play base game and optional paid DLCs for different game modes. eFootball 2022 also aims to be more realistic and immersive, by using Unreal Engine 4 and improving the gameplay and graphics.

-

How much does the Master League DLC cost?

-

The price of the Master League DLC is not yet confirmed, but it is expected to be around $10-$15. You have to purchase the Master League DLC separately for each platform you want to play on.

-

Can I play Master League offline?

-

Yes, you can play Master League offline, as long as you have downloaded the Master League DLC and have an active internet connection when you start the game. However, you will not be able to sync your progress and data between your devices or access some online features such as leaderboards and updates.

-

Can I transfer my data from PES to eFootball 2022?

-

No, you cannot transfer your data from PES to eFootball 2022, as they are different games with different engines and features. You have to start from scratch in eFootball 2022.

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Elfes Sylvains V8 Pdf LINK Free.md b/spaces/contluForse/HuggingGPT/assets/Elfes Sylvains V8 Pdf LINK Free.md deleted file mode 100644 index 51a14d911d565a085b9ac538d0ada36b5e974e90..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Elfes Sylvains V8 Pdf LINK Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

Elfes Sylvains V8 Pdf Free





-
Available formats: download as PDF or read online from Scribd. Related: Warhammer - Elfes Sylvains Fr; Livre d'armée VF Guerriers Du Chaos V8.
-
-
-

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/drop.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/drop.py deleted file mode 100644 index b7b4fccd457a0d51fb10c789df3c8537fe7b67c1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/drop.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import build_from_cfg -from .registry import DROPOUT_LAYERS - - -def drop_path(x, drop_prob=0., training=False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - # handle tensors with different dimensions, not just 4D tensors. - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - output = x.div(keep_prob) * random_tensor.floor() - return output - - -@DROPOUT_LAYERS.register_module() -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - - Args: - drop_prob (float): Probability of the path to be zeroed. 
Default: 0.1 - """ - - def __init__(self, drop_prob=0.1): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -@DROPOUT_LAYERS.register_module() -class Dropout(nn.Dropout): - """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of - ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with - ``DropPath`` - - Args: - drop_prob (float): Probability of the elements to be - zeroed. Default: 0.5. - inplace (bool): Do the operation inplace or not. Default: False. - """ - - def __init__(self, drop_prob=0.5, inplace=False): - super().__init__(p=drop_prob, inplace=inplace) - - -def build_dropout(cfg, default_args=None): - """Builder for drop out layers.""" - return build_from_cfg(cfg, DROPOUT_LAYERS, default_args) diff --git a/spaces/cozyanduofen/bingo/src/lib/bots/bing/sr.ts b/spaces/cozyanduofen/bingo/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? 
new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/cozyanduofen/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/cozyanduofen/bingo/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index 
d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/utils/face_restoration_helper.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/utils/face_restoration_helper.py deleted file mode 100644 index 5d3fb8f3b95ed9959610e64f6d7373ea8a56ece8..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/facelib/utils/face_restoration_helper.py +++ /dev/null @@ -1,460 +0,0 @@ -import cv2 -import numpy as np -import os -import torch -from torchvision.transforms.functional import normalize - -from facelib.detection import init_detection_model -from facelib.parsing import init_parsing_model -from facelib.utils.misc import img2tensor, imwrite, is_gray, bgr2gray - - -def get_largest_face(det_faces, h, w): - - def get_location(val, length): - if val < 0: - return 0 - elif val > length: - return length - else: - return val - - face_areas = [] - for det_face in det_faces: - left = get_location(det_face[0], w) - right = get_location(det_face[2], w) - top = get_location(det_face[1], h) - bottom = get_location(det_face[3], h) - face_area = (right - left) * (bottom - top) - face_areas.append(face_area) - largest_idx = face_areas.index(max(face_areas)) - return det_faces[largest_idx], largest_idx - - -def get_center_face(det_faces, h=0, w=0, center=None): - if center is not None: - center = np.array(center) - 
else: - center = np.array([w / 2, h / 2]) - center_dist = [] - for det_face in det_faces: - face_center = np.array([(det_face[0] + det_face[2]) / 2, (det_face[1] + det_face[3]) / 2]) - dist = np.linalg.norm(face_center - center) - center_dist.append(dist) - center_idx = center_dist.index(min(center_dist)) - return det_faces[center_idx], center_idx - - -class FaceRestoreHelper(object): - """Helper for the face restoration pipeline (base class).""" - - def __init__(self, - upscale_factor, - face_size=512, - crop_ratio=(1, 1), - det_model='retinaface_resnet50', - save_ext='png', - template_3points=False, - pad_blur=False, - use_parse=False, - device=None): - self.template_3points = template_3points # improve robustness - self.upscale_factor = int(upscale_factor) - # the cropped face ratio based on the square face - self.crop_ratio = crop_ratio # (h, w) - assert (self.crop_ratio[0] >= 1 and self.crop_ratio[1] >= 1), 'crop ration only supports >=1' - self.face_size = (int(face_size * self.crop_ratio[1]), int(face_size * self.crop_ratio[0])) - - if self.template_3points: - self.face_template = np.array([[192, 240], [319, 240], [257, 371]]) - else: - # standard 5 landmarks for FFHQ faces with 512 x 512 - # facexlib - self.face_template = np.array([[192.98138, 239.94708], [318.90277, 240.1936], [256.63416, 314.01935], - [201.26117, 371.41043], [313.08905, 371.15118]]) - - # dlib: left_eye: 36:41 right_eye: 42:47 nose: 30,32,33,34 left mouth corner: 48 right mouth corner: 54 - # self.face_template = np.array([[193.65928, 242.98541], [318.32558, 243.06108], [255.67984, 328.82894], - # [198.22603, 372.82502], [313.91018, 372.75659]]) - - - self.face_template = self.face_template * (face_size / 512.0) - if self.crop_ratio[0] > 1: - self.face_template[:, 1] += face_size * (self.crop_ratio[0] - 1) / 2 - if self.crop_ratio[1] > 1: - self.face_template[:, 0] += face_size * (self.crop_ratio[1] - 1) / 2 - self.save_ext = save_ext - self.pad_blur = pad_blur - if self.pad_blur is 
True: - self.template_3points = False - - self.all_landmarks_5 = [] - self.det_faces = [] - self.affine_matrices = [] - self.inverse_affine_matrices = [] - self.cropped_faces = [] - self.restored_faces = [] - self.pad_input_imgs = [] - - if device is None: - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - else: - self.device = device - - # init face detection model - self.face_det = init_detection_model(det_model, half=False, device=self.device) - - # init face parsing model - self.use_parse = use_parse - self.face_parse = init_parsing_model(model_name='parsenet', device=self.device) - - def set_upscale_factor(self, upscale_factor): - self.upscale_factor = upscale_factor - - def read_image(self, img): - """img can be image path or cv2 loaded image.""" - # self.input_img is Numpy array, (h, w, c), BGR, uint8, [0, 255] - if isinstance(img, str): - img = cv2.imread(img) - - if np.max(img) > 256: # 16-bit image - img = img / 65535 * 255 - if len(img.shape) == 2: # gray image - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - elif img.shape[2] == 4: # BGRA image with alpha channel - img = img[:, :, 0:3] - - self.input_img = img - self.is_gray = is_gray(img, threshold=5) - if self.is_gray: - print('Grayscale input: True') - - if min(self.input_img.shape[:2])<512: - f = 512.0/min(self.input_img.shape[:2]) - self.input_img = cv2.resize(self.input_img, (0,0), fx=f, fy=f, interpolation=cv2.INTER_LINEAR) - - def get_face_landmarks_5(self, - only_keep_largest=False, - only_center_face=False, - resize=None, - blur_ratio=0.01, - eye_dist_threshold=None): - if resize is None: - scale = 1 - input_img = self.input_img - else: - h, w = self.input_img.shape[0:2] - scale = resize / min(h, w) - scale = max(1, scale) # always scale up - h, w = int(h * scale), int(w * scale) - interp = cv2.INTER_AREA if scale < 1 else cv2.INTER_LINEAR - input_img = cv2.resize(self.input_img, (w, h), interpolation=interp) - - with torch.no_grad(): - bboxes = 
self.face_det.detect_faces(input_img) - - if bboxes is None or bboxes.shape[0] == 0: - return 0 - else: - bboxes = bboxes / scale - - for bbox in bboxes: - # remove faces with too small eye distance: side faces or too small faces - eye_dist = np.linalg.norm([bbox[6] - bbox[8], bbox[7] - bbox[9]]) - if eye_dist_threshold is not None and (eye_dist < eye_dist_threshold): - continue - - if self.template_3points: - landmark = np.array([[bbox[i], bbox[i + 1]] for i in range(5, 11, 2)]) - else: - landmark = np.array([[bbox[i], bbox[i + 1]] for i in range(5, 15, 2)]) - self.all_landmarks_5.append(landmark) - self.det_faces.append(bbox[0:5]) - - if len(self.det_faces) == 0: - return 0 - if only_keep_largest: - h, w, _ = self.input_img.shape - self.det_faces, largest_idx = get_largest_face(self.det_faces, h, w) - self.all_landmarks_5 = [self.all_landmarks_5[largest_idx]] - elif only_center_face: - h, w, _ = self.input_img.shape - self.det_faces, center_idx = get_center_face(self.det_faces, h, w) - self.all_landmarks_5 = [self.all_landmarks_5[center_idx]] - - # pad blurry images - if self.pad_blur: - self.pad_input_imgs = [] - for landmarks in self.all_landmarks_5: - # get landmarks - eye_left = landmarks[0, :] - eye_right = landmarks[1, :] - eye_avg = (eye_left + eye_right) * 0.5 - mouth_avg = (landmarks[3, :] + landmarks[4, :]) * 0.5 - eye_to_eye = eye_right - eye_left - eye_to_mouth = mouth_avg - eye_avg - - # Get the oriented crop rectangle - # x: half width of the oriented crop rectangle - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - # - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise - # norm with the hypotenuse: get the direction - x /= np.hypot(*x) # get the hypotenuse of a right triangle - rect_scale = 1.5 - x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale, np.hypot(*eye_to_mouth) * 1.8 * rect_scale) - # y: half height of the oriented crop rectangle - y = np.flipud(x) * [-1, 1] - - # c: center - c = eye_avg + eye_to_mouth * 0.1 - # quad: (left_top, 
left_bottom, right_bottom, right_top) - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - # qsize: side length of the square - qsize = np.hypot(*x) * 2 - border = max(int(np.rint(qsize * 0.1)), 3) - - # get pad - # pad: (width_left, height_top, width_right, height_bottom) - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = [ - max(-pad[0] + border, 1), - max(-pad[1] + border, 1), - max(pad[2] - self.input_img.shape[0] + border, 1), - max(pad[3] - self.input_img.shape[1] + border, 1) - ] - - if max(pad) > 1: - # pad image - pad_img = np.pad(self.input_img, ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - # modify landmark coords - landmarks[:, 0] += pad[0] - landmarks[:, 1] += pad[1] - # blur pad images - h, w, _ = pad_img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = int(qsize * blur_ratio) - if blur % 2 == 0: - blur += 1 - blur_img = cv2.boxFilter(pad_img, 0, ksize=(blur, blur)) - # blur_img = cv2.GaussianBlur(pad_img, (blur, blur), 0) - - pad_img = pad_img.astype('float32') - pad_img += (blur_img - pad_img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - pad_img += (np.median(pad_img, axis=(0, 1)) - pad_img) * np.clip(mask, 0.0, 1.0) - pad_img = np.clip(pad_img, 0, 255) # float32, [0, 255] - self.pad_input_imgs.append(pad_img) - else: - self.pad_input_imgs.append(np.copy(self.input_img)) - - return len(self.all_landmarks_5) - - def align_warp_face(self, save_cropped_path=None, border_mode='constant'): - """Align and warp faces with face template. 
- """ - if self.pad_blur: - assert len(self.pad_input_imgs) == len( - self.all_landmarks_5), f'Mismatched samples: {len(self.pad_input_imgs)} and {len(self.all_landmarks_5)}' - for idx, landmark in enumerate(self.all_landmarks_5): - # use 5 landmarks to get affine matrix - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D(landmark, self.face_template, method=cv2.LMEDS)[0] - self.affine_matrices.append(affine_matrix) - # warp and crop faces - if border_mode == 'constant': - border_mode = cv2.BORDER_CONSTANT - elif border_mode == 'reflect101': - border_mode = cv2.BORDER_REFLECT101 - elif border_mode == 'reflect': - border_mode = cv2.BORDER_REFLECT - if self.pad_blur: - input_img = self.pad_input_imgs[idx] - else: - input_img = self.input_img - cropped_face = cv2.warpAffine( - input_img, affine_matrix, self.face_size, borderMode=border_mode, borderValue=(135, 133, 132)) # gray - self.cropped_faces.append(cropped_face) - # save the cropped face - if save_cropped_path is not None: - path = os.path.splitext(save_cropped_path)[0] - save_path = f'{path}_{idx:02d}.{self.save_ext}' - imwrite(cropped_face, save_path) - - def get_inverse_affine(self, save_inverse_affine_path=None): - """Get inverse affine matrix.""" - for idx, affine_matrix in enumerate(self.affine_matrices): - inverse_affine = cv2.invertAffineTransform(affine_matrix) - inverse_affine *= self.upscale_factor - self.inverse_affine_matrices.append(inverse_affine) - # save inverse affine matrices - if save_inverse_affine_path is not None: - path, _ = os.path.splitext(save_inverse_affine_path) - save_path = f'{path}_{idx:02d}.pth' - torch.save(inverse_affine, save_path) - - - def add_restored_face(self, face): - if self.is_gray: - face = bgr2gray(face) # convert img into grayscale - self.restored_faces.append(face) - - - def paste_faces_to_input_image(self, save_path=None, upsample_img=None, 
draw_box=False, face_upsampler=None): - h, w, _ = self.input_img.shape - h_up, w_up = int(h * self.upscale_factor), int(w * self.upscale_factor) - - if upsample_img is None: - # simply resize the background - # upsample_img = cv2.resize(self.input_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4) - upsample_img = cv2.resize(self.input_img, (w_up, h_up), interpolation=cv2.INTER_LINEAR) - else: - upsample_img = cv2.resize(upsample_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4) - - assert len(self.restored_faces) == len( - self.inverse_affine_matrices), ('length of restored_faces and affine_matrices are different.') - - inv_mask_borders = [] - for restored_face, inverse_affine in zip(self.restored_faces, self.inverse_affine_matrices): - if face_upsampler is not None: - restored_face = face_upsampler.enhance(restored_face, outscale=self.upscale_factor)[0] - inverse_affine /= self.upscale_factor - inverse_affine[:, 2] *= self.upscale_factor - face_size = (self.face_size[0]*self.upscale_factor, self.face_size[1]*self.upscale_factor) - else: - # Add an offset to inverse affine matrix, for more precise back alignment - if self.upscale_factor > 1: - extra_offset = 0.5 * self.upscale_factor - else: - extra_offset = 0 - inverse_affine[:, 2] += extra_offset - face_size = self.face_size - inv_restored = cv2.warpAffine(restored_face, inverse_affine, (w_up, h_up)) - - # if draw_box or not self.use_parse: # use square parse maps - # mask = np.ones(face_size, dtype=np.float32) - # inv_mask = cv2.warpAffine(mask, inverse_affine, (w_up, h_up)) - # # remove the black borders - # inv_mask_erosion = cv2.erode( - # inv_mask, np.ones((int(2 * self.upscale_factor), int(2 * self.upscale_factor)), np.uint8)) - # pasted_face = inv_mask_erosion[:, :, None] * inv_restored - # total_face_area = np.sum(inv_mask_erosion) # // 3 - # # add border - # if draw_box: - # h, w = face_size - # mask_border = np.ones((h, w, 3), dtype=np.float32) - # border = int(1400/np.sqrt(total_face_area)) - # 
mask_border[border:h-border, border:w-border,:] = 0 - # inv_mask_border = cv2.warpAffine(mask_border, inverse_affine, (w_up, h_up)) - # inv_mask_borders.append(inv_mask_border) - # if not self.use_parse: - # # compute the fusion edge based on the area of face - # w_edge = int(total_face_area**0.5) // 20 - # erosion_radius = w_edge * 2 - # inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - # blur_size = w_edge * 2 - # inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - # if len(upsample_img.shape) == 2: # upsample_img is gray image - # upsample_img = upsample_img[:, :, None] - # inv_soft_mask = inv_soft_mask[:, :, None] - - # always use square mask - mask = np.ones(face_size, dtype=np.float32) - inv_mask = cv2.warpAffine(mask, inverse_affine, (w_up, h_up)) - # remove the black borders - inv_mask_erosion = cv2.erode( - inv_mask, np.ones((int(2 * self.upscale_factor), int(2 * self.upscale_factor)), np.uint8)) - pasted_face = inv_mask_erosion[:, :, None] * inv_restored - total_face_area = np.sum(inv_mask_erosion) # // 3 - # add border - if draw_box: - h, w = face_size - mask_border = np.ones((h, w, 3), dtype=np.float32) - border = int(1400/np.sqrt(total_face_area)) - mask_border[border:h-border, border:w-border,:] = 0 - inv_mask_border = cv2.warpAffine(mask_border, inverse_affine, (w_up, h_up)) - inv_mask_borders.append(inv_mask_border) - # compute the fusion edge based on the area of face - w_edge = int(total_face_area**0.5) // 20 - erosion_radius = w_edge * 2 - inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - blur_size = w_edge * 2 - inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - if len(upsample_img.shape) == 2: # upsample_img is gray image - upsample_img = upsample_img[:, :, None] - inv_soft_mask = inv_soft_mask[:, :, None] - - # parse mask - if self.use_parse: - # inference - face_input = 
cv2.resize(restored_face, (512, 512), interpolation=cv2.INTER_LINEAR) -                face_input = img2tensor(face_input.astype('float32') / 255., bgr2rgb=True, float32=True) -                normalize(face_input, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) -                face_input = torch.unsqueeze(face_input, 0).to(self.device) -                with torch.no_grad(): -                    out = self.face_parse(face_input)[0] -                out = out.argmax(dim=1).squeeze().cpu().numpy() - -                parse_mask = np.zeros(out.shape) -                MASK_COLORMAP = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 255, 0, 0, 0] -                for idx, color in enumerate(MASK_COLORMAP): -                    parse_mask[out == idx] = color -                #  blur the mask -                parse_mask = cv2.GaussianBlur(parse_mask, (101, 101), 11) -                parse_mask = cv2.GaussianBlur(parse_mask, (101, 101), 11) -                # remove the black borders -                thres = 10 -                parse_mask[:thres, :] = 0 -                parse_mask[-thres:, :] = 0 -                parse_mask[:, :thres] = 0 -                parse_mask[:, -thres:] = 0 -                parse_mask = parse_mask / 255. - -                parse_mask = cv2.resize(parse_mask, face_size) -                parse_mask = cv2.warpAffine(parse_mask, inverse_affine, (w_up, h_up), flags=3) -                inv_soft_parse_mask = parse_mask[:, :, None] -                # pasted_face = inv_restored -                fuse_mask = (inv_soft_parse_mask<inv_soft_mask).astype('int') -                inv_soft_mask = inv_soft_parse_mask*fuse_mask + inv_soft_mask*(1-fuse_mask) - -            if len(upsample_img.shape) == 3 and upsample_img.shape[2] == 4:  # alpha channel -                alpha = upsample_img[:, :, 3:] -                upsample_img = inv_soft_mask * pasted_face + (1 - inv_soft_mask) * upsample_img[:, :, 0:3] -                upsample_img = np.concatenate((upsample_img, alpha), axis=2) -            else: -                upsample_img = inv_soft_mask * pasted_face + (1 - inv_soft_mask) * upsample_img - -        if np.max(upsample_img) > 256:  # 16-bit image -            upsample_img = upsample_img.astype(np.uint16) -        else: -            upsample_img = upsample_img.astype(np.uint8) - -        # draw bounding box -        if draw_box: -            # upsample_input_img = cv2.resize(input_img, (w_up, h_up)) -            img_color = np.ones([*upsample_img.shape], dtype=np.float32) -            img_color[:,:,0] = 0 -            img_color[:,:,1] = 255 -            img_color[:,:,2] = 0 -            for inv_mask_border in inv_mask_borders: -                upsample_img = inv_mask_border * img_color + (1 - inv_mask_border) * upsample_img -                # upsample_input_img = inv_mask_border * img_color + (1 - inv_mask_border) * upsample_input_img - -        if save_path is not None: -            path = os.path.splitext(save_path)[0] -            save_path = f'{path}.{self.save_ext}' -            imwrite(upsample_img, save_path) -        return upsample_img - -    def clean_all(self): -        self.all_landmarks_5 = 
[] - self.restored_faces = [] - self.affine_matrices = [] - self.cropped_faces = [] - self.inverse_affine_matrices = [] - self.det_faces = [] - self.pad_input_imgs = [] \ No newline at end of file diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/eyebrow_decomposer/eyebrow_decomposer_00.py b/spaces/cymic/Talking_Head_Anime_3/tha3/nn/eyebrow_decomposer/eyebrow_decomposer_00.py deleted file mode 100644 index 0c6fad9748c953f5cb78e73dc190da0410a93642..0000000000000000000000000000000000000000 --- a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/eyebrow_decomposer/eyebrow_decomposer_00.py +++ /dev/null @@ -1,102 +0,0 @@ -from typing import List, Optional - -import torch -from torch import Tensor -from torch.nn import Module - -from tha3.nn.common.poser_encoder_decoder_00 import PoserEncoderDecoder00Args, PoserEncoderDecoder00 -from tha3.nn.image_processing_util import apply_color_change -from tha3.module.module_factory import ModuleFactory -from tha3.nn.nonlinearity_factory import ReLUFactory -from tha3.nn.normalization import InstanceNorm2dFactory -from tha3.nn.util import BlockArgs - - -class EyebrowDecomposer00Args(PoserEncoderDecoder00Args): - def __init__(self, - image_size: int = 128, - image_channels: int = 4, - start_channels: int = 64, - bottleneck_image_size=16, - num_bottleneck_blocks=6, - max_channels: int = 512, - block_args: Optional[BlockArgs] = None): - super().__init__( - image_size, - image_channels, - image_channels, - 0, - start_channels, - bottleneck_image_size, - num_bottleneck_blocks, - max_channels, - block_args) - - -class EyebrowDecomposer00(Module): - def __init__(self, args: EyebrowDecomposer00Args): - super().__init__() - self.args = args - self.body = PoserEncoderDecoder00(args) - self.background_layer_alpha = self.args.create_alpha_block() - self.background_layer_color_change = self.args.create_color_change_block() - self.eyebrow_layer_alpha = self.args.create_alpha_block() - self.eyebrow_layer_color_change = 
self.args.create_color_change_block() - - def forward(self, image: Tensor, *args) -> List[Tensor]: - feature = self.body(image)[0] - - background_layer_alpha = self.background_layer_alpha(feature) - background_layer_color_change = self.background_layer_color_change(feature) - background_layer_1 = apply_color_change(background_layer_alpha, background_layer_color_change, image) - - eyebrow_layer_alpha = self.eyebrow_layer_alpha(feature) - eyebrow_layer_color_change = self.eyebrow_layer_color_change(feature) - eyebrow_layer = apply_color_change(eyebrow_layer_alpha, image, eyebrow_layer_color_change) - - return [ - eyebrow_layer, # 0 - eyebrow_layer_alpha, # 1 - eyebrow_layer_color_change, # 2 - background_layer_1, # 3 - background_layer_alpha, # 4 - background_layer_color_change, # 5 - ] - - EYEBROW_LAYER_INDEX = 0 - EYEBROW_LAYER_ALPHA_INDEX = 1 - EYEBROW_LAYER_COLOR_CHANGE_INDEX = 2 - BACKGROUND_LAYER_INDEX = 3 - BACKGROUND_LAYER_ALPHA_INDEX = 4 - BACKGROUND_LAYER_COLOR_CHANGE_INDEX = 5 - OUTPUT_LENGTH = 6 - - -class EyebrowDecomposer00Factory(ModuleFactory): - def __init__(self, args: EyebrowDecomposer00Args): - super().__init__() - self.args = args - - def create(self) -> Module: - return EyebrowDecomposer00(self.args) - - -if __name__ == "__main__": - cuda = torch.device('cuda') - args = EyebrowDecomposer00Args( - image_size=128, - image_channels=4, - start_channels=64, - bottleneck_image_size=16, - num_bottleneck_blocks=3, - block_args=BlockArgs( - initialization_method='xavier', - use_spectral_norm=False, - normalization_layer_factory=InstanceNorm2dFactory(), - nonlinearity_factory=ReLUFactory(inplace=True))) - face_morpher = EyebrowDecomposer00(args).to(cuda) - - image = torch.randn(8, 4, 128, 128, device=cuda) - outputs = face_morpher.forward(image) - for i in range(len(outputs)): - print(i, outputs[i].shape) diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/preprocess.py 
b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/preprocess.py deleted file mode 100644 index fa3d75d27b68e4bbd4fcc57ecc51df214f737d12..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/preprocess.py +++ /dev/null @@ -1,104 +0,0 @@ -"""This script contains the image preprocessing code for Deep3DFaceRecon_pytorch -""" - -import numpy as np -from scipy.io import loadmat -from PIL import Image -import cv2 -import os -from skimage import transform as trans -import torch -import warnings -warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning) -warnings.filterwarnings("ignore", category=FutureWarning) - - -# calculating least square problem for image alignment -def POS(xp, x): - npts = xp.shape[1] - - A = np.zeros([2*npts, 8]) - - A[0:2*npts-1:2, 0:3] = x.transpose() - A[0:2*npts-1:2, 3] = 1 - - A[1:2*npts:2, 4:7] = x.transpose() - A[1:2*npts:2, 7] = 1 - - b = np.reshape(xp.transpose(), [2*npts, 1]) - - k, _, _, _ = np.linalg.lstsq(A, b) - - R1 = k[0:3] - R2 = k[4:7] - sTx = k[3] - sTy = k[7] - s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2 - t = np.stack([sTx, sTy], axis=0) - - return t, s - -# resize and crop images for face reconstruction -def resize_n_crop_img(img, lm, t, s, target_size=224., mask=None): - w0, h0 = img.size - w = (w0*s).astype(np.int32) - h = (h0*s).astype(np.int32) - left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32) - right = left + target_size - up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32) - below = up + target_size - - img = img.resize((w, h), resample=Image.BICUBIC) - img = img.crop((left, up, right, below)) - - if mask is not None: - mask = mask.resize((w, h), resample=Image.BICUBIC) - mask = mask.crop((left, up, right, below)) - - lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] - - t[1] + h0/2], axis=1)*s - lm = lm - np.reshape( - np.array([(w/2 - target_size/2), (h/2-target_size/2)]), 
[1, 2]) - -    return img, lm, mask - -# utils for face reconstruction -def extract_5p(lm): -    lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 -    lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean( -        lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0) -    lm5p = lm5p[[1, 2, 0, 3, 4], :] -    return lm5p - -# utils for face reconstruction -def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.): -    """ -    Return: -        transparams        --numpy.array  (raw_W, raw_H, scale, tx, ty) -        img_new            --PIL.Image  (target_size, target_size, 3) -        lm_new             --numpy.array  (68, 2), y direction is opposite to v direction -        mask_new           --PIL.Image  (target_size, target_size) -     -    Parameters: -        img                --PIL.Image  (raw_H, raw_W, 3) -        lm                 --numpy.array  (68, 2), y direction is opposite to v direction -        lm3D               --numpy.array  (5, 3) -        mask               --PIL.Image  (raw_H, raw_W, 3) -    """ - -    w0, h0 = img.size -    if lm.shape[0] != 5: -        lm5p = extract_5p(lm) -    else: -        lm5p = lm - -    # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face -    t, s = POS(lm5p.transpose(), lm3D.transpose()) -    s = rescale_factor/s - -    # processing the image -    img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask) -    # trans_params = np.array([w0, h0, s, t[0], t[1]]) -    trans_params = np.array([w0, h0, s, t[0][0], t[1][0] ]) - -    return trans_params, img_new, lm_new, mask_new diff --git a/spaces/darkCat/Anime-image-classification/src/render.py b/spaces/darkCat/Anime-image-classification/src/render.py deleted file mode 100644 index bd58b0403c297d174832582fe7d95de3a26089e2..0000000000000000000000000000000000000000 --- a/spaces/darkCat/Anime-image-classification/src/render.py +++ /dev/null @@ -1,21 +0,0 @@ -import cv2 -import numpy as np -import matplotlib.pyplot as plt - -def display_plt(image, label, gt, cls, name_path): -    w = image.shape[0] -    h = image.shape[1] -    plt.figure(figsize = (14,14)) -    label = np.argmax(label[0])
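The `POS` routine earlier in this file sets the alignment up as a linear least-squares problem: each landmark contributes two rows of an over-determined system, the translation comes from the constant terms, and the global scale is recovered as the mean norm of the two fitted rotation rows. A minimal standalone sketch of that setup on synthetic points (`pos` here is an illustrative re-implementation, not the project's function):

```python
import numpy as np

def pos(xp, x):
    """Least-squares scale/translation fit in the style of POS() above.

    xp: 2 x n projected 2D points, x: 3 x n model points.
    """
    npts = xp.shape[1]
    A = np.zeros([2 * npts, 8])
    A[0:2 * npts - 1:2, 0:3] = x.T   # even rows: first scaled-rotation row
    A[0:2 * npts - 1:2, 3] = 1
    A[1:2 * npts:2, 4:7] = x.T       # odd rows: second scaled-rotation row
    A[1:2 * npts:2, 7] = 1
    b = np.reshape(xp.T, [2 * npts, 1])
    k = np.linalg.lstsq(A, b, rcond=None)[0]
    R1, R2 = k[0:3], k[4:7]
    s = (np.linalg.norm(R1) + np.linalg.norm(R2)) / 2  # scale = mean row norm
    t = np.stack([k[3], k[7]], axis=0)                 # scaled translation
    return t, s

# Synthetic check: identity rotation, scale 2, translation (5, -3).
x = np.array([[0., 1., 0., 1., 2.],
              [0., 0., 1., 1., 2.],
              [0., 0., 0., 0., 0.]])
xp = 2.0 * x[:2] + np.array([[5.0], [-3.0]])
t, s = pos(xp, x)
print(s, t.ravel())  # s ~ 2.0, t ~ [5, -3]
```

With the scale in hand, `align_img` above divides `rescale_factor` by it to decide how much to shrink the image before cropping.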
-    plt.title(list(cls.keys())[label] + '_' + str(gt), fontproperties="SimHei") -    plt.imshow(image) -    plt.savefig(name_path) - -def display_cv2(image, label, gt, cls, name_path): -    font = cv2.FONT_HERSHEY_SIMPLEX -    label = np.argmax(label[0]) -    cv2.putText(image, "Ground Truth: " + list(cls.keys())[label], (0, 150), font, 3, (0, 0, 255), 15) -    cv2.putText(image, "Converted: " + str(gt), (0, 250), font, 3, (0, 255, 0), 15, -                cv2.LINE_AA) -#     cv2.imshow('imshow',image) -    cv2.imwrite(name_path, image) \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_e_a_d.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_e_a_d.py deleted file mode 100644 index 04505e8250919eb666b8412e2d12cd739cc16bde..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_e_a_d.py +++ /dev/null @@ -1,124 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.fixedTools import floatToFixedToStr, strToFixedToFloat -from fontTools.misc.textTools import safeEval, num2binary, binary2num -from fontTools.misc.timeTools import ( -    timestampFromString, -    timestampToString, -    timestampNow, -) -from fontTools.misc.timeTools import epoch_diff as mac_epoch_diff  # For backward compat -from fontTools.misc.arrayTools import intRect, unionRect -from . 
import DefaultTable -import logging - - -log = logging.getLogger(__name__) - -headFormat = """ - > # big endian - tableVersion: 16.16F - fontRevision: 16.16F - checkSumAdjustment: I - magicNumber: I - flags: H - unitsPerEm: H - created: Q - modified: Q - xMin: h - yMin: h - xMax: h - yMax: h - macStyle: H - lowestRecPPEM: H - fontDirectionHint: h - indexToLocFormat: h - glyphDataFormat: h -""" - - -class table__h_e_a_d(DefaultTable.DefaultTable): - - dependencies = ["maxp", "loca", "CFF ", "CFF2"] - - def decompile(self, data, ttFont): - dummy, rest = sstruct.unpack2(headFormat, data, self) - if rest: - # this is quite illegal, but there seem to be fonts out there that do this - log.warning("extra bytes at the end of 'head' table") - assert rest == b"\0\0" - - # For timestamp fields, ignore the top four bytes. Some fonts have - # bogus values there. Since till 2038 those bytes only can be zero, - # ignore them. - # - # https://github.com/fonttools/fonttools/issues/99#issuecomment-66776810 - for stamp in "created", "modified": - value = getattr(self, stamp) - if value > 0xFFFFFFFF: - log.warning("'%s' timestamp out of range; ignoring top bytes", stamp) - value &= 0xFFFFFFFF - setattr(self, stamp, value) - if value < 0x7C259DC0: # January 1, 1970 00:00:00 - log.warning( - "'%s' timestamp seems very low; regarding as unix timestamp", stamp - ) - value += 0x7C259DC0 - setattr(self, stamp, value) - - def compile(self, ttFont): - if ttFont.recalcBBoxes: - # For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc(). 
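The timestamp handling in `decompile` above works around fonts with bogus `created`/`modified` values: anything above the low 32 bits is dropped, and values that land before 1970 are treated as Unix timestamps and re-based by adding the offset the table uses for January 1, 1970. The same fix-up can be sketched in isolation (`sanitize_timestamp` is an illustrative helper, not part of fontTools):

```python
EPOCH_1970 = 0x7C259DC0  # threshold used above: Jan 1, 1970 in 'head' table time

def sanitize_timestamp(value):
    """Clamp a 'head' table timestamp the way decompile() above does."""
    if value > 0xFFFFFFFF:
        value &= 0xFFFFFFFF      # ignore bogus top bytes
    if value < EPOCH_1970:
        value += EPOCH_1970      # regard as a unix timestamp and re-base
    return value

print(hex(sanitize_timestamp(0x1_0000_0001)))  # top bytes dropped, then re-based
```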
- if "CFF " in ttFont: - topDict = ttFont["CFF "].cff.topDictIndex[0] - self.xMin, self.yMin, self.xMax, self.yMax = intRect(topDict.FontBBox) - elif "CFF2" in ttFont: - topDict = ttFont["CFF2"].cff.topDictIndex[0] - charStrings = topDict.CharStrings - fontBBox = None - for charString in charStrings.values(): - bounds = charString.calcBounds(charStrings) - if bounds is not None: - if fontBBox is not None: - fontBBox = unionRect(fontBBox, bounds) - else: - fontBBox = bounds - if fontBBox is not None: - self.xMin, self.yMin, self.xMax, self.yMax = intRect(fontBBox) - if ttFont.recalcTimestamp: - self.modified = timestampNow() - data = sstruct.pack(headFormat, self) - return data - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - _, names, fixes = sstruct.getformat(headFormat) - for name in names: - value = getattr(self, name) - if name in fixes: - value = floatToFixedToStr(value, precisionBits=fixes[name]) - elif name in ("created", "modified"): - value = timestampToString(value) - elif name in ("magicNumber", "checkSumAdjustment"): - if value < 0: - value = value + 0x100000000 - value = hex(value) - if value[-1:] == "L": - value = value[:-1] - elif name in ("macStyle", "flags"): - value = num2binary(value, 16) - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - value = attrs["value"] - fixes = sstruct.getformat(headFormat)[2] - if name in fixes: - value = strToFixedToFloat(value, precisionBits=fixes[name]) - elif name in ("created", "modified"): - value = timestampFromString(value) - elif name in ("macStyle", "flags"): - value = binary2num(value) - else: - value = safeEval(value) - setattr(self, name, value) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/dask.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/dask.py deleted file mode 100644 index 3e1276463db6866665e6a0fe114efc247971b57e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/dask.py +++ /dev/null @@ -1,152 +0,0 @@ -import dask -from distributed.client import Client, _get_global_client -from distributed.worker import Worker - -from fsspec import filesystem -from fsspec.spec import AbstractBufferedFile, AbstractFileSystem -from fsspec.utils import infer_storage_options - - -def _get_client(client): - if client is None: - return _get_global_client() - elif isinstance(client, Client): - return client - else: - # e.g., connection string - return Client(client) - - -def _in_worker(): - return bool(Worker._instances) - - -class DaskWorkerFileSystem(AbstractFileSystem): - """View files accessible to a worker as any other remote file-system - - When instances are run on the worker, uses the real filesystem. When - run on the client, they call the worker to provide information or data. - - **Warning** this implementation is experimental, and read-only for now. 
- """ - - def __init__( - self, target_protocol=None, target_options=None, fs=None, client=None, **kwargs - ): - super().__init__(**kwargs) - if not (fs is None) ^ (target_protocol is None): - raise ValueError( - "Please provide one of filesystem instance (fs) or" - " target_protocol, not both" - ) - self.target_protocol = target_protocol - self.target_options = target_options - self.worker = None - self.client = client - self.fs = fs - self._determine_worker() - - @staticmethod - def _get_kwargs_from_urls(path): - so = infer_storage_options(path) - if "host" in so and "port" in so: - return {"client": f"{so['host']}:{so['port']}"} - else: - return {} - - def _determine_worker(self): - if _in_worker(): - self.worker = True - if self.fs is None: - self.fs = filesystem( - self.target_protocol, **(self.target_options or {}) - ) - else: - self.worker = False - self.client = _get_client(self.client) - self.rfs = dask.delayed(self) - - def mkdir(self, *args, **kwargs): - if self.worker: - self.fs.mkdir(*args, **kwargs) - else: - self.rfs.mkdir(*args, **kwargs).compute() - - def rm(self, *args, **kwargs): - if self.worker: - self.fs.rm(*args, **kwargs) - else: - self.rfs.rm(*args, **kwargs).compute() - - def copy(self, *args, **kwargs): - if self.worker: - self.fs.copy(*args, **kwargs) - else: - self.rfs.copy(*args, **kwargs).compute() - - def mv(self, *args, **kwargs): - if self.worker: - self.fs.mv(*args, **kwargs) - else: - self.rfs.mv(*args, **kwargs).compute() - - def ls(self, *args, **kwargs): - if self.worker: - return self.fs.ls(*args, **kwargs) - else: - return self.rfs.ls(*args, **kwargs).compute() - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - cache_options=None, - **kwargs, - ): - if self.worker: - return self.fs._open( - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_options=cache_options, - **kwargs, - ) - else: - return DaskFile( - fs=self, - path=path, - mode=mode, - block_size=block_size, 
- autocommit=autocommit, - cache_options=cache_options, - **kwargs, - ) - - def fetch_range(self, path, mode, start, end): - if self.worker: - with self._open(path, mode) as f: - f.seek(start) - return f.read(end - start) - else: - return self.rfs.fetch_range(path, mode, start, end).compute() - - -class DaskFile(AbstractBufferedFile): - def __init__(self, mode="rb", **kwargs): - if mode != "rb": - raise ValueError('Remote dask files can only be opened in "rb" mode') - super().__init__(**kwargs) - - def _upload_chunk(self, final=False): - pass - - def _initiate_upload(self): - """Create remote file/upload""" - pass - - def _fetch_range(self, start, end): - """Get the specified set of bytes from remote""" - return self.fs.fetch_range(self.path, self.mode, start, end) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_api/deprecation.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_api/deprecation.py deleted file mode 100644 index 7c304173b2e513d6d356d8acb88e1be9dfb75683..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_api/deprecation.py +++ /dev/null @@ -1,510 +0,0 @@ -""" -Helper functions for deprecating parts of the Matplotlib API. - -This documentation is only relevant for Matplotlib developers, not for users. - -.. warning:: - - This module is for internal use only. Do not use it in your own code. - We may change the API at any time with no warning. 
- -""" - -import contextlib -import functools -import inspect -import math -import warnings - - -class MatplotlibDeprecationWarning(DeprecationWarning): - """A class for issuing deprecation warnings for Matplotlib users.""" - - -def _generate_deprecation_warning( - since, message='', name='', alternative='', pending=False, obj_type='', - addendum='', *, removal=''): - if pending: - if removal: - raise ValueError( - "A pending deprecation cannot have a scheduled removal") - else: - removal = f"in {removal}" if removal else "two minor releases later" - if not message: - message = ( - ("The %(name)s %(obj_type)s" if obj_type else "%(name)s") - + (" will be deprecated in a future version" - if pending else - (" was deprecated in Matplotlib %(since)s" - + (" and will be removed %(removal)s" if removal else ""))) - + "." - + (" Use %(alternative)s instead." if alternative else "") - + (" %(addendum)s" if addendum else "")) - warning_cls = (PendingDeprecationWarning if pending - else MatplotlibDeprecationWarning) - return warning_cls(message % dict( - func=name, name=name, obj_type=obj_type, since=since, removal=removal, - alternative=alternative, addendum=addendum)) - - -def warn_deprecated( - since, *, message='', name='', alternative='', pending=False, - obj_type='', addendum='', removal=''): - """ - Display a standardized deprecation. - - Parameters - ---------- - since : str - The release at which this API became deprecated. - message : str, optional - Override the default deprecation message. The ``%(since)s``, - ``%(name)s``, ``%(alternative)s``, ``%(obj_type)s``, ``%(addendum)s``, - and ``%(removal)s`` format specifiers will be replaced by the values - of the respective arguments passed to this function. - name : str, optional - The name of the deprecated object. - alternative : str, optional - An alternative API that the user may use in place of the deprecated - API. The deprecation warning will tell the user about this alternative - if provided. 
- pending : bool, optional - If True, uses a PendingDeprecationWarning instead of a - DeprecationWarning. Cannot be used together with *removal*. - obj_type : str, optional - The object type being deprecated. - addendum : str, optional - Additional text appended directly to the final message. - removal : str, optional - The expected removal version. With the default (an empty string), a - removal version is automatically computed from *since*. Set to other - Falsy values to not schedule a removal date. Cannot be used together - with *pending*. - - Examples - -------- - :: - - # To warn of the deprecation of "matplotlib.name_of_module" - warn_deprecated('1.4.0', name='matplotlib.name_of_module', - obj_type='module') - """ - warning = _generate_deprecation_warning( - since, message, name, alternative, pending, obj_type, addendum, - removal=removal) - from . import warn_external - warn_external(warning, category=MatplotlibDeprecationWarning) - - -def deprecated(since, *, message='', name='', alternative='', pending=False, - obj_type=None, addendum='', removal=''): - """ - Decorator to mark a function, a class, or a property as deprecated. - - When deprecating a classmethod, a staticmethod, or a property, the - ``@deprecated`` decorator should go *under* ``@classmethod`` and - ``@staticmethod`` (i.e., `deprecated` should directly decorate the - underlying callable), but *over* ``@property``. - - When deprecating a class ``C`` intended to be used as a base class in a - multiple inheritance hierarchy, ``C`` *must* define an ``__init__`` method - (if ``C`` instead inherited its ``__init__`` from its own base class, then - ``@deprecated`` would mess up ``__init__`` inheritance when installing its - own (deprecation-emitting) ``C.__init__``). - - Parameters are the same as for `warn_deprecated`, except that *obj_type* - defaults to 'class' if decorating a class, 'attribute' if decorating a - property, and 'function' otherwise. 
- - Examples - -------- - :: - - @deprecated('1.4.0') - def the_function_to_deprecate(): - pass - """ - - def deprecate(obj, message=message, name=name, alternative=alternative, - pending=pending, obj_type=obj_type, addendum=addendum): - from matplotlib._api import classproperty - - if isinstance(obj, type): - if obj_type is None: - obj_type = "class" - func = obj.__init__ - name = name or obj.__name__ - old_doc = obj.__doc__ - - def finalize(wrapper, new_doc): - try: - obj.__doc__ = new_doc - except AttributeError: # Can't set on some extension objects. - pass - obj.__init__ = functools.wraps(obj.__init__)(wrapper) - return obj - - elif isinstance(obj, (property, classproperty)): - if obj_type is None: - obj_type = "attribute" - func = None - name = name or obj.fget.__name__ - old_doc = obj.__doc__ - - class _deprecated_property(type(obj)): - def __get__(self, instance, owner=None): - if instance is not None or owner is not None \ - and isinstance(self, classproperty): - emit_warning() - return super().__get__(instance, owner) - - def __set__(self, instance, value): - if instance is not None: - emit_warning() - return super().__set__(instance, value) - - def __delete__(self, instance): - if instance is not None: - emit_warning() - return super().__delete__(instance) - - def __set_name__(self, owner, set_name): - nonlocal name - if name == "": - name = set_name - - def finalize(_, new_doc): - return _deprecated_property( - fget=obj.fget, fset=obj.fset, fdel=obj.fdel, doc=new_doc) - - else: - if obj_type is None: - obj_type = "function" - func = obj - name = name or obj.__name__ - old_doc = func.__doc__ - - def finalize(wrapper, new_doc): - wrapper = functools.wraps(func)(wrapper) - wrapper.__doc__ = new_doc - return wrapper - - def emit_warning(): - warn_deprecated( - since, message=message, name=name, alternative=alternative, - pending=pending, obj_type=obj_type, addendum=addendum, - removal=removal) - - def wrapper(*args, **kwargs): - emit_warning() - return 
func(*args, **kwargs) - - old_doc = inspect.cleandoc(old_doc or '').strip('\n') - - notes_header = '\nNotes\n-----' - second_arg = ' '.join([t.strip() for t in - (message, f"Use {alternative} instead." - if alternative else "", addendum) if t]) - new_doc = (f"[*Deprecated*] {old_doc}\n" - f"{notes_header if notes_header not in old_doc else ''}\n" - f".. deprecated:: {since}\n" - f" {second_arg}") - - if not old_doc: - # This is to prevent a spurious 'unexpected unindent' warning from - # docutils when the original docstring was blank. - new_doc += r'\ ' - - return finalize(wrapper, new_doc) - - return deprecate - - -class deprecate_privatize_attribute: - """ - Helper to deprecate public access to an attribute (or method). - - This helper should only be used at class scope, as follows:: - - class Foo: - attr = _deprecate_privatize_attribute(*args, **kwargs) - - where *all* parameters are forwarded to `deprecated`. This form makes - ``attr`` a property which forwards read and write access to ``self._attr`` - (same name but with a leading underscore), with a deprecation warning. - Note that the attribute name is derived from *the name this helper is - assigned to*. This helper also works for deprecating methods. - """ - - def __init__(self, *args, **kwargs): - self.deprecator = deprecated(*args, **kwargs) - - def __set_name__(self, owner, name): - setattr(owner, name, self.deprecator( - property(lambda self: getattr(self, f"_{name}"), - lambda self, value: setattr(self, f"_{name}", value)), - name=name)) - - -# Used by _copy_docstring_and_deprecators to redecorate pyplot wrappers and -# boilerplate.py to retrieve original signatures. It may seem natural to store -# this information as an attribute on the wrapper, but if the wrapper gets -# itself functools.wraps()ed, then such attributes are silently propagated to -# the outer wrapper, which is not desired. 
-DECORATORS = {} - - -def rename_parameter(since, old, new, func=None): - """ - Decorator indicating that parameter *old* of *func* is renamed to *new*. - - The actual implementation of *func* should use *new*, not *old*. If *old* - is passed to *func*, a DeprecationWarning is emitted, and its value is - used, even if *new* is also passed by keyword (this is to simplify pyplot - wrapper functions, which always pass *new* explicitly to the Axes method). - If *new* is also passed but positionally, a TypeError will be raised by the - underlying function during argument binding. - - Examples - -------- - :: - - @_api.rename_parameter("3.1", "bad_name", "good_name") - def func(good_name): ... - """ - - decorator = functools.partial(rename_parameter, since, old, new) - - if func is None: - return decorator - - signature = inspect.signature(func) - assert old not in signature.parameters, ( - f"Matplotlib internal error: {old!r} cannot be a parameter for " - f"{func.__name__}()") - assert new in signature.parameters, ( - f"Matplotlib internal error: {new!r} must be a parameter for " - f"{func.__name__}()") - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if old in kwargs: - warn_deprecated( - since, message=f"The {old!r} parameter of {func.__name__}() " - f"has been renamed {new!r} since Matplotlib {since}; support " - f"for the old name will be dropped %(removal)s.") - kwargs[new] = kwargs.pop(old) - return func(*args, **kwargs) - - # wrapper() must keep the same documented signature as func(): if we - # instead made both *old* and *new* appear in wrapper()'s signature, they - # would both show up in the pyplot function for an Axes method as well and - # pyplot would explicitly pass both arguments to the Axes method. 
- - DECORATORS[wrapper] = decorator - return wrapper - - -class _deprecated_parameter_class: - def __repr__(self): - return "" - - -_deprecated_parameter = _deprecated_parameter_class() - - -def delete_parameter(since, name, func=None, **kwargs): - """ - Decorator indicating that parameter *name* of *func* is being deprecated. - - The actual implementation of *func* should keep the *name* parameter in its - signature, or accept a ``**kwargs`` argument (through which *name* would be - passed). - - Parameters that come after the deprecated parameter effectively become - keyword-only (as they cannot be passed positionally without triggering the - DeprecationWarning on the deprecated parameter), and should be marked as - such after the deprecation period has passed and the deprecated parameter - is removed. - - Parameters other than *since*, *name*, and *func* are keyword-only and - forwarded to `.warn_deprecated`. - - Examples - -------- - :: - - @_api.delete_parameter("3.1", "unused") - def func(used_arg, other_arg, unused, more_args): ... - """ - - decorator = functools.partial(delete_parameter, since, name, **kwargs) - - if func is None: - return decorator - - signature = inspect.signature(func) - # Name of `**kwargs` parameter of the decorated function, typically - # "kwargs" if such a parameter exists, or None if the decorated function - # doesn't accept `**kwargs`. - kwargs_name = next((param.name for param in signature.parameters.values() - if param.kind == inspect.Parameter.VAR_KEYWORD), None) - if name in signature.parameters: - kind = signature.parameters[name].kind - is_varargs = kind is inspect.Parameter.VAR_POSITIONAL - is_varkwargs = kind is inspect.Parameter.VAR_KEYWORD - if not is_varargs and not is_varkwargs: - name_idx = ( - # Deprecated parameter can't be passed positionally. 
- math.inf if kind is inspect.Parameter.KEYWORD_ONLY - # If call site has no more than this number of parameters, the - # deprecated parameter can't have been passed positionally. - else [*signature.parameters].index(name)) - func.__signature__ = signature = signature.replace(parameters=[ - param.replace(default=_deprecated_parameter) - if param.name == name else param - for param in signature.parameters.values()]) - else: - name_idx = -1 # Deprecated parameter can always have been passed. - else: - is_varargs = is_varkwargs = False - # Deprecated parameter can't be passed positionally. - name_idx = math.inf - assert kwargs_name, ( - f"Matplotlib internal error: {name!r} must be a parameter for " - f"{func.__name__}()") - - addendum = kwargs.pop('addendum', None) - - @functools.wraps(func) - def wrapper(*inner_args, **inner_kwargs): - if len(inner_args) <= name_idx and name not in inner_kwargs: - # Early return in the simple, non-deprecated case (much faster than - # calling bind()). - return func(*inner_args, **inner_kwargs) - arguments = signature.bind(*inner_args, **inner_kwargs).arguments - if is_varargs and arguments.get(name): - warn_deprecated( - since, message=f"Additional positional arguments to " - f"{func.__name__}() are deprecated since %(since)s and " - f"support for them will be removed %(removal)s.") - elif is_varkwargs and arguments.get(name): - warn_deprecated( - since, message=f"Additional keyword arguments to " - f"{func.__name__}() are deprecated since %(since)s and " - f"support for them will be removed %(removal)s.") - # We cannot just check `name not in arguments` because the pyplot - # wrappers always pass all arguments explicitly. 
- elif any(name in d and d[name] != _deprecated_parameter - for d in [arguments, arguments.get(kwargs_name, {})]): - deprecation_addendum = ( - f"If any parameter follows {name!r}, they should be passed as " - f"keyword, not positionally.") - warn_deprecated( - since, - name=repr(name), - obj_type=f"parameter of {func.__name__}()", - addendum=(addendum + " " + deprecation_addendum) if addendum - else deprecation_addendum, - **kwargs) - return func(*inner_args, **inner_kwargs) - - DECORATORS[wrapper] = decorator - return wrapper - - -def make_keyword_only(since, name, func=None): - """ - Decorator indicating that passing parameter *name* (or any of the following - ones) positionally to *func* is being deprecated. - - When used on a method that has a pyplot wrapper, this should be the - outermost decorator, so that :file:`boilerplate.py` can access the original - signature. - """ - - decorator = functools.partial(make_keyword_only, since, name) - - if func is None: - return decorator - - signature = inspect.signature(func) - POK = inspect.Parameter.POSITIONAL_OR_KEYWORD - KWO = inspect.Parameter.KEYWORD_ONLY - assert (name in signature.parameters - and signature.parameters[name].kind == POK), ( - f"Matplotlib internal error: {name!r} must be a positional-or-keyword " - f"parameter for {func.__name__}()") - names = [*signature.parameters] - name_idx = names.index(name) - kwonly = [name for name in names[name_idx:] - if signature.parameters[name].kind == POK] - - @functools.wraps(func) - def wrapper(*args, **kwargs): - # Don't use signature.bind here, as it would fail when stacked with - # rename_parameter and an "old" argument name is passed in - # (signature.bind would fail, but the actual call would succeed). 
- if len(args) > name_idx: - warn_deprecated( - since, message="Passing the %(name)s %(obj_type)s " - "positionally is deprecated since Matplotlib %(since)s; the " - "parameter will become keyword-only %(removal)s.", - name=name, obj_type=f"parameter of {func.__name__}()") - return func(*args, **kwargs) - - # Don't modify *func*'s signature, as boilerplate.py needs it. - wrapper.__signature__ = signature.replace(parameters=[ - param.replace(kind=KWO) if param.name in kwonly else param - for param in signature.parameters.values()]) - DECORATORS[wrapper] = decorator - return wrapper - - -def deprecate_method_override(method, obj, *, allow_empty=False, **kwargs): - """ - Return ``obj.method`` with a deprecation if it was overridden, else None. - - Parameters - ---------- - method - An unbound method, i.e. an expression of the form - ``Class.method_name``. Remember that within the body of a method, one - can always use ``__class__`` to refer to the class that is currently - being defined. - obj - Either an object of the class where *method* is defined, or a subclass - of that class. - allow_empty : bool, default: False - Whether to allow overrides by "empty" methods without emitting a - warning. - **kwargs - Additional parameters passed to `warn_deprecated` to generate the - deprecation warning; must at least include the "since" key. - """ - - def empty(): pass - def empty_with_docstring(): """doc""" - - name = method.__name__ - bound_child = getattr(obj, name) - bound_base = ( - method # If obj is a class, then we need to use unbound methods. 
- if isinstance(bound_child, type(empty)) and isinstance(obj, type) - else method.__get__(obj)) - if (bound_child != bound_base - and (not allow_empty - or (getattr(getattr(bound_child, "__code__", None), - "co_code", None) - not in [empty.__code__.co_code, - empty_with_docstring.__code__.co_code]))): - warn_deprecated(**{"name": name, "obj_type": "method", **kwargs}) - return bound_child - return None - - -@contextlib.contextmanager -def suppress_matplotlib_deprecation_warning(): - with warnings.catch_warnings(): - warnings.simplefilter("ignore", MatplotlibDeprecationWarning) - yield diff --git a/spaces/de3sec/Front-end-code-generation-from-images/classes/Utils.py b/spaces/de3sec/Front-end-code-generation-from-images/classes/Utils.py deleted file mode 100644 index a7aff300380f793c941953e0d63ddf6d71281592..0000000000000000000000000000000000000000 --- a/spaces/de3sec/Front-end-code-generation-from-images/classes/Utils.py +++ /dev/null @@ -1,39 +0,0 @@ -__author__ = 'Taneem Jan, taneemishere.github.io' - -import numpy as np - - -class Utils: - @staticmethod - def sparsify(label_vector, output_size): - sparse_vector = [] - - for label in label_vector: - sparse_label = np.zeros(output_size) - sparse_label[label] = 1 - - sparse_vector.append(sparse_label) - - return np.array(sparse_vector) - - @staticmethod - def get_preprocessed_img(img_path, image_size): - import cv2 - # from keras.preprocessing.image import array_to_img, img_to_array - # img = array_to_img(img_path) - # img = img_to_array(img) - # img = cv2.imread(img_path) - # don't need to read the image as we're now directly passing the - # image as numpy array to this method - img = cv2.resize(img_path, (image_size, image_size)) - img = img.astype('float32') - img /= 255 - return img - - @staticmethod - def show(image): - import cv2 - cv2.namedWindow("view", cv2.WINDOW_AUTOSIZE) - cv2.imshow("view", image) - cv2.waitKey(0) - cv2.destroyWindow("view") diff --git 
a/spaces/declare-lab/tango/tools/.ipynb_checkpoints/torch_tools-checkpoint.py b/spaces/declare-lab/tango/tools/.ipynb_checkpoints/torch_tools-checkpoint.py deleted file mode 100644 index d83d3137460aaf04ef1b335efb42ddb37d24b3ea..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/tools/.ipynb_checkpoints/torch_tools-checkpoint.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch -import torchaudio -import random -import itertools -import numpy as np -from tools.mix import mix - - -def normalize_wav(waveform): - waveform = waveform - torch.mean(waveform) - waveform = waveform / (torch.max(torch.abs(waveform)) + 1e-8) - return waveform * 0.5 - - -def pad_wav(waveform, segment_length): - waveform_length = len(waveform) - - if segment_length is None or waveform_length == segment_length: - return waveform - elif waveform_length > segment_length: - return waveform[:segment_length] - else: - pad_wav = torch.zeros(segment_length - waveform_length).to(waveform.device) - waveform = torch.cat([waveform, pad_wav]) - return waveform - - -def _pad_spec(fbank, target_length=1024): - batch, n_frames, channels = fbank.shape - p = target_length - n_frames - if p > 0: - pad = torch.zeros(batch, p, channels).to(fbank.device) - fbank = torch.cat([fbank, pad], 1) - elif p < 0: - fbank = fbank[:, :target_length, :] - - if channels % 2 != 0: - fbank = fbank[:, :, :-1] - - return fbank - - -def read_wav_file(filename, segment_length): - waveform, sr = torchaudio.load(filename) # Faster!!! - try: - waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16000)[0] - except: - print ("0 length wav encountered. 
Setting to random:", filename) - waveform = torch.rand(160000) - - try: - waveform = normalize_wav(waveform) - except: - print ("Exception normalizing:", filename) - waveform = torch.ones(160000) - waveform = pad_wav(waveform, segment_length).unsqueeze(0) - waveform = waveform / torch.max(torch.abs(waveform)) - waveform = 0.5 * waveform - return waveform - - -def get_mel_from_wav(audio, _stft): - audio = torch.nan_to_num(torch.clip(audio, -1, 1)) - audio = torch.autograd.Variable(audio, requires_grad=False) - melspec, log_magnitudes_stft, energy = _stft.mel_spectrogram(audio) - return melspec, log_magnitudes_stft, energy - - -def wav_to_fbank(paths, target_length=1024, fn_STFT=None): - assert fn_STFT is not None - - waveform = torch.cat([read_wav_file(path, target_length * 160) for path in paths], 0) # hop size is 160 - - fbank, log_magnitudes_stft, energy = get_mel_from_wav(waveform, fn_STFT) - fbank = fbank.transpose(1, 2) - log_magnitudes_stft = log_magnitudes_stft.transpose(1, 2) - - fbank, log_magnitudes_stft = _pad_spec(fbank, target_length), _pad_spec( - log_magnitudes_stft, target_length - ) - - return fbank, log_magnitudes_stft, waveform - - -def uncapitalize(s): - if s: - return s[:1].lower() + s[1:] - else: - return "" - - -def mix_wavs_and_captions(path1, path2, caption1, caption2, target_length=1024): - sound1 = read_wav_file(path1, target_length * 160)[0].numpy() - sound2 = read_wav_file(path2, target_length * 160)[0].numpy() - mixed_sound = mix(sound1, sound2, 0.5, 16000).reshape(1, -1) - mixed_caption = "{} and {}".format(caption1, uncapitalize(caption2)) - return mixed_sound, mixed_caption - - -def augment(paths, texts, num_items=4, target_length=1024): - mixed_sounds, mixed_captions = [], [] - combinations = list(itertools.combinations(list(range(len(texts))), 2)) - random.shuffle(combinations) - if len(combinations) < num_items: - selected_combinations = combinations - else: - selected_combinations = combinations[:num_items] - - for (i, j) in 
selected_combinations: - new_sound, new_caption = mix_wavs_and_captions(paths[i], paths[j], texts[i], texts[j], target_length) - mixed_sounds.append(new_sound) - mixed_captions.append(new_caption) - - waveform = torch.tensor(np.concatenate(mixed_sounds, 0)) - waveform = waveform / torch.max(torch.abs(waveform)) - waveform = 0.5 * waveform - - return waveform, mixed_captions - - -def augment_wav_to_fbank(paths, texts, num_items=4, target_length=1024, fn_STFT=None): - assert fn_STFT is not None - - waveform, captions = augment(paths, texts) - fbank, log_magnitudes_stft, energy = get_mel_from_wav(waveform, fn_STFT) - fbank = fbank.transpose(1, 2) - log_magnitudes_stft = log_magnitudes_stft.transpose(1, 2) - - fbank, log_magnitudes_stft = _pad_spec(fbank, target_length), _pad_spec( - log_magnitudes_stft, target_length - ) - - return fbank, log_magnitudes_stft, waveform, captions \ No newline at end of file diff --git a/spaces/deepwisdom/MetaGPT/metagpt/prompts/__init__.py b/spaces/deepwisdom/MetaGPT/metagpt/prompts/__init__.py deleted file mode 100644 index 93b945019a291d13cc03d960f48da2b347f117e6..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/prompts/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/30 09:51 -@Author : alexanderwu -@File : __init__.py -""" diff --git a/spaces/diacanFperku/AutoGPT/Avid Liquid Chrome Xe V7.2 (Multilanguage) Keygen [RH] WORK Crack.md b/spaces/diacanFperku/AutoGPT/Avid Liquid Chrome Xe V7.2 (Multilanguage) Keygen [RH] WORK Crack.md deleted file mode 100644 index efc43ca3c88a1dc432428c1ff8cdb6d1cc6c3e93..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Avid Liquid Chrome Xe V7.2 (Multilanguage) Keygen [RH] WORK Crack.md +++ /dev/null @@ -1,8 +0,0 @@ -
-

software, avid liquid chrome xe v7.2 (multilanguage) + keygen [rh], 749.67 mb. software, avid media composer v3.0 + crack [rh], 1.1 gb. but, some of you may receive a warning message saying that the avid liquid chrome xe v7.2 (multilanguage) keygen [rh] crack is not suitable for your computer.

-

Avid Liquid Chrome Xe v7.2 (Multilanguage) Keygen [RH] crack


Download Zip ★★★ https://gohhs.com/2uFUwj



-

pv system simulation can use many types of energy: photovoltaic (pv), wind, hydro, geothermal, biomass and fuel cell. valentin is the simulation software for the photovoltaic method of producing energy in a detailed software environment. valentin pvsol fully visualizes data and analysis reports from the different pv systems and gives users a crystal-clear view to reach and target the desired project. it has a light cpu load, which keeps the consumer's workflow easy and productive. you can also try avid liquid chrome 7.2 crack.

-


-

version 7.2 (2012-11-29) release notes: this release of liquid chrome xe incorporates a host of new features and enhancements to meet the needs of users. in addition, it supports both microsoft windows 7 (32-bit and 64-bit) and linux, and is expected to be compatible with all linux distributions as well as with microsoft windows vista. it is available for download from the avid website.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Fritz Fax Software 3.07.61 TOP Download.md b/spaces/diacanFperku/AutoGPT/Fritz Fax Software 3.07.61 TOP Download.md deleted file mode 100644 index 8b747826cbe30a09bf72962684303fcf33c0ba52..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Fritz Fax Software 3.07.61 TOP Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

fritz fax software 3.07.61 download


Download File ››› https://gohhs.com/2uFTFG



-
-Fritz Fax Software 3.07.61 Download fritz software, fritz software chess, fritz software download, fritz software windows 10, fritz software free download, fritz ... 4d29de3e1b
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/German Samper Recinto Urbano Pdf 24.md b/spaces/diacanFperku/AutoGPT/German Samper Recinto Urbano Pdf 24.md deleted file mode 100644 index c718053a53847d4c5737729736ae412cda3b341d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/German Samper Recinto Urbano Pdf 24.md +++ /dev/null @@ -1,6 +0,0 @@ -

german samper recinto urbano pdf 24


Download ✒ ✒ ✒ https://gohhs.com/2uFTvd



-
-Germán Samper Gnecco (Bogotá, April 18, 1924 - May 22, 2019) was a prestigious Colombian architect. Considered one of the best ... 4d29de3e1b
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/N900 Usb Driver Download.md b/spaces/diacanFperku/AutoGPT/N900 Usb Driver Download.md deleted file mode 100644 index 19393f02384322229089f1facbee96c1a55a8680..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/N900 Usb Driver Download.md +++ /dev/null @@ -1,18 +0,0 @@ -

N900 Usb Driver Download


Download File 🔗 https://gohhs.com/2uFUz0



-
-Q: - -Does the iPhone/iPod Touch map interface have any limitations? - -My previous iPhone project used google's map interface and things worked well for all the devices I tested on. My current project though uses Apple's default map interface and my iphone has trouble displaying the map for "routes and directions" (that is, it seems to be "out of space"). On my other devices the maps load fine. I am wondering if there is any good reason for this, or is it just a bug? - -A: - -It's a bug in iOS5. I'm betting there's a fix in the update coming. I'd wait for it before posting a question about it. - -Synthesis of glycolipid antigens for stimulation of anti-polysaccharide and anti-lipid A sera. - -Protected glycolipid and glycolipid fragments are shown to be potent antigens for raising anti-LPS and anti-LPS--lipid A antibodies. The antigens used are 2-aminoethylphosphonic acid (2-AEP) mono-O-beta-D-galactopyranosyl-beta-D-glucopyranoside (1) and 1,2-di-O-beta-D-galactopyranosyl-sn-glycerol phosphate (2). The key glycosylating reagent, benzyl 4,6-O-benzylidene-2,3,5-tri-O-benzoyl-alpha-D-glucopyranoside, was prepared from the condensation of benzyl 3,4,6-tri-O-benzoyl-alpha-D-glucopyranoside with benzyl 4,6-O-benzylidene-2,3-di-O-benzoyl-alpha-D-mannopyranoside in the presence of BF3.OEt2. Derivatives were then deprotected by saponification and oxalyl chloride, and glycolipids were purified by chromatography on AG-50X8 and AG-50X8/CV-50 column systems. Anti-LPS and anti-lipid A antibodies are raised in rabbits immunized with 2-AEP-1 and 2-AEP-2. These antibodies display specificity for both 4fefd39f24
-
-
-

diff --git a/spaces/diffusers/check_pr/README.md b/spaces/diffusers/check_pr/README.md deleted file mode 100644 index b9b339c37808edf0d295f7a7395b39e8a69b8dd3..0000000000000000000000000000000000000000 --- a/spaces/diffusers/check_pr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Check Pr -emoji: 📚 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/text/chinese.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Eileen-Bert-Vits2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = 
re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. - phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone 
in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/dineshreddy/WALT/mmdet/__init__.py b/spaces/dineshreddy/WALT/mmdet/__init__.py deleted file mode 100644 index ce2930f62a0091e06b37575b96db2ae51ca7908e..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.2.4' -mmcv_maximum_version = '1.4.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' 
- -__all__ = ['__version__', 'short_version'] diff --git a/spaces/dorkai/ChatUIPro/app/components/base/toast/style.module.css b/spaces/dorkai/ChatUIPro/app/components/base/toast/style.module.css deleted file mode 100644 index 305fde49cafeed9868b7941e98a87c76aeafef74..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/app/components/base/toast/style.module.css +++ /dev/null @@ -1,43 +0,0 @@ -.toast { - display: flex; - justify-content: center; - align-items: center; - position: fixed; - width: 1.84rem; - height: 1.80rem; - left: 50%; - top: 50%; - transform: translateX(-50%) translateY(-50%); - background: #000000; - box-shadow: 0 -.04rem .1rem 1px rgba(255, 255, 255, 0.1); - border-radius: .1rem .1rem .1rem .1rem; -} - -.main { - width: 2rem; -} - -.icon { - margin-bottom: .2rem; - height: .4rem; - background: center center no-repeat; - background-size: contain; -} - -/* .success { - background-image: url('./icons/success.svg'); -} - -.warning { - background-image: url('./icons/warning.svg'); -} - -.error { - background-image: url('./icons/error.svg'); -} */ - -.text { - text-align: center; - font-size: .2rem; - color: rgba(255, 255, 255, 0.86); -} \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Custom-chat-characters.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Custom-chat-characters.md deleted file mode 100644 index eeb22d1c2b64222626faa166828b2cb06e9f66e7..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Custom-chat-characters.md +++ /dev/null @@ -1,31 +0,0 @@ -Custom chat mode characters are defined by `.yaml` files inside the `characters` folder. 
An example is included: [Example.yaml](https://github.com/oobabooga/text-generation-webui/blob/main/characters/Example.yaml) - -The following fields may be defined: - -| Field | Description | -|-------|-------------| -| `name` or `bot` | The character's name. | -| `your_name` or `user` (optional) | Your name. This overwrites what you had previously written in the `Your name` field in the interface. | -| `context` | A string that appears at the top of the prompt. It usually contains a description of the character's personality. | -| `greeting` (optional) | The character's opening message when a new conversation is started. | -| `example_dialogue` (optional) | A few example messages to guide the model. | -| `turn_template` (optional) | Used to define where the spaces and new line characters should be in Instruct mode. See the characters in `characters/instruction-following` for examples. | - -#### Special tokens - -* `{{char}}` or ``: are replaced with the character's name -* `{{user}}` or ``: are replaced with your name - -These replacements happen when the character is loaded, and they apply to the `context`, `greeting`, and `example_dialogue` fields. - -#### How do I add a profile picture for my character? - -Put an image with the same name as your character's yaml file into the `characters` folder. For example, if your bot is `Character.yaml`, add `Character.jpg` or `Character.png` to the folder. - -#### Is the chat history truncated in the prompt? - -Once your prompt reaches the 2048 token limit, old messages will be removed one at a time. The context string will always stay at the top of the prompt and will never get truncated. - -#### Pygmalion format characters - -These are also supported out of the box. Simply put the JSON file in the `characters` folder, or upload it directly from the web UI by clicking on the "Upload character" tab at the bottom. 
\ No newline at end of file diff --git a/spaces/duycse1603/math2tex/ScanSSD/utils/visualize.py b/spaces/duycse1603/math2tex/ScanSSD/utils/visualize.py deleted file mode 100644 index a3735116cadc85123677ef04e8102bdca3b330e2..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/ScanSSD/utils/visualize.py +++ /dev/null @@ -1,202 +0,0 @@ -''' -This file contains functions to visualize the heatmap and detected bounding boxes -''' - -import matplotlib -matplotlib.use('Agg') - -import matplotlib.pyplot as plt -import matplotlib.patches as patches -import os -import numpy as np -import cv2 - -def draw_stitched_boxes(im, data, outpath): - - # Create figure and axes - fig, ax = plt.subplots(1) - - # sort based on the confs. Confs is column 4 - data = data[data[:, 4].argsort()] - - # Display the image - ax.imshow(im) - - width, height, channels = im.shape - heatmap = np.zeros([width, height]) - - for box in data: - heatmap[int(box[1]):int(box[3]), int(box[0]):int(box[2])] = box[4] - - # Following line makes sure that all the heatmaps are in the scale, 0 to 1 - # So color assigned to different scores are consistent across heatmaps for - # different images - heatmap[0:1, 0:1] = 1 - heatmap[0:1, 1:2] = 0 - - plt.imshow(heatmap, alpha=0.4, cmap='hot', interpolation='nearest') - plt.colorbar() - - plt.title("Stitching visualization") - plt.show() - plt.savefig(outpath, dpi=600) - plt.close() - - -def draw_all_boxes(im, data, recognized_boxes, gt_boxes, outpath): - - if len(data) == 0: - return - - # Create figure and axes - fig, ax = plt.subplots(1) - - # sort based on the confs. 
Confs is column 4 - data = data[data[:, 4].argsort()] - - # Display the image - ax.imshow(im) - - width, height, channels = im.shape - heatmap = np.zeros([width, height]) - - if data is not None: - for box in data: - heatmap[int(box[1]):int(box[3]), int(box[0]):int(box[2])] = box[4] - #rect = patches.Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1], - # linewidth=0.25, edgecolor='m', facecolor='none') - #Add the patch to the Axes - #ax.add_patch(rect) - - if recognized_boxes is not None: - # recognized boxes are green - for box in recognized_boxes: - rect = patches.Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1], - linewidth=1, edgecolor='g', facecolor='none') - # Add the patch to the Axes - ax.add_patch(rect) - - - if gt_boxes is not None: - # ground truth are red - for box in gt_boxes: - rect = patches.Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1], - linewidth=0.25, edgecolor='b', facecolor='none') - # Add the patch to the Axes - ax.add_patch(rect) - - # Following line makes sure that all the heatmaps are in the scale, 0 to 1 - # So color assigned to different scores are consistent across heatmaps for - # different images - heatmap[0:1, 0:1] = 1 - heatmap[0:1, 1:2] = 0 - - plt.imshow(heatmap, alpha=0.4, cmap='hot', interpolation='nearest') - plt.colorbar() - - plt.title("Stitching visualization") - plt.show() - plt.savefig(outpath, dpi=600) - plt.close() - - -def draw_boxes_cv(image, recognized_boxes, gt_boxes, outpath): - - ''' - :param image - :param recognized_boxes - :param outpath: save as outpath. 
Should be complete image path with extension - :return: - ''' - - #(BGR) - # detected is green - for box in recognized_boxes: - cv2.rectangle(image, (box[0], box[1]), (box[2], box[3]), (0, 255, 0), 3) - - # ground truth is blue - for box in gt_boxes: - cv2.rectangle(image, (box[0], box[1]), (box[2], box[3]), (255, 0, 0), 3) - - cv2.imwrite(outpath, image) - - -def save_boxes(args, recognized_boxes, recognized_scores, img_id): - - if len(recognized_scores) < 1 and len(recognized_boxes) < 1: - return - - pdf_name = img_id.split("/")[0] - math_csv_path = os.path.join(args.save_folder, args.exp_name, pdf_name + ".csv") - - if not os.path.exists(os.path.dirname(math_csv_path)): - os.makedirs(os.path.dirname(math_csv_path)) - - math_output = open(math_csv_path, 'a') - - recognized_boxes = np.concatenate((recognized_boxes,np.transpose([recognized_scores])),axis=1) - - page_num = int(img_id.split("/")[-1]) - - col = np.array([int(page_num) - 1] * recognized_boxes.shape[0]) - math_regions = np.concatenate((col[:, np.newaxis], recognized_boxes), axis=1) - - np.savetxt(math_output, math_regions, fmt='%.2f', delimiter=',') - math_output.close() - - # - # - # for i, box in enumerate(recognized_boxes): - # math_output.write(str(box[0]) + ',' + str(box[1]) + ',' + str(box[2]) + ',' + - # str(box[3]) + ',' + str(recognized_scores[i]) + '\n') - # - -def draw_boxes(args, im, recognized_boxes, recognized_scores, boxes, confs, scale, img_id): - - path = os.path.join("eval", args.exp_name, img_id + ".png") - - if not os.path.exists(os.path.dirname(path)): - os.makedirs(os.path.dirname(path)) - - # Create figure and axes - fig,ax = plt.subplots(1) - scale = scale.cpu().numpy() - - # Display the image - ax.imshow(im) - - width, height, channels = im.shape - heatmap = np.zeros([width, height]) - - if len(recognized_scores) > 1 and len(recognized_boxes) > 1: - - # Recognition heatmap - data = np.concatenate((recognized_boxes,np.transpose([recognized_scores])),axis=1) - data = data[data[:, 
4].argsort()] - - for box in data: - heatmap[int(box[1]):int(box[3]), int(box[0]):int(box[2])] = box[4] - - for box in recognized_boxes: - rect = patches.Rectangle((box[0], box[1]), box[2]-box[0], box[3] - box[1], - linewidth=1, edgecolor='g', facecolor='none') - #Add the patch to the Axes - ax.add_patch(rect) - - # Following line makes sure that all the heatmaps are in the scale, 0 to 1 - # So color assigned to different scores are consistent across heatmaps for - # different images - heatmap[0:1, 0:1] = 1 - heatmap[0:1, 1:2] = 0 - - plt.imshow(heatmap, alpha=0.4, cmap='hot', interpolation='nearest') - plt.colorbar() - - plt.title(args.exp_name) - plt.show() - plt.savefig(path, dpi=600) - plt.close() - - -if __name__ == "__main__": - draw_boxes() \ No newline at end of file diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/GetGUIData.py b/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/GetGUIData.py deleted file mode 100644 index 52f77213ab88edf8b33eff166b89b9e56ac4ff01..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/GetGUIData.py +++ /dev/null @@ -1,67 +0,0 @@ - -import os -import numpy as np -import argparse -from manipulate import Manipulator -import torch -from PIL import Image -#%% - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='Process some integers.') - - parser.add_argument('--dataset_name',type=str,default='ffhq', - help='name of dataset, for example, ffhq') - - parser.add_argument('--real', action='store_true') - - args = parser.parse_args() - dataset_name=args.dataset_name - - if not os.path.isdir('./data/'+dataset_name): - os.system('mkdir ./data/'+dataset_name) - #%% - M=Manipulator(dataset_name=dataset_name) - np.set_printoptions(suppress=True) - print(M.dataset_name) - #%% - #remove all .jpg - names=os.listdir('./data/'+dataset_name+'/') - for name in names: - if '.jpg' in name: - os.system('rm 
./data/'+dataset_name+'/'+name) - - - #%% - if args.real: - latents=torch.load('./data/'+dataset_name+'/latents.pt') - w_plus=latents.cpu().detach().numpy() - else: - w=np.load('./npy/'+dataset_name+'/W.npy') - tmp=w[:50] #only use 50 images - tmp=tmp[:,None,:] - w_plus=np.tile(tmp,(1,M.Gs.components.synthesis.input_shape[1],1)) - np.save('./data/'+dataset_name+'/w_plus.npy',w_plus) - - #%% - tmp=M.W2S(w_plus) - M.dlatents=tmp - - M.img_index=0 - M.num_images=len(w_plus) - M.alpha=[0] - M.step=1 - lindex,bname=0,0 - - M.manipulate_layers=[lindex] - codes,out=M.EditOneC(bname) - #%% - - for i in range(len(out)): - img=out[i,0] - img=Image.fromarray(img) - img.save('./data/'+dataset_name+'/'+str(i)+'.jpg') - #%% - - - \ No newline at end of file diff --git a/spaces/eskayML/object_detection_system/app.py b/spaces/eskayML/object_detection_system/app.py deleted file mode 100644 index a723b322a7724060fbfacd3ca085b1f0f1c772f9..0000000000000000000000000000000000000000 --- a/spaces/eskayML/object_detection_system/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -import cv2 -from transformers import pipeline - -model = pipeline('object-detection') - -def draw_box(image): - img = cv2.imread(image) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - test = model(image) - - for objects in test: - if objects['score'] < .5: - continue - - coord = objects['box'] - label = objects['label'] - color = (0,0,255) - img = cv2.rectangle(img, (coord['xmin'],coord['ymin']) , (coord['xmax'],coord['ymax']), color,1 ) - img = cv2.putText(img,label,(coord['xmin'], coord['ymin']-10), cv2.FONT_HERSHEY_PLAIN, 1, color , 2) - - return img - - -with gr.Blocks() as demo: - - - gr.Markdown("""# Object Detection using the Transformers library
- Enter an Image on the left and view the localized objects on the right. - """) - - with gr.Row(): - inp = gr.Image( type='filepath') - out = gr.Image() - btn = gr.Button('Detect Objects') - - - btn.click(fn = draw_box, inputs = inp, outputs = out) - -demo.launch() - - diff --git a/spaces/eson/tokenizer-arena/vocab/moss/README.md b/spaces/eson/tokenizer-arena/vocab/moss/README.md deleted file mode 100644 index f59916986dc9beee415d1cb98b46fe992386d8d3..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/moss/README.md +++ /dev/null @@ -1,15 +0,0 @@ - - - - -moss-moon-003-base 模型的 tokenizer 中,`eos token` 为 `<|endoftext|>`,在训练SFT模型时需要将该 token 指定为 `` token. - - -## SFT 阶段 - -- ``: end of human -- ``: end of thoughts -- ``: end of commands -- ``: end of moss - - diff --git a/spaces/facebook/XLS-R-2B-21-EN/app.py b/spaces/facebook/XLS-R-2B-21-EN/app.py deleted file mode 100644 index 55309e74af6b350ceb245021d9210fb552502b12..0000000000000000000000000000000000000000 --- a/spaces/facebook/XLS-R-2B-21-EN/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -os.system("pip install gradio==2.8.0b2") -import gradio as gr -import librosa -from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel - -model_name = "facebook/wav2vec2-xls-r-2b-21-to-en" - -feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) -tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) -model = SpeechEncoderDecoderModel.from_pretrained(model_name) - -def process_audio_file(file): - data, sr = librosa.load(file) - if sr != 16000: - data = librosa.resample(data, sr, 16000) - input_values = feature_extractor(data, return_tensors="pt").input_values - return input_values - -def transcribe(file_mic, file_upload): - warn_output = "" - if (file_mic is not None) and (file_upload is not None): - warn_output = "WARNING: You've uploaded an audio file and used the microphone. 
The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - file = file_mic - elif (file_mic is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - elif file_mic is not None: - file = file_mic - else: - file = file_upload - - input_values = process_audio_file(file) - - sequences = model.generate(input_values, num_beams=1, max_length=30) - - transcription = tokenizer.batch_decode(sequences, skip_special_tokens=True) - return warn_output + transcription[0] - -iface = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type='filepath', optional=True), - gr.inputs.Audio(source="upload", type='filepath', optional=True), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="XLS-R 2B 21-to-EN Speech Translation", - description="A simple interface to translate from 21 spoken languages to written English.", -) -iface.launch() diff --git a/spaces/failfast/2D-GameCreator/src/scripts/write-version.js b/spaces/failfast/2D-GameCreator/src/scripts/write-version.js deleted file mode 100644 index d88123c8a196575097587290af4e19526a4b501b..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/scripts/write-version.js +++ /dev/null @@ -1,6 +0,0 @@ -const fs = require('fs'); -const packageJson = require('../../package.json'); - -const envVars = `NEXT_PUBLIC_VERSION=${packageJson.version}\n`; - -fs.appendFileSync('.env', envVars); diff --git a/spaces/falterWliame/Face_Mask_Detection/Advanced Archive Password Recovery 4.53 Serial Keygen Software.md b/spaces/falterWliame/Face_Mask_Detection/Advanced Archive Password Recovery 4.53 Serial Keygen Software.md deleted file mode 100644 index 4af06083e105a578f066ca2ecce22d06251473f0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Advanced Archive Password Recovery 4.53 Serial Keygen Software.md +++ /dev/null @@ -1,55 +0,0 @@ -
-

How to Recover Passwords from Compressed Archives with Advanced Archive Password Recovery 4.53 Serial Keygen Software

- -

If you have ever forgotten or lost the password to your ZIP, RAR, ACE or ARJ archives, you know how frustrating it can be to access your important files. Fortunately, there is a solution that can help you recover your passwords quickly and efficiently: Advanced Archive Password Recovery 4.53 Serial Keygen Software.

- -

Advanced Archive Password Recovery 4.53 Serial Keygen Software is a powerful tool that can unlock password-protected archives created with any version of PKZip, WinZip, RAR, WinRAR and compatible products. It supports the latest AES encryption implementations as well as archives larger than two gigabytes. It can also exploit all known vulnerabilities and implementation flaws in the various compression algorithms for faster recovery.

-

advanced archive password recovery 4.53 serial keygen software


Download File >>> https://urlca.com/2uDc3A



- -

How does Advanced Archive Password Recovery 4.53 Serial Keygen Software work?

- -

Advanced Archive Password Recovery 4.53 Serial Keygen Software uses different methods to recover your passwords depending on the situation. Here are some of the features that make it stand out:

- -
    -
  • If you remember something about the password, such as its length, character set or a part of it, you can use the mask attack to speed up the recovery process. Advanced Archive Password Recovery 4.53 Serial Keygen Software will use every bit of information about the password for even faster recovery.
  • -
  • If you have no idea about the password, you can use the dictionary attack or the brute force attack to try all possible combinations of letters, numbers and symbols. Advanced Archive Password Recovery 4.53 Serial Keygen Software will attempt these attacks automatically and intelligently.
  • -
• If you have an unencrypted copy of even a single file from the archive, you can use the known-plaintext attack to unlock the entire archive and decrypt all files in minutes. This is possible thanks to a unique algorithm that can find the encryption key based on the content of that file.
  • -
  • If you have an archive created with WinZip 8.0 or earlier versions, you can use the guaranteed recovery feature to unlock it in under one hour. This is because these versions of WinZip had a weakness in their encryption scheme that allows Advanced Archive Password Recovery 4.53 Serial Keygen Software to crack them easily.
  • -
- -
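To see why extra knowledge about the password matters so much, it helps to estimate the size of the search space. The sketch below is a back-of-the-envelope illustration only: the 10-million-tries-per-second rate is an assumption loosely based on the "millions of passwords a second" claim, not a measured figure for this software.

```python
import string

def worst_case_seconds(length: int, charset_size: int, rate: float) -> float:
    """Worst-case time (in seconds) to try every candidate of a given length."""
    return charset_size ** length / rate

RATE = 10e6  # assumed attack speed: ~10 million passwords per second
ALNUM = len(string.ascii_letters + string.digits)  # 62 characters

# Nothing known about an 8-character alphanumeric password vs.
# a mask attack where the first 4 characters are already known:
full_search = worst_case_seconds(8, ALNUM, RATE)
mask_search = worst_case_seconds(4, ALNUM, RATE)

print(f"Full 8-character search: ~{full_search / 86400:.0f} days worst case")
print(f"Mask attack, 4 unknown characters: ~{mask_search:.1f} seconds worst case")
```

The gap between the two numbers is the whole point of the mask attack: every character you remember divides the search space by the charset size.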

How to use Advanced Archive Password Recovery 4.53 Serial Keygen Software?

- -

Using Advanced Archive Password Recovery 4.53 Serial Keygen Software is very easy and intuitive. You just need to follow these steps:

- -
    -
  1. Download and install Advanced Archive Password Recovery 4.53 Serial Keygen Software from the official website or from a trusted source.
  2. -
  3. Run the program and select the archive file that you want to recover.
  4. -
  5. Select the recovery method that suits your situation: mask attack, dictionary attack, brute force attack or known-plaintext attack.
  6. -
  7. Specify any additional options or settings that can help with the recovery process, such as the password length, character set or dictionary file.
  8. -
  9. Click on Start and wait for the program to find your password.
  10. -
  11. Once your password is recovered, you can copy it to the clipboard or save it to a file.
  12. -
  13. Use your password to open your archive and access your files.
  14. -
- -

Why choose Advanced Archive Password Recovery 4.53 Serial Keygen Software?

- -

There are many reasons why Advanced Archive Password Recovery 4.53 Serial Keygen Software is the best choice for recovering passwords from compressed archives. Here are some of them:

- -
    -
  • It has a high success rate and can recover passwords from any type of archive format.
  • -
  • It has a fast performance and can recover passwords in minutes or even seconds.
  • -
  • It has a user-friendly interface and supports multiple languages.
  • -
  • It has a low-level optimization that leads to password recovery speed of millions passwords a second.
  • -
  • It has a background mode that utilizes the idle CPU cycles without affecting your regular work.
  • -
  • It has a resume feature that allows you to stop and resume the recovery at any time.
  • -
- -

Conclusion

- -

If you are looking for a reliable and effective way to recover passwords from compressed archives, you should definitely try Advanced Archive Password Recovery 4.53 Serial Keygen Software. It is a professional tool that can unlock any archive format and decrypt any encryption scheme in no time. You can download it from here and start recovering your passwords today!

-

-

-
-
\ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/CyberGhost VPN 7.2 Crack Premium.md b/spaces/falterWliame/Face_Mask_Detection/CyberGhost VPN 7.2 Crack Premium.md deleted file mode 100644 index 5ed4fe4bd10b39ad59eb75078a645f4374beb31f..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/CyberGhost VPN 7.2 Crack Premium.md +++ /dev/null @@ -1,34 +0,0 @@ -
-

CyberGhost VPN 7.2 Crack Premium: A Reliable and Secure VPN Service

-

If you are looking for a way to protect your online privacy, surf anonymously and access blocked or censored content, you might want to consider using CyberGhost VPN 7.2 Crack Premium. This is a cracked version of the original CyberGhost VPN software, which allows you to enjoy all the features of the premium subscription without paying any money.

-

CyberGhost VPN 7.2 Crack Premium


Download Filehttps://urlca.com/2uDbT6



-

CyberGhost VPN 7.2 Crack Premium is based on the OpenVPN protocol with SSL encryption, which means that it creates a secure and encrypted connection between your device and one of the servers from the CyberGhost network. This way, your original IP address is hidden and replaced with one from the network, making it impossible for third parties to track your online activity or access your personal data.

-

With CyberGhost VPN 7.2 Crack Premium, you can also access geo-restricted or censored content from all over the world by connecting to one of the 900+ servers located in different countries. You can also block malicious content, prevent DNS leaks, and enjoy fully encrypted internet traffic with 256-bit AES technology.

-

CyberGhost VPN 7.2 Crack Premium is easy to use and install, and it works with almost any program that accesses the internet, such as browsers, streaming services, torrent clients, etc. It also offers a high-performance server network, which ensures minimal delays and fast loading times.

-

However, before you download and use CyberGhost VPN 7.2 Crack Premium, you should be aware of some risks and limitations. First of all, using a cracked version of any software is illegal and unethical, and it may violate the terms of service of CyberGhost VPN. Secondly, using a cracked version may expose you to malware or viruses that could harm your device or compromise your security. Thirdly, using a cracked version may not guarantee you the same level of quality and reliability as the original software, and it may stop working at any time due to updates or patches.

-

Therefore, if you want to use CyberGhost VPN safely and legally, you should consider purchasing a legitimate subscription from their official website[^1^]. This way, you can support the developers of this great service and enjoy all the benefits of a premium VPN without any risks or limitations.


-

-

CyberGhost VPN 7.2 Crack Premium: How to Install and Use It

-

If you still want to try CyberGhost VPN 7.2 Crack Premium, despite the risks and limitations mentioned above, you will need to follow some steps to install and use it. First of all, you will need to download the crack file from a reliable source, such as SadeemPC. Then, you will need to install the trial version of CyberGhost VPN 6.5.2 from their official website. After that, you will need to run the stop.service.exe file from the crack folder to stop the CyberGhost service. Next, you will need to copy and replace all the files from the crack folder to the installation directory of CyberGhost VPN. Finally, you will need to launch CyberGhost VPN and enjoy the premium features.

-

However, you should be careful when using CyberGhost VPN 7.2 Crack Premium, as it may not work properly or stop working at any time. You should also avoid updating CyberGhost VPN to newer versions or builds, as it may break the crack and cause errors or problems. Moreover, you should scan your device regularly for any malware or viruses that may have been installed along with the crack file.

-

CyberGhost VPN 7.2 Crack Premium: Pros and Cons

-

To sum up, CyberGhost VPN 7.2 Crack Premium is a cracked version of the original CyberGhost VPN software, which allows you to use a secure and reliable VPN service for free. However, it also comes with some drawbacks and risks that you should be aware of before using it. Here are some pros and cons of CyberGhost VPN 7.2 Crack Premium:

-
    -
  • Pros: -
      -
    • It offers all the features of the premium subscription of CyberGhost VPN, such as hiding your IP address, accessing geo-restricted content, blocking malicious content, preventing DNS leaks, and encrypting your internet traffic.
    • -
    • It is easy to use and install, and it works with almost any program that accesses the internet.
    • -
    • It has a high-performance server network that ensures fast and smooth connections.
    • -
    -
  • -
  • Cons: -
      -
    • It is illegal and unethical to use a cracked version of any software, and it may violate the terms of service of CyberGhost VPN.
    • -
    • It may expose you to malware or viruses that could harm your device or compromise your security.
    • -
    • It may not guarantee you the same level of quality and reliability as the original software, and it may stop working at any time due to updates or patches.
    • -
    -
  • -
-

Therefore, we recommend you to purchase a legitimate subscription from CyberGhost VPN if you want to use their service safely and legally.

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Bugjaeger APK How to Debug Shell Sideload and More on Android Devices.md b/spaces/fatiXbelha/sd/Bugjaeger APK How to Debug Shell Sideload and More on Android Devices.md deleted file mode 100644 index cd7bd2655b3a584344954a22aca37cc00e5bd81e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Bugjaeger APK How to Debug Shell Sideload and More on Android Devices.md +++ /dev/null @@ -1,198 +0,0 @@ - -

Bugjaeger APK: A Powerful Tool for Android Power Users

-

If you are an Android power user, developer, geek, or hacker, you probably know how useful it is to have a tool that can debug, dissect, shell into, and control your Android device or TV via USB or WiFi. Such a tool is called ADB (Android Debug Bridge), and it normally runs on your development machine. But what if you could run ADB directly on your Android device, without the need for a laptop or a PC?

-

bugjaeger apk


Download 🆗 https://urllie.com/2uNIAv



-

That's exactly what Bugjaeger APK does. It is an app that works as a sort of Android-to-Android ADB: it offers features similar to ADB, but instead of running on your development machine, it runs directly on your Android device. You connect your target device through a USB OTG cable or over WiFi, and you can then play around with that device. You can control an Android TV, a Wear OS watch, or even a Raspberry Pi running Android Things, as well as Oculus VR headsets.

-

In this article, we will show you how to use Bugjaeger APK, what features it offers, what benefits it brings, and where to download it. Let's get started!

-

How to use Bugjaeger APK

-

Requirements

-

To use Bugjaeger APK, you will need the following:

-


-
    -
  • An Android device that supports USB OTG (On-The-Go) or WiFi connection. This will be your host device, where you will install and run Bugjaeger APK.
  • -
  • An Android device that supports USB debugging or WiFi debugging. This will be your target device, where you will perform various tasks with Bugjaeger APK.
  • -
  • A USB OTG cable or a WiFi network. This will be used to connect your host device and your target device.
  • -
  • A permission from your target device to allow debugging. This will be requested when you connect your devices for the first time.
  • -
-

Installation

-

To install Bugjaeger APK on your host device, you can follow these steps:

-
    -
  1. Download the latest version of Bugjaeger APK from APKCombo. This is a trusted source that offers free and safe downloads of various Android apps.
  2. -
  3. Open the downloaded file and tap on Install. You may need to enable Unknown Sources in your settings if this is your first time installing an app from outside the Google Play Store.
  4. -
  5. Wait for the installation to finish. You will see a Bugjaeger icon on your app drawer.
  6. -
-

Connection

-

To connect your target device to your host device, you can use either USB OTG or WiFi. Here are the steps for each method:

-

USB OTG

-
    -
  1. Enable USB debugging on your target device. You can do this by going to Settings > About phone > Tap on Build number 7 times > Go back to Settings > Developer options > Enable USB debugging.
  2. -
  3. Connect your target device to your host device using a USB OTG cable.
  4. -
  5. Launch Bugjaeger APK on your host device and tap on the USB icon on the top right corner.
  6. -
  7. You will see a list of connected devices. Tap on the one you want to control.
  8. -
  9. You will see a pop-up on your target device asking you to allow USB debugging. Tap on OK.
  10. -
  11. You are now connected and ready to use Bugjaeger APK.
  12. -
-
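Under the hood, enumerating connected targets is the same job plain ADB does with `adb devices -l`. As a rough illustration of what any ADB front end has to do with that output (this is not Bugjaeger's actual code, and the sample serials below are made up), here is a minimal parser that separates authorized devices from ones still waiting for you to accept the debugging prompt:

```python
SAMPLE = """\
List of devices attached
R58M123ABC device usb:1-1 product:beyond1 model:SM_G973F
0a1b2c3d unauthorized usb:1-2
"""

def parse_devices(output: str):
    """Return (serial, state) pairs from `adb devices -l`-style output."""
    pairs = []
    for line in output.splitlines()[1:]:  # skip the "List of devices" header
        if not line.strip():
            continue
        serial, state = line.split()[:2]
        pairs.append((serial, state))
    return pairs

for serial, state in parse_devices(SAMPLE):
    if state == "unauthorized":
        print(f"{serial}: accept the USB debugging prompt on the device")
    else:
        print(f"{serial}: ready ({state})")
```

The `unauthorized` state is exactly the situation described in step 5 above: the pop-up is still waiting on the target device.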

WiFi

-
    -
  1. Make sure both your host device and your target device are connected to the same WiFi network.
  2. -
3. Enable wireless debugging on your target device. On Android 11 and newer, you can do this by going to Settings > Developer options > Wireless debugging. On older Android versions there is no such toggle; you first enable ADB over WiFi from a USB connection (for example with adb tcpip 5555), after which adbd listens on TCP port 5555.
  4. -
  5. Launch Bugjaeger APK on your host device and tap on the WiFi icon on the top right corner.
  6. -
  7. You will see a list of available devices. Tap on the one you want to control.
  8. -
  9. You will see a pop-up on your target device asking you to allow WiFi debugging. Tap on OK.
  10. -
  11. You are now connected and ready to use Bugjaeger APK.
  12. -
-
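Classic ADB-over-network means adbd on the target listening on TCP port 5555 and the controller connecting to an `ip:port` endpoint (with plain ADB that would be `adb connect 192.168.1.50:5555`). A small, hypothetical helper for building and sanity-checking such an endpoint might look like this (the function name and the addresses are illustrative, not part of any real API):

```python
import ipaddress

DEFAULT_ADB_PORT = 5555  # the port adbd conventionally listens on over TCP

def adb_endpoint(host: str, port: int = DEFAULT_ADB_PORT) -> str:
    """Validate an IPv4 address and return an `ip:port` endpoint string."""
    ipaddress.IPv4Address(host)  # raises ValueError for a malformed address
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return f"{host}:{port}"

print(adb_endpoint("192.168.1.50"))  # -> 192.168.1.50:5555
```

Validating the address up front gives a clearer error than letting a connection attempt time out against a typo.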

Features of Bugjaeger APK

-

Bugjaeger APK offers a variety of features that let you debug, control, and manipulate your target device. Here are some of the most useful ones:

-

Remote interactive shell

-

This feature allows you to run commands and scripts on your target device, just like you would do with ADB shell. You can access the remote interactive shell by tapping on the Shell icon on the bottom navigation bar of Bugjaeger APK. You will see a terminal-like interface where you can type and execute commands. You can also use the built-in keyboard or an external keyboard for convenience. Some of the commands you can run are:

  • ls: List files and directories in the current path.
  • cd: Change directory to a specified path.
  • cp: Copy files or directories from one location to another.
  • rm: Remove files or directories.
  • cat: Display the contents of a file or concatenate files.
  • echo: Print a message or a variable value.
  • ps: Show information about processes running on the device.
  • kill: Terminate a process by its process ID (PID).
  • top: Show CPU usage and memory usage of processes.
  • ping: Test network connectivity by sending packets to a specified host and measuring the response time.
  • ifconfig: Show network interface configuration and status.
  • ip: Show or manipulate routing, devices, policy routing, and tunnels.
  • wget: Download files from the web.
  • curl: Transfer data from or to a server using various protocols.
  • logcat: View system logs and filter them by tags, levels, or keywords.
  • dmesg: View kernel logs and messages.
  • dumpsys: Dump system service information and state.
  • dumpstate: Dump system state information such as battery, memory, CPU, network, etc.
  • screencap: Capture a screenshot of the device screen and save it as a PNG file.
  • screenrecord: Record a video of the device screen and save it as an MP4 file.
  • getprop: Get system property values.
  • setprop: Set system properties.
  • pm: Manage packages and applications.
  • am: Manage activities and services.
  • input: Simulate user input events such as tap, swipe, text, etc.
  • settings: Get or put system settings.
  • svc: Control system services such as WiFi, Bluetooth, airplane mode, etc.
  • reboot: Reboot the device.
  • su: Switch to the root user (requires root access).

You can also run custom scripts that you have stored on your host device or your target device. You can use the Script icon on the bottom navigation bar to access the script manager. You can create, edit, delete, import, export, and run scripts from there. You can also use variables and parameters in your scripts for more flexibility.


TV remote controller


This feature allows you to control your Android TV with Bugjaeger APK. You can use your host device as a remote controller for your Android TV. You can access the TV remote controller by tapping on the Remote icon on the bottom navigation bar of Bugjaeger APK. You will see a virtual remote controller with buttons for navigation, selection, back, home, menu, volume, power, etc. You can also use the keyboard icon to enter text on your Android TV. You can also use the mouse icon to move a cursor on your Android TV screen and click on items. This feature is very handy when you want to control your Android TV without using the physical remote controller or when you lose or break it.


Pull APK files


This feature allows you to copy apps from one device to another with Bugjaeger APK. You can pull APK files from your target device and save them on your host device or vice versa. You can access this feature by tapping on the Apps icon on the bottom navigation bar of Bugjaeger APK. You will see a list of apps installed on your target device. You can select one or more apps and tap on the Pull icon on the top right corner. You will be asked to choose a destination folder on your host device where you want to save the APK files. Alternatively, you can tap on the Push icon on the top right corner and choose an APK file from your host device that you want to install on your target device. This feature is useful when you want to backup or restore apps or when you want to share apps with other devices.
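Under the hood, pulling an app usually boils down to two adb steps: ask the package manager for the APK path, then copy that file off the device. The helper below is a hypothetical sketch of that flow (the package name and on-device path are examples, and the commands are only built as strings, not executed):

```python
# Illustrative sketch of the "pull APK" flow. The /data/app path is a
# stand-in for what `pm path <package>` would actually print.

def pull_apk_commands(package: str, dest_dir: str = ".") -> list[str]:
    apk_path = f"/data/app/{package}-1/base.apk"   # placeholder path
    return [
        f"adb shell pm path {package}",            # step 1: locate the APK
        f"adb pull {apk_path} {dest_dir}/{package}.apk",  # step 2: copy it out
    ]

for cmd in pull_apk_commands("com.example.app"):
    print(cmd)
```

Bugjaeger wraps these steps behind its Pull and Push icons so you never type them yourself.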


ADB backups


This feature allows you to backup and restore data with Bugjaeger APK. You can use ADB backup and restore commands to create and apply backup archives of your target device data. You can access this feature by tapping on the Backup icon on the bottom navigation bar of Bugjaeger APK. You will see two options: Backup and Restore. If you choose Backup, you will be asked to select what data you want to backup: apps, shared storage, system settings, etc. You will also be asked to choose a destination folder on your host device where you want to save the backup archive file. If you choose Restore, you will be asked to select a backup archive file from your host device that you want to apply on your target device. This feature is helpful when you want to backup or restore your data in case of loss or damage.


Screenshots


This feature allows you to capture screenshots of your target device with Bugjaeger APK. You can take screenshots of any screen or app on your target device and save them on your host device or share them with others. You can access this feature by tapping on the Screenshot icon on the bottom navigation bar of Bugjaeger APK. You will see a preview of your target device screen. You can tap on the Capture icon on the top right corner to take a screenshot. You will be asked to choose a destination folder on your host device where you want to save the screenshot file. Alternatively, you can tap on the Share icon on the top right corner and choose an app or a service where you want to share the screenshot file. This feature is handy when you want to capture something interesting or important on your target device screen.


System properties


This feature allows you to get and set system properties with Bugjaeger APK. System properties are key-value pairs that store various information and settings about your target device system. You can access this feature by tapping on the Properties icon on the bottom navigation bar of Bugjaeger APK. You will see a list of system properties with their keys and values. You can search for a specific property by entering its key or value in the search bar. You can also filter the properties by their source: system, default, or secure. You can tap on a property to view its details and edit its value. You can also add a new property by tapping on the Add icon on the top right corner. You will need to enter a key and a value for the new property. This feature is useful when you want to tweak or customize your target device system settings.
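The `getprop` command that backs this screen prints properties as `[key]: [value]` lines. A small parser for that format looks like this (the sample output is a trimmed, illustrative capture, not from any specific device):

```python
import re

# Sample of getprop's "[key]: [value]" output format (illustrative).
SAMPLE = """\
[ro.build.version.release]: [13]
[ro.product.model]: [Pixel 6]
[persist.sys.timezone]: [Europe/Vienna]
"""

def parse_getprop(text: str) -> dict[str, str]:
    """Turn getprop-style output into a key -> value dictionary."""
    props = {}
    for m in re.finditer(r"\[(.+?)\]: \[(.*?)\]", text):
        props[m.group(1)] = m.group(2)
    return props

props = parse_getprop(SAMPLE)
print(props["ro.product.model"])
```

Bugjaeger's Properties screen presents exactly this kind of key-value list, with search and editing on top.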


Fastboot commands


This feature allows you to execute fastboot commands with Bugjaeger APK. Fastboot is a protocol that lets you communicate with your target device bootloader and flash various partitions such as boot, recovery, system, etc. You can access this feature by tapping on the Fastboot icon on the bottom navigation bar of Bugjaeger APK. You will see a list of fastboot commands that you can run on your target device. Some of the commands are:

  • devices: List connected devices in fastboot mode.
  • reboot: Reboot the device normally.
  • reboot-bootloader: Reboot the device into bootloader mode.
  • reboot-recovery: Reboot the device into recovery mode.
  • flash: Flash a partition with an image file.
  • erase: Erase a partition.
  • format: Format a partition.
  • getvar: Get a bootloader variable.
  • set_active: Set the active slot for devices with A/B partitions.
  • lock: Lock the bootloader.
  • unlock: Unlock the bootloader.
  • oem: Execute an OEM-specific command.

You can also run custom fastboot commands by tapping on the Custom icon on the top right corner. You will need to enter the command and its arguments in the text field. This feature is helpful when you want to flash or modify your target device firmware or partitions.
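A typical flashing session strings several of these commands together. The sketch below only assembles the command lines for a hypothetical set of partition images (partition names and file names are examples; nothing is flashed):

```python
# Build an illustrative fastboot flashing sequence as plain strings.
# Flashing for real requires the device in bootloader mode.

def flash_commands(images: dict[str, str]) -> list[str]:
    cmds = ["fastboot devices"]  # first verify the device is visible
    cmds += [f"fastboot flash {part} {img}" for part, img in images.items()]
    cmds.append("fastboot reboot")
    return cmds

for c in flash_commands({"boot": "boot.img", "recovery": "recovery.img"}):
    print(c)
```

Bugjaeger's Fastboot screen lets you run each of these steps with a tap instead of typing them.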


System info


This feature allows you to get extensive system information with Bugjaeger APK. You can access various details about your target device hardware, software, network, battery, memory, storage, sensors, etc. You can access this feature by tapping on the Info icon on the bottom navigation bar of Bugjaeger APK. You will see a list of categories that you can tap on to view more information. Some of the categories are:

  • Device: Model, manufacturer, brand, product, serial number, hardware, etc.
  • OS: Version, API level, build number, security patch level, kernel version, etc.
  • CPU: Architecture, cores, frequency, usage, temperature, etc.
  • GPU: Vendor, model, renderer, version, extensions, etc.
  • RAM: Total, free, used, available, etc.
  • Storage: Internal and external storage capacity, free space, used space, etc.
  • Battery: Level, status, health, temperature, voltage, current, capacity, etc.
  • Network: WiFi and cellular network status, signal strength, IP address, MAC address, DNS, gateway, etc.
  • Sensors: List of sensors available on the device, with type, vendor, version, range, resolution, power, etc.
  • Features: List of features supported by the device, such as Bluetooth, camera, fingerprint, NFC, etc.
  • Permissions: List of permissions granted or denied to apps on the device.
  • Processes: List of processes running on the device, with PID, user, memory usage, CPU usage, etc.
  • Services: List of services running on the device, with name, package, process, state, etc.
  • Apps: List of apps installed on the device, with name, package, version, size, etc.

This feature is useful when you want to get a comprehensive overview of your target device system and performance.
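Much of this information ultimately comes from shell tools like `dumpsys`, whose output is indented "key: value" lines. A minimal parser, applied to an illustrative (trimmed) battery capture:

```python
# Parse dumpsys-style "  key: value" lines into a dictionary.
# SAMPLE is a trimmed, illustrative `dumpsys battery` capture.

SAMPLE = """\
Current Battery Service state:
  AC powered: false
  USB powered: true
  level: 87
  scale: 100
  temperature: 312
"""

def parse_dumpsys(text: str) -> dict[str, str]:
    out = {}
    for line in text.splitlines():
        # keep only the indented "key: value" lines
        if line.startswith("  ") and ":" in line:
            key, _, val = line.strip().partition(":")
            out[key] = val.strip()
    return out

info = parse_dumpsys(SAMPLE)
# dumpsys reports temperature in tenths of a degree Celsius
print(f"battery {info['level']}% (temp {int(info['temperature']) / 10:.1f} C)")
```

Bugjaeger's Info screen saves you from reading raw dumps like this by grouping the same data into categories.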


Benefits of Bugjaeger APK


Bugjaeger APK is not just a cool app that lets you play with your Android devices. It also brings some benefits that can make your life easier and more productive. Here are some of them:


Convenience


With Bugjaeger APK, you don't need to carry a laptop or a PC with you when you want to debug or control your Android devices. You can use your Android phone or tablet as a portable ADB tool that can connect to any other Android device via USB OTG or WiFi. This saves you the hassle of setting up your development environment, installing drivers, configuring ports, etc. You can also use Bugjaeger APK anywhere and anytime you want, without worrying about power outlets or internet connections. You can use Bugjaeger APK in your home, office, car, hotel room, airport lounge, or even outdoors.


Control


With Bugjaeger APK, you have better control and deep understanding of your Android device internals. You can access various system settings and properties that are normally hidden or restricted by the user interface. You can also run commands and scripts that can modify or manipulate your device behavior and functionality. You can also backup and restore your data or flash new firmware or partitions with ease. You can also monitor and optimize your device performance and battery life by checking the CPU usage, memory usage, network status, battery status, etc. You can also troubleshoot and fix any issues or errors that may occur on your device by viewing the system logs and messages.


Compatibility


With Bugjaeger APK, you can work with various Android devices and platforms. You can connect to any Android device that supports USB debugging or WiFi debugging. You can also connect to devices that run different versions of Android OS or different custom ROMs. You can also connect to devices that have different form factors or functions such as Android TV, Wear OS watch, Raspberry Pi with Android Things OS, Oculus VR, etc. You can also connect to devices that have different architectures or chipsets such as ARM, x86, Qualcomm, MediaTek, etc. You can also connect to devices that have different features or sensors such as Bluetooth, camera, fingerprint, NFC, etc. You can also connect to devices that have different permissions or security levels such as root access, bootloader unlock, etc.


Conclusion


Bugjaeger APK is a powerful tool for Android power users who want to debug, control, and manipulate their Android devices with ease and convenience. It offers a variety of features that let you run commands and scripts, control your Android TV, pull APK files, backup and restore data, take screenshots, get and set system properties, execute fastboot commands, and get extensive system information. It also brings some benefits such as convenience, control, and compatibility. You can download Bugjaeger APK from APKCombo and start using it right away.


If you are interested in learning more about Bugjaeger APK, you can visit their official website or follow them on Twitter. You can also join their Telegram group or Discord server to chat with other users and developers. You can also support their development by donating via PayPal or Patreon.


We hope you enjoyed this article and found it useful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!


FAQs

  • Is Bugjaeger APK safe to use?

    Bugjaeger APK is safe to use as long as you download it from a trusted source such as APKCombo. It does not contain any malware or spyware, and it does not collect any personal data from your devices. However, be careful when using Bugjaeger APK, as it can perform actions that affect your device's functionality or security. Always back up your data before using Bugjaeger APK, and only use it on devices that you own or have permission to use.

  • Is Bugjaeger APK free to use?

    Bugjaeger APK is free to use for personal and non-commercial purposes. You can download it from APKCombo without paying any fees. However, if you want to support the development of Bugjaeger APK and get access to some exclusive features and updates, you can donate via PayPal or Patreon.

  • Does Bugjaeger APK require root access?

    Bugjaeger APK does not require root access to work on your devices. However, some features, such as setting system properties or executing fastboot commands, may require root access to function properly. If your device is not rooted, you may see errors or warnings when using these features.

  • Does Bugjaeger APK work on iOS devices?

    No. Bugjaeger APK is designed for Android devices only. iOS devices use a different operating system and a different debugging protocol, so Bugjaeger APK cannot communicate with or control them.

  • How can I contact the developers of Bugjaeger APK?

    You can contact the developers of Bugjaeger APK by visiting their official website or following them on Twitter. You can also join their Telegram group or Discord server to chat with them directly, or send them an email at bugjaeger@gmail.com.


Classic Solitaire Online No Download: How to Play and Enjoy the Timeless Card Game


If you are looking for a fun and relaxing way to spend some time, you might want to try playing classic solitaire online no download. Solitaire is one of the most popular card games in the world, and it can be played by anyone, anywhere, anytime. In this article, we will tell you everything you need to know about classic solitaire, why you should play it online no download, and how to play it on your computer or mobile device.


What is Classic Solitaire?


Classic solitaire, also known as Klondike solitaire or Patience, is a single-player card game that involves sorting a deck of 52 cards into four piles according to suit and rank. The game is simple to learn but challenging to master, as it requires skill, strategy, and luck.


The history and popularity of the game


The origin of solitaire is not clear, but some historians believe that it was invented in France or Germany in the 18th century. The game became popular in Europe and America in the 19th century, especially among aristocrats and intellectuals who used it as a form of entertainment and mental exercise. The game was also featured in many books and movies, such as "The Count of Monte Cristo" by Alexandre Dumas and "The Shawshank Redemption" by Stephen King.


Today, solitaire is still one of the most played card games in the world, thanks to its accessibility and appeal. According to a survey by Microsoft, more than 35 million people play solitaire every month on their Windows computers. Solitaire is also available on many other platforms, such as online websites, mobile apps, and video game consoles.


The rules and objectives of the game


The goal of classic solitaire is to move all the cards from the tableau (the seven columns of cards on the table) to the foundation (the four empty piles at the top right corner) in ascending order from Ace to King. To do this, you can follow these rules:

  • You can only move one card at a time, either from the tableau or from the stock (the face-down pile at the top left corner).
  • You can only place a card on another card that is one rank higher and of the opposite color (for example, a black 6 on a red 7).
  • You can move a group of cards that are in sequence and of alternating colors (for example, a red 5, black 4, red 3) as a unit.
  • You can move a King (or a sequence starting with a King) to an empty column on the tableau; some versions allow any card.
  • You can move an Ace to an empty foundation pile, and then build up from there by suit.
  • You can draw one or three cards from the stock at a time, depending on your preference.
  • You can use the waste (the face-up pile next to the stock) to hold cards that you cannot use at the moment.
  • You win the game when you have moved all the cards to the foundation.
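The tableau stacking rule above (one rank lower, opposite color) is simple enough to express as a tiny checker. The rank and suit encodings here are our own illustration, not taken from any particular solitaire site:

```python
# Klondike tableau rule: a card may go on a card one rank higher
# and of the opposite color. Ranks run Ace=1 .. King=13.

RED = {"H", "D"}    # hearts, diamonds
BLACK = {"S", "C"}  # spades, clubs

def can_stack(moving: tuple[int, str], target: tuple[int, str]) -> bool:
    """True if `moving` (rank, suit) may be placed on `target` in the tableau."""
    m_rank, m_suit = moving
    t_rank, t_suit = target
    opposite_color = (m_suit in RED) != (t_suit in RED)
    return opposite_color and m_rank == t_rank - 1

print(can_stack((6, "S"), (7, "H")))  # black 6 on red 7 -> True
print(can_stack((6, "H"), (7, "D")))  # red 6 on red 7 -> False
```

Online solitaire games apply exactly this check every time you drop a card, which is why illegal moves simply snap back.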

Why Play Classic Solitaire Online No Download?


Playing classic solitaire online no download has many advantages over playing with physical cards or downloading software. Here are some of them:


The benefits of playing online

  • You can play anytime, anywhere, as long as you have an internet connection and a browser.
  • You don't need to shuffle or deal cards manually, which saves time and effort.
  • You don't need to worry about losing or damaging cards, which saves money.
  • You can choose from different levels of difficulty, themes, and layouts to suit your preference and mood.
  • You can track your progress, statistics, and achievements, and compare them with other players.
  • You can access tips, hints, and tutorials to improve your skills and strategies.
  • You can enjoy the game without any ads, pop-ups, or malware.

The features and options of online solitaire games


Playing classic solitaire online no download also gives you access to various features and options that enhance your gaming experience. Some of these are:

  • Auto-complete: This feature finishes the game automatically once every remaining card can be moved to the foundation.
  • Undo: This feature allows you to undo your last move or moves in case you make a mistake or change your mind.
  • Hint: This feature gives you a suggestion for your next move when you are stuck or unsure.
  • Restart: This feature allows you to start a new game with the same or a different deal.
  • Timer: This feature shows you how long it takes you to complete the game.
  • Score: This feature shows you how many points you earn for each move and for completing the game.
  • Moves: This feature shows you how many moves you have made so far.
  • Sound: This feature allows you to turn the sound effects and music of the game on or off.

How to Play Classic Solitaire Online No Download?


Playing classic solitaire online no download is easy and fun. All you need is a computer or a mobile device with an internet connection and a browser. Here are the steps and tips for playing online solitaire:


The steps and tips for playing online solitaire

  1. Go to a website or an app that offers classic solitaire online no download. Some of the best ones are [Solitaired], [World of Solitaire], and [Solitaire Bliss].
  2. Select your preferred level of difficulty, theme, and layout. You can also customize the card backs, backgrounds, and sounds.
  3. Click on the "Start" or "Play" button to begin the game. You will see the tableau, the stock, the waste, and the foundation on the screen.
  4. Drag and drop cards from the tableau, the stock, or the waste to the foundation or to another column on the tableau, following the rules and objectives of the game as explained above.
  5. Use the features and options of the game as needed. For example, you can click on the "Hint" button to get a suggestion for your next move, or click on the "Undo" button to reverse your last move.
  6. Try to complete the game as quickly and efficiently as possible. You will earn more points and achieve higher ranks if you do so.
  7. If you get stuck or bored, you can restart the game with a new deal or try a different level of difficulty, theme, or layout.

The best websites and apps for playing online solitaire


There are many websites and apps that offer classic solitaire online no download, but not all of them are equally good. Some of them may have poor graphics, annoying ads, limited features, or unreliable performance. To help you find the best ones, we have reviewed some of the most popular ones based on their quality, variety, functionality, and user-friendliness. Here are our top picks:

  • [Solitaired] (5/5): Offers over 500 types of solitaire games, including classic, spider, freecell, and pyramid solitaire, playable online with no download on any device with a browser. You can customize your game with different themes, backgrounds, card backs, and sounds; track your progress, statistics, and achievements and compare them with other players; and access tips, hints, tutorials, and articles. The website has a sleek design, smooth gameplay, and no ads.
  • [World of Solitaire] (4.5/5): Offers over 100 types of solitaire games with the same customization, progress tracking, and learning resources. The website has a classic design, fast gameplay, and minimal ads.
  • [Solitaire Bliss] (4/5): Offers over 30 types of solitaire games, again with full customization, progress tracking, and learning resources. The website has a modern design, smooth gameplay, and no ads.

Conclusion


Classic solitaire is a timeless card game that can provide you with hours of fun and relaxation. Playing it online no download is a convenient and enjoyable way to enjoy the game without any hassle or cost. You can choose from different levels of difficulty, themes, and layouts to suit your preference and mood. You can also use various features and options to enhance your gaming experience. You can also find the best websites and apps for playing online solitaire by checking our reviews above.




So what are you waiting for? Grab your computer or mobile device and start playing classic solitaire online no download today! You will be amazed by how much fun you will have!


FAQs


Here are some of the most frequently asked questions about classic solitaire online no download:

  1. Q: Is classic solitaire online no download free?
     A: Yes, it is completely free to play. You don't need to register, download, or install anything; you just need an internet connection and a browser.
  2. Q: Is classic solitaire online no download safe?
     A: Yes. The websites and apps that we recommend are secure and reliable. They do not contain any viruses, malware, or spyware, and they do not collect or share any personal or sensitive information.
  3. Q: Is classic solitaire online no download fair?
     A: Yes. The game uses a random number generator to shuffle and deal the cards, which ensures that every game is different and unpredictable, and it does not cheat or favor any player.
  4. Q: Is classic solitaire online no download challenging?
     A: Yes. The game requires skill, strategy, and luck to complete, and it offers different levels of difficulty to match your skill level.
  5. Q: Is classic solitaire online no download fun?
     A: Yes. The game is simple to learn but hard to master, and it offers different themes, layouts, and features to keep it interesting and enjoyable.


What is www.minecraft.com apk?


If you love sandbox games that let you unleash your creativity and imagination in a blocky world, you might have heard of Minecraft. It is one of the most popular and best-selling video games of all time, with over 200 million copies sold worldwide.



Minecraft is a game that allows you to explore infinite worlds and build anything you can imagine using blocks. You can play in different modes, such as survival mode, where you have to gather resources and fight enemies, or creative mode, where you have unlimited resources and can build anything you want. You can also play with other players online or on your own private world.


Minecraft is available on various platforms, such as PC, console, mobile, and VR. If you want to play Minecraft on your Android device, you need to download and install an apk file. An apk file is an Android application package that contains all the files and data needed to run an app on your device.


You can download and install the Minecraft apk file from the official website www.minecraft.com or from the Google Play Store. You will need to pay a one-time fee of $7.49 to get the full game, but you can also try a free trial version before buying it. Once you have downloaded and installed the apk file, you can launch the game and enjoy playing Minecraft on your Android device.


Minecraft features and gameplay


Explore infinite worlds and build anything you can imagine


One of the main features of Minecraft is that it gives you the freedom to explore infinite worlds and build anything you can imagine using blocks. You can create your own world from scratch or use one of the many pre-made maps available online. You can also generate random worlds with different biomes, such as forests, deserts, oceans, mountains, etc.


You can use various tools and materials to mine blocks and craft items. You can make weapons, armor, tools, furniture, food, potions, etc. You can also use redstone to create circuits and mechanisms that can power your creations. You can also breed animals, grow crops, fish, trade with villagers, and more.


You can play in different modes depending on your preference. In survival mode, you have to gather resources and fight enemies, such as zombies, skeletons, spiders, creepers, etc. You also have to manage your hunger and health bars. In creative mode, you have unlimited resources and can build anything you want without any restrictions. You can also play in adventure mode, where you have to follow a custom map with quests and challenges. You can also play in spectator mode, where you can fly around and observe the world without interacting with it.


-

Customize your experience with add-ons, skins, and texture packs

-

Another feature of Minecraft is that you can customize your experience with various add-ons, skins, and texture packs. Add-ons are modifications that change or add new features to the game, such as new mobs, items, blocks, biomes, etc. Skins are cosmetic changes that alter the appearance of your character. Texture packs are graphical changes that change the look of the blocks and items in the game.

-

You can find many add-ons, skins, and texture packs from the Minecraft Marketplace or other sources online. You can also create your own using the Minecraft tools or third-party software. You can apply them to your game using the settings menu or by importing them from external files.

-

Learn and have fun with Minecraft Education Edition

-

Minecraft Education Edition is a special version of Minecraft that supports learning across various subjects and skills. It is designed for educators and students who want to use Minecraft as a tool for teaching and learning in a fun and engaging way.

-

Minecraft Education Edition has many features that enhance the educational potential of Minecraft, such as coding lessons, curriculum-aligned activities, immersive classrooms, collaboration features, and more. You can also access a library of lessons and worlds created by other educators and students from around the world.

-

Minecraft Education Edition is available for Windows, Mac, iPad, and Chromebook devices. You can download it from the official website or the app store. You will need a valid Office 365 Education account to sign in and use the app. You can also join the Minecraft Education community to share your ideas and feedback with other educators and students.

-

Minecraft system requirements for Android devices

-

Minimum and recommended specifications

-

If you want to play Minecraft on your Android device, you need to make sure that your device meets the minimum and recommended specifications for the game. Here are the system requirements for Minecraft on Android devices:

Specification | Minimum | Recommended
Operating system | Android 4.2 Jelly Bean | Android 8.0 Oreo or higher
Processor | 1.2 GHz dual-core or higher | 1.8 GHz quad-core or higher
Memory (RAM) | 1 GB or higher | 2 GB or higher
Storage space | 300 MB or higher | 1 GB or higher
Screen resolution | 800 x 480 pixels or higher | 1280 x 720 pixels or higher
Internet connection | Required for online features and updates | Required for online features and updates
Battery life | N/A | Adequate for long gaming sessions
-

How to check your device compatibility

-

If you are not sure whether your device is compatible with Minecraft or not, you can check it using the Google Play Store or other tools. Here are some ways to check your device compatibility:

-
    -
  • Go to the Google Play Store and search for Minecraft. If you see the "Install" button, it means your device is compatible. If you see the "This app is incompatible with your device" message, it means your device is not compatible.
  • -
  • Go to the official website www.minecraft.com and click on the "Get Minecraft" button. Select the Android option and follow the instructions. If you can download and install the apk file, it means your device is compatible. If you encounter any errors or issues, it means your device is not compatible.
  • -
  • Use a third-party device-info or compatibility-checker app to read your device's specifications and compare them with the system requirements of Minecraft listed above. If every specification meets or exceeds the minimum, your device is compatible; otherwise, it is not.
  • -
-
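The requirements in the table above can also be turned into a quick self-check. The sketch below is a hypothetical Python helper (not an official Mojang or Google tool; the dictionary field names are illustrative) that compares a device's specifications against the minimum column of the table:

```python
# Hypothetical helper: compare an Android device's specs against the
# minimum requirements listed in the table above. The field names and
# example values are illustrative, not from any official tool.

MINIMUM_SPECS = {
    "android_version": (4, 2),   # Android 4.2 Jelly Bean
    "cpu_ghz": 1.2,              # 1.2 GHz clock speed
    "cpu_cores": 2,              # dual-core
    "ram_gb": 1.0,               # 1 GB RAM
    "storage_mb": 300,           # 300 MB free space
}

def meets_minimum(device: dict) -> bool:
    """Return True if every spec meets or exceeds the minimum."""
    return (
        tuple(device["android_version"]) >= MINIMUM_SPECS["android_version"]
        and device["cpu_ghz"] >= MINIMUM_SPECS["cpu_ghz"]
        and device["cpu_cores"] >= MINIMUM_SPECS["cpu_cores"]
        and device["ram_gb"] >= MINIMUM_SPECS["ram_gb"]
        and device["storage_mb"] >= MINIMUM_SPECS["storage_mb"]
    )

# Example: an older phone that still clears the bar.
old_phone = {
    "android_version": (5, 1),
    "cpu_ghz": 1.3,
    "cpu_cores": 4,
    "ram_gb": 1.0,
    "storage_mb": 512,
}
print(meets_minimum(old_phone))  # prints True
```

In practice you would read these values from your device's settings screen or a device-info app; the sketch only automates the comparison you would otherwise do against the table by hand.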

Minecraft reviews and ratings

-

What do critics and players say about Minecraft?

-

Minecraft has received overwhelmingly positive reviews and ratings from critics and players on various platforms. It has been praised for its originality, creativity, replayability, and educational value, and criticized for technical issues, a lack of in-game guidance, and its potential to be addictive.

-

Here are some examples of reviews and ratings from different sources:

-
    -
  • PC Gamer gave Minecraft a score of 96/100, calling it "a masterwork of game design that transcends its genre and platform."
  • -
  • Common Sense Media gave Minecraft 5/5 stars, saying that it is "an amazing game that lets kids be creative in a fun and safe environment."
  • -
  • PCMag gave Minecraft 4.5/5 stars, stating that it is "a brilliant sandbox game that will keep you hooked for hours."
  • -
-

How popular is Minecraft?

-

Minecraft is not only one of the most popular video games of all time, but also one of the most influential and successful ones. It has achieved many statistics and achievements that show how popular and successful it is, such as:

-
    -
  • It has sold over 200 million copies worldwide, making it the best-selling video game of all time.
  • -
  • It has over 126 million monthly active players, making it one of the most played video games of all time.
  • -
  • It has won over 30 awards, including Game of the Year, Best Indie Game, Best Family Game, etc.
  • -
  • It has set several Guinness World Records, such as Most-Played Online Game, Most-Downloaded Game App, Most-Viewed Game on YouTube, etc.
  • -

Minecraft tips and tricks

-

How to get started with Minecraft

-

If you are new to Minecraft, you might feel overwhelmed by the vast and open-ended world of the game. However, you don't need to worry, as there are some simple tips and tricks that can help you get started with Minecraft. Here are some of them:

-
    -
  • Choose a difficulty level that suits your preference. You can play in peaceful mode, where there are no enemies and you don't have to worry about hunger or health. You can also play in easy, normal, or hard mode, where there are different levels of enemies and challenges.
  • -
  • Choose a game mode that suits your style. You can play in survival mode, where you have to gather resources and fight enemies. You can also play in creative mode, where you have unlimited resources and can build anything you want. You can also play in adventure mode, where you have to follow a custom map with quests and challenges.
  • -
  • Learn the basics of mining and crafting. You can mine blocks using your fist or tools, such as pickaxes, shovels, axes, etc. You can craft items using a crafting table or your inventory. You can make weapons, armor, tools, furniture, food, potions, etc.
  • -
  • Learn how to survive the night. The night is dangerous in Minecraft, as enemies will spawn and attack you. You can survive the night by building a shelter, lighting up the area, sleeping in a bed, or setting the time to day using commands.
  • -
  • Learn how to find resources. You can find resources in different places in Minecraft, such as caves, mineshafts, villages, temples, dungeons, etc. You can also use maps, compasses, or coordinates to locate resources.
  • -
-

How to master Minecraft

-

If you are an experienced player of Minecraft, you might want to master the game and improve your skills and creativity. There are some tips and tricks that can help you master Minecraft. Here are some of them:

-
    -
  • Build complex structures. You can build complex structures using blocks and redstone. You can make houses, castles, towers, bridges, statues, etc. You can also use commands or command blocks to create custom structures.
  • -
  • Use redstone circuits. Redstone is a material that can transmit power and signals in Minecraft. You can use redstone to create circuits and mechanisms that can power your creations. You can make doors, traps, elevators, clocks, calculators, etc.
  • -
  • Find rare items. There are some rare items in Minecraft that are hard to find or obtain. You can find rare items by exploring the world, defeating enemies, trading with villagers, fishing, enchanting, brewing, etc. Some rare items are diamonds, netherite, ender pearls, dragon eggs, etc.
  • -
  • Play with mods. Mods are modifications that add new features or change the game in various ways. You can play with mods to enhance your Minecraft experience. You can find mods for different aspects of the game, such as gameplay, graphics, content, etc.
  • -
  • Join servers. Servers are online worlds where you can play with other players. You can join servers to play different game modes, such as survival, creative, mini-games, etc. You can also create your own server and invite your friends to join.
  • -
-

Conclusion

-

Minecraft is a game that lets you explore infinite worlds and build anything you can imagine using blocks. You can play in different modes, such as survival, creative, adventure, etc. You can also customize your experience with add-ons, skins, and texture packs. You can also learn and have fun with Minecraft Education Edition.

-

If you want to play Minecraft on your Android device, you need to download and install an apk file from the official website www.minecraft.com or from the Google Play Store. You also need to make sure that your device meets the system requirements for the game. You can also check the reviews and ratings from critics and players to see what they think about the game.

-

If you are new to Minecraft, you can follow some tips and tricks to get started with the game. If you are an experienced player, you can follow some tips and tricks to master the game. You can also play with other players online or on your own private world.

-

Minecraft is a game that will keep you hooked for hours with its endless possibilities and fun. If you are interested in playing Minecraft on your Android device, you can visit the official website www.minecraft.com for more information and download the apk file today.

-

FAQs

-

Here are some frequently asked questions and answers about www.minecraft.com apk:

-
    -
  • Q: Is www.minecraft.com apk safe to download?
    A: Yes, www.minecraft.com apk is safe to download from the official website or the Google Play Store. However, you should avoid downloading apk files from unknown or untrusted sources, as they may contain malware or viruses.
  • -
  • Q: How much does www.minecraft.com apk cost?
    A: www.minecraft.com apk costs $7.49 on the Google Play Store. However, you can also try a free trial version before buying the full game.
  • -
  • Q: How do I update www.minecraft.com apk?
    A: You can update www.minecraft.com apk automatically or manually through the Google Play Store or the official website. You should always update your game to enjoy the latest features and bug fixes.
  • -
  • Q: Can I play www.minecraft.com apk with other players?
    A: Yes, you can play www.minecraft.com apk with other players across different devices and platforms using cross-play or realms features. You can also join servers or create your own private world with friends.
  • -
  • Q: What are some alternatives to www.minecraft.com apk?
    A: Some alternatives to www.minecraft.com apk are Terraria, Roblox, Stardew Valley, Lego Worlds, etc.
  • -

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Realistic Bus Driving with Bus Simulator Ultimate on PC (Emulator).md b/spaces/fatiXbelha/sd/Enjoy Realistic Bus Driving with Bus Simulator Ultimate on PC (Emulator).md deleted file mode 100644 index 42a9ece1ef5202d2f4fd4a975549a445e323dcb2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Realistic Bus Driving with Bus Simulator Ultimate on PC (Emulator).md +++ /dev/null @@ -1,89 +0,0 @@ - -

How to Download Bus Simulator Ultimate in Laptop

-

Do you love driving buses and exploring different cities? Do you want to create your own bus company and become the largest bus corporation in the world? If yes, then you should try Bus Simulator Ultimate, a realistic and immersive bus simulation game for Android devices. But what if you don't have a powerful Android device or you want to enjoy the game on a bigger screen with better controls? Don't worry, you can still play Bus Simulator Ultimate on your laptop with the help of an Android emulator like BlueStacks. In this article, we will show you how to download and play Bus Simulator Ultimate on PC with BlueStacks, and also share some tips and tricks for playing the game on PC.

-

how to download bus simulator ultimate in laptop


DOWNLOADhttps://urllie.com/2uNwrm



-

What is Bus Simulator Ultimate?

-

Bus Simulator Ultimate is a simulation game developed by Zuuks Games, the creators of the hit Truck Simulator 2018: Europe. In this game, you can drive over 32 coach buses across more than 300 original terminals in countries such as the United States, United Kingdom, China, Canada, Russia, Germany, Italy, France, Spain, Netherlands, Turkey, South Korea, Japan, Brazil, Azerbaijan, and more. You can also establish your own bus company, hire employees, and manage it for maximum profit, as well as customize your buses with different skins and accessories.

-

Features of Bus Simulator Ultimate

-

Bus Simulator Ultimate has many features that make it one of the most popular bus simulation games on Android. Some of these features are:

-
    -
  • Free Multiplayer Game (Ultimate League)
  • -
  • Realistic city maps and bus stations
  • -
  • Passenger System that provides social and realistic reactions
  • -
  • Detailed Cockpits and realistic bus sound effects
  • -
  • 250+ radio stations and realistic weather conditions
  • -
  • Highway Toll roads and realistic traffic system
  • -
  • Easy controls (Tilt, Buttons or steering wheel)
  • -
  • More than 25 language support
  • -
-

Why play Bus Simulator Ultimate on PC?

-

Playing Bus Simulator Ultimate on PC has many advantages over playing it on your mobile device. Some of these advantages are:

-
    -
  • Bigger screen and improved visibility: You can see every detail of your favorite buses and cities on your laptop monitor instead of your tiny phone screen.
  • -
  • Better performance and battery life: You can play heavy games like Bus Simulator Ultimate without worrying about your device's specifications or battery life. You can use your laptop's power and resources to run the game smoothly.
  • -
  • Improved accuracy and control: You can use your mouse and keyboard or a gamepad to control your bus with more precision and agility. You can also customize your controls according to your preference.
  • -
  • No interruptions or distractions: You can play the game without getting disturbed by calls or messages or other notifications on your phone. You can also enjoy the game in full screen mode without any ads or pop-ups.
  • -
-

How to download and play Bus Simulator Ultimate on PC with BlueStacks

-

To download and play Bus Simulator Ultimate on PC, you need an Android emulator like BlueStacks. BlueStacks is a software that allows you to run Android apps and games on your PC or Mac. It is easy to use and has many features that enhance your gaming experience. Here are the steps to download and play Bus Simulator Ultimate on PC with BlueStacks:

-

Step 1: Download and install BlueStacks on your PC

-

The first step is to download and install BlueStacks on your PC. You can download it from the official website here. The installation process is simple and straightforward. Just follow the instructions on the screen and wait for the installation to complete.

-

-

Step 2: Sign in to Google Play Store or do it later

-

After installing BlueStacks, you need to sign in to your Google account to access the Google Play Store. You can do this by clicking on the Google icon on the home screen of BlueStacks. If you don't have a Google account, you can create one for free. Alternatively, you can skip this step and sign in later.

-

Step 3: Search for Bus Simulator Ultimate in the search bar

-

Once you have signed in to your Google account, you can search for Bus Simulator Ultimate in the search bar on the top right corner of the BlueStacks home screen. You can also use the voice search feature by clicking on the microphone icon.

-

Step 4: Click to install Bus Simulator Ultimate from the search results

-

After searching for Bus Simulator Ultimate, you will see a list of results related to the game. Click on the game icon to open its page on the Google Play Store. Then, click on the green "Install" button to start downloading and installing the game on your PC.

-

Step 5: Complete Google sign-in (if you skipped step 2) to install Bus Simulator Ultimate

-

If you skipped step 2 and did not sign in to your Google account before, you will need to do it now to install Bus Simulator Ultimate. You will see a pop-up window asking you to sign in to your Google account. Follow the instructions and complete the sign-in process.

-

Step 6: Start playing Bus Simulator Ultimate on PC

-

Congratulations! You have successfully downloaded and installed Bus Simulator Ultimate on your PC with BlueStacks. Now, you can start playing the game by clicking on its icon on the BlueStacks home screen or by clicking on "Open" on the Google Play Store page. Enjoy driving buses across different cities and countries!

-

Tips and tricks for playing Bus Simulator Ultimate on PC

-

To make your gaming experience more enjoyable and rewarding, here are some tips and tricks for playing Bus Simulator Ultimate on PC:

-

Customize your controls

-

One of the benefits of playing Bus Simulator Ultimate on PC is that you can customize your controls according to your preference. You can use your mouse and keyboard or a gamepad to control your bus. You can also change the key mapping and sensitivity settings by clicking on the keyboard icon on the bottom right corner of the BlueStacks window. You can also enable or disable tilt, buttons, or steering wheel controls from the game settings.

-

Use the Eco Mode and Multi-Instance features

-

Another benefit of playing Bus Simulator Ultimate on PC with BlueStacks is that you can use some of its amazing features like Eco Mode and Multi-Instance. Eco Mode allows you to reduce your PC's resource consumption by lowering the FPS of the game when it is running in the background. This way, you can save battery life and improve performance. Multi-Instance allows you to run multiple instances of BlueStacks and play different games or apps simultaneously. This way, you can switch between different games or tasks without closing any of them.

-

Earn rewards with Google Play Points

-

A final benefit of playing Bus Simulator Ultimate on PC with BlueStacks is that you can earn rewards with Google Play Points. Google Play Points is a loyalty program that rewards you for downloading and playing games and apps from the Google Play Store. You can earn points by completing various actions like installing games, making in-app purchases, watching ads, etc. You can then redeem these points for various rewards like discounts, coupons, gift cards, etc.

-

Conclusion

-

Bus Simulator Ultimate is a fun and realistic bus simulation game that lets you drive over 32 amazing coach buses across more than 300 original terminals in different countries. You can also create your own bus company and manage it for maximum profit. You can play this game on your laptop with BlueStacks, an Android emulator that allows you to run Android apps and games on your PC or Mac. By playing Bus Simulator Ultimate on PC with BlueStacks, you can enjoy many advantages like bigger screen, better performance, improved accuracy and control, no interruptions or distractions, and more. You can also use some of the features of BlueStacks like Eco Mode, Multi-Instance, and Google Play Points to enhance your gaming experience. We hope this article helped you learn how to download and play Bus Simulator Ultimate on PC with BlueStacks. If you have any questions or feedback, feel free to leave a comment below.

-

FAQs

-

Here are some of the frequently asked questions about Bus Simulator Ultimate and BlueStacks:

-

Q: Is Bus Simulator Ultimate free to play?

-

A: Yes, Bus Simulator Ultimate is free to download and play on Android devices. However, the game contains ads and offers in-app purchases for some items and features.

-

Q: Is BlueStacks safe to use?

-

A: Yes, BlueStacks is safe and secure to use. It does not contain any malware or viruses and does not harm your PC or Mac. It also respects your privacy and does not collect any personal data without your consent.

-

Q: How can I update Bus Simulator Ultimate on PC?

-

A: To update Bus Simulator Ultimate on PC, you need to open the Google Play Store app on BlueStacks and go to the "My apps & games" section. There, you will see a list of apps that have updates available. Click on the "Update" button next to Bus Simulator Ultimate to download and install the latest version of the game.

-

Q: How can I uninstall Bus Simulator Ultimate from PC?

-

A: To uninstall Bus Simulator Ultimate from PC, you need to open the BlueStacks app player and go to the "My apps" tab. There, you will see a list of apps that you have installed on your PC. Right-click on the Bus Simulator Ultimate icon and select "Uninstall" from the menu. Confirm your action by clicking on "Yes" in the pop-up window.

-

Q: How can I contact the developers of Bus Simulator Ultimate or BlueStacks?

-

A: To contact the developers of Bus Simulator Ultimate, you can visit their official website here or send them an email at info@zuuks.com. To contact the developers of BlueStacks, you can visit their official website here or send them an email at support@bluestacks.com.

-
-
\ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/detect_lm68.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/detect_lm68.py deleted file mode 100644 index b7e40997289e17405e1fb6c408d21adce7b626ce..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/detect_lm68.py +++ /dev/null @@ -1,106 +0,0 @@ -import os -import cv2 -import numpy as np -from scipy.io import loadmat -import tensorflow as tf -from util.preprocess import align_for_lm -from shutil import move - -mean_face = np.loadtxt('util/test_mean_face.txt') -mean_face = mean_face.reshape([68, 2]) - -def save_label(labels, save_path): - np.savetxt(save_path, labels) - -def draw_landmarks(img, landmark, save_name): - landmark = landmark - lm_img = np.zeros([img.shape[0], img.shape[1], 3]) - lm_img[:] = img.astype(np.float32) - landmark = np.round(landmark).astype(np.int32) - - for i in range(len(landmark)): - for j in range(-1, 1): - for k in range(-1, 1): - if img.shape[0] - 1 - landmark[i, 1]+j > 0 and \ - img.shape[0] - 1 - landmark[i, 1]+j < img.shape[0] and \ - landmark[i, 0]+k > 0 and \ - landmark[i, 0]+k < img.shape[1]: - lm_img[img.shape[0] - 1 - landmark[i, 1]+j, landmark[i, 0]+k, - :] = np.array([0, 0, 255]) - lm_img = lm_img.astype(np.uint8) - - cv2.imwrite(save_name, lm_img) - - -def load_data(img_name, txt_name): - return cv2.imread(img_name), np.loadtxt(txt_name) - -# create tensorflow graph for landmark detector -def load_lm_graph(graph_filename): - with tf.gfile.GFile(graph_filename, 'rb') as f: - graph_def = tf.GraphDef() - graph_def.ParseFromString(f.read()) - - with tf.Graph().as_default() as graph: - tf.import_graph_def(graph_def, name='net') - img_224 = graph.get_tensor_by_name('net/input_imgs:0') - output_lm = graph.get_tensor_by_name('net/lm:0') - lm_sess = tf.Session(graph=graph) - - return lm_sess,img_224,output_lm - -# landmark detection -def detect_68p(img_path,sess,input_op,output_op): - 
print('detecting landmarks......') - names = [i for i in sorted(os.listdir( - img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' in i] - vis_path = os.path.join(img_path, 'vis') - remove_path = os.path.join(img_path, 'remove') - save_path = os.path.join(img_path, 'landmarks') - if not os.path.isdir(vis_path): - os.makedirs(vis_path) - if not os.path.isdir(remove_path): - os.makedirs(remove_path) - if not os.path.isdir(save_path): - os.makedirs(save_path) - - for i in range(0, len(names)): - name = names[i] - print('%05d' % (i), ' ', name) - full_image_name = os.path.join(img_path, name) - txt_name = '.'.join(name.split('.')[:-1]) + '.txt' - full_txt_name = os.path.join(img_path, 'detections', txt_name) # 5 facial landmark path for each image - - # if an image does not have detected 5 facial landmarks, remove it from the training list - if not os.path.isfile(full_txt_name): - move(full_image_name, os.path.join(remove_path, name)) - continue - - # load data - img, five_points = load_data(full_image_name, full_txt_name) - input_img, scale, bbox = align_for_lm(img, five_points) # align for 68 landmark detection - - # if the alignment fails, remove corresponding image from the training list - if scale == 0: - move(full_txt_name, os.path.join( - remove_path, txt_name)) - move(full_image_name, os.path.join(remove_path, name)) - continue - - # detect landmarks - input_img = np.reshape( - input_img, [1, 224, 224, 3]).astype(np.float32) - landmark = sess.run( - output_op, feed_dict={input_op: input_img}) - - # transform back to original image coordinate - landmark = landmark.reshape([68, 2]) + mean_face - landmark[:, 1] = 223 - landmark[:, 1] - landmark = landmark / scale - landmark[:, 0] = landmark[:, 0] + bbox[0] - landmark[:, 1] = landmark[:, 1] + bbox[1] - landmark[:, 1] = img.shape[0] - 1 - landmark[:, 1] - - if i % 100 == 0: - draw_landmarks(img, landmark, os.path.join(vis_path, name)) - save_label(landmark, os.path.join(save_path, txt_name)) diff --git 
a/spaces/fbeckk/cell-seg/README.md b/spaces/fbeckk/cell-seg/README.md deleted file mode 100644 index 9716b1e239acc36e187c825f50fee1f09e5ef299..0000000000000000000000000000000000000000 --- a/spaces/fbeckk/cell-seg/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cell Seg -emoji: 👁 -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: bsd ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fbeckk/cell-seg/app.py b/spaces/fbeckk/cell-seg/app.py deleted file mode 100644 index 0d3338130e3b135318606404928215459028f95c..0000000000000000000000000000000000000000 --- a/spaces/fbeckk/cell-seg/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import random -import time -from io import BytesIO - -import PIL.Image as Image -import numpy as np -import openvino.runtime as ov -import pandas as pd -import streamlit as st -from cellpose import models -from skimage.color import label2rgb - -from openvino_utils import ov_inference - -cellpose_model = models.Cellpose( - gpu=False, - model_type="cyto2", - net_avg=False, -) - -core = ov.Core() -core_model = core.read_model("./converted_unet_cellpose/cyto2.xml") -ov_model = core.compile_model(core_model, "CPU") - -gifs = [ - "https://gifdb.com/images/high/waiting-fingers-agent-cooper-83uxmdriug9b3eph.gif", - "https://gifdb.com/images/high/i-ll-be-waiting-nacho-libre-ij5d7npbjldtnd2j.webp", - "https://gifdb.com/images/high/waiting-cat-nail-file-fuyageziynzjynxt.webp", - "https://gifdb.com/images/high/waiting-for-you-squid-game-f52p9bou56mol0wa.webp", - "https://gifdb.com/images/high/waiting-man-tick-tock-lucgba3e6vq2eecp.webp", - "https://gifdb.com/images/high/waiting-impatiently-dumbledore-cu8xf3lc3o7pzyxq.webp", - "https://gifdb.com/images/high/waiting-dancing-mr-bean-zqemgdq7qldp6jl7.webp", - "https://gifdb.com/images/high/waiting-sad-pablo-narcos-zz7uiyio8n4g1yra.webp", - 
"https://gifdb.com/images/high/still-waiting-justin-timberlake-e06wu78iv4c62mmz.webp", - "https://gifdb.com/images/high/still-waiting-skeleton-chair-4yggsjnib7cs49ig.webp", - "https://gifdb.com/images/high/waiting-jared-silicon-valley-0jqzysdhn9om30av.webp", - "https://gifdb.com/images/high/waiting-for-reply-mocha-oo9sso1g90140bxi.webp", - "https://gifdb.com/images/high/still-waiting-mickey-mouse-w7kp6rhecp3yizsx.webp", - "https://gifdb.com/images/high/still-waiting-little-rascals-zedphzadck29jgl6.webp", - "https://gifdb.com/images/high/still-waiting-boo-monsters-inc-zmx0wumbaimraxf8.webp", -] - -st.title("OpenVINO :handshake: CellPose") - -st.caption('Developed by CellAI - BECLS') -st.caption('Contributors: Gabriele Aldeghi :pig: Filip Krasniqi :bear:') - -st.markdown( - "[![Repo](https://badgen.net/badge/icon/GitHub?icon=github&label)](https://github.com/Valitacell-Ltd) [![Repo](https://badgen.net/badge/icon/GitHub?icon=github&label)](https://github.com/grozby) [![Repo](https://badgen.net/badge/icon/GitHub?icon=github&label)](https://github.com/filipkrasniqi) [![Repo](https://badgen.net/badge/icon/GitHub?icon=github&label)](https://github.com/MouseLand/cellpose) Copyright © 2020 Howard Hughes Medical Institute", - unsafe_allow_html=True, -) - -uploaded_file = st.file_uploader("Choose a file") -container_warning = st.container() - -col1, col2, col3 = st.columns(3) -container_table = st.container() -st.caption("OpenVINO vs CellPose: Inference Time [s] vs Image Size [pixels]") -st.image("assets/cellpose_benchmark.png") - -MAX_SIZE = 1024 - -if uploaded_file is not None: - try: - bytes_data = uploaded_file.getvalue() - image = Image.open(BytesIO(bytes_data)) - - img_np = np.asarray(image) - display_warning = (img_np.shape[0] > MAX_SIZE or - img_np.shape[1] > MAX_SIZE) - - if display_warning: - former_shape = img_np.shape - img_np = img_np[:MAX_SIZE, :MAX_SIZE] - container_warning.write(f"WARNING: Image has been cropped " - f"from {former_shape} to {img_np.shape}") - 
- img_input_cellpose = img_np.copy() - - if len(img_input_cellpose.shape) <= 2: - img_input_cellpose = np.expand_dims(img_input_cellpose, axis=-1) - img_input_cellpose = np.expand_dims(img_input_cellpose, axis=0) - - img_input_cellpose = img_input_cellpose / (2**16 - 1) - - col1.write("Input") - col2.write("CellPose") - col3.write("OpenVINO") - - col1.image(img_input_cellpose) - - cellpose_img_container = col2.empty() - cellpose_img_container.image(random.choice(gifs)) - - t1 = time.time() - - cp_mask, *_ = cellpose_model.eval( - img_input_cellpose, - batch_size=64, - normalize=False, - diameter=None, - flow_threshold=0.4, - channels=(0, 0), - ) - - t2 = time.time() - - cp_overlay = (label2rgb( - cp_mask, - image=img_input_cellpose.squeeze(), - bg_label=0, - ) * 255).astype(np.uint8,) - - cellpose_img_container.image(cp_overlay) - - ov_img_container = col3.empty() - ov_img_container.image(random.choice(gifs)) - - img_input_ov = img_np.copy() - - image = np.expand_dims(img_input_ov, axis=0) - - image = np.concatenate( - [ - image, - np.zeros_like(image), - ], - axis=0, - ).astype(float) - - image /= 2**16 - 1 - - t3 = time.time() - ov_mask = ov_inference(model=ov_model, x=image) - - t4 = time.time() - - ov_overlay = (label2rgb( - ov_mask, - image=img_input_cellpose.squeeze(), - bg_label=0, - ) * 255).astype(np.uint8,) - - ov_img_container.image(ov_overlay) - - df = pd.DataFrame([ - { - "Model": "CellPose", - "Execution Time [s]": f"{(t2-t1):.2f} seconds" - }, - { - "Model": "OpenVINO", - "Execution Time [s]": f"{(t4-t3):.2f} seconds" - }, - ]) - - container_table.table(df) - except Exception as e: - container_warning.write("WARNING: an error occurred. 
Please retry.") diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/918Kiss v5.0-1.apk The Ultimate Guide to Download and Play the Best Slots Game.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/918Kiss v5.0-1.apk The Ultimate Guide to Download and Play the Best Slots Game.md deleted file mode 100644 index 3ff0fe8bb6531f81a47ae6764a2e145b38d2fdb4..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/918Kiss v5.0-1.apk The Ultimate Guide to Download and Play the Best Slots Game.md +++ /dev/null @@ -1,106 +0,0 @@ -
-

918kiss v5.0-1.apk: What You Need to Know

-

If you are looking for a fun and exciting way to spend your free time, you might want to try out 918kiss, one of the most popular online casino platforms in Southeast Asia. And if you want to enjoy the best gaming experience possible, you might want to download the latest version of the app, 918kiss v5.0-1.apk. In this article, we will tell you everything you need to know about this amazing app, including its features, how to download and install it, and how to play and win on it.

-

Introduction

-

What is 918kiss?

-

918kiss is an online casino platform that offers a wide range of games, such as slots, table games, card games, arcade games, and live dealer games. You can play these games anytime and anywhere, as long as you have an internet connection and a compatible device. You can also win real money prizes, jackpots, and bonuses by playing on 918kiss.

-

918kiss v5.0-1.apk


DOWNLOADhttps://gohhs.com/2uPuLq



-

What is 918kiss v5.0-1.apk?

-

918kiss v5.0-1.apk is the latest version of the 918kiss app, which was released in June 2023. It is an updated and improved version of the previous app, which had some issues with security, performance, and user interface. The new app has fixed these issues and added some new features that make it more enjoyable and rewarding for players.

-

Features of 918kiss v5.0-1.apk

-

Improved security and performance

-

One of the main features of 918kiss v5.0-1.apk is that it has enhanced its security and performance levels. The app uses advanced encryption technology to protect your personal and financial information from hackers and scammers. It also runs smoothly and fast on any device, without any lag or glitches.

-

Enhanced user interface and graphics

-

Another feature of 918kiss v5.0-1.apk is that it has improved its user interface and graphics quality. The app has a sleek and modern design that is easy to navigate and use. It also has stunning and realistic graphics that make the games more immersive and engaging.

-

More games and bonuses

-

The last feature of 918kiss v5.0-1.apk is that it has added more games and bonuses for players to enjoy. The app has over 200 games to choose from, including some new and exclusive ones that are only available on this version. It also has more generous and frequent bonuses, such as welcome bonus, daily bonus, loyalty bonus, referral bonus, and more.

-

How to download and install 918kiss v5.0-1.apk

-

For Android devices

-

If you want to download and install 918kiss v5.0-1.apk on your Android device, you can follow these simple steps:

-
    -
  1. Go to the official website of 918kiss at [https://www.918KISS.to] or scan the QR code on the homepage.
  2. Click on the download button for Android and wait for the file to be downloaded. -
  3. Go to your device settings and enable the installation of apps from unknown sources.
  4. -
  5. Locate the downloaded file in your device storage and tap on it to install it.
  6. -
  7. Launch the app and enjoy playing on 918kiss v5.0-1.apk.
  8. -
-

For iOS devices

-

If you want to download and install 918kiss v5.0-1.apk on your iOS device, you can follow these simple steps:

-
    -
  1. Go to the official website of 918kiss at [https://www.918KISS.to] or scan the QR code on the homepage.
  2. -
  3. Click on the download button for iOS and wait for the file to be downloaded.
  4. -
  5. Go to your device settings and trust the developer of the app.
  6. -
  7. Locate the downloaded file in your device storage and tap on it to install it.
  8. -
  9. Launch the app and enjoy playing on 918kiss v5.0-1.apk.
  10. -
-

For Windows devices

-

If you want to download and install 918kiss v5.0-1.apk on your Windows device, you will need an Android emulator, such as BlueStacks, NoxPlayer, or LDPlayer. You can follow these simple steps:

-
    -
  1. Download and install an Android emulator of your choice on your Windows device.
  2. -
  3. Go to the official website of 918kiss at [https://www.918KISS.to] or scan the QR code on the homepage using the emulator's browser.
  4. -
  5. Click on the download button for Android and wait for the file to be downloaded.
  6. -
  7. Locate the downloaded file in your emulator's storage and tap on it to install it.
  8. -
  9. Launch the app and enjoy playing on 918kiss v5.0-1.apk.
  10. -
-

How to play and win on 918kiss v5.0-1.apk

-

Register an account and login

-

To play on 918kiss v5.0-1.apk, you will need to register an account first. You can do this by contacting the customer service team via WhatsApp, Telegram, or WeChat. They will provide you with a username and password that you can use to log in to the app. You can also change your password later for security reasons.

-


-

Choose a game and place a bet

-

Once you have logged in, you can choose from over 200 games available on 918kiss v5.0-1.apk. You can find them in different categories, such as slots, table games, card games, arcade games, and live dealer games. You can also search for your favorite game by name or provider. To play a game, you will need to place a bet using your account balance. You can deposit money into your account using various methods, such as bank transfer, e-wallet, or credit card. You can also withdraw your winnings using the same methods.
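Conceptually, every bet you place is checked against your balance and the game's limits before it is accepted. A rough sketch of that check (the RM0.01 and RM500 limits are the typical range this article quotes; the function itself is purely illustrative, not the app's actual code):

```python
MIN_BET, MAX_BET = 0.01, 500.0  # typical limits in RM, per this article

def can_place_bet(balance, bet):
    """A bet is valid only if it is within table limits and covered by the balance."""
    return MIN_BET <= bet <= MAX_BET and bet <= balance

print(can_place_bet(100.0, 5.0))    # accepted
print(can_place_bet(100.0, 600.0))  # rejected: above the table maximum
print(can_place_bet(2.0, 5.0))      # rejected: not enough balance
```

Individual games may override these limits, so always check the game rules before betting.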

-

Use tips and strategies

-

To increase your chances of winning on 918kiss v5.0-1.apk, you can use some tips and strategies that can help you improve your skills and knowledge. Some of these tips are:

-
    -
  • Play games that have a high return to player (RTP) percentage, which means a larger share of all wagers is paid back to players over the long run.
  • Play games that have a low house edge, which means they have a smaller advantage over the players. -
  • Play games that have a high volatility, which means they have a higher risk but also a higher reward potential.
  • -
  • Play games that have a progressive jackpot, which means they have a prize pool that increases with every bet placed by the players.
  • -
  • Play games that have a bonus feature, which means they have a special round or mode that can trigger extra rewards or free spins.
  • -
  • Play games that suit your preference and style, whether you like simple or complex, classic or modern, or themed or generic games.
  • -
-

You can also find more tips and strategies on the official blog of 918kiss at [https://www.918KISS.to/blog], where you can read articles, reviews, guides, and news about the app and its games.
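To see why the RTP and house-edge tips above matter, a quick back-of-the-envelope calculation helps (the percentages here are hypothetical, not taken from any particular game):

```python
def expected_loss(total_wagered, rtp):
    """Average long-run loss: the stake multiplied by the house edge (1 - RTP)."""
    return total_wagered * (1 - rtp)

# Wagering RM100 on a 96% RTP game loses RM4 on average over the long run,
# while the same stake at 92% RTP loses RM8 on average.
print(round(expected_loss(100, 0.96), 2))
print(round(expected_loss(100, 0.92), 2))
```

Individual sessions vary wildly; RTP only describes the average over many bets.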

-

Conclusion

-

918kiss v5.0-1.apk is the latest and best version of the 918kiss app, which offers a fun and exciting online casino experience for players. It has improved security and performance, enhanced user interface and graphics, and more games and bonuses. It is easy to download and install on any device, and easy to play and win on any game. If you are looking for a reliable and rewarding online casino platform, you should definitely try out 918kiss v5.0-1.apk today.

-

FAQs

-

Here are some frequently asked questions about 918kiss v5.0-1.apk:

-
    -
  1. Is 918kiss v5.0-1.apk safe and legal?
  2. -

    Yes, 918kiss v5.0-1.apk is safe and legal to use. It has a valid license from the Philippine Amusement and Gaming Corporation (PAGCOR), which regulates online gambling in the Philippines. It also uses advanced encryption technology to protect your data and transactions from hackers and scammers.

    -
  3. How can I contact the customer service team of 918kiss v5.0-1.apk?
  4. -

    You can contact the customer service team of 918kiss v5.0-1.apk via WhatsApp, Telegram, or WeChat. They are available 24/7 to assist you with any issues or inquiries you may have. You can find their contact details on the official website of 918kiss at [https://www.918KISS.to].

    -
  5. What are the minimum and maximum bets on 918kiss v5.0-1.apk?
  6. -

    The minimum and maximum bets on 918kiss v5.0-1.apk vary depending on the game you choose. Generally, the minimum bet is RM0.01 and the maximum bet is RM500. However, some games may have different limits, so you should check the game rules before placing your bets.

    -
  7. What are the payment methods supported by 918kiss v5.0-1.apk?
  8. -

    The payment methods supported by 918kiss v5.0-1.apk include bank transfer, e-wallet, and credit card. You can use any of these methods to deposit money into your account or withdraw your winnings from your account. The transactions are fast and secure, and usually take less than 15 minutes to complete.

    -
  9. Can I play on 918kiss v5.0-1.apk with my friends?
  10. -

    Yes, you can play on 918kiss v5.0-1.apk with your friends. You can invite them to join the app by sharing your referral link or code with them. You can also chat with them while playing on the app using the built-in chat feature. You can also compete with them on the leaderboard or join them in the live dealer games.

    -

-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download FINAL FANTASY TACTICS WotL 2.1 0 APK for Android - Enjoy the Classic RPG on Your Device.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download FINAL FANTASY TACTICS WotL 2.1 0 APK for Android - Enjoy the Classic RPG on Your Device.md deleted file mode 100644 index f25845ac41e277722cee19fd5593574fd209bfd6..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download FINAL FANTASY TACTICS WotL 2.1 0 APK for Android - Enjoy the Classic RPG on Your Device.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Final Fantasy Tactics 2.1 0 apk: A Classic Strategy RPG for Android

-

Final Fantasy Tactics is a strategy role-playing game that was released in 1997 for the PlayStation console. It is one of the most acclaimed games in the Final Fantasy series, with a complex and engaging story, a rich and diverse job system, and a challenging and rewarding tactical combat. The game has been re-released several times, with enhanced graphics, sound, and content. The latest version is Final Fantasy Tactics 2.1 0 apk, which is available for Android devices.

-

final fantasy tactics 2.1 0 apk


Download Filehttps://gohhs.com/2uPlKy



-

In this article, we will explore the features of Final Fantasy Tactics 2.1 0 apk and how to download it on your Android device. Whether you are a fan of the original game or a newcomer to the world of Ivalice, you will find something to enjoy in this classic strategy RPG.

-

The story and setting of Final Fantasy Tactics

-

Final Fantasy Tactics is set in the medieval fantasy world of Ivalice, where two rival factions are fighting for the throne of the kingdom. You play as Ramza Beoulve, a young noble who gets involved in the conflict and uncovers a sinister plot behind it. Along the way, you will meet various characters, some of whom will join your party, while others will oppose you.

-

The story of Final Fantasy Tactics is rich and complex, with multiple twists and turns, political intrigue, moral dilemmas, and hidden secrets. The game also features multiple endings, depending on your choices and actions throughout the game.

-

The gameplay and job system of Final Fantasy Tactics

-

Final Fantasy Tactics is a tactical role-playing game, where you control a party of up to five characters in turn-based battles on grid-based maps. You can move your characters around the map, attack enemies, use items, cast spells, and perform various actions depending on their job class.

-

-

The job system is one of the most distinctive features of Final Fantasy Tactics. There are over 20 types of jobs in the game, each with its own abilities, strengths, and weaknesses. You can change your characters' jobs at any time outside of battle, as long as they meet the requirements for that job. You can also learn new abilities by spending job points (JP) that you earn from battles.

-

The job system allows you to customize your characters according to your preferences and strategies. You can mix and match abilities from different jobs to create unique combinations and synergies. For example, you can have a Knight who can also cast White Magic, or a Ninja who can also use Geomancy.
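The mix-and-match idea can be sketched as a tiny data model. This is purely illustrative; the class and field names are invented for this article, not taken from the game's code:

```python
class Unit:
    """Illustrative model of the Tactics job system."""

    def __init__(self, name, job):
        self.name = name
        self.job = job           # primary job: determines stats and the main command set
        self.secondary = None    # ability set borrowed from another job
        self.jp = 0              # job points earned from battles

    def earn_jp(self, amount):
        self.jp += amount

    def set_secondary(self, ability_set):
        self.secondary = ability_set

ramza = Unit("Ramza", job="Knight")
ramza.earn_jp(200)                    # JP comes from actions taken in battle
ramza.set_secondary("White Magic")    # a Knight who can also heal
print(ramza.job, ramza.secondary, ramza.jp)
```

In the actual game, changing jobs and learning abilities both have JP requirements; this sketch omits them.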

-

The improvements and additions of Final Fantasy Tactics 2.1 0 apk

-

Final Fantasy Tactics 2.1 0 apk is based on the enhanced port of the game that was released for the PlayStation Portable in 2007, titled Final Fantasy Tactics: The War of the Lions. This version of the game features several improvements and additions over the original PlayStation version, such as:

-
    -
  • New high-resolution graphics and sound effects
  • -
  • New animated cutscenes with voice acting
  • -
  • New scenarios and characters, including Balthier from Final Fantasy XII and Luso from Final Fantasy Tactics A2
  • -
  • New jobs, such as Onion Knight and Dark Knight
  • -
  • New items, abilities, monsters, and locations
  • -
  • New multiplayer modes via local wireless or online connection
  • -
  • New touch screen controls for easier navigation
  • -
  • New autosave function for convenience
  • -
-

Final Fantasy Tactics 2.1 0 apk also fixes some bugs and glitches that were present in the original version of the game.

-

The steps to download and install Final Fantasy Tactics 2.1 0 apk

-

If you want to play Final Fantasy Tactics 2.1 0 apk on your Android device, you will need to follow these steps:

-
    -
  1. Download the Final Fantasy Tactics 2.1 0 apk file from a reliable source, such as [this one].
  2. -
  3. Download the OBB data file for the game from the same source, such as [this one].
  4. -
  5. Install the Final Fantasy Tactics 2.1 0 apk file on your device by tapping on it and allowing the installation from unknown sources.
  6. -
  7. Extract the OBB data file using a file manager app, such as [this one], and copy the folder com.square_enix.android_googleplay.FFT_en2 to the Android/OBB directory on your device.
  8. -
  9. Launch the game and enjoy!
  10. -
-

Conclusion

-

Final Fantasy Tactics 2.1 0 apk is a great way to experience one of the best strategy RPGs ever made on your Android device. The game offers a captivating story, a deep and flexible job system, and a challenging and satisfying tactical combat. The game also features improved graphics, sound, and content over the original version, as well as new touch screen controls and autosave function. If you are a fan of Final Fantasy or strategy games, you should definitely give this game a try.

-

FAQs

-

What are the system requirements for Final Fantasy Tactics 2.1 0 apk?

-

The game requires Android 4.0.3 or higher, and at least 1 GB of RAM and 700 MB of free storage space.

-

How can I save my progress in Final Fantasy Tactics 2.1 0 apk?

-

The game has an autosave function that saves your progress every time you enter or exit a battle or a location. You can also manually save your progress by accessing the menu and selecting Save Game.

-

How can I change the language of Final Fantasy Tactics 2.1 0 apk?

-

The game supports English, French, German, Italian, Spanish, Japanese, Korean, Traditional Chinese, and Simplified Chinese languages. You can change the language by accessing the menu and selecting Options, then Language.

-

How can I play multiplayer mode in Final Fantasy Tactics 2.1 0 apk?

-

The game supports local wireless and online multiplayer modes, where you can battle against other players or cooperate with them in special missions. You can access the multiplayer mode by selecting Multiplayer from the main menu.

-

Where can I find more information and tips about Final Fantasy Tactics 2.1 0 apk?

-

You can visit the official website of the game [here], or check out some fan-made guides and wikis [here] and [here].

-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GTA 5 Free Fire Mods for Android Features Link and More.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GTA 5 Free Fire Mods for Android Features Link and More.md deleted file mode 100644 index 8df75d402ced37d864303ff1325282b13a3d3d14..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GTA 5 Free Fire Mods for Android Features Link and More.md +++ /dev/null @@ -1,98 +0,0 @@ - -

How to Download and Install Free Fire x GTA 5 on Android

-

Free Fire x GTA 5 is a modded version of two popular games, Garena Free Fire and Grand Theft Auto V, that combines their features and gameplay in one package. This mod allows you to experience the thrill of survival shooting in Free Fire with the open-world adventure of GTA V. You can play as different characters, use various weapons and vehicles, and explore the maps of Los Santos and Blaine County.

-

free fire x gta 5 download for android


Download Zip https://gohhs.com/2uPqLu



-

If you are a fan of both games, you might be wondering how to download and install Free Fire x GTA 5 on your Android device. In this article, we will show you how to do that in a few simple steps. We will also tell you what are the main features of both games, how to check the file size and compatibility of your device, and how to enable unknown sources on your device settings. Let's get started!

-

How to Download Free Fire x GTA 5 APK and OBB Files

-

The first thing you need to do is to find the APK and OBB files for Free Fire x GTA 5. These are the files that contain the game data and resources. You can download them from various websites that offer modded games, such as GTA5MODAZ.com or APKMODY.io. Make sure you download the latest version of the mod, which is usually updated regularly.

-

Before you download the files, you should check the file size and compatibility of your device. The APK file is usually around 50 MB, while the OBB file can be up to 3 GB. You should have enough storage space on your device or SD card to accommodate these files. You should also have a device that runs on Android 7 or later, with at least 2 GB of RAM and a decent processor.

-

Once you have downloaded the files, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Apps & Notifications > Special Access > Install Unknown Apps > Chrome (or whichever browser you used) > Allow from this source. You can also find this option under Settings > Security > Unknown Sources in older versions of Android.

-

How to Install Free Fire x GTA 5 APK and OBB Files

-

After enabling unknown sources, you are ready to install Free Fire x GTA 5 on your device. To do this, you need a file manager app that can locate and extract the files. You can use any file manager app that you prefer, such as Cx File Explorer or File Manager. Follow these steps:

-
    -
  1. Open your file manager app and navigate to the location where you downloaded the files.
  2. -
  3. Select the APK file and tap Install. Wait for the installation process to finish.
  4. -
  5. Select the OBB file and tap Extract. Choose a destination folder to extract the file. You can use the default folder or create a new one.
  6. -
  7. Copy the extracted OBB folder, which should be named com.dts.freefireth, and paste it to the Android/OBB path on your device or SD card. If you don't have an OBB folder, you can create one.
  8. -
  9. Launch the game from your app drawer or home screen. You should see the Free Fire x GTA 5 logo and loading screen.
  10. -
-

Congratulations, you have successfully installed Free Fire x GTA 5 on your Android device. You can now enjoy the features and gameplay of both games in one mod.
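The copy step above is the one that most often goes wrong, so it is worth spelling out exactly where the OBB folder should end up. A small sketch of the expected layout (`/storage/emulated/0` is a typical Android internal-storage root, but the exact path varies by device):

```python
import posixpath

package = "com.dts.freefireth"          # OBB folder name from the steps above
storage_root = "/storage/emulated/0"    # typical internal-storage root (assumption)
obb_dir = posixpath.join(storage_root, "Android", "OBB", package)
print(obb_dir)  # /storage/emulated/0/Android/OBB/com.dts.freefireth
```

If the game reaches the loading screen but then complains about missing data, the folder name and location above are the first things to re-check.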

-

Conclusion

-

Free Fire x GTA 5 is a modded version of two popular games, Garena Free Fire and Grand Theft Auto V, that combines their features and gameplay in one package. You can play as different characters, use various weapons and vehicles, and explore the maps of Los Santos and Blaine County. To download and install the game on your Android device, you need to follow these steps:

-
    -
  • Download the APK and OBB files for Free Fire x GTA 5 from a reliable website.
  • -
  • Check the file size and compatibility of your device.
  • -
  • Enable unknown sources on your device settings.
  • -
  • Use a file manager app to locate and extract the files.
  • -
  • Copy the OBB folder to the Android/OBB path on your device or SD card.
  • -
  • Launch the game and enjoy the features.
  • -
-

We hope this article has helped you to download and install Free Fire x GTA 5 on your Android device. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Happy gaming!

-

FAQs

-

What are the minimum requirements for playing Free Fire x GTA 5 on Android?

-

The minimum requirements for playing Free Fire x GTA 5 on Android are:

-


- - - - - - -
OSAndroid 7 or later
RAM2 GB or more
ProcessorDual-core or better
Storage4 GB or more
InternetStable connection
-

Is Free Fire x GTA 5 safe and legal to download and install?

-

Free Fire x GTA 5 is a modded version of two official games, Garena Free Fire and Grand Theft Auto V, that are owned by their respective developers and publishers. Therefore, downloading and installing this mod may violate their terms of service and intellectual property rights. It may also expose your device to malware or viruses from untrusted sources. We do not endorse or promote the use of this mod, and we advise you to do so at your own risk and discretion.

-

Can I play Free Fire x GTA 5 online with other players?

-

Free Fire x GTA 5 is a modded version of two online games, Garena Free Fire and Grand Theft Auto V, that require an internet connection to play. However, this mod may not be compatible with the official servers or versions of these games, and it may not support multiplayer mode. You may also face issues such as lag, glitches, crashes, or bans from using this mod online. Therefore, we recommend you to play this mod offline or with friends who have the same mod installed.

-

How can I update Free Fire x GTA 5 to the latest version?

-

To update Free Fire x GTA 5 to the latest version, you need to download the new APK and OBB files from the same website where you downloaded the previous version. You can check for updates by visiting the website regularly or subscribing to their notifications. You can also follow their social media pages or blogs for news and updates. To install the new version, you need to follow the same steps as before, but make sure you delete or overwrite the old files before copying the new ones.

-

What are some alternatives to Free Fire x GTA 5 for Android?

-

If you are looking for some alternatives to Free Fire x GTA 5 for Android, you can try these games:

-
    -
  • PUBG Mobile: A battle royale game that features realistic graphics, weapons, vehicles, and maps.
  • -
  • GTA San Andreas: A classic GTA game that features a large map, diverse missions, and a rich story.
  • -
  • Call of Duty Mobile: A first-person shooter game that features various modes, maps, weapons, and characters.
  • -
  • Fortnite: A popular game that features a mix of building, shooting, and survival elements.
  • -
  • Minecraft: A sandbox game that allows you to create and explore your own world.
  • -
-

These are some of the alternatives to Free Fire x GTA 5 for Android that you can try. You can find them on the Google Play Store or other websites that offer Android games.

-
-
\ No newline at end of file diff --git a/spaces/fernfromecuador/SG161222-Realistic_Vision_V1.4/README.md b/spaces/fernfromecuador/SG161222-Realistic_Vision_V1.4/README.md deleted file mode 100644 index 415798546840b28a453b9071364d6613b9835659..0000000000000000000000000000000000000000 --- a/spaces/fernfromecuador/SG161222-Realistic_Vision_V1.4/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SG161222-Realistic Vision V1.4 -emoji: 🏆 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/setprototypeof/index.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/setprototypeof/index.d.ts deleted file mode 100644 index f108ecd0a8ca1ec609529d3a0b76106c48e418a0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/setprototypeof/index.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -declare function setPrototypeOf(o: any, proto: object | null): any; -export = setPrototypeOf; diff --git a/spaces/fun-research/FC-CLIP/fcclip/data/dataset_mappers/__init__.py b/spaces/fun-research/FC-CLIP/fcclip/data/dataset_mappers/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/fcclip/data/dataset_mappers/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
diff --git a/spaces/gforguru/EmailGenerator/app.py b/spaces/gforguru/EmailGenerator/app.py deleted file mode 100644 index b7d91da11669044ce55285f9bbb2fae122941570..0000000000000000000000000000000000000000 --- a/spaces/gforguru/EmailGenerator/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import streamlit as st -from langchain.prompts import PromptTemplate -from langchain.llms import CTransformers - -def getLLmResponse(email_topic,email_sender,email_recipient,email_style): - - llm = CTransformers(model='llama-2-7b-chat.ggmlv3.q2_K.bin', - model_type='llama', - config={'max_new_tokens':256, - 'temperature':0.01}) - - template =""" - write an email with {style} style and include topic:{email_topic}.\n\nSender: {sender}\nRecipient: {recipient} - \n\nEmail Text: - - """ - prompt = PromptTemplate(template=template, - input_variables=["style","email_topic","sender","recipient"]) - - response = llm(prompt.format(email_topic=email_topic,sender=email_sender, recipient = email_recipient, style = email_style)) - return response - -st.set_page_config(page_title="Email Generation Bot", - layout='centered', - initial_sidebar_state='collapsed') -st.header('Email Generation Bot') -email_topic = st.text_area('Please enter the email topic', height=275) - -c1,c2,c3 = st.columns([10,10,5]) - -with c1: - email_sender = st.text_input('Sender Name') - -with c2: - email_recipient = st.text_input('Recipient Name') - -with c3: - email_style = st.selectbox('Writing Style', - ('Formal','Appreciating','Not Satisfied','Neutral'), index=0) - -submit = st.button("Generate") - -if submit: - response = getLLmResponse(email_topic,email_sender,email_recipient,email_style) - st.write(response) \ No newline at end of file diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/__init__.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/__init__.py deleted file mode 100644 index 59d99cde5a32d9fe5561f88bdb16c334d946abfc..0000000000000000000000000000000000000000 --- 
a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -from .constants import BINARY_MODE, MULTICLASS_MODE, MULTILABEL_MODE - -from .jaccard import JaccardLoss -from .dice import DiceLoss -from .focal import FocalLoss -from .lovasz import LovaszLoss -from .soft_bce import SoftBCEWithLogitsLoss -from .soft_ce import SoftCrossEntropyLoss -from .tversky import TverskyLoss -from .mcc import MCCLoss diff --git a/spaces/golem4300/RVC-TTS/lib/infer_pack/models.py b/spaces/golem4300/RVC-TTS/lib/infer_pack/models.py deleted file mode 100644 index fbcac8deb5fe6fe2c77752ca5b10459ce71ea43b..0000000000000000000000000000000000000000 --- a/spaces/golem4300/RVC-TTS/lib/infer_pack/models.py +++ /dev/null @@ -1,711 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - -class TextEncoder256(nn.Module): - def __init__(self, out_channels, hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, f0=True): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: self.emb_pitch = nn.Embedding(256, hidden_channels) - self.encoder = attentions.Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout) - self.proj = nn.Conv1d(hidden_channels, 
out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch is None: x = self.emb_phone(phone) - else: x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(x.dtype) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - -class TextEncoder768(nn.Module): - def __init__(self, out_channels, hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, f0=True): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: self.emb_pitch = nn.Embedding(256, hidden_channels) - self.encoder = attentions.Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch is None: x = self.emb_phone(phone) - else: x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(x.dtype) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - -class ResidualCouplingBlock(nn.Module): - def __init__(self, channels, hidden_channels, kernel_size, dilation_rate, n_layers, n_flows=4, gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - 
self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - self.flows = nn.ModuleList() - for _ in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): self.flows[i * 2].remove_weight_norm() - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, 
- upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): self.ups.append(weight_norm(ConvTranspose1d(upsample_initial_channel // (2**i), upsample_initial_channel // (2 ** (i + 1)), k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for k, d in zip(resblock_kernel_sizes, resblock_dilation_sizes): self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: xs = self.resblocks[i * self.num_kernels + j](x) - else: xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: remove_weight_norm(l) - for l in self.resblocks: l.remove_weight_norm() - -class SineGen(torch.nn.Module): - def __init__(self, samp_rate, harmonic_num=0, sine_amp=0.1, noise_std=0.003, voiced_threshold=0): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - 
self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (idx + 2) - rad_values = (f0_buf / self.sampling_rate) % 1 - rand_ini = torch.rand(f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) - tmp_over_one *= upp - tmp_over_one = F.interpolate(tmp_over_one.transpose(2, 1), scale_factor=upp, mode="linear", align_corners=True).transpose(2, 1) - rad_values = F.interpolate(rad_values.transpose(2, 1), scale_factor=upp, mode="nearest").transpose(2, 1) - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate(uv.transpose(2, 1), scale_factor=upp, mode="nearest").transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod) - 
self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF(sampling_rate=sr, harmonic_num=0, is_half=is_half) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm(ConvTranspose1d(upsample_initial_channel // (2**i), upsample_initial_channel // (2 ** (i + 1)), k, u, padding=(k - u) // 2))) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for k, d in zip(resblock_kernel_sizes, resblock_dilation_sizes): self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: self.cond = 
nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: xs = self.resblocks[i * self.num_kernels + j](x) - else: xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: remove_weight_norm(l) - for l in self.resblocks: l.remove_weight_norm() - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - 
self.gin_channels = gin_channels - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256(inter_channels, hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout) - self.dec = GeneratorNSF(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels, sr=sr, is_half=kwargs["is_half"]) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds): - g = self.emb_g(ds).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - 
resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768(inter_channels, hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout) - self.dec = GeneratorNSF(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels, sr=sr, is_half=kwargs["is_half"]) - self.enc_q = PosteriorEncoder(spec_channels,inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds): - g = self.emb_g(ds).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments(z, 
y_lengths, self.segment_size) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256(inter_channels, hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, f0=False) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, 
upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): - g = self.emb_g(ds).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - 
self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768(inter_channels, hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, f0=False) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): - g = self.emb_g(ds).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - -class 
MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs += [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for d in self.discriminators: - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs += [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for d in self.discriminators: - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([norm_f(Conv1d(1, 16, 15, 1, padding=7)), norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), norm_f(Conv1d(1024, 1024, 5, 1, padding=2))]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def 
forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - b, c, t = x.shape - if t % self.period != 0: - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Acordes De Cuatro Pdf Downloadl [HOT].md b/spaces/gotiQspiryo/whisper-ui/examples/Acordes De Cuatro Pdf Downloadl [HOT].md deleted file mode 100644 index 05618603fb398347044de16d360d83407b14dc59..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Acordes De Cuatro Pdf Downloadl [HOT].md +++ /dev/null @@ -1,9 +0,0 @@ -
-

Once the application is launched on your computer screen, select the platform you wish to run the application on. For a Windows PC, select Other, then choose Android under it, and the Acordes De Ukulele launcher icon will appear.

-

Acordes De Cuatro Pdf Downloadl


Download Zip >>> https://urlgoal.com/2uyLLV



-

MEmu is another great option for running your mobile apps. It is ideal for those who wish to play games on their PC and Mac. Most popular games are available for download on MEmu Play, including Scrabble, Puzzle Kids, Happy Wheels, Bookworm Adventures, and Motor City. To check whether a game is available, tap the Search By Name option on the main page.

-

If you have an APK file instead of the original app and want to install it on a PC, the first step is to make sure you have the Android SDK; it can be installed and configured on Windows by following a few simple steps. The next step is to install an Android emulator on the PC that can be used for installing APK files. Installing any app from the file manager is then straightforward. Step 2: after downloading the Acordes De Ukulele app from the Google Play Store, click on the app icon and tap Open. Step 3: the app will now open, and you can see the icons and toggles for all of its features.
Read also: Without Official PC Support - How to Download Acordes De Ukulele on Windows 10?

-
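The sideloading flow described above (install the Android SDK, then push the APK into an emulator) can also be driven from the command line with adb. This is a minimal, hedged sketch, not the site's own instructions: the APK file name is a hypothetical placeholder, and `adb` is assumed to come from the Android SDK platform-tools.

```python
import shutil
import subprocess

APK = "acordes-de-ukulele.apk"  # hypothetical file name, for illustration only


def install_cmd(apk: str) -> list[str]:
    # "adb install -r" sideloads an APK onto the connected device or emulator;
    # the -r flag reinstalls an existing app while keeping its data.
    return ["adb", "install", "-r", apk]


cmd = install_cmd(APK)
print(" ".join(cmd))  # prints: adb install -r acordes-de-ukulele.apk

# Only invoke adb when the Android SDK platform-tools are actually on PATH.
if shutil.which("adb"):
    subprocess.run(cmd, check=False)
```

Building the command as a list (rather than one shell string) avoids quoting issues when the APK path contains spaces.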

EaseUS Partition Master is one of the most commonly used apps for partitioning a hard disk drive. You can also use the PC as a remote desktop client and interact with your desktop PC and other devices remotely. This is not required, but the option exists for those who want to use the PC's desktop functionality in this manner. Below are some of the best tips for using the PC as a remote desktop client:
Step 1: if you need to open a file or any program on a remote machine, click the Add Remote Computer option at the bottom of the main menu.

-

-
-
\ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Boost Your GM Sound Banks from 4MB to 250MB with Sonivox 250mb Gm Soundfont.md b/spaces/gotiQspiryo/whisper-ui/examples/Boost Your GM Sound Banks from 4MB to 250MB with Sonivox 250mb Gm Soundfont.md deleted file mode 100644 index f5715f40e42072878bcce124085f07d963948d89..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Boost Your GM Sound Banks from 4MB to 250MB with Sonivox 250mb Gm Soundfont.md +++ /dev/null @@ -1,6 +0,0 @@ -

Sonivox 250mb Gm Soundfontl


Download · https://urlgoal.com/2uyLR9



-
-
-
-
-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Fsx Beech B200 King Air Torrent TOP.md b/spaces/gotiQspiryo/whisper-ui/examples/Fsx Beech B200 King Air Torrent TOP.md deleted file mode 100644 index 6eadb723ab6bb867bd6ef5e6919afc05b29758a9..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Fsx Beech B200 King Air Torrent TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

fsx beech b200 king air torrent


DOWNLOADhttps://urlgoal.com/2uyMxO



- -The PDF settings recommendations will help you get the most out of your FSX and P3D programs. About Carenado: Carenado is a well-known name in the field of flight simulators ... world-class flight simulators and avionics systems. Its Windows and Macintosh flight simulation software and avionics systems provide a level of quality and usability that is appreciated by users. Since its founding in 1982, the company has continually refined its software and hardware to offer customers the highest-quality products.
-
-
-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Lefevre Metodo Per Clarinetto Pdf Download.md b/spaces/gotiQspiryo/whisper-ui/examples/Lefevre Metodo Per Clarinetto Pdf Download.md deleted file mode 100644 index 9ca480a3eb406920c95a354f1cd94425d4d98899..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Lefevre Metodo Per Clarinetto Pdf Download.md +++ /dev/null @@ -1,117 +0,0 @@ -
-

Lefevre Metodo Per Clarinetto Pdf Download

- -

Lefevre Metodo Per Clarinetto Pdf Download is a popular and useful resource for clarinet students and teachers. It is a method book that contains exercises, sonatas, and other pieces for clarinet practice and performance. It is written by Jean-Xavier Lefevre, a French clarinetist and composer who lived in the 18th and 19th centuries.

-

Lefevre Metodo Per Clarinetto Pdf Download


Download · https://urlgoal.com/2uyN7M



- -

If you are looking for Lefevre Metodo Per Clarinetto Pdf Download, you have come to the right place. In this article, we will tell you how to download and use Lefevre Metodo Per Clarinetto Pdf for free online. We will also tell you more about the author, the content, and the benefits of Lefevre Metodo Per Clarinetto Pdf.

- -

Who is Jean-Xavier Lefevre?

- -

Jean-Xavier Lefevre was born in Lausanne, Switzerland, in 1763. He moved to Paris at a young age and studied clarinet with Michel Yost, a famous clarinetist of his time. He became a member of the orchestra of the Paris Opera and later a professor of clarinet at the Paris Conservatory. He wrote several works for clarinet, including 12 sonatas, 6 duos, 6 trios, and a method book. He died in Paris in 1829.

- -

Lefevre was one of the most influential clarinetists and composers of his era. He contributed to the development of the clarinet technique and repertoire. He also introduced some innovations to the clarinet design, such as adding more keys and improving the bore. He was admired by his contemporaries and successors, such as Mozart, Weber, and Klose.

- -

What is Lefevre Metodo Per Clarinetto Pdf?

- -

Lefevre Metodo Per Clarinetto Pdf is a digital version of Lefevre's method book for clarinet. The original title of the book is Méthode de Clarinette, which means Method for Clarinet in French. The book was first published in 1802-03 by the Paris Conservatory. It was later translated into Italian by Carlo Augusto Lovagnini Scher and published as Metodo Popolare Per Clarinetto.

- -

Lefevre Metodo Per Clarinetto Pdf contains various exercises and pieces for clarinet practice and performance. The book is divided into three parts: Part I contains 60 exercises for developing tone, articulation, fingering, scales, arpeggios, intervals, ornaments, and expression; Part II contains 12 sonatas for clarinet and bass (or piano) that cover different styles and difficulties; Part III contains additional pieces for clarinet solo or with accompaniment that demonstrate various aspects of musical interpretation.

- -

Lefevre Metodo Per Clarinetto Pdf is a comprehensive and progressive method book that covers all the essential aspects of clarinet playing. It is suitable for beginners as well as advanced players who want to improve their skills and knowledge. It is also a valuable source of musical literature that showcases the classical style and taste of Lefevre.

- -

How to Download Lefevre Metodo Per Clarinetto Pdf for Free Online

- -

One of the easiest ways to download Lefevre Metodo Per Clarinetto Pdf for free online is to use IMSLP.org. IMSLP.org is a popular website that offers free access to millions of public domain sheet music files in PDF format. You can download Lefevre Metodo Per Clarinetto Pdf from IMSLP.org by following these steps:

- -
  1. Visit https://imslp.org/wiki/M%C3%A9thode_de_clarinette_%28Lef%C3%A8vre,_Jean-Xavier%29 and scroll down to find the section "Sheet Music".
  2. Choose the file that you want to download from the list of available files. You can choose between different editions, languages, arrangements, transcriptions, or selections.
  3. Click on the file name or the PDF icon to open the file in a new tab or window.
  4. Click on the download button or icon on the top right corner of the page to save the file to your device.
  5. Enjoy using Lefevre Metodo Per Clarinetto Pdf for your clarinet practice and performance.
- -

You can also download Lefevre Metodo Per Clarinetto Pdf from other sources online, such as EPDFX.com, Scribd.com, or Archive.org. However, you should be careful about the legality and safety of these sources, as they may contain viruses, malware, or pirated content.

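If you prefer to script the manual steps above, the download can be automated once you have copied a direct PDF link from the IMSLP file page. Below is a minimal sketch; the URL in the comment is a placeholder, not a real IMSLP file link, and the `%PDF-` magic-byte check is just a guard against accidentally saving an HTML error page:

```python
import urllib.request

def looks_like_pdf(data: bytes) -> bool:
    """Every valid PDF file starts with the magic bytes '%PDF-'."""
    return data.startswith(b"%PDF-")

def download_pdf(url: str, path: str) -> None:
    """Fetch `url` and save it to `path`, refusing anything that is not a PDF."""
    with urllib.request.urlopen(url) as response:
        data = response.read()
    if not looks_like_pdf(data):
        raise ValueError(f"{url} did not return a PDF file")
    with open(path, "wb") as f:
        f.write(data)

# Placeholder URL -- copy the real link from the IMSLP download page:
# download_pdf("https://example.org/lefevre-methode.pdf", "lefevre-methode.pdf")
```

Note that some hosts reject the default urllib user agent or require a click-through page, in which case downloading manually in the browser is simpler.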
- -

How to Use Lefevre Metodo Per Clarinetto Pdf for Your Clarinet Practice and Performance

- -

Once you have downloaded Lefevre Metodo Per Clarinetto Pdf to your device, you can use it for your clarinet practice and performance. Here are some tips on how to use Lefevre Metodo Per Clarinetto Pdf effectively:

-

- -
  • Start with Part I of the book and follow the exercises in order. They will help you develop your basic skills and techniques on the clarinet.
  • Practice each exercise slowly and carefully at first, then gradually increase your speed and accuracy. Pay attention to your tone quality, intonation, articulation, fingering, breathing, posture, and expression.
  • Use a metronome or a tuner to help you keep a steady tempo and pitch. You can also use a recording device or an app to listen back to your playing and check your progress.
  • Move on to Part II of the book when you feel confident with Part I. Choose a sonata that suits your level and style preference. You can play it with a bass instrument (such as cello or bassoon) or a piano accompaniment.
  • Study the score carefully before playing it. Analyze the structure, harmony, and phrasing of the piece before you play it.

    -

    What are the Benefits of Lefevre Metodo Per Clarinetto Pdf?

    - -

    Lefevre Metodo Per Clarinetto Pdf has many benefits for clarinet students and teachers who want to learn and teach clarinet with a classical approach. Here are some of the benefits of Lefevre Metodo Per Clarinetto Pdf:

    - -
    • It is free and easy to download online from various sources, such as IMSLP.org, EPDFX.com, Scribd.com, or Archive.org.
    • It is a comprehensive and progressive method book that covers all the essential aspects of clarinet playing, such as tone, articulation, fingering, scales, arpeggios, intervals, ornaments, expression, and interpretation.
    • It contains exercises and pieces that are suitable for different levels and styles of clarinet playing, from beginner to advanced, from classical to romantic.
    • It introduces clarinet students to the works of Jean-Xavier Lefevre, one of the most influential clarinetists and composers of his era.
    • It helps clarinet students develop their musical skills and knowledge, as well as their appreciation and enjoyment of clarinet music.
    - -

    How to Improve Your Clarinet Playing with Lefevre Metodo Per Clarinetto Pdf

    - -

    Lefevre Metodo Per Clarinetto Pdf is a great resource for improving your clarinet playing with a classical approach. However, you need to use it properly and effectively to get the best results. Here are some tips on how to improve your clarinet playing with Lefevre Metodo Per Clarinetto Pdf:

    - -
    • Practice regularly and consistently. Set a realistic and achievable goal for your practice time and stick to it. For example, you can practice for 30 minutes every day or for an hour every other day.
    • Practice with a purpose and a plan. Before you start practicing, decide what you want to work on and how you want to work on it. For example, you can focus on a specific exercise or piece, or on a specific skill or technique.
    • Practice with attention and feedback. While you are practicing, pay attention to your playing and listen carefully to your sound. Use a metronome or a tuner to help you keep a steady tempo and pitch. Use a recording device or an app to listen back to your playing and check your progress.
    • Practice with variety and challenge. Don't practice the same thing over and over again. Try different exercises and pieces that challenge you in different ways. For example, you can play faster or slower, louder or softer, higher or lower, or with different articulations or expressions.
    • Practice with fun and enjoyment. Don't make your practice a chore or a burden. Make it a fun and enjoyable activity that you look forward to. Play music that you like and that inspires you. Play with others if possible. Reward yourself for your achievements.
    - -


    How to Find Lefevre Metodo Per Clarinetto Pdf Online

    - -

    If you want to find Lefevre Metodo Per Clarinetto Pdf online, you can use various search engines or websites that offer sheet music files in PDF format. You can use keywords such as "Lefevre Metodo Per Clarinetto Pdf", "Lefevre Method for Clarinet Pdf", "Lefevre Clarinet Method Pdf", or "Metodo Popolare Per Clarinetto Pdf". You can also use filters such as language, date, file type, or domain to narrow down your search results.

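Search engines also support operators that narrow results to PDF files on a particular site, such as `filetype:pdf` and `site:imslp.org`. The sketch below assembles such a query into a Google search URL; the helper name `build_search_url` is just for illustration:

```python
from urllib.parse import urlencode

def build_search_url(keywords: str, filetype: str = "pdf", site: str = "") -> str:
    """Build a Google search URL that restricts results by file type and, optionally, site."""
    query = f'"{keywords}" filetype:{filetype}'
    if site:
        query += f" site:{site}"
    # urlencode percent-escapes the quotes, spaces, and colons for the query string
    return "https://www.google.com/search?" + urlencode({"q": query})

print(build_search_url("Lefevre Metodo Per Clarinetto", site="imslp.org"))
```

The same `filetype:` and `site:` operators also work when typed directly into the Google or Bing search box.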
    - -

    Some of the most popular and reliable search engines or websites that you can use to find Lefevre Metodo Per Clarinetto Pdf online are:

    - -
    • Google.com: Google is the most widely used and powerful search engine in the world. You can use Google to find Lefevre Metodo Per Clarinetto Pdf online by typing your keywords in the search box and clicking on the search button. You can also use Google Advanced Search to refine your search criteria.
    • Bing.com: Bing is another popular and effective search engine. Type your keywords in the search box and click on the search button; you can also use Bing Advanced Search to refine your search criteria.
    • IMSLP.org: IMSLP is a popular and useful website that offers free access to millions of public domain sheet music files in PDF format. You can use IMSLP to find Lefevre Metodo Per Clarinetto Pdf online by typing your keywords in the search box and clicking on the search button. You can also browse by composer, genre, instrument, or nationality.
    • EPDFX.com: EPDFX is another popular and useful website that offers free access to thousands of sheet music files in PDF format. You can use EPDFX to find Lefevre Metodo Per Clarinetto Pdf online by typing your keywords in the search box and clicking on the download button. You can also browse by category, popularity, or rating.
    • Scribd.com: Scribd is a popular and useful website that offers free and paid access to millions of books, documents, and sheet music files in PDF format. You can use Scribd to find Lefevre Metodo Per Clarinetto Pdf online by typing your keywords in the search box and clicking on the search button. You can also browse by category, language, or format.
    - -

    How to Print Lefevre Metodo Per Clarinetto Pdf Online

    - -

    If you want to print Lefevre Metodo Per Clarinetto Pdf online, you need to have a printer connected to your device and a PDF reader software installed on your device. You can use any PDF reader software that supports printing, such as Adobe Acrobat Reader, Foxit Reader, Sumatra PDF, or Google Chrome. Here are some steps on how to print Lefevre Metodo Per Clarinetto Pdf online:

    - -
    1. Download Lefevre Metodo Per Clarinetto Pdf from any of the sources mentioned above and save it to your device.
    2. Open Lefevre Metodo Per Clarinetto Pdf with your PDF reader software.
    3. Click on the print button or icon on the top menu bar of your PDF reader software.
    4. Select your printer settings, such as paper size, orientation, margins, quality, copies, etc.
    5. Click on the print button or icon again to start printing Lefevre Metodo Per Clarinetto Pdf.
    6. Enjoy reading and playing Lefevre Metodo Per Clarinetto Pdf.
    - -
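On Linux and macOS, the printing steps can also be driven from the command line through CUPS's `lp` tool. The sketch below only assembles the argument list (the printer name is a placeholder; list real ones with `lpstat -p`), so you can inspect the command before actually running it with `subprocess.run`:

```python
def build_lp_command(pdf_path: str, printer: str = "", copies: int = 1,
                     media: str = "A4") -> list:
    """Assemble a CUPS `lp` command: -d destination, -n copies, -o media=<size>."""
    cmd = ["lp", "-n", str(copies), "-o", f"media={media}"]
    if printer:  # printer name is a placeholder; find real names with `lpstat -p`
        cmd += ["-d", printer]
    cmd.append(pdf_path)
    return cmd

print(build_lp_command("lefevre-methode.pdf", printer="MyPrinter", copies=2))
# To actually print: subprocess.run(build_lp_command(...), check=True)
```

Building the list first and printing it is a cheap dry run; nothing is sent to the printer until you pass the list to `subprocess.run`.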


    Conclusion

    - -

    Lefevre Metodo Per Clarinetto Pdf Download is a great resource for clarinet students and teachers who want to learn and teach clarinet with a classical approach. The book has many benefits, such as being free and easy to download online, being comprehensive and progressive, and introducing the works of Jean-Xavier Lefevre. The book also has positive reviews and ratings from clarinet lovers who have used it for their clarinet practice and performance.

    - -

    If you want to download Lefevre Metodo Per Clarinetto Pdf Download for free online, you can use IMSLP.org or other websites that offer public domain sheet music files in PDF format. However, you should be careful about the legality and safety of these sources, as they may contain viruses, malware, or pirated content. You can also watch Lefevre Metodo Per Clarinetto Pdf videos online on YouTube or other video platforms to learn how to play or teach Lefevre Metodo Per Clarinetto Pdf better. You can also print Lefevre Metodo Per Clarinetto Pdf online with a printer and a PDF reader software.

    - -

    We hope this article was helpful for you. If you have any questions or suggestions, feel free to leave a comment below. Thank you for reading!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/models/vggtransformer.py b/spaces/gradio/HuBERT/examples/speech_recognition/models/vggtransformer.py deleted file mode 100644 index 97974360a454b581eb63bdfd2af2e2afa05596c7..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_recognition/models/vggtransformer.py +++ /dev/null @@ -1,1019 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import math -from collections.abc import Iterable - -import torch -import torch.nn as nn -from examples.speech_recognition.data.data_utils import lengths_to_encoder_padding_mask -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqEncoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - LinearizedConvolution, - TransformerDecoderLayer, - TransformerEncoderLayer, - VGGBlock, -) - - -@register_model("asr_vggtransformer") -class VGGTransformerModel(FairseqEncoderDecoderModel): - """ - Transformers with convolutional context for ASR - https://arxiv.org/abs/1904.11660 - """ - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--vggblock-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one vggblock: - [(out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - use_layer_norm), ...]) - """, - ) - parser.add_argument( - "--transformer-enc-config", - type=str, - 
metavar="EXPR", - help="""" - a tuple containing the configuration of the encoder transformer layers - configurations: - [(input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout), ...]') - """, - ) - parser.add_argument( - "--enc-output-dim", - type=int, - metavar="N", - help=""" - encoder output dimension, can be None. If specified, projecting the - transformer output to the specified dimension""", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--tgt-embed-dim", - type=int, - metavar="N", - help="embedding dimension of the decoder target tokens", - ) - parser.add_argument( - "--transformer-dec-config", - type=str, - metavar="EXPR", - help=""" - a tuple containing the configuration of the decoder transformer layers - configurations: - [(input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout), ...] - """, - ) - parser.add_argument( - "--conv-dec-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples for the decoder 1-D convolution config - [(out_channels, conv_kernel_size, use_layer_norm), ...]""", - ) - - @classmethod - def build_encoder(cls, args, task): - return VGGTransformerEncoder( - input_feat_per_channel=args.input_feat_per_channel, - vggblock_config=eval(args.vggblock_enc_config), - transformer_config=eval(args.transformer_enc_config), - encoder_output_dim=args.enc_output_dim, - in_channels=args.in_channels, - ) - - @classmethod - def build_decoder(cls, args, task): - return TransformerDecoder( - dictionary=task.target_dictionary, - embed_dim=args.tgt_embed_dim, - transformer_config=eval(args.transformer_dec_config), - conv_config=eval(args.conv_dec_config), - encoder_output_dim=args.enc_output_dim, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted - # (in 
case there are any new ones) - base_architecture(args) - - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - return cls(encoder, decoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = True - return lprobs - - -DEFAULT_ENC_VGGBLOCK_CONFIG = ((32, 3, 2, 2, False),) * 2 -DEFAULT_ENC_TRANSFORMER_CONFIG = ((256, 4, 1024, True, 0.2, 0.2, 0.2),) * 2 -# 256: embedding dimension -# 4: number of heads -# 1024: FFN -# True: apply layerNorm before (dropout + resiaul) instead of after -# 0.2 (dropout): dropout after MultiheadAttention and second FC -# 0.2 (attention_dropout): dropout in MultiheadAttention -# 0.2 (relu_dropout): dropout after ReLu -DEFAULT_DEC_TRANSFORMER_CONFIG = ((256, 2, 1024, True, 0.2, 0.2, 0.2),) * 2 -DEFAULT_DEC_CONV_CONFIG = ((256, 3, True),) * 2 - - -# TODO: repace transformer encoder config from one liner -# to explicit args to get rid of this transformation -def prepare_transformer_encoder_params( - input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout, -): - args = argparse.Namespace() - args.encoder_embed_dim = input_dim - args.encoder_attention_heads = num_heads - args.attention_dropout = attention_dropout - args.dropout = dropout - args.activation_dropout = relu_dropout - args.encoder_normalize_before = normalize_before - args.encoder_ffn_embed_dim = ffn_dim - return args - - -def prepare_transformer_decoder_params( - input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout, -): - args = argparse.Namespace() - args.decoder_embed_dim = input_dim - args.decoder_attention_heads = num_heads - args.attention_dropout = attention_dropout - args.dropout = dropout - args.activation_dropout = relu_dropout - args.decoder_normalize_before = normalize_before - 
args.decoder_ffn_embed_dim = ffn_dim - return args - - -class VGGTransformerEncoder(FairseqEncoder): - """VGG + Transformer encoder""" - - def __init__( - self, - input_feat_per_channel, - vggblock_config=DEFAULT_ENC_VGGBLOCK_CONFIG, - transformer_config=DEFAULT_ENC_TRANSFORMER_CONFIG, - encoder_output_dim=512, - in_channels=1, - transformer_context=None, - transformer_sampling=None, - ): - """constructor for VGGTransformerEncoder - - Args: - - input_feat_per_channel: feature dim (not including stacked, - just base feature) - - in_channel: # input channels (e.g., if stack 8 feature vector - together, this is 8) - - vggblock_config: configuration of vggblock, see comments on - DEFAULT_ENC_VGGBLOCK_CONFIG - - transformer_config: configuration of transformer layer, see comments - on DEFAULT_ENC_TRANSFORMER_CONFIG - - encoder_output_dim: final transformer output embedding dimension - - transformer_context: (left, right) if set, self-attention will be focused - on (t-left, t+right) - - transformer_sampling: an iterable of int, must match with - len(transformer_config), transformer_sampling[i] indicates sampling - factor for i-th transformer layer, after multihead att and feedfoward - part - """ - super().__init__(None) - - self.num_vggblocks = 0 - if vggblock_config is not None: - if not isinstance(vggblock_config, Iterable): - raise ValueError("vggblock_config is not iterable") - self.num_vggblocks = len(vggblock_config) - - self.conv_layers = nn.ModuleList() - self.in_channels = in_channels - self.input_dim = input_feat_per_channel - self.pooling_kernel_sizes = [] - - if vggblock_config is not None: - for _, config in enumerate(vggblock_config): - ( - out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - layer_norm, - ) = config - self.conv_layers.append( - VGGBlock( - in_channels, - out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - input_dim=input_feat_per_channel, - layer_norm=layer_norm, - ) - ) - 
self.pooling_kernel_sizes.append(pooling_kernel_size) - in_channels = out_channels - input_feat_per_channel = self.conv_layers[-1].output_dim - - transformer_input_dim = self.infer_conv_output_dim( - self.in_channels, self.input_dim - ) - # transformer_input_dim is the output dimension of VGG part - - self.validate_transformer_config(transformer_config) - self.transformer_context = self.parse_transformer_context(transformer_context) - self.transformer_sampling = self.parse_transformer_sampling( - transformer_sampling, len(transformer_config) - ) - - self.transformer_layers = nn.ModuleList() - - if transformer_input_dim != transformer_config[0][0]: - self.transformer_layers.append( - Linear(transformer_input_dim, transformer_config[0][0]) - ) - self.transformer_layers.append( - TransformerEncoderLayer( - prepare_transformer_encoder_params(*transformer_config[0]) - ) - ) - - for i in range(1, len(transformer_config)): - if transformer_config[i - 1][0] != transformer_config[i][0]: - self.transformer_layers.append( - Linear(transformer_config[i - 1][0], transformer_config[i][0]) - ) - self.transformer_layers.append( - TransformerEncoderLayer( - prepare_transformer_encoder_params(*transformer_config[i]) - ) - ) - - self.encoder_output_dim = encoder_output_dim - self.transformer_layers.extend( - [ - Linear(transformer_config[-1][0], encoder_output_dim), - LayerNorm(encoder_output_dim), - ] - ) - - def forward(self, src_tokens, src_lengths, **kwargs): - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - bsz, max_seq_len, _ = src_tokens.size() - x = src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - x = x.transpose(1, 2).contiguous() - # (B, C, T, feat) - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - - bsz, _, output_seq_len, _ = x.size() - - # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) -> (T, B, C * feat) - x = x.transpose(1, 
2).transpose(0, 1) - x = x.contiguous().view(output_seq_len, bsz, -1) - - input_lengths = src_lengths.clone() - for s in self.pooling_kernel_sizes: - input_lengths = (input_lengths.float() / s).ceil().long() - - encoder_padding_mask, _ = lengths_to_encoder_padding_mask( - input_lengths, batch_first=True - ) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - subsampling_factor = int(max_seq_len * 1.0 / output_seq_len + 0.5) - attn_mask = self.lengths_to_attn_mask(input_lengths, subsampling_factor) - - transformer_layer_idx = 0 - - for layer_idx in range(len(self.transformer_layers)): - - if isinstance(self.transformer_layers[layer_idx], TransformerEncoderLayer): - x = self.transformer_layers[layer_idx]( - x, encoder_padding_mask, attn_mask - ) - - if self.transformer_sampling[transformer_layer_idx] != 1: - sampling_factor = self.transformer_sampling[transformer_layer_idx] - x, encoder_padding_mask, attn_mask = self.slice( - x, encoder_padding_mask, attn_mask, sampling_factor - ) - - transformer_layer_idx += 1 - - else: - x = self.transformer_layers[layer_idx](x) - - # encoder_padding_maks is a (T x B) tensor, its [t, b] elements indicate - # whether encoder_output[t, b] is valid or not (valid=0, invalid=1) - - return { - "encoder_out": x, # (T, B, C) - "encoder_padding_mask": encoder_padding_mask.t() - if encoder_padding_mask is not None - else None, - # (B, T) --> (T, B) - } - - def infer_conv_output_dim(self, in_channels, input_dim): - sample_seq_len = 200 - sample_bsz = 10 - x = torch.randn(sample_bsz, in_channels, sample_seq_len, input_dim) - for i, _ in enumerate(self.conv_layers): - x = self.conv_layers[i](x) - x = x.transpose(1, 2) - mb, seq = x.size()[:2] - return x.contiguous().view(mb, seq, -1).size(-1) - - def validate_transformer_config(self, transformer_config): - for config in transformer_config: - input_dim, num_heads = config[:2] - if input_dim % num_heads != 0: - msg = ( - "ERROR in transformer config {}: ".format(config) - + 
"input dimension {} ".format(input_dim) - + "not dividable by number of heads {}".format(num_heads) - ) - raise ValueError(msg) - - def parse_transformer_context(self, transformer_context): - """ - transformer_context can be the following: - - None; indicates no context is used, i.e., - transformer can access full context - - a tuple/list of two int; indicates left and right context, - any number <0 indicates infinite context - * e.g., (5, 6) indicates that for query at x_t, transformer can - access [t-5, t+6] (inclusive) - * e.g., (-1, 6) indicates that for query at x_t, transformer can - access [0, t+6] (inclusive) - """ - if transformer_context is None: - return None - - if not isinstance(transformer_context, Iterable): - raise ValueError("transformer context must be Iterable if it is not None") - - if len(transformer_context) != 2: - raise ValueError("transformer context must have length 2") - - left_context = transformer_context[0] - if left_context < 0: - left_context = None - - right_context = transformer_context[1] - if right_context < 0: - right_context = None - - if left_context is None and right_context is None: - return None - - return (left_context, right_context) - - def parse_transformer_sampling(self, transformer_sampling, num_layers): - """ - parsing transformer sampling configuration - - Args: - - transformer_sampling, accepted input: - * None, indicating no sampling - * an Iterable with int (>0) as element - - num_layers, expected number of transformer layers, must match with - the length of transformer_sampling if it is not None - - Returns: - - A tuple with length num_layers - """ - if transformer_sampling is None: - return (1,) * num_layers - - if not isinstance(transformer_sampling, Iterable): - raise ValueError( - "transformer_sampling must be an iterable if it is not None" - ) - - if len(transformer_sampling) != num_layers: - raise ValueError( - "transformer_sampling {} does not match with the number " - "of layers 
{}".format(transformer_sampling, num_layers) - ) - - for layer, value in enumerate(transformer_sampling): - if not isinstance(value, int): - raise ValueError("Invalid value in transformer_sampling: ") - if value < 1: - raise ValueError( - "{} layer's subsampling is {}.".format(layer, value) - + " This is not allowed! " - ) - return transformer_sampling - - def slice(self, embedding, padding_mask, attn_mask, sampling_factor): - """ - embedding is a (T, B, D) tensor - padding_mask is a (B, T) tensor or None - attn_mask is a (T, T) tensor or None - """ - embedding = embedding[::sampling_factor, :, :] - if padding_mask is not None: - padding_mask = padding_mask[:, ::sampling_factor] - if attn_mask is not None: - attn_mask = attn_mask[::sampling_factor, ::sampling_factor] - - return embedding, padding_mask, attn_mask - - def lengths_to_attn_mask(self, input_lengths, subsampling_factor=1): - """ - create attention mask according to sequence lengths and transformer - context - - Args: - - input_lengths: (B, )-shape Int/Long tensor; input_lengths[b] is - the length of b-th sequence - - subsampling_factor: int - * Note that the left_context and right_context is specified in - the input frame-level while input to transformer may already - go through subsampling (e.g., the use of striding in vggblock) - we use subsampling_factor to scale the left/right context - - Return: - - a (T, T) binary tensor or None, where T is max(input_lengths) - * if self.transformer_context is None, None - * if left_context is None, - * attn_mask[t, t + right_context + 1:] = 1 - * others = 0 - * if right_context is None, - * attn_mask[t, 0:t - left_context] = 1 - * others = 0 - * elsif - * attn_mask[t, t - left_context: t + right_context + 1] = 0 - * others = 1 - """ - if self.transformer_context is None: - return None - - maxT = torch.max(input_lengths).item() - attn_mask = torch.zeros(maxT, maxT) - - left_context = self.transformer_context[0] - right_context = self.transformer_context[1] - if 
left_context is not None: - left_context = math.ceil(self.transformer_context[0] / subsampling_factor) - if right_context is not None: - right_context = math.ceil(self.transformer_context[1] / subsampling_factor) - - for t in range(maxT): - if left_context is not None: - st = 0 - en = max(st, t - left_context) - attn_mask[t, st:en] = 1 - if right_context is not None: - st = t + right_context + 1 - st = min(st, maxT - 1) - attn_mask[t, st:] = 1 - - return attn_mask.to(input_lengths.device) - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - -class TransformerDecoder(FairseqIncrementalDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - left_pad (bool, optional): whether the input is left-padded. 
Default: - ``False`` - """ - - def __init__( - self, - dictionary, - embed_dim=512, - transformer_config=DEFAULT_ENC_TRANSFORMER_CONFIG, - conv_config=DEFAULT_DEC_CONV_CONFIG, - encoder_output_dim=512, - ): - - super().__init__(dictionary) - vocab_size = len(dictionary) - self.padding_idx = dictionary.pad() - self.embed_tokens = Embedding(vocab_size, embed_dim, self.padding_idx) - - self.conv_layers = nn.ModuleList() - for i in range(len(conv_config)): - out_channels, kernel_size, layer_norm = conv_config[i] - if i == 0: - conv_layer = LinearizedConv1d( - embed_dim, out_channels, kernel_size, padding=kernel_size - 1 - ) - else: - conv_layer = LinearizedConv1d( - conv_config[i - 1][0], - out_channels, - kernel_size, - padding=kernel_size - 1, - ) - self.conv_layers.append(conv_layer) - if layer_norm: - self.conv_layers.append(nn.LayerNorm(out_channels)) - self.conv_layers.append(nn.ReLU()) - - self.layers = nn.ModuleList() - if conv_config[-1][0] != transformer_config[0][0]: - self.layers.append(Linear(conv_config[-1][0], transformer_config[0][0])) - self.layers.append( - TransformerDecoderLayer( - prepare_transformer_decoder_params(*transformer_config[0]) - ) - ) - - for i in range(1, len(transformer_config)): - if transformer_config[i - 1][0] != transformer_config[i][0]: - self.layers.append( - Linear(transformer_config[i - 1][0], transformer_config[i][0]) - ) - self.layers.append( - TransformerDecoderLayer( - prepare_transformer_decoder_params(*transformer_config[i]) - ) - ) - self.fc_out = Linear(transformer_config[-1][0], vocab_size) - - def forward(self, prev_output_tokens, encoder_out=None, incremental_state=None): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for input feeding/teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - Returns: - tuple: 
- - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - target_padding_mask = ( - (prev_output_tokens == self.padding_idx).to(prev_output_tokens.device) - if incremental_state is None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - - # B x T x C -> T x B x C - x = self._transpose_if_training(x, incremental_state) - - for layer in self.conv_layers: - if isinstance(layer, LinearizedConvolution): - x = layer(x, incremental_state) - else: - x = layer(x) - - # B x T x C -> T x B x C - x = self._transpose_if_inference(x, incremental_state) - - # decoder layers - for layer in self.layers: - if isinstance(layer, TransformerDecoderLayer): - x, *_ = layer( - x, - (encoder_out["encoder_out"] if encoder_out is not None else None), - ( - encoder_out["encoder_padding_mask"].t() - if encoder_out["encoder_padding_mask"] is not None - else None - ), - incremental_state, - self_attn_mask=( - self.buffered_future_mask(x) - if incremental_state is None - else None - ), - self_attn_padding_mask=( - target_padding_mask if incremental_state is None else None - ), - ) - else: - x = layer(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - x = self.fc_out(x) - - return x, None - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def _transpose_if_training(self, x, incremental_state): - if incremental_state is None: - x = x.transpose(0, 1) - return 
x - - def _transpose_if_inference(self, x, incremental_state): - if incremental_state: - x = x.transpose(0, 1) - return x - - -@register_model("asr_vggtransformer_encoder") -class VGGTransformerEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--vggblock-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one vggblock - [(out_channels, conv_kernel_size, pooling_kernel_size,num_conv_layers), ...] - """, - ) - parser.add_argument( - "--transformer-enc-config", - type=str, - metavar="EXPR", - help=""" - a tuple containing the configuration of the Transformer layers - configurations: - [(input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout), ]""", - ) - parser.add_argument( - "--enc-output-dim", - type=int, - metavar="N", - help="encoder output dimension, projecting the LSTM output", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--transformer-context", - type=str, - metavar="EXPR", - help=""" - either None or a tuple of two ints, indicating left/right context a - transformer can have access to""", - ) - parser.add_argument( - "--transformer-sampling", - type=str, - metavar="EXPR", - help=""" - either None or a tuple of ints, indicating sampling factor in each layer""", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - base_architecture_enconly(args) - encoder = VGGTransformerEncoderOnly( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - 
vggblock_config=eval(args.vggblock_enc_config), - transformer_config=eval(args.transformer_enc_config), - encoder_output_dim=args.enc_output_dim, - in_channels=args.in_channels, - transformer_context=eval(args.transformer_context), - transformer_sampling=eval(args.transformer_sampling), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (T, B, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - # lprobs is a (T, B, D) tensor - # we need to transpose to get a (B, T, D) tensor - lprobs = lprobs.transpose(0, 1).contiguous() - lprobs.batch_first = True - return lprobs - - -class VGGTransformerEncoderOnly(VGGTransformerEncoder): - def __init__( - self, - vocab_size, - input_feat_per_channel, - vggblock_config=DEFAULT_ENC_VGGBLOCK_CONFIG, - transformer_config=DEFAULT_ENC_TRANSFORMER_CONFIG, - encoder_output_dim=512, - in_channels=1, - transformer_context=None, - transformer_sampling=None, - ): - super().__init__( - input_feat_per_channel=input_feat_per_channel, - vggblock_config=vggblock_config, - transformer_config=transformer_config, - encoder_output_dim=encoder_output_dim, - in_channels=in_channels, - transformer_context=transformer_context, - transformer_sampling=transformer_sampling, - ) - self.fc_out = Linear(self.encoder_output_dim, vocab_size) - - def forward(self, src_tokens, src_lengths, **kwargs): - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - - enc_out = super().forward(src_tokens, src_lengths) - x = self.fc_out(enc_out["encoder_out"]) - # x = F.log_softmax(x, dim=-1) - # Note: no need for this line, because model.get_normalized_probs will call - # log_softmax - return { - "encoder_out": x, # (T, B, C) - "encoder_padding_mask": enc_out["encoder_padding_mask"], # (T, B) - } - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # 
an arbitrary large number - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - # nn.init.uniform_(m.weight, -0.1, 0.1) - # nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0): - """Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - # m.weight.data.uniform_(-0.1, 0.1) - # if bias: - # m.bias.data.uniform_(-0.1, 0.1) - return m - - -def LinearizedConv1d(in_channels, out_channels, kernel_size, dropout=0, **kwargs): - """Weight-normalized Conv1d layer optimized for decoding""" - m = LinearizedConvolution(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - nn.init.normal_(m.weight, mean=0, std=std) - nn.init.constant_(m.bias, 0) - return nn.utils.weight_norm(m, dim=2) - - -def LayerNorm(embedding_dim): - m = nn.LayerNorm(embedding_dim) - return m - - -# seq2seq models -def base_architecture(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 40) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", DEFAULT_ENC_VGGBLOCK_CONFIG - ) - args.transformer_enc_config = getattr( - args, "transformer_enc_config", DEFAULT_ENC_TRANSFORMER_CONFIG - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 512) - args.in_channels = getattr(args, "in_channels", 1) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 128) - args.transformer_dec_config = getattr( - args, "transformer_dec_config", DEFAULT_ENC_TRANSFORMER_CONFIG - ) - args.conv_dec_config = getattr(args, "conv_dec_config", DEFAULT_DEC_CONV_CONFIG) - args.transformer_context = getattr(args, "transformer_context", "None") - - -@register_model_architecture("asr_vggtransformer", "vggtransformer_1") -def vggtransformer_1(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - 
args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, - "transformer_enc_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 14", - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 1024) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 128) - args.conv_dec_config = getattr(args, "conv_dec_config", "((256, 3, True),) * 4") - args.transformer_dec_config = getattr( - args, - "transformer_dec_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 4", - ) - - -@register_model_architecture("asr_vggtransformer", "vggtransformer_2") -def vggtransformer_2(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, - "transformer_enc_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 16", - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 1024) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 512) - args.conv_dec_config = getattr(args, "conv_dec_config", "((256, 3, True),) * 4") - args.transformer_dec_config = getattr( - args, - "transformer_dec_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 6", - ) - - -@register_model_architecture("asr_vggtransformer", "vggtransformer_base") -def vggtransformer_base(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, "transformer_enc_config", "((512, 8, 2048, True, 0.15, 0.15, 0.15),) * 12" - ) - - args.enc_output_dim = getattr(args, "enc_output_dim", 512) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 512) - args.conv_dec_config = getattr(args, "conv_dec_config", "((256, 3, True),) * 
4") - args.transformer_dec_config = getattr( - args, "transformer_dec_config", "((512, 8, 2048, True, 0.15, 0.15, 0.15),) * 6" - ) - # Size estimations: - # Encoder: - # - vggblock param: 64*1*3*3 + 64*64*3*3 + 128*64*3*3 + 128*128*3*3 = 258K - # Transformer: - # - input dimension adapter: 2560 x 512 -> 1.31M - # - transformer_layers (x12) --> 37.74M - # * MultiheadAttention: 512*512*3 (in_proj) + 512*512 (out_proj) = 1.048M - # * FFN weight: 512*2048*2 = 2.097M - # - output dimension adapter: 512 x 512 -> 0.26 M - # Decoder: - # - LinearizedConv1d: 512 * 256 * 3 + 256 * 256 * 3 * 3 - # - transformer_layer: (x6) --> 25.16M - # * MultiheadAttention (self-attention): 512*512*3 + 512*512 = 1.048M - # * MultiheadAttention (encoder-attention): 512*512*3 + 512*512 = 1.048M - # * FFN: 512*2048*2 = 2.097M - # Final FC: - # - FC: 512*5000 = 256K (assuming vocab size 5K) - # In total: - # ~65 M - - -# CTC models -def base_architecture_enconly(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 40) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(32, 3, 2, 2, True)] * 2" - ) - args.transformer_enc_config = getattr( - args, "transformer_enc_config", "((256, 4, 1024, True, 0.2, 0.2, 0.2),) * 2" - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 512) - args.in_channels = getattr(args, "in_channels", 1) - args.transformer_context = getattr(args, "transformer_context", "None") - args.transformer_sampling = getattr(args, "transformer_sampling", "None") - - -@register_model_architecture("asr_vggtransformer_encoder", "vggtransformer_enc_1") -def vggtransformer_enc_1(args): - # vggtransformer_1 is the same as vggtransformer_enc_big, except the number - # of layers is increased to 16 - # keep it here for backward compatibility purposes - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - 
args.transformer_enc_config = getattr( - args, - "transformer_enc_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 16", - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 1024) diff --git a/spaces/gradio/longformer/tvm/_ffi/_ctypes/types.py b/spaces/gradio/longformer/tvm/_ffi/_ctypes/types.py deleted file mode 100644 index 31c4786b858fa930fbf06138c886727d4782a907..0000000000000000000000000000000000000000 --- a/spaces/gradio/longformer/tvm/_ffi/_ctypes/types.py +++ /dev/null @@ -1,108 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. 
-"""The C Types used in API.""" -# pylint: disable=invalid-name -from __future__ import absolute_import as _abs - -import ctypes -import struct -from ..base import py_str, check_call, _LIB -from ..runtime_ctypes import TVMByteArray, TypeCode, TVMContext - -class TVMValue(ctypes.Union): - """TVMValue in C API""" - _fields_ = [("v_int64", ctypes.c_int64), - ("v_float64", ctypes.c_double), - ("v_handle", ctypes.c_void_p), - ("v_str", ctypes.c_char_p)] - - -TVMPackedCFunc = ctypes.CFUNCTYPE( - ctypes.c_int, - ctypes.POINTER(TVMValue), - ctypes.POINTER(ctypes.c_int), - ctypes.c_int, - ctypes.c_void_p, - ctypes.c_void_p) - - -TVMCFuncFinalizer = ctypes.CFUNCTYPE( - None, - ctypes.c_void_p) - - -def _return_handle(x): - """return handle""" - handle = x.v_handle - if not isinstance(handle, ctypes.c_void_p): - handle = ctypes.c_void_p(handle) - return handle - -def _return_bytes(x): - """return bytes""" - handle = x.v_handle - if not isinstance(handle, ctypes.c_void_p): - handle = ctypes.c_void_p(handle) - arr = ctypes.cast(handle, ctypes.POINTER(TVMByteArray))[0] - size = arr.size - res = bytearray(size) - rptr = (ctypes.c_byte * size).from_buffer(res) - if not ctypes.memmove(rptr, arr.data, size): - raise RuntimeError('memmove failed') - return res - -def _return_context(value): - """return TVMContext""" - # use bit unpacking from int64 view - # We use this to get around ctypes issue on Union of Structure - data = struct.pack("=q", value.v_int64) - arr = struct.unpack("=ii", data) - return TVMContext(arr[0], arr[1]) - - -def _wrap_arg_func(return_f, type_code): - tcode = ctypes.c_int(type_code) - def _wrap_func(x): - check_call(_LIB.TVMCbArgToReturn(ctypes.byref(x), tcode)) - return return_f(x) - return _wrap_func - -def _ctx_to_int64(ctx): - """Pack context into int64 in native endian""" - data = struct.pack("=ii", ctx.device_type, ctx.device_id) - return struct.unpack("=q", data)[0] - - -RETURN_SWITCH = { - TypeCode.INT: lambda x: x.v_int64, - TypeCode.FLOAT: lambda x: 
x.v_float64, - TypeCode.HANDLE: _return_handle, - TypeCode.NULL: lambda x: None, - TypeCode.STR: lambda x: py_str(x.v_str), - TypeCode.BYTES: _return_bytes, - TypeCode.TVM_CONTEXT: _return_context -} - -C_TO_PY_ARG_SWITCH = { - TypeCode.INT: lambda x: x.v_int64, - TypeCode.FLOAT: lambda x: x.v_float64, - TypeCode.HANDLE: _return_handle, - TypeCode.NULL: lambda x: None, - TypeCode.STR: lambda x: py_str(x.v_str), - TypeCode.BYTES: _return_bytes, - TypeCode.TVM_CONTEXT: _return_context -} diff --git a/spaces/grzegorz2047/fast_diffusion/index.html b/spaces/grzegorz2047/fast_diffusion/index.html deleted file mode 100644 index 6250c2958a7186a4e64f21c02b0359ff5ecd7e97..0000000000000000000000000000000000000000 --- a/spaces/grzegorz2047/fast_diffusion/index.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Promptbar/components/PromptFolders.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Promptbar/components/PromptFolders.tsx deleted file mode 100644 index 53632410dac85dccc8a20882256f67e50a10fd90..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Promptbar/components/PromptFolders.tsx +++ /dev/null @@ -1,64 +0,0 @@ -import { useContext } from 'react'; - -import { FolderInterface } from '@/types/folder'; - -import HomeContext from '@/pages/api/home/home.context'; - -import Folder from '@/components/Folder'; -import { PromptComponent } from '@/components/Promptbar/components/Prompt'; - -import PromptbarContext from '../PromptBar.context'; - -export const PromptFolders = () => { - const { - state: { folders }, - } = useContext(HomeContext); - - const { - state: { searchTerm, filteredPrompts }, - handleUpdatePrompt, - } = useContext(PromptbarContext); - - const handleDrop = (e: any, folder: FolderInterface) => { - if (e.dataTransfer) { - const prompt = JSON.parse(e.dataTransfer.getData('prompt')); - - const updatedPrompt = 
{ - ...prompt, - folderId: folder.id, - }; - - handleUpdatePrompt(updatedPrompt); - } - }; - - const PromptFolders = (currentFolder: FolderInterface) => - filteredPrompts - .filter((p) => p.folderId) - .map((prompt, index) => { - if (prompt.folderId === currentFolder.id) { - return ( -
    - <div key={index} className="ml-5 gap-2 border-l pl-2"> - <PromptComponent prompt={prompt} /> - </div>
    - ); - } - }); - - return ( - <div className="flex w-full flex-col pt-2">
    - {folders - .filter((folder) => folder.type === 'prompt') - .sort((a, b) => a.name.localeCompare(b.name)) - .map((folder, index) => ( - <Folder - key={index} - searchTerm={searchTerm} - currentFolder={folder} - handleDrop={handleDrop} - folderComponent={PromptFolders(folder)} - /> - ))} - </div>
    - ); -}; diff --git a/spaces/gtx4010661/dandelin-vilt-b32-finetuned-vqa/README.md b/spaces/gtx4010661/dandelin-vilt-b32-finetuned-vqa/README.md deleted file mode 100644 index a99b33f670beae8d349b85562ca8b1b6c177caea..0000000000000000000000000000000000000000 --- a/spaces/gtx4010661/dandelin-vilt-b32-finetuned-vqa/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dandelin Vilt B32 Finetuned Vqa -emoji: 📚 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gyugnsu/DragGan-Inversion/dnnlib/util.py b/spaces/gyugnsu/DragGan-Inversion/dnnlib/util.py deleted file mode 100644 index 90f91e1085239fd9672b2cbe83cbd8e85b27ec0e..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/dnnlib/util.py +++ /dev/null @@ -1,504 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Miscellaneous utility classes and functions.""" - -import ctypes -import fnmatch -import importlib -import inspect -import numpy as np -import os -import shutil -import sys -import types -import io -import pickle -import re -import requests -import html -import hashlib -import glob -import tempfile -import urllib -import urllib.request -import uuid - -from distutils.util import strtobool -from typing import Any, List, Tuple, Union - - -# Util classes -# ------------------------------------------------------------------------------------------ - - -class EasyDict(dict): - """Convenience class that behaves like a dict but allows access with the attribute syntax.""" - - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -class Logger(object): - """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.""" - - def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True): - self.file = None - - if file_name is not None: - self.file = open(file_name, file_mode) - - self.should_flush = should_flush - self.stdout = sys.stdout - self.stderr = sys.stderr - - sys.stdout = self - sys.stderr = self - - def __enter__(self) -> "Logger": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def write(self, text: Union[str, bytes]) -> None: - """Write text to stdout (and a file) and optionally flush.""" - if isinstance(text, bytes): - text = text.decode() - if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash - return - - if self.file is not None: - self.file.write(text) - - self.stdout.write(text) - - if self.should_flush: - self.flush() - - def flush(self) -> 
None: - """Flush written text to both stdout and a file, if open.""" - if self.file is not None: - self.file.flush() - - self.stdout.flush() - - def close(self) -> None: - """Flush, close possible files, and remove stdout/stderr mirroring.""" - self.flush() - - # if using multiple loggers, prevent closing in wrong order - if sys.stdout is self: - sys.stdout = self.stdout - if sys.stderr is self: - sys.stderr = self.stderr - - if self.file is not None: - self.file.close() - self.file = None - - -# Cache directories -# ------------------------------------------------------------------------------------------ - -_dnnlib_cache_dir = None - - -def set_cache_dir(path: str) -> None: - global _dnnlib_cache_dir - _dnnlib_cache_dir = path - - -def make_cache_dir_path(*paths: str) -> str: - if _dnnlib_cache_dir is not None: - return os.path.join(_dnnlib_cache_dir, *paths) - if 'DNNLIB_CACHE_DIR' in os.environ: - return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths) - if 'HOME' in os.environ: - return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths) - if 'USERPROFILE' in os.environ: - return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths) - return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths) - -# Small util functions -# ------------------------------------------------------------------------------------------ - - -def format_time(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60) - else: - return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60) - - -def format_time_brief(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, 
hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m".format(s // (60 * 60), (s // 60) % 60) - else: - return "{0}d {1:02}h".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24) - - -def ask_yes_no(question: str) -> bool: - """Ask the user the question until the user inputs a valid answer.""" - while True: - try: - print("{0} [y/n]".format(question)) - return strtobool(input().lower()) - except ValueError: - pass - - -def tuple_product(t: Tuple) -> Any: - """Calculate the product of the tuple elements.""" - result = 1 - - for v in t: - result *= v - - return result - - -_str_to_ctype = { - "uint8": ctypes.c_ubyte, - "uint16": ctypes.c_uint16, - "uint32": ctypes.c_uint32, - "uint64": ctypes.c_uint64, - "int8": ctypes.c_byte, - "int16": ctypes.c_int16, - "int32": ctypes.c_int32, - "int64": ctypes.c_int64, - "float32": ctypes.c_float, - "float64": ctypes.c_double -} - - -def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]: - """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.""" - type_str = None - - if isinstance(type_obj, str): - type_str = type_obj - elif hasattr(type_obj, "__name__"): - type_str = type_obj.__name__ - elif hasattr(type_obj, "name"): - type_str = type_obj.name - else: - raise RuntimeError("Cannot infer type name from input") - - assert type_str in _str_to_ctype.keys() - - my_dtype = np.dtype(type_str) - my_ctype = _str_to_ctype[type_str] - - assert my_dtype.itemsize == ctypes.sizeof(my_ctype) - - return my_dtype, my_ctype - - -def is_pickleable(obj: Any) -> bool: - try: - with io.BytesIO() as stream: - pickle.dump(obj, stream) - return True - except: - return False - - -# Functionality to import modules/objects by name, and call functions by name -# 
------------------------------------------------------------------------------------------ - -def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]: - """Searches for the underlying module behind the name to some python object. - Returns the module and the object name (original name with module part removed).""" - - # allow convenience shorthands, substitute them by full names - obj_name = re.sub("^np.", "numpy.", obj_name) - obj_name = re.sub("^tf.", "tensorflow.", obj_name) - - # list alternatives for (module_name, local_obj_name) - parts = obj_name.split(".") - name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) - for i in range(len(parts), 0, -1)] - - # try each alternative in turn - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module( - module_name) # may raise ImportError - # may raise AttributeError - get_obj_from_module(module, local_obj_name) - return module, local_obj_name - except: - pass - - # maybe some of the modules themselves contain errors? - for module_name, _local_obj_name in name_pairs: - try: - importlib.import_module(module_name) # may raise ImportError - except ImportError: - if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"): - raise - - # maybe the requested attribute is missing? 
- for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module( - module_name) # may raise ImportError - # may raise AttributeError - get_obj_from_module(module, local_obj_name) - except ImportError: - pass - - # we are out of luck, but we have no idea why - raise ImportError(obj_name) - - -def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any: - """Traverses the object name and returns the last (rightmost) python object.""" - if obj_name == '': - return module - obj = module - for part in obj_name.split("."): - obj = getattr(obj, part) - return obj - - -def get_obj_by_name(name: str) -> Any: - """Finds the python object with the given name.""" - module, obj_name = get_module_from_obj_name(name) - return get_obj_from_module(module, obj_name) - - -def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any: - """Finds the python object with the given name and calls it as a function.""" - assert func_name is not None - func_obj = get_obj_by_name(func_name) - assert callable(func_obj) - return func_obj(*args, **kwargs) - - -def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any: - """Finds the python class with the given name and constructs it with the given arguments.""" - return call_func_by_name(*args, func_name=class_name, **kwargs) - - -def get_module_dir_by_obj_name(obj_name: str) -> str: - """Get the directory path of the module containing the given object name.""" - module, _ = get_module_from_obj_name(obj_name) - return os.path.dirname(inspect.getfile(module)) - - -def is_top_level_function(obj: Any) -> bool: - """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.""" - return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__ - - -def get_top_level_function_name(obj: Any) -> str: - """Return the fully-qualified name of a top-level function.""" - assert is_top_level_function(obj) - module = obj.__module__ - if 
module == '__main__': - module = os.path.splitext(os.path.basename( - sys.modules[module].__file__))[0] - return module + "." + obj.__name__ - - -# File system helpers -# ------------------------------------------------------------------------------------------ - -def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]: - """List all files recursively in a given directory while ignoring given file and directory names. - Returns list of tuples containing both absolute and relative paths.""" - assert os.path.isdir(dir_path) - base_name = os.path.basename(os.path.normpath(dir_path)) - - if ignores is None: - ignores = [] - - result = [] - - for root, dirs, files in os.walk(dir_path, topdown=True): - for ignore_ in ignores: - dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)] - - # dirs need to be edited in-place - for d in dirs_to_remove: - dirs.remove(d) - - files = [f for f in files if not fnmatch.fnmatch(f, ignore_)] - - absolute_paths = [os.path.join(root, f) for f in files] - relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths] - - if add_base_to_relative: - relative_paths = [os.path.join(base_name, p) - for p in relative_paths] - - assert len(absolute_paths) == len(relative_paths) - result += zip(absolute_paths, relative_paths) - - return result - - -def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None: - """Takes in a list of tuples of (src, dst) paths and copies files. 
- Will create all necessary directories.""" - for file in files: - target_dir_name = os.path.dirname(file[1]) - - # will create all intermediate-level directories - if not os.path.exists(target_dir_name): - os.makedirs(target_dir_name) - - shutil.copyfile(file[0], file[1]) - - -# URL helpers -# ------------------------------------------------------------------------------------------ - -def is_url(obj: Any, allow_file_urls: bool = False) -> bool: - """Determine whether the given object is a valid URL string.""" - if not isinstance(obj, str) or not "://" in obj: - return False - if allow_file_urls and obj.startswith('file://'): - return True - try: - res = requests.compat.urlparse(obj) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - res = requests.compat.urlparse(requests.compat.urljoin(obj, "/")) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - except: - return False - return True - - -def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any: - """Download the given URL and return a binary-mode file object to access the data.""" - assert num_attempts >= 1 - assert not (return_filename and (not cache)) - - # Doesn't look like an URL scheme so interpret it as a local filename. - if not re.match('^[a-z]+://', url): - return url if return_filename else open(url, "rb") - - # Handle file URLs. This code handles unusual file:// patterns that - # arise on Windows: - # - # file:///c:/foo.txt - # - # which would translate to a local '/c:/foo.txt' filename that's - # invalid. Drop the forward slash for such pathnames. - # - # If you touch this code path, you should test it on both Linux and - # Windows. - # - # Some internet resources suggest using urllib.request.url2pathname() but - # but that converts forward slashes to backslashes and this causes - # its own set of problems. 
- if url.startswith('file://'): - filename = urllib.parse.urlparse(url).path - if re.match(r'^/[a-zA-Z]:', filename): - filename = filename[1:] - return filename if return_filename else open(filename, "rb") - - assert is_url(url) - - # Lookup from cache. - if cache_dir is None: - cache_dir = make_cache_dir_path('downloads') - - url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest() - if cache: - cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*")) - if len(cache_files) == 1: - filename = cache_files[0] - return filename if return_filename else open(filename, "rb") - - # Download. - url_name = None - url_data = None - with requests.Session() as session: - if verbose: - print("Downloading %s ..." % url, end="", flush=True) - for attempts_left in reversed(range(num_attempts)): - try: - with session.get(url) as res: - res.raise_for_status() - if len(res.content) == 0: - raise IOError("No data received") - - if len(res.content) < 8192: - content_str = res.content.decode("utf-8") - if "download_warning" in res.headers.get("Set-Cookie", ""): - links = [html.unescape(link) for link in content_str.split( - '"') if "export=download" in link] - if len(links) == 1: - url = requests.compat.urljoin(url, links[0]) - raise IOError("Google Drive virus checker nag") - if "Google Drive - Quota exceeded" in content_str: - raise IOError( - "Google Drive download quota exceeded -- please try again later") - - match = re.search( - r'filename="([^"]*)"', res.headers.get("Content-Disposition", "")) - url_name = match[1] if match else url - url_data = res.content - if verbose: - print(" done") - break - except KeyboardInterrupt: - raise - except: - if not attempts_left: - if verbose: - print(" failed") - raise - if verbose: - print(".", end="", flush=True) - - # Save to cache. 
- if cache: - safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name) - cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name) - temp_file = os.path.join( - cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name) - os.makedirs(cache_dir, exist_ok=True) - with open(temp_file, "wb") as f: - f.write(url_data) - os.replace(temp_file, cache_file) # atomic - if return_filename: - return cache_file - - # Return data as file object. - assert not return_filename - return io.BytesIO(url_data) diff --git a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.py b/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.py deleted file mode 100644 index d809aa54ba33483a52b072345c5f090b85e21a3f..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient bias and activation.""" - -import os -import numpy as np -import torch -import dnnlib - -from .. import custom_ops -from .. 
import misc - -# ---------------------------------------------------------------------------- - -activation_funcs = { - 'linear': dnnlib.EasyDict(func=lambda x, **_: x, def_alpha=0, def_gain=1, cuda_idx=1, ref='', has_2nd_grad=False), - 'relu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.relu(x), def_alpha=0, def_gain=np.sqrt(2), cuda_idx=2, ref='y', has_2nd_grad=False), - 'lrelu': dnnlib.EasyDict(func=lambda x, alpha, **_: torch.nn.functional.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', has_2nd_grad=False), - 'tanh': dnnlib.EasyDict(func=lambda x, **_: torch.tanh(x), def_alpha=0, def_gain=1, cuda_idx=4, ref='y', has_2nd_grad=True), - 'sigmoid': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x), def_alpha=0, def_gain=1, cuda_idx=5, ref='y', has_2nd_grad=True), - 'elu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.elu(x), def_alpha=0, def_gain=1, cuda_idx=6, ref='y', has_2nd_grad=True), - 'selu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.selu(x), def_alpha=0, def_gain=1, cuda_idx=7, ref='y', has_2nd_grad=True), - 'softplus': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.softplus(x), def_alpha=0, def_gain=1, cuda_idx=8, ref='y', has_2nd_grad=True), - 'swish': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x) * x, def_alpha=0, def_gain=np.sqrt(2), cuda_idx=9, ref='x', has_2nd_grad=True), -} - -# ---------------------------------------------------------------------------- - -_plugin = None -_null_tensor = torch.empty([0]) - - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='bias_act_plugin', - sources=['bias_act.cpp', 'bias_act.cu'], - headers=['bias_act.h'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math', - '--allow-unsupported-compiler'], - ) - return True - -# ---------------------------------------------------------------------------- - - -def bias_act(x, b=None, dim=1, act='linear', alpha=None, 
gain=None, clamp=None, impl='cuda'): - r"""Fused bias and activation function. - - Adds bias `b` to activation tensor `x`, evaluates activation function `act`, - and scales the result by `gain`. Each of the steps is optional. In most cases, - the fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports first and second order gradients, - but not third order gradients. - - Args: - x: Input activation tensor. Can be of any shape. - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The shape must be known, and it must match the dimension of `x` - corresponding to `dim`. - dim: The dimension in `x` corresponding to the elements of `b`. - The value of `dim` is ignored if `b` is not specified. - act: Name of the activation function to evaluate, or `"linear"` to disable. - Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc. - See `activation_funcs` for a full list. `None` is not allowed. - alpha: Shape parameter for the activation function, or `None` to use the default. - gain: Scaling factor for the output tensor, or `None` to use default. - See `activation_funcs` for the default scaling of each activation function. - If unsure, consider specifying 1. - clamp: Clamp the output values to `[-clamp, +clamp]`, or `None` to disable - the clamping (default). - impl: Name of the implementation to use. Can be `"ref"` or `"cuda"` (default). - - Returns: - Tensor of the same shape and datatype as `x`. 
- """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _bias_act_cuda(dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp).apply(x, b) - return _bias_act_ref(x=x, b=b, dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp) - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def _bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Slow reference implementation of `bias_act()` using standard TensorFlow ops. - """ - assert isinstance(x, torch.Tensor) - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Add bias. - if b is not None: - assert isinstance(b, torch.Tensor) and b.ndim == 1 - assert 0 <= dim < x.ndim - assert b.shape[0] == x.shape[dim] - x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)]) - - # Evaluate activation function. - alpha = float(alpha) - x = spec.func(x, alpha=alpha) - - # Scale by gain. - gain = float(gain) - if gain != 1: - x = x * gain - - # Clamp. - if clamp >= 0: - x = x.clamp(-clamp, clamp) # pylint: disable=invalid-unary-operand-type - return x - -# ---------------------------------------------------------------------------- - - -_bias_act_cuda_cache = dict() - - -def _bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Fast CUDA implementation of `bias_act()` using custom ops. - """ - # Parse arguments. - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Lookup from cache. 
- key = (dim, act, alpha, gain, clamp) - if key in _bias_act_cuda_cache: - return _bias_act_cuda_cache[key] - - # Forward op. - class BiasActCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, b): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if x.ndim > 2 and x.stride( - 1) == 1 else torch.contiguous_format - x = x.contiguous(memory_format=ctx.memory_format) - b = b.contiguous() if b is not None else _null_tensor - y = x - if act != 'linear' or gain != 1 or clamp >= 0 or b is not _null_tensor: - y = _plugin.bias_act(x, b, _null_tensor, _null_tensor, - _null_tensor, 0, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - x if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - b if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - y if 'y' in spec.ref else _null_tensor) - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - dy = dy.contiguous(memory_format=ctx.memory_format) - x, b, y = ctx.saved_tensors - dx = None - db = None - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - dx = dy - if act != 'linear' or gain != 1 or clamp >= 0: - dx = BiasActCudaGrad.apply(dy, x, b, y) - - if ctx.needs_input_grad[1]: - db = dx.sum([i for i in range(dx.ndim) if i != dim]) - - return dx, db - - # Backward op. 
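In `BiasActCuda.backward()` above, the bias gradient `db` is obtained by summing the upstream gradient over every axis except the bias axis `dim` — the adjoint of broadcasting the bias in the forward pass. The reduction in isolation (NumPy, illustrative names):

```python
# The bias gradient in BiasActCuda.backward(): sum the upstream grad
# over all axes except `dim`, undoing the forward broadcast.
import numpy as np


def bias_grad(dy, dim=1):
    axes = tuple(i for i in range(dy.ndim) if i != dim)
    return dy.sum(axis=axes)
```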
- class BiasActCudaGrad(torch.autograd.Function): - @staticmethod - def forward(ctx, dy, x, b, y): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if dy.ndim > 2 and dy.stride( - 1) == 1 else torch.contiguous_format - dx = _plugin.bias_act(dy, b, x, y, _null_tensor, - 1, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - dy if spec.has_2nd_grad else _null_tensor, - x, b, y) - return dx - - @staticmethod - def backward(ctx, d_dx): # pylint: disable=arguments-differ - d_dx = d_dx.contiguous(memory_format=ctx.memory_format) - dy, x, b, y = ctx.saved_tensors - d_dy = None - d_x = None - d_b = None - d_y = None - - if ctx.needs_input_grad[0]: - d_dy = BiasActCudaGrad.apply(d_dx, x, b, y) - - if spec.has_2nd_grad and (ctx.needs_input_grad[1] or ctx.needs_input_grad[2]): - d_x = _plugin.bias_act( - d_dx, b, x, y, dy, 2, dim, spec.cuda_idx, alpha, gain, clamp) - - if spec.has_2nd_grad and ctx.needs_input_grad[2]: - d_b = d_x.sum([i for i in range(d_x.ndim) if i != dim]) - - return d_dy, d_x, d_b, d_y - - # Add to cache. 
- _bias_act_cuda_cache[key] = BiasActCuda - return BiasActCuda - -# ---------------------------------------------------------------------------- diff --git a/spaces/hackathon-somos-nlp-2023/demo_DiagTrast/app.py b/spaces/hackathon-somos-nlp-2023/demo_DiagTrast/app.py deleted file mode 100644 index 10bd7970617967b3a93c3f67c0d7b1cad69c3fae..0000000000000000000000000000000000000000 --- a/spaces/hackathon-somos-nlp-2023/demo_DiagTrast/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import streamlit as st -import pandas as pd -import utils -import time - -from transformers import pipeline -from transformers import AutoTokenizer -from transformers import AutoModelForSequenceClassification - -##################### - -model_berto='hackathon-somos-nlp-2023/DiagTrast-Berto' -tokenizer_berto = AutoTokenizer.from_pretrained(model_berto) -classifier_berto = pipeline("text-classification", model=model_berto) - -##################### - -st.title('Diagnóstico Trastornos Mentales') - -DemoTab, ConclusionTab, AboutTab = st.tabs(["Demo", "Conclusiones", "Acerca de"]) - -with DemoTab: - with st.form(key="diagtrast_form"): - sintomas = st.text_input(label = 'Introduce texto:', - value = 'El paciente piensa que es la persona más bella, y se enfada cuando los demás no lo ven así.') - - submit_button = st.form_submit_button(label="Clasificar") - - if submit_button and not sintomas: - st.warning("⚠️ Debe introducir los síntomas.") - - elif submit_button: - with st.spinner('Clasificando...'): - pred_berto = classifier_berto.predict(utils.clean_text(sintomas)) - - df = pd.DataFrame({ - 'Texto': [(sintomas[:50] + '...') if len(sintomas) > 50 else sintomas], - 'Diagnóstico': [pred_berto[0]['label']] - }) - - st.markdown("### Resultado:") - st.caption("") - - st.dataframe(df, use_container_width=True) - st.caption("") - alert = st.success("✅ ¡Hecho!") - - st.markdown("##### Ejemplos") - st.markdown("Se muestra impasivo emocionalmente.") - st.markdown("Irresponsable en su trabajo, suele saltarse las 
normas. No le importa la opinión de los demás.") - st.markdown("El paciente piensa que es la persona más bella, y se enfada cuando los demás no lo ven así.") - st.markdown("Él siempre se siente incómodo cuando no es el centro de atención. Es una persona muy necesitada de reconocimiento y se siente ansioso cuando no es el foco de atención de los demás. A menudo busca formas de atraer la atención de los demás y siente que su autoestima depende de ello.") - st.markdown("El paciente tiene problemas con el alcohol. Normalmente toma decisiones importantes sin pensarlo profundamente. Tiene una idea pesimista de su persona y acude a sus familiares para sentirse mejor. No tiene la capacidad de controlar sus sentimientos, la mayoría de las veces los reprime.") - -with ConclusionTab: - st.subheader("Conclusiones") - st.markdown("El presente proyecto muestra una herramienta que facilita al profesional la tarea de diagnosticar a pacientes con trastornos mentales. Aunque el proyecto se encuentra en la fase de prototipado, demuestra que los modelos de aprendizaje profundo basados en el lenguaje ayudan a identificar trastornos mentales con precisión, facilitando la tarea a los profesionales.") - - st.subheader("Trabajo futuro") - st.markdown("- El modelo actual no tiene en cuenta la ausencia de trastornos mentales, ya que no se incluyó en el dataset dicha categoría.") - st.markdown("- Incluir todo el conjunto de trastornos mentales del manual DSM-5.") - st.markdown("- Implementación de un modelo de pregunta/respuesta donde el modelo realizará una pregunta al profesional en caso de no tener claro el diagnóstico a partir del texto inicial. 
A partir del texto inicial y las futuras respuestas, el modelo realizará un diagnóstico con mayor certeza.") - st.markdown("- En el caso de implementar el último punto, sustituir la arquitectura por un LLM, de forma que el modelo pueda manejar una mayor cantidad de información con mayor precisión.") - -with AboutTab: - st.subheader("Motivación") - st.markdown( - "Actualmente el proceso de diagnóstico de enfermedades mentales enfrenta retos importantes de subjetividad que podría llevar a un diagnóstico erróneo en un paciente. Uno de los documentos más avalados como refuerzo de diagnóstico es el DSM-5. Este conjunto de guías de diagnóstico han procedido a ser fundamentales en casos de pacientes difíciles de identificar. Sin embargo, sumergirse en las más de 500 hojas del DSM-5 puede llegar a ser abrumador. El objetivo de este proyecto ha sido tener un modelo que, por medio del lenguaje natural, los especialistas de la salud mental puedan describir el caso de un un paciente en concreto, dando así una sugerencia de diagnóstico para facilitar y concretar de manera más exacta un diagnóstico de salud mental." 
- ) - - st.subheader("Recursos") - st.markdown(""" - Modelo: - - [hackathon-somos-nlp-2023/DiagTrast-Berto](https://huggingface.co/hackathon-somos-nlp-2023/DiagTrast-Berto) - - Dataset: - - [hackathon-somos-nlp-2023/DiagTrast](https://huggingface.co/datasets/hackathon-somos-nlp-2023/DiagTrast) - """) - - st.subheader("ODS") - st.markdown("El presente proyecto se engloba en el objetivo número 3, Salud y bienestar, de los Objetivos de Desarrollo Sostenible de la ONU.") - - st.subheader("Equipo") - st.markdown(""" - - [Alberto Martín Garrido](https://huggingface.co/Stremie) - - [Edgar Mencia](https://huggingface.co/edmenciab) - - [Miguel Ángel Solís Orozco](https://huggingface.co/homosapienssapiens) - - [Jose Carlos Vílchez Villegas](https://huggingface.co/JCarlos) - """) diff --git "a/spaces/hands012/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" "b/spaces/hands012/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" deleted file mode 100644 index c1e5dadd142de683323463d3df260cbe6eefa6d8..0000000000000000000000000000000000000000 --- "a/spaces/hands012/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" +++ /dev/null @@ -1,60 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询gpt-3.5和gpt-4……")) - yield from 
update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = 'gpt-3.5-turbo&gpt-4' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - - -@CatchException -def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/dense.py 
b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/dense.py deleted file mode 100644 index 9638d6e86d2ae838550fefa9002a984af52e6cc8..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/dense.py +++ /dev/null @@ -1,42 +0,0 @@ -from collections import OrderedDict - -import torch -import torch.nn as nn - -from .bn import ABN - - -class DenseModule(nn.Module): - def __init__(self, in_channels, growth, layers, bottleneck_factor=4, norm_act=ABN, dilation=1): - super(DenseModule, self).__init__() - self.in_channels = in_channels - self.growth = growth - self.layers = layers - - self.convs1 = nn.ModuleList() - self.convs3 = nn.ModuleList() - for i in range(self.layers): - self.convs1.append(nn.Sequential(OrderedDict([ - ("bn", norm_act(in_channels)), - ("conv", nn.Conv2d(in_channels, self.growth * bottleneck_factor, 1, bias=False)) - ]))) - self.convs3.append(nn.Sequential(OrderedDict([ - ("bn", norm_act(self.growth * bottleneck_factor)), - ("conv", nn.Conv2d(self.growth * bottleneck_factor, self.growth, 3, padding=dilation, bias=False, - dilation=dilation)) - ]))) - in_channels += self.growth - - @property - def out_channels(self): - return self.in_channels + self.growth * self.layers - - def forward(self, x): - inputs = [x] - for i in range(self.layers): - x = torch.cat(inputs, dim=1) - x = self.convs1[i](x) - x = self.convs3[i](x) - inputs += [x] - - return torch.cat(inputs, dim=1) diff --git a/spaces/heiyubili/bingo/src/components/ui/select.tsx b/spaces/heiyubili/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - 
IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/hemanth-thaluru/sdm-image-colorization-prj/colorizers/__init__.py b/spaces/hemanth-thaluru/sdm-image-colorization-prj/colorizers/__init__.py deleted file mode 100644 index e426fb02da452a1b54fd3b5fb555688c678a69c6..0000000000000000000000000000000000000000 --- a/spaces/hemanth-thaluru/sdm-image-colorization-prj/colorizers/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ - -from .base_color import * -from .model_architecture import * -from .util import * - diff --git 
a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/facerecon_model.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/facerecon_model.py deleted file mode 100644 index 252b4eda2eb8b8098b22da72798a9843dc920b7a..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/facerecon_model.py +++ /dev/null @@ -1,268 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" -import numpy as np -import torch -import trimesh -from scipy.io import savemat -from util import util -from util.nvdiffrast import MeshRenderer -from util.preprocess import estimate_norm_torch - -from . import networks -from .base_model import BaseModel -from .bfm import ParametricFaceModel -from .losses import landmark_loss -from .losses import perceptual_loss -from .losses import photo_loss -from .losses import reflectance_loss -from .losses import reg_loss - - -class FaceReconModel(BaseModel): - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Configures options specific for CUT model""" - # net structure and parameters - parser.add_argument( - "--net_recon", - type=str, - default="resnet50", - choices=["resnet18", "resnet34", "resnet50"], - help="network structure", - ) - parser.add_argument("--init_path", type=str, default="checkpoints/init_model/resnet50-0676ba61.pth") - parser.add_argument( - "--use_last_fc", - type=util.str2bool, - nargs="?", - const=True, - default=False, - help="zero initialize the last fc", - ) - parser.add_argument("--bfm_folder", type=str, default="BFM") - parser.add_argument("--bfm_model", type=str, default="BFM_model_front.mat", help="bfm model") - - # renderer parameters - parser.add_argument("--focal", type=float, default=1015.0) - parser.add_argument("--center", type=float, default=112.0) - parser.add_argument("--camera_d", type=float, default=10.0) - parser.add_argument("--z_near", type=float, default=5.0) - 
parser.add_argument("--z_far", type=float, default=15.0) - parser.add_argument( - "--use_opengl", type=util.str2bool, nargs="?", const=True, default=True, help="use opengl context or not" - ) - - if is_train: - # training parameters - parser.add_argument( - "--net_recog", - type=str, - default="r50", - choices=["r18", "r43", "r50"], - help="face recog network structure", - ) - parser.add_argument( - "--net_recog_path", type=str, default="checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth" - ) - parser.add_argument( - "--use_crop_face", - type=util.str2bool, - nargs="?", - const=True, - default=False, - help="use crop mask for photo loss", - ) - parser.add_argument( - "--use_predef_M", - type=util.str2bool, - nargs="?", - const=True, - default=False, - help="use predefined M for predicted face", - ) - - # augmentation parameters - parser.add_argument("--shift_pixs", type=float, default=10.0, help="shift pixels") - parser.add_argument("--scale_delta", type=float, default=0.1, help="delta scale factor") - parser.add_argument("--rot_angle", type=float, default=10.0, help="rot angles, degree") - - # loss weights - parser.add_argument("--w_feat", type=float, default=0.2, help="weight for feat loss") - parser.add_argument("--w_color", type=float, default=1.92, help="weight for loss loss") - parser.add_argument("--w_reg", type=float, default=3.0e-4, help="weight for reg loss") - parser.add_argument("--w_id", type=float, default=1.0, help="weight for id_reg loss") - parser.add_argument("--w_exp", type=float, default=0.8, help="weight for exp_reg loss") - parser.add_argument("--w_tex", type=float, default=1.7e-2, help="weight for tex_reg loss") - parser.add_argument("--w_gamma", type=float, default=10.0, help="weight for gamma loss") - parser.add_argument("--w_lm", type=float, default=1.6e-3, help="weight for lm loss") - parser.add_argument("--w_reflc", type=float, default=5.0, help="weight for reflc loss") - - opt, _ = parser.parse_known_args() - 
parser.set_defaults(focal=1015.0, center=112.0, camera_d=10.0, use_last_fc=False, z_near=5.0, z_far=15.0) - if is_train: - parser.set_defaults(use_crop_face=True, use_predef_M=False) - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ["output_vis"] - self.model_names = ["net_recon"] - self.parallel_names = self.model_names + ["renderer"] - - self.net_recon = networks.define_net_recon( - net_recon=opt.net_recon, use_last_fc=opt.use_last_fc, init_path=opt.init_path - ) - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, - camera_distance=opt.camera_d, - focal=opt.focal, - center=opt.center, - is_train=self.isTrain, - default_name=opt.bfm_model, - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, - znear=opt.z_near, - zfar=opt.z_far, - rasterize_size=int(2 * opt.center), - use_opengl=opt.use_opengl, - ) - - if self.isTrain: - self.loss_names = ["all", "feat", "color", "lm", "reg", "gamma", "reflc"] - - self.net_recog = networks.define_net_recog(net_recog=opt.net_recog, pretrained_path=opt.net_recog_path) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ["net_recog"] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - 
"""Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input["imgs"].to(self.device) - self.atten_mask = input["msks"].to(self.device) if "msks" in input else None - self.gt_lm = input["lms"].to(self.device) if "lms" in input else None - self.trans_m = input["M"].to(self.device) if "M" in input else None - self.image_paths = input["im_paths"] if "im_paths" in input else None - - def forward(self): - output_coeff = self.net_recon(self.input_img) - self.facemodel.to(self.device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color - ) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask - ) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, 
self.gt_lm) - - self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, self.facemodel.skin_mask) - - self.loss_all = ( - self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma + self.loss_lm + self.loss_reflc - ) - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255.0 * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255.0 * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, "b") - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, "r") - - output_vis_numpy = np.concatenate((input_img_numpy, output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, output_vis_numpy_raw), axis=-2) - - self.output_vis = ( - torch.tensor(output_vis_numpy / 255.0, dtype=torch.float32).permute(0, 3, 1, 2).to(self.device) - ) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh( - vertices=recon_shape, - faces=tri, - vertex_colors=np.clip(255.0 * recon_color, 0, 255).astype(np.uint8), - process=False, - ) - mesh.export(name) - - def save_coeff(self, name): - - 
pred_coeffs = {key: self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack( - [pred_lm[:, :, 0], self.input_img.shape[2] - 1 - pred_lm[:, :, 1]], axis=2 - ) # transfer to image coordinate - pred_coeffs["lm68"] = pred_lm - savemat(name, pred_coeffs) diff --git a/spaces/iccv23-diffusers-demo/instruct-pix2pix/edit_app.py b/spaces/iccv23-diffusers-demo/instruct-pix2pix/edit_app.py deleted file mode 100644 index e92f755885c6bf14aa856765f8e283b2023b0cfc..0000000000000000000000000000000000000000 --- a/spaces/iccv23-diffusers-demo/instruct-pix2pix/edit_app.py +++ /dev/null @@ -1,259 +0,0 @@ -from __future__ import annotations - -import math -import os -import random - -import gradio as gr -import torch -from diffusers import StableDiffusionInstructPix2PixPipeline -from PIL import Image, ImageOps - -help_text = """ -If you're not getting what you want, there may be a few reasons: -1. Is the image not changing enough? Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try: - * Decreasing the Image CFG weight, or - * Increasing the Text CFG weight, or -2. Conversely, is the image changing too much, such that the details in the original image aren't preserved? Try: - * Increasing the Image CFG weight, or - * Decreasing the Text CFG weight -3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time. -4. 
Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog"). -5. Increasing the number of steps sometimes improves results. -6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try: - * Cropping the image so the face takes up a larger portion of the frame. -""" - - -example_instructions = [ - "Make it a picasso painting", - "as if it were by modigliani", - "convert to a bronze statue", - "Turn it into an anime.", - "have it look like a graphic novel", - "make him gain weight", - "what would he look like bald?", - "Have him smile", - "Put him in a cocktail party.", - "move him at the beach.", - "add dramatic lighting", - "Convert to black and white", - "What if it were snowing?", - "Give him a leather jacket", - "Turn him into a cyborg!", - "make him wear a beanie", -] - -model_id = "timbrooks/instruct-pix2pix" - - -pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - model_id, torch_dtype=torch.float16, safety_checker=None -).to("cuda") -example_image = Image.open("imgs/example.jpg").convert("RGB") - - -def randomize( - randomize_seed: bool, - seed: int, - randomize_cfg: bool, - text_cfg_scale: float, - image_cfg_scale: float, -) -> tuple[int, float, float]: - seed = random.randint(0, 100000) if randomize_seed else seed - text_cfg_scale = round(random.uniform(6.0, 9.0), ndigits=2) if randomize_cfg else text_cfg_scale - image_cfg_scale = round(random.uniform(1.2, 1.8), ndigits=2) if randomize_cfg else image_cfg_scale - return seed, text_cfg_scale, image_cfg_scale - - -def generate( - input_image: Image.Image, - instruction: str, - steps: int, - seed: int, - text_cfg_scale: float, - image_cfg_scale: float, - progress=gr.Progress(track_tqdm=True), -) -> Image.Image: - width, height = input_image.size - factor = 512 / max(width, height) - factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height) - width = int((width * 
factor) // 64) * 64 - height = int((height * factor) // 64) * 64 - input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS) - - if instruction == "": - return [input_image, seed] - - generator = torch.manual_seed(seed) - edited_image = pipe( - instruction, - image=input_image, - guidance_scale=text_cfg_scale, - image_guidance_scale=image_cfg_scale, - num_inference_steps=steps, - generator=generator, - ).images[0] - return edited_image - - -def load_example( - steps: int, - randomize_seed: bool, - seed: int, - randomize_cfg: bool, - text_cfg_scale: float, - image_cfg_scale: float, - progress=gr.Progress(track_tqdm=True), -): - example_instruction = random.choice(example_instructions) - seed, text_cfg_scale, image_cfg_scale = randomize( - randomize_seed, seed, randomize_cfg, text_cfg_scale, image_cfg_scale - ) - return [ - example_image, - example_instruction, - seed, - text_cfg_scale, - image_cfg_scale, - generate( - example_image, - example_instruction, - steps, - seed, - text_cfg_scale, - image_cfg_scale, - ), - ] - - -def reset(): - return [None, 50, "Randomize Seed", 1371, "Fix CFG", 7.5, 1.5, None] - - -def process_example(input_image: Image.Image, instruction: str, seed: int) -> Image.Image: - return generate(input_image, instruction, 50, seed, 7.5, 1.5) - - -with gr.Blocks() as demo: - gr.HTML( - """

    -InstructPix2Pix: Learning to Follow Image Editing Instructions -

    -

    For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. -
    - -Duplicate Space -

    """ - ) - with gr.Row(): - with gr.Column(scale=1, min_width=100): - generate_button = gr.Button("Generate") - with gr.Column(scale=1, min_width=100): - load_button = gr.Button("Load Example") - with gr.Column(scale=1, min_width=100): - reset_button = gr.Button("Reset") - with gr.Column(scale=3): - instruction = gr.Textbox(lines=1, label="Edit Instruction") - - with gr.Row(): - input_image = gr.Image(label="Input Image", type="pil", height=512, width=512) - edited_image = gr.Image(label="Edited Image", type="pil", height=512, width=512) - - with gr.Row(): - steps = gr.Number(value=50, precision=0, label="Steps") - randomize_seed = gr.Radio( - ["Fix Seed", "Randomize Seed"], - value="Randomize Seed", - type="index", - show_label=False, - ) - seed = gr.Number(value=1371, precision=0, label="Seed") - randomize_cfg = gr.Radio( - ["Fix CFG", "Randomize CFG"], - value="Fix CFG", - type="index", - show_label=False, - ) - text_cfg_scale = gr.Number(value=7.5, label="Text CFG") - image_cfg_scale = gr.Number(value=1.5, label="Image CFG") - - gr.Examples( - examples=[ - ["imgs/example.jpg", "Turn him into a cyborg", 0], - ["imgs/example.jpg", "Have him smile", 0], - ["imgs/cats.jpg", "Turn kittens into baby lions", 0], - ], - inputs=[input_image, instruction, seed], - outputs=edited_image, - fn=process_example, - cache_examples=os.getenv("CACHE_EXAMPLES") == "1", - ) - - gr.Markdown(help_text) - - load_button.click( - fn=load_example, - inputs=[ - steps, - randomize_seed, - seed, - randomize_cfg, - text_cfg_scale, - image_cfg_scale, - ], - outputs=[input_image, instruction, seed, text_cfg_scale, image_cfg_scale, edited_image], - api_name=False, - ) - reset_button.click( - fn=reset, - outputs=[ - instruction, - steps, - randomize_seed, - seed, - randomize_cfg, - text_cfg_scale, - image_cfg_scale, - edited_image, - ], - queue=False, - api_name=False, - ) - - gr.on( - triggers=[ - generate_button.click, - instruction.submit, - steps.submit, - seed.submit, - 
text_cfg_scale.submit, - image_cfg_scale.submit, - ], - fn=randomize, - inputs=[ - randomize_seed, - seed, - randomize_cfg, - text_cfg_scale, - image_cfg_scale, - ], - outputs=[seed, text_cfg_scale, image_cfg_scale], - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=[ - input_image, - instruction, - steps, - seed, - text_cfg_scale, - image_cfg_scale, - ], - outputs=edited_image, - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/inamXcontru/PoeticTTS/3 Highway 203 Tamil Movie Download !!EXCLUSIVE!!.md b/spaces/inamXcontru/PoeticTTS/3 Highway 203 Tamil Movie Download !!EXCLUSIVE!!.md deleted file mode 100644 index 2a48e606e1452fd5455f3a81f909620c92711142..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/3 Highway 203 Tamil Movie Download !!EXCLUSIVE!!.md +++ /dev/null @@ -1,18 +0,0 @@ - -
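The `generate` function in the deleted `edit_app.py` above snaps the input image to dimensions the diffusion pipeline expects: scale so the longer side is near 512, then force both sides onto multiples of 64. A minimal self-contained sketch of that resize rule (the constants `512` and `64` come from the code above; the function name is my own):

```python
import math


def snap_to_multiple_of_64(width: int, height: int, target: int = 512) -> tuple:
    """Scale (width, height) so the longer side lands near `target`,
    then round both sides down to multiples of 64 — the same arithmetic
    as the resize step in the deleted edit_app.py."""
    factor = target / max(width, height)
    # Re-derive the factor so the *shorter* side hits an exact multiple of 64.
    factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
    new_w = int((width * factor) // 64) * 64
    new_h = int((height * factor) // 64) * 64
    return new_w, new_h


print(snap_to_multiple_of_64(1000, 750))  # -> (512, 384)
```

In the app itself, the resulting size is then fed to `ImageOps.fit(..., method=Image.Resampling.LANCZOS)`, which crops/resizes the PIL image to exactly those dimensions.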

    Gone are the days when you had to wait long hours to find and download your favorite Tamil movies in HD. Earlier, movie lovers had to resort to all sorts of illegal methods and websites to download Tamil movies in HD.

    -

    3 Highway 203 Tamil Movie Download


    Download 🆓 https://gohhs.com/2uz4SB



    -

    Amazon Prime Video owes its popularity in India to its flooding library of Hindi and regional cinema and TV shows collection. Prime Video comes at an affordable price and offers a stunning collection of HD Tamil movies available to stream and download.

    -

    One of the most-loved OTT platforms in India, ZEE5 launched its first Tamil web series America Mappillai last year along with Dilli Darlings, Karenjit Kaur and many other gripping series. A major chunk of content available is free to watch and download but you can also opt for a paid subscription to watch Tamil movies HD online.

    -

    Viu was launched in India by Hong Kong-based PCCW Media company named Vuclip. The OTT has a huge collection of fresh Bollywood and Indian regional movies including Tamil, Telugu, and Malayalam regional languages. You can download Tamil movies for almost free with Viu which is available at a nominal subscription fee of Rs 99.

    -

    -

    The ever-popular SONY TV Network launched its OTT platform with free Tamil, Telugu, Punjabi, Hindi, and English movies to download. However, with a premium subscription, you can also view exclusive content on the platform.

    -

    Hoichoi is yet another OTT platform getting popular. It has a huge collection of Tamil movies in HD. This app primitively focuses on Bengali content. Besides Bengali movies, Hoichoi also has tons of Tamil movies, TV shows, and web series, which you can download for free.

    -

    With a high-speed internet connection and access to Tamilgun, you can enjoy new Tamil movies online without having to block space on your memory stick. Although you can still download Tamil movies in HD quality, we advise you to stream and enjoy a hassle-free movie-watching experience.

    -

    Blocked by the Department of Telecom India, Tamilrockerz has a reputation of leaking the latest Hindi, English, and Tamil films online for free. You can stream and download endless Tamil movies in HD quality.

    -

    Tamil DBox is the most popular website to watch and download Tamil movies online for free. You can also download Tamil MP3 songs, TV shows, and full TAMIL HD movies. Tamildbest.test is a good choice to watch Tamil movies HD, Tamil Songs, Tamil movies online, and Tamil MP3 downloads.

    -

    TamilMv is one of the most popular sites to download the latest Tamil movies in HD. This site also provides Tamil, Kannada, Telugu, Malayalam, Hindi, and English movies. The website is uploaded fresh movies on a regular basis.

    -

    Isaimini is a piracy website that allows users to download the latest Tamil and Tamil dubbed movies for free in various formats. It is one of the most famous piracy websites to download the latest Tamil movies for free.

    -

    Marina Rockers is a public torrent site that illegally leaks the latest Tamil movies. Users can download the latest movies for free. Marina Rockers shares the complete information of the movies like their trailer, director, star cast, and more.

    -

    You can stream and download Tamil movies from Bolly2Tolly website without paying or logging in. The Tamil movie streaming website allows you to send in requests for new or old Tamil movies in HD quality.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Canon Lens Adjustment Software.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Canon Lens Adjustment Software.md deleted file mode 100644 index 2625d04b41ee92d7964a8c5741c4b8f725d0fb04..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Canon Lens Adjustment Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Canon Lens Adjustment Software


    Download Filehttps://urlin.us/2uEwz7



    - -A fast, reliable method of measuring and adjusting the focus performance on your camera and lens combinations. 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Chimicaapplicatabrisipdfdownload.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Chimicaapplicatabrisipdfdownload.md deleted file mode 100644 index d0a4174299497f7054ce9dfa2602fc6492558f0d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Chimicaapplicatabrisipdfdownload.md +++ /dev/null @@ -1,6 +0,0 @@ -

    chimicaapplicatabrisipdfdownload


    Download ⚙⚙⚙ https://urlin.us/2uEy1Y



    - -June 9th, 2020 | E17. Chimicaapplicatabrisipdfdownload. June 9th, 2020 | E16. Corel Website Creator X6 V12.50 (2012) [Multilingual KeY]. June 9th, 2020 | E15 ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Libro Algebra De Goni Galarza !LINK!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Libro Algebra De Goni Galarza !LINK!.md deleted file mode 100644 index 04f79e9e7b1d827b66e9952219096e03e3abdfbd..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Libro Algebra De Goni Galarza !LINK!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    descargar libro algebra de goni galarza


    DOWNLOADhttps://urlin.us/2uExbc



    -
    -Descargar autocad civil 3d 2018 2017 2016 2015 2014 2013 2012 espa ol ... Descargar Libro Algebra De Goni Galarza berkar · Kolkata Movie ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/FULL Everest Ultimate Engineer V5.50.2143b Portable REPACK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/FULL Everest Ultimate Engineer V5.50.2143b Portable REPACK.md deleted file mode 100644 index 53b8fe8600c89255566aaa90727eebafc47bccb8..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/FULL Everest Ultimate Engineer V5.50.2143b Portable REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    Overall, it was a close race between the G915 TKL and the G917 Razer Blackwidow Chroma. Overall, the G915 TKL wins our race thanks to its superior build quality and the fact that it's priced at the same level as the Razer Blackwidow Chroma, but if you prefer the benefits of a full-size keyboard, the Chroma is the better buy. After looking at some of the other reviewers we were surprised by how much the 17 and 18 dollar keyboards lack in the features department. The [G915 TKL and Chroma are about the same, with the [G915 TKL being cheaper and having slightly better keycaps.] The G915 TKL has a good set of media keys and backlighting, while the Chroma doesn't. While the Chroma has a separate volume wheel, that wheel has a greater range of keys. While the G915 TKL has only four buttons at the top of the keyboard, the Chroma has six. If you're going to buy a keyboard, buy the [G915 TKL for its build quality and reliability, although, if you want a full-size keyboard, you'll want to consider the Blackwidow Chroma.]", "price": "£16.99", "reviewCount": 794, "screenshots": [ ], "categories": [ ], "image_url": "https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/6/1410683434/product/a2d94ebf-08bd-4275-9ae4-93921fa1e8dd/review_screenshots/2/19/7ed56dd5-0e45-4d34-8a42-73e30dd11b12_512.jpg", "appStoreUrl": "https://itunes.apple.com/gb/app/g915-tkl-basic/id1408294577?mt=12", "averageRating": 1, "rating": 1, "releases": [ { "platform": "iOS", "version": "v5.50.

    -

    FULL Everest Ultimate Engineer V5.50.2143b Portable


    Download File ->>->>->> https://urlin.us/2uExkD



    -

    The $100 Mad Catz Tenkeyless Switchless is made with durable PBT and fully capable mechanical switches, like the Razer BlackWidow V3 Pro. And if you plan on spending a few months using this keyboard and still want to carry around a 10-key version of the Mad Catz switchless, the GMK Pro – Switchless mechanical keyboard ($122) is a fantastic choice. Both clicky and linear variants are available, though we tried using the latter in our testing. The extra keys and build quality make this the most practical and most well-designed cheapest switchless keyboard in our guide, providing enough extras that the lack of a numpad and additional media controls are almost forgiven.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ip Video System Design Tool Crack [2021] Keygen Serial Key.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ip Video System Design Tool Crack [2021] Keygen Serial Key.md deleted file mode 100644 index 4502464e5a3a727606abcf9b48bbbb470442e2b4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ip Video System Design Tool Crack [2021] Keygen Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

    ip video system design tool crack keygen serial key


    Download File ––– https://urlin.us/2uEvek



    -
    -Innovative video management software (VMS) for recording up to 64 cameras, both IP and analog. Keep an eye on your home, workplace and valuables. Remote video surveillance and video capture software is a video management software (VMS) that is used to record up to 64 cameras with different types of encoding and resolutions in different locations. This software is widely used for remote video monitoring and video capture, surveillance of homes, private estates, offices, schools and other places where video surveillance is required. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Brian McKnightTen Full Album Zip.md b/spaces/inreVtussa/clothingai/Examples/Brian McKnightTen Full Album Zip.md deleted file mode 100644 index 482da83e5fb65c026969b2311c77e7c000a61fcc..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Brian McKnightTen Full Album Zip.md +++ /dev/null @@ -1,13 +0,0 @@ -

    Brian McKnightTen Full Album Zip


    Download Filehttps://tiurll.com/2uClcr



    -
    -brian mcknightten album zip sizes -Has a unique look and feel to it. -The front panel is a leather with a stamping effect. -The back panel is a leather. -This product is also available for order. -Material: Leather - Cowhide Sizes: All sizes are approximate. -Please, use the measurements in the description to order a size you need. -You can also take it to the nearest custom/obm 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Captain America Super Soldier Psp Iso WORK.md b/spaces/inreVtussa/clothingai/Examples/Captain America Super Soldier Psp Iso WORK.md deleted file mode 100644 index dedab2e102693740ff0c55b103cda49ed1604776..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Captain America Super Soldier Psp Iso WORK.md +++ /dev/null @@ -1,19 +0,0 @@ - -

    How to Download and Play Captain America Super Soldier Psp Iso

    -

    If you are a fan of Captain America and want to play his game on your PSP, you might be interested in downloading and playing Captain America Super Soldier Psp Iso. This is a ROM file that contains the game data of Captain America Super Soldier, a third-person action-adventure game based on the Marvel Comics character and the 2011 film Captain America: The First Avenger.

    -

    Captain America Super Soldier Psp Iso


    Download Zip ✒ ✒ ✒ https://tiurll.com/2uCkgp



    -

    In this article, we will show you how to download and play Captain America Super Soldier Psp Iso using a PSP emulator on your computer or phone. We will also give you some information about the game, its features, and its reviews.

    -

    What is Captain America Super Soldier Psp Iso?

    -

    Captain America Super Soldier Psp Iso is a ROM file that contains the game data of Captain America Super Soldier, a game developed by Captain America Studios, Inc. This ROM file is unsupported and unrelated to Captain America Super Soldier Inc., the official developer of the game. The game was originally released for Nintendo DS, Nintendo 3DS, Wii, Xbox 360, and PlayStation 3 in 2011, but it was canceled for PSP due to unknown reasons.

    -

    The game follows the story of Captain America as he fights against the Red Skull and his army of Hydra soldiers in World War II. The game features a highly athletic combat system, fluid platforming, and a variety of shield attacks. The game also allows the player to explore different locations, such as a castle, a forest, and a train station. The game has received mixed reviews from critics and fans, with some praising its gameplay and graphics, and others criticizing its repetitive missions and lack of originality.

    -

    How to Download and Play Captain America Super Soldier Psp Iso?

    -

    To download and play Captain America Super Soldier Psp Iso, you will need a PSP emulator on your computer or phone. A PSP emulator is a software that mimics the functions of a PSP console and allows you to run PSP games on your device. There are many PSP emulators available online, such as PPSSPP, JPCSP, and PCSX2. You can choose the one that suits your device and preferences.

    -

    -

    Once you have downloaded and installed a PSP emulator on your device, you will need to download the ROM file of Captain America Super Soldier Psp Iso from a reliable source. You can use the search engine of your choice to find the ROM file online. However, be careful not to download any malicious or illegal files that might harm your device or violate any laws.

    -

    After downloading the ROM file of Captain America Super Soldier Psp Iso, you will need to extract it using a file extractor program such as WinRAR or 7-Zip. You will get an ISO file that contains the game data of Captain America Super Soldier. You will need to place this ISO file in a folder where your PSP emulator can access it.

    -

    Finally, you will need to launch your PSP emulator and load the ISO file of Captain America Super Soldier Psp Iso. You will be able to play the game on your device using the emulator's controls. You can also adjust the settings of the emulator to optimize the performance and quality of the game.

    -

    Conclusion

    -

    Captain America Super Soldier Psp Iso is a ROM file that contains the game data of Captain America Super Soldier, a third-person action-adventure game based on the Marvel Comics character and the 2011 film Captain America: The First Avenger. The game was canceled for PSP due to unknown reasons, but it can be played on other devices using a PSP emulator.

    -

    In this article, we have shown you how to download and play Captain America Super Soldier Psp Iso using a PSP emulator on your computer or phone. We have also given you some information about the game, its features, and its reviews. We hope you have enjoyed this article and found it helpful.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/isabel/testing-streamlit/app.py b/spaces/isabel/testing-streamlit/app.py deleted file mode 100644 index 7ba7120c59fe14cd29460eb8ab7b86682dd3fc7e..0000000000000000000000000000000000000000 --- a/spaces/isabel/testing-streamlit/app.py +++ /dev/null @@ -1,186 +0,0 @@ -### ----------------------------- ### -### libraries ### -### ----------------------------- ### - -import streamlit as st -import pickle as pkl -import pandas as pd -import numpy as np -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LogisticRegression -from sklearn import metrics - - -### ----------------------------- ### -### interface setup ### -### ----------------------------- ### - -with open('styles.css') as f: - st.markdown(f'', unsafe_allow_html=True) - - -### ------------------------------ ### -### data transformation ### -### ------------------------------ ### - -# load dataset -uncleaned_data = pd.read_csv('data.csv') - -# remove timestamp from dataset (always first column) -uncleaned_data = uncleaned_data.iloc[: , 1:] -data = pd.DataFrame() - -# keep track of which columns are categorical and what -# those columns' value mappings are -# structure: {colname1: {...}, colname2: {...} } -cat_value_dicts = {} -final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1] - -# for each column... -for (colname, colval) in uncleaned_data.iteritems(): - - # check if col is already a number; if so, add col directly - # to new dataframe and skip to next column - if isinstance(colval.values[0], (np.integer, float)): - data[colname] = uncleaned_data[colname].copy() - continue - - # structure: {0: "lilac", 1: "blue", ...} - new_dict = {} - val = 0 # first index per column - transformed_col_vals = [] # new numeric datapoints - - # if not, for each item in that column... - for (row, item) in enumerate(colval.values): - - # if item is not in this col's dict... 
- if item not in new_dict: - new_dict[item] = val - val += 1 - - # then add numerical value to transformed dataframe - transformed_col_vals.append(new_dict[item]) - - # reverse dictionary only for final col (0, 1) => (vals) - if colname == final_colname: - new_dict = {value : key for (key, value) in new_dict.items()} - - cat_value_dicts[colname] = new_dict - data[colname] = transformed_col_vals - - -### -------------------------------- ### -### model training ### -### -------------------------------- ### - -def train_model(): - # select features and prediction; automatically selects last column as prediction - cols = len(data.columns) - num_features = cols - 1 - x = data.iloc[: , :num_features] - y = data.iloc[: , num_features:] - - # split data into training and testing sets - x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25) - - # instantiate the model (using default parameters) - model = LogisticRegression() - model.fit(x_train, y_train.values.ravel()) - y_pred = model.predict(x_test) - - # save the model to file using the pickle package - with open('model.pkl', 'wb') as f: - pkl.dump(model, f) - - # save model accuracy to file using the pickle package - with open('acc.txt', 'w+') as f: - acc = metrics.accuracy_score(y_test, y_pred) - f.write(str(round(acc * 100, 1)) + '%') - - return model - -### -------------------------------- ### -### rerun logic ### -### -------------------------------- ### - -# check to see if this is the first time running the script, -# if the model has already been trained and saved, load it -try: - with open('model.pkl', 'rb') as f: - model = pkl.load(f) - -# if this is the first time running the script, train the model -# and save it to the file model.pkl -except FileNotFoundError as e: - model = train_model() - -# read the model accuracy from file -with open('acc.txt', 'r') as f: - acc = f.read() - - -### ------------------------------- ### -### interface creation ### -### ------------------------------- ### - 
-# uses the logistic regression to predict for a generic number -# of features -def general_predictor(input_list): - features = [] - - # transform categorical input - for colname, input in zip(data.columns, input_list): - if (colname in cat_value_dicts): - features.append(cat_value_dicts[colname][input]) - else: - features.append(input) - - # predict single datapoint - new_input = [features] - result = model.predict(new_input) - return cat_value_dicts[final_colname][result[0]] - -def get_feat(): - feats = [abs(x) for x in model.coef_[0]] - max_val = max(feats) - idx = feats.index(max_val) - return data.columns[idx] - -with open('info.md') as f: - st.title(f.readline()) - st.subheader('Take the quiz to get a personalized recommendation using AI.') - -form = st.form('ml-inputs') - -# add data labels to replace those lost via star-args -inputls = [] -for colname in data.columns: - # skip last column - if colname == final_colname: - continue - - # access categories dict if data is categorical - # otherwise, just use a number input - if colname in cat_value_dicts: - radio_options = list(cat_value_dicts[colname].keys()) - inputls.append(form.selectbox(colname, radio_options)) - else: - # add numerical input - inputls.append(form.number_imput(colname)) - -# generate gradio interface -if form.form_submit_button("Submit to get your recommendation!"): - prediction = general_predictor(inputls) - - form.subheader(prediction) - -col1, col2 = st.columns(2) -col1.metric("Number of Different Possible Results", len(cat_value_dicts[final_colname])) -col2.metric("Model Accuracy", acc) -st.metric("Most Important Question", "") -st.subheader(get_feat()) -st.markdown("***") - -with open('info.md') as f: - f.readline() - st.markdown(f.read()) \ No newline at end of file diff --git a/spaces/ismot/1702t1/config/__init__.py b/spaces/ismot/1702t1/config/__init__.py deleted file mode 100644 index 5ccaa23be821afe11edb098d1179bba4330fb95f..0000000000000000000000000000000000000000 --- 
a/spaces/ismot/1702t1/config/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -""" -@Date: 2021/07/17 -@description: -""" diff --git a/spaces/jackli888/stable-diffusion-webui/modules/sd_models.py b/spaces/jackli888/stable-diffusion-webui/modules/sd_models.py deleted file mode 100644 index e25a5495783c2768d50b63b35e105175c1b78bbf..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/sd_models.py +++ /dev/null @@ -1,495 +0,0 @@ -import collections -import os.path -import sys -import gc -import torch -import re -import safetensors.torch -from omegaconf import OmegaConf -from os import mkdir -from urllib import request -import ldm.modules.midas as midas - -from ldm.util import instantiate_from_config - -from modules import paths, shared, modelloader, devices, script_callbacks, sd_vae, sd_disable_initialization, errors, hashes, sd_models_config -from modules.paths import models_path -from modules.sd_hijack_inpainting import do_inpainting_hijack -from modules.timer import Timer - -model_dir = "Stable-diffusion" -model_path = os.path.abspath(os.path.join(paths.models_path, model_dir)) - -checkpoints_list = {} -checkpoint_alisases = {} -checkpoints_loaded = collections.OrderedDict() - - -class CheckpointInfo: - def __init__(self, filename): - self.filename = filename - abspath = os.path.abspath(filename) - - if shared.cmd_opts.ckpt_dir is not None and abspath.startswith(shared.cmd_opts.ckpt_dir): - name = abspath.replace(shared.cmd_opts.ckpt_dir, '') - elif abspath.startswith(model_path): - name = abspath.replace(model_path, '') - else: - name = os.path.basename(filename) - - if name.startswith("\\") or name.startswith("/"): - name = name[1:] - - self.name = name - self.name_for_extra = os.path.splitext(os.path.basename(filename))[0] - self.model_name = os.path.splitext(name.replace("/", "_").replace("\\", "_"))[0] - self.hash = model_hash(filename) - - self.sha256 = hashes.sha256_from_cache(self.filename, "checkpoint/" + name) - 
self.shorthash = self.sha256[0:10] if self.sha256 else None - - self.title = name if self.shorthash is None else f'{name} [{self.shorthash}]' - - self.ids = [self.hash, self.model_name, self.title, name, f'{name} [{self.hash}]'] + ([self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]'] if self.shorthash else []) - - def register(self): - checkpoints_list[self.title] = self - for id in self.ids: - checkpoint_alisases[id] = self - - def calculate_shorthash(self): - self.sha256 = hashes.sha256(self.filename, "checkpoint/" + self.name) - if self.sha256 is None: - return - - self.shorthash = self.sha256[0:10] - - if self.shorthash not in self.ids: - self.ids += [self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]'] - - checkpoints_list.pop(self.title) - self.title = f'{self.name} [{self.shorthash}]' - self.register() - - return self.shorthash - - -try: - # this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start. 
- - from transformers import logging, CLIPModel - - logging.set_verbosity_error() -except Exception: - pass - - -def setup_model(): - if not os.path.exists(model_path): - os.makedirs(model_path) - - list_models() - enable_midas_autodownload() - - -def checkpoint_tiles(): - def convert(name): - return int(name) if name.isdigit() else name.lower() - - def alphanumeric_key(key): - return [convert(c) for c in re.split('([0-9]+)', key)] - - return sorted([x.title for x in checkpoints_list.values()], key=alphanumeric_key) - - -def list_models(): - checkpoints_list.clear() - checkpoint_alisases.clear() - - cmd_ckpt = shared.cmd_opts.ckpt - if shared.cmd_opts.no_download_sd_model or cmd_ckpt != shared.sd_model_file or os.path.exists(cmd_ckpt): - model_url = None - else: - model_url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" - - model_list = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"]) - - if os.path.exists(cmd_ckpt): - checkpoint_info = CheckpointInfo(cmd_ckpt) - checkpoint_info.register() - - shared.opts.data['sd_model_checkpoint'] = checkpoint_info.title - elif cmd_ckpt is not None and cmd_ckpt != shared.default_sd_model_file: - print(f"Checkpoint in --ckpt argument not found (possibly it was moved to {model_path}): {cmd_ckpt}", file=sys.stderr) - - for filename in model_list: - checkpoint_info = CheckpointInfo(filename) - checkpoint_info.register() - - -def get_closet_checkpoint_match(search_string): - checkpoint_info = checkpoint_alisases.get(search_string, None) - if checkpoint_info is not None: - return checkpoint_info - - found = sorted([info for info in checkpoints_list.values() if search_string in info.title], key=lambda x: len(x.title)) - if found: - return found[0] - - return None - - -def 
model_hash(filename): - """old hash that only looks at a small part of the file and is prone to collisions""" - - try: - with open(filename, "rb") as file: - import hashlib - m = hashlib.sha256() - - file.seek(0x100000) - m.update(file.read(0x10000)) - return m.hexdigest()[0:8] - except FileNotFoundError: - return 'NOFILE' - - -def select_checkpoint(): - model_checkpoint = shared.opts.sd_model_checkpoint - - checkpoint_info = checkpoint_alisases.get(model_checkpoint, None) - if checkpoint_info is not None: - return checkpoint_info - - if len(checkpoints_list) == 0: - print("No checkpoints found. When searching for checkpoints, looked at:", file=sys.stderr) - if shared.cmd_opts.ckpt is not None: - print(f" - file {os.path.abspath(shared.cmd_opts.ckpt)}", file=sys.stderr) - print(f" - directory {model_path}", file=sys.stderr) - if shared.cmd_opts.ckpt_dir is not None: - print(f" - directory {os.path.abspath(shared.cmd_opts.ckpt_dir)}", file=sys.stderr) - print("Can't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations. 
The program will exit.", file=sys.stderr) - exit(1) - - checkpoint_info = next(iter(checkpoints_list.values())) - if model_checkpoint is not None: - print(f"Checkpoint {model_checkpoint} not found; loading fallback {checkpoint_info.title}", file=sys.stderr) - - return checkpoint_info - - -checkpoint_dict_replacements = { - 'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.', - 'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.', - 'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.', -} - - -def transform_checkpoint_dict_key(k): - for text, replacement in checkpoint_dict_replacements.items(): - if k.startswith(text): - k = replacement + k[len(text):] - - return k - - -def get_state_dict_from_checkpoint(pl_sd): - pl_sd = pl_sd.pop("state_dict", pl_sd) - pl_sd.pop("state_dict", None) - - sd = {} - for k, v in pl_sd.items(): - new_key = transform_checkpoint_dict_key(k) - - if new_key is not None: - sd[new_key] = v - - pl_sd.clear() - pl_sd.update(sd) - - return pl_sd - - -def read_state_dict(checkpoint_file, print_global_state=False, map_location=None): - _, extension = os.path.splitext(checkpoint_file) - if extension.lower() == ".safetensors": - device = map_location or shared.weight_load_location or devices.get_optimal_device_name() - pl_sd = safetensors.torch.load_file(checkpoint_file, device=device) - else: - pl_sd = torch.load(checkpoint_file, map_location=map_location or shared.weight_load_location) - - if print_global_state and "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - - sd = get_state_dict_from_checkpoint(pl_sd) - return sd - - -def get_checkpoint_state_dict(checkpoint_info: CheckpointInfo, timer): - sd_model_hash = checkpoint_info.calculate_shorthash() - timer.record("calculate hash") - - if checkpoint_info in checkpoints_loaded: - # use checkpoint cache - print(f"Loading weights 
[{sd_model_hash}] from cache") - return checkpoints_loaded[checkpoint_info] - - print(f"Loading weights [{sd_model_hash}] from {checkpoint_info.filename}") - res = read_state_dict(checkpoint_info.filename) - timer.record("load weights from disk") - - return res - - -def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer): - sd_model_hash = checkpoint_info.calculate_shorthash() - timer.record("calculate hash") - - shared.opts.data["sd_model_checkpoint"] = checkpoint_info.title - - if state_dict is None: - state_dict = get_checkpoint_state_dict(checkpoint_info, timer) - - model.load_state_dict(state_dict, strict=False) - del state_dict - timer.record("apply weights to model") - - if shared.opts.sd_checkpoint_cache > 0: - # cache newly loaded model - checkpoints_loaded[checkpoint_info] = model.state_dict().copy() - - if shared.cmd_opts.opt_channelslast: - model.to(memory_format=torch.channels_last) - timer.record("apply channels_last") - - if not shared.cmd_opts.no_half: - vae = model.first_stage_model - depth_model = getattr(model, 'depth_model', None) - - # with --no-half-vae, remove VAE from model when doing half() to prevent its weights from being converted to float16 - if shared.cmd_opts.no_half_vae: - model.first_stage_model = None - # with --upcast-sampling, don't convert the depth model weights to float16 - if shared.cmd_opts.upcast_sampling and depth_model: - model.depth_model = None - - model.half() - model.first_stage_model = vae - if depth_model: - model.depth_model = depth_model - - timer.record("apply half()") - - devices.dtype = torch.float32 if shared.cmd_opts.no_half else torch.float16 - devices.dtype_vae = torch.float32 if shared.cmd_opts.no_half or shared.cmd_opts.no_half_vae else torch.float16 - devices.dtype_unet = model.model.diffusion_model.dtype - devices.unet_needs_upcast = shared.cmd_opts.upcast_sampling and devices.dtype == torch.float16 and devices.dtype_unet == torch.float16 - - 
model.first_stage_model.to(devices.dtype_vae) - timer.record("apply dtype to VAE") - - # clean up cache if limit is reached - while len(checkpoints_loaded) > shared.opts.sd_checkpoint_cache: - checkpoints_loaded.popitem(last=False) - - model.sd_model_hash = sd_model_hash - model.sd_model_checkpoint = checkpoint_info.filename - model.sd_checkpoint_info = checkpoint_info - shared.opts.data["sd_checkpoint_hash"] = checkpoint_info.sha256 - - model.logvar = model.logvar.to(devices.device) # fix for training - - sd_vae.delete_base_vae() - sd_vae.clear_loaded_vae() - vae_file, vae_source = sd_vae.resolve_vae(checkpoint_info.filename) - sd_vae.load_vae(model, vae_file, vae_source) - timer.record("load VAE") - - -def enable_midas_autodownload(): - """ - Gives the ldm.modules.midas.api.load_model function automatic downloading. - - When the 512-depth-ema model, and other future models like it, is loaded, - it calls midas.api.load_model to load the associated midas depth model. - This function applies a wrapper to download the model to the correct - location automatically. - """ - - midas_path = os.path.join(paths.models_path, 'midas') - - # stable-diffusion-stability-ai hard-codes the midas model path to - # a location that differs from where other scripts using this model look. - # HACK: Overriding the path here. 
- for k, v in midas.api.ISL_PATHS.items(): - file_name = os.path.basename(v) - midas.api.ISL_PATHS[k] = os.path.join(midas_path, file_name) - - midas_urls = { - "dpt_large": "https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", - "dpt_hybrid": "https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt", - "midas_v21": "https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21-f6b98070.pt", - "midas_v21_small": "https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21_small-70d6b9c8.pt", - } - - midas.api.load_model_inner = midas.api.load_model - - def load_model_wrapper(model_type): - path = midas.api.ISL_PATHS[model_type] - if not os.path.exists(path): - if not os.path.exists(midas_path): - mkdir(midas_path) - - print(f"Downloading midas model weights for {model_type} to {path}") - request.urlretrieve(midas_urls[model_type], path) - print(f"{model_type} downloaded") - - return midas.api.load_model_inner(model_type) - - midas.api.load_model = load_model_wrapper - - -def repair_config(sd_config): - - if not hasattr(sd_config.model.params, "use_ema"): - sd_config.model.params.use_ema = False - - if shared.cmd_opts.no_half: - sd_config.model.params.unet_config.params.use_fp16 = False - elif shared.cmd_opts.upcast_sampling: - sd_config.model.params.unet_config.params.use_fp16 = True - - -sd1_clip_weight = 'cond_stage_model.transformer.text_model.embeddings.token_embedding.weight' -sd2_clip_weight = 'cond_stage_model.model.transformer.resblocks.0.attn.in_proj_weight' - -def load_model(checkpoint_info=None, already_loaded_state_dict=None, time_taken_to_load_state_dict=None): - from modules import lowvram, sd_hijack - checkpoint_info = checkpoint_info or select_checkpoint() - - if shared.sd_model: - sd_hijack.model_hijack.undo_hijack(shared.sd_model) - shared.sd_model = None - gc.collect() - devices.torch_gc() - - do_inpainting_hijack() - - timer = Timer() - - if 
already_loaded_state_dict is not None: - state_dict = already_loaded_state_dict - else: - state_dict = get_checkpoint_state_dict(checkpoint_info, timer) - - checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info) - clip_is_included_into_sd = sd1_clip_weight in state_dict or sd2_clip_weight in state_dict - - timer.record("find config") - - sd_config = OmegaConf.load(checkpoint_config) - repair_config(sd_config) - - timer.record("load config") - - print(f"Creating model from config: {checkpoint_config}") - - sd_model = None - try: - with sd_disable_initialization.DisableInitialization(disable_clip=clip_is_included_into_sd): - sd_model = instantiate_from_config(sd_config.model) - except Exception as e: - pass - - if sd_model is None: - print('Failed to create model quickly; will retry using slow method.', file=sys.stderr) - sd_model = instantiate_from_config(sd_config.model) - - sd_model.used_config = checkpoint_config - - timer.record("create model") - - load_model_weights(sd_model, checkpoint_info, state_dict, timer) - - if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: - lowvram.setup_for_low_vram(sd_model, shared.cmd_opts.medvram) - else: - sd_model.to(shared.device) - - timer.record("move model to device") - - sd_hijack.model_hijack.hijack(sd_model) - - timer.record("hijack") - - sd_model.eval() - shared.sd_model = sd_model - - sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) # Reload embeddings after model load as they may or may not fit the model - - timer.record("load textual inversion embeddings") - - script_callbacks.model_loaded_callback(sd_model) - - timer.record("scripts callbacks") - - print(f"Model loaded in {timer.summary()}.") - - return sd_model - - -def reload_model_weights(sd_model=None, info=None): - from modules import lowvram, devices, sd_hijack - checkpoint_info = info or select_checkpoint() - - if not sd_model: - sd_model = shared.sd_model - - if sd_model is None: # 
previous model load failed - current_checkpoint_info = None - else: - current_checkpoint_info = sd_model.sd_checkpoint_info - if sd_model.sd_model_checkpoint == checkpoint_info.filename: - return - - if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: - lowvram.send_everything_to_cpu() - else: - sd_model.to(devices.cpu) - - sd_hijack.model_hijack.undo_hijack(sd_model) - - timer = Timer() - - state_dict = get_checkpoint_state_dict(checkpoint_info, timer) - - checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info) - - timer.record("find config") - - if sd_model is None or checkpoint_config != sd_model.used_config: - del sd_model - checkpoints_loaded.clear() - load_model(checkpoint_info, already_loaded_state_dict=state_dict, time_taken_to_load_state_dict=timer.records["load weights from disk"]) - return shared.sd_model - - try: - load_model_weights(sd_model, checkpoint_info, state_dict, timer) - except Exception as e: - print("Failed to load checkpoint, restoring previous") - load_model_weights(sd_model, current_checkpoint_info, None, timer) - raise - finally: - sd_hijack.model_hijack.hijack(sd_model) - timer.record("hijack") - - script_callbacks.model_loaded_callback(sd_model) - timer.record("script callbacks") - - if not shared.cmd_opts.lowvram and not shared.cmd_opts.medvram: - sd_model.to(devices.device) - timer.record("move model to device") - - print(f"Weights loaded in {timer.summary()}.") - - return sd_model diff --git a/spaces/jackli888/stable-diffusion-webui/modules/timer.py b/spaces/jackli888/stable-diffusion-webui/modules/timer.py deleted file mode 100644 index 8187c28edea3d7ce30d1d8c086a6191eb49d960c..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/timer.py +++ /dev/null @@ -1,35 +0,0 @@ -import time - - -class Timer: - def __init__(self): - self.start = time.time() - self.records = {} - self.total = 0 - - def elapsed(self): - end = time.time() - res = end - self.start - 
self.start = end - return res - - def record(self, category, extra_time=0): - e = self.elapsed() - if category not in self.records: - self.records[category] = 0 - - self.records[category] += e + extra_time - self.total += e + extra_time - - def summary(self): - res = f"{self.total:.1f}s" - - additions = [x for x in self.records.items() if x[1] >= 0.1] - if not additions: - return res - - res += " (" - res += ", ".join([f"{category}: {time_taken:.1f}s" for category, time_taken in additions]) - res += ")" - - return res diff --git a/spaces/james-oldfield/PandA/networks/genforce/runners/losses/logistic_gan_loss.py b/spaces/james-oldfield/PandA/networks/genforce/runners/losses/logistic_gan_loss.py deleted file mode 100644 index f241d73d93c5c67e829b0976a0f816ed0ecbd57d..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/runners/losses/logistic_gan_loss.py +++ /dev/null @@ -1,112 +0,0 @@ -# python3.7 -"""Defines loss functions for GAN training.""" - -import numpy as np - -import torch -import torch.nn.functional as F - -__all__ = ['LogisticGANLoss'] - -apply_loss_scaling = lambda x: x * torch.exp(x * np.log(2.0)) -undo_loss_scaling = lambda x: x * torch.exp(-x * np.log(2.0)) - - -class LogisticGANLoss(object): - """Contains the class to compute logistic GAN loss.""" - - def __init__(self, runner, d_loss_kwargs=None, g_loss_kwargs=None): - """Initializes with models and arguments for computing losses.""" - self.d_loss_kwargs = d_loss_kwargs or dict() - self.g_loss_kwargs = g_loss_kwargs or dict() - self.r1_gamma = self.d_loss_kwargs.get('r1_gamma', 10.0) - self.r2_gamma = self.d_loss_kwargs.get('r2_gamma', 0.0) - - runner.running_stats.add( - f'g_loss', log_format='.3f', log_strategy='AVERAGE') - runner.running_stats.add( - f'd_loss', log_format='.3f', log_strategy='AVERAGE') - if self.r1_gamma != 0: - runner.running_stats.add( - f'real_grad_penalty', log_format='.3f', log_strategy='AVERAGE') - if self.r2_gamma != 0: - 
runner.running_stats.add( - f'fake_grad_penalty', log_format='.3f', log_strategy='AVERAGE') - - @staticmethod - def preprocess_image(images, lod=0, **_unused_kwargs): - """Pre-process images.""" - if lod != int(lod): - downsampled_images = F.avg_pool2d( - images, kernel_size=2, stride=2, padding=0) - upsampled_images = F.interpolate( - downsampled_images, scale_factor=2, mode='nearest') - alpha = lod - int(lod) - images = images * (1 - alpha) + upsampled_images * alpha - if int(lod) == 0: - return images - return F.interpolate( - images, scale_factor=(2 ** int(lod)), mode='nearest') - - @staticmethod - def compute_grad_penalty(images, scores): - """Computes gradient penalty.""" - image_grad = torch.autograd.grad( - outputs=scores.sum(), - inputs=images, - create_graph=True, - retain_graph=True)[0].view(images.shape[0], -1) - penalty = image_grad.pow(2).sum(dim=1).mean() - return penalty - - def d_loss(self, runner, data): - """Computes loss for discriminator.""" - G = runner.models['generator'] - D = runner.models['discriminator'] - - reals = self.preprocess_image(data['image'], lod=runner.lod) - reals.requires_grad = True - labels = data.get('label', None) - - latents = torch.randn(reals.shape[0], runner.z_space_dim).cuda() - latents.requires_grad = True - # TODO: Use random labels. 
- fakes = G(latents, label=labels, **runner.G_kwargs_train)['image'] - real_scores = D(reals, label=labels, **runner.D_kwargs_train) - fake_scores = D(fakes, label=labels, **runner.D_kwargs_train) - - d_loss = F.softplus(fake_scores).mean() - d_loss += F.softplus(-real_scores).mean() - runner.running_stats.update({'d_loss': d_loss.item()}) - - real_grad_penalty = torch.zeros_like(d_loss) - fake_grad_penalty = torch.zeros_like(d_loss) - if self.r1_gamma: - real_grad_penalty = self.compute_grad_penalty(reals, real_scores) - runner.running_stats.update( - {'real_grad_penalty': real_grad_penalty.item()}) - if self.r2_gamma: - fake_grad_penalty = self.compute_grad_penalty(fakes, fake_scores) - runner.running_stats.update( - {'fake_grad_penalty': fake_grad_penalty.item()}) - - return (d_loss + - real_grad_penalty * (self.r1_gamma * 0.5) + - fake_grad_penalty * (self.r2_gamma * 0.5)) - - def g_loss(self, runner, data): # pylint: disable=no-self-use - """Computes loss for generator.""" - # TODO: Use random labels. 
- G = runner.models['generator'] - D = runner.models['discriminator'] - batch_size = data['image'].shape[0] - labels = data.get('label', None) - - latents = torch.randn(batch_size, runner.z_space_dim).cuda() - fakes = G(latents, label=labels, **runner.G_kwargs_train)['image'] - fake_scores = D(fakes, label=labels, **runner.D_kwargs_train) - - g_loss = F.softplus(-fake_scores).mean() - runner.running_stats.update({'g_loss': g_loss.item()}) - - return g_loss diff --git a/spaces/jamesliu1217/midjourney-v5/README.md b/spaces/jamesliu1217/midjourney-v5/README.md deleted file mode 100644 index 76189ac0ce98515ed5a2cd7423218111eaf87b3e..0000000000000000000000000000000000000000 --- a/spaces/jamesliu1217/midjourney-v5/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Midjourney V5 -emoji: 📚 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -license: openrail -duplicated_from: hareshhecker/midjourney-v5 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jasonjones/Batman-AdMaker/README.md b/spaces/jasonjones/Batman-AdMaker/README.md deleted file mode 100644 index ffa2728b9af87f25643946d6a45b1fbd3b03d77d..0000000000000000000000000000000000000000 --- a/spaces/jasonjones/Batman-AdMaker/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Batman AdMaker -emoji: 🚀 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: bigscience-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jayesh95/Voice-QA/README.md b/spaces/jayesh95/Voice-QA/README.md deleted file mode 100644 index 24f54567a1344ce5a7d42021457bfebf0335321f..0000000000000000000000000000000000000000 --- a/spaces/jayesh95/Voice-QA/README.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Voice QA -emoji: 🐠 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 
3.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - - -# Voice QA - -**Note**: Code is outdated now due to some depreciations. Haven't updated the HF space to make it compatible with latest package versions. - -This is the source code of Voice QA app hosted at Huggingface Spaces. It can be accessed by clicking on the following link: - -[Voice QA](https://huggingface.co/spaces/jayesh95/Voice-QA) - -You can paste any text article and then ask questions using voice and get audio answers. diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/business/videos/columns.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/business/videos/columns.tsx deleted file mode 100644 index d28684d04347c97ae720ee1e591012bc6f9782cd..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/business/videos/columns.tsx +++ /dev/null @@ -1,151 +0,0 @@ -"use client" - -import { ColumnDef } from "@tanstack/react-table" -import { Checkbox } from "@/components/ui/checkbox" - -import { DataTableColumnHeader } from "./column-header" - -import { Video } from "@/app/types" -import { triggerDownload } from "@/lib/triggerDownload" -import { ChangeStatusButton } from "./change-status-button" - -export const columns: ColumnDef