diff --git a/spaces/123Kumar/vits-uma-genshin-honkai123/Docker/vits.sh b/spaces/123Kumar/vits-uma-genshin-honkai123/Docker/vits.sh
deleted file mode 100644
index 2b87f26eda96d3800b73b4a21b210c78888a2299..0000000000000000000000000000000000000000
--- a/spaces/123Kumar/vits-uma-genshin-honkai123/Docker/vits.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/bash
-run() {
- echo -e "\033[32mInitialization complete, starting the service...\033[0m"
- python3 /app/vits-uma-genshin-honkai/app.py
-}
-install() {
- echo -e "\033[33mInitializing: installing dependencies....\033[0m"
- pip install -r /app/vits-uma-genshin-honkai/requirements.txt -i https://mirrors.ustc.edu.cn/pypi/web/simple
- echo -e "\033[33mDownloading the model....\033[0m"
- rm -f /app/vits-uma-genshin-honkai/model/G_953000.pth
- wget -O /app/vits-uma-genshin-honkai/model/G_953000.pth https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai/resolve/main/model/G_953000.pth
- echo -e "\033[32mInitialization finished!\033[0m"
- run
-}
-
-if [ ! -f "/app/vits-uma-genshin-honkai/model/G_953000.pth" ] || [ "$(stat -c%s "/app/vits-uma-genshin-honkai/model/G_953000.pth")" -lt 10000 ]; then
- install
-else
- run
-fi
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Badmash No.1 movie free download kickass torrent Find out why this movie is a must-watch for action lovers.md b/spaces/1gistliPinn/ChatGPT4/Examples/Badmash No.1 movie free download kickass torrent Find out why this movie is a must-watch for action lovers.md
deleted file mode 100644
index 17ae82b77ee054d35f34c6777bd25571a42c92da..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Badmash No.1 movie free download kickass torrent Find out why this movie is a must-watch for action lovers.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-1.Torrentz-Torrentz is one of the smoothest, most powerful and most popular platforms, which makes it possible for you to download video song torrents free of cost. This platform enables you to enter your search query and then select one of the many options of video songs provided. Download link:
-
-1.The first step is to download a torrent client or software like BitTorrent. BitTorrent can be downloaded by going to its official website and then clicking on the download link that is suitable for your operating system. Make sure you download the latest free version of the platform.
-aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Gateway B2 Teacher Book Pdf LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Gateway B2 Teacher Book Pdf LINK.md
deleted file mode 100644
index 31d64c158a4eb1ed686e4ae9593140350c314197..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Gateway B2 Teacher Book Pdf LINK.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
-Gateway-B2-teachers.pdf - Free download as PDF File (.pdf) or read online for . 125 Units 79 Unit 10135 Classroom audio recordings 145 Workbook answer key 155 ... Spotlight 2nd grade workbook 160 Workbook key answer workbook ... Complete with the textbook Spheres English Language.
-2 grade ...
-Buy workbook for the textbook Spotlight (English in Focus) grade 2, online gdz.
-In the workbook for the English language for 2nd grade are ready answers to the exercises for the textbook Spotlight.
-2 class.
-Author: Afanasyeva, Mikheeva, Baranova, Vaulina ...
-English 2nd grade English ...
-Workbook for the textbook Spotlight ...
-2nd grade.
-Workbook.
-For the textbook on ...
- - Your book
-For the textbook "Informatics" for grade 5 (M.: BINOM. ...
-Workbook is part of the teaching kit.
-The workbook.
-For the textbook "Informatics" for grade 5
-Buy the book "Workbook.
-Workbook for the textbook "Informatics for grade 5" (Lutceva E.) in the online store My-shop.ru.
-Low price, delivery ...
-Informatics.
-5 grade.
-Workbook for the textbook L.L.
- Description:
-The workbook is part of the system of educational and methodical sets Algorithm of Success and is designed for the textbook L.L.Bosova, A.Y.Bosova on computer science for grade 5.
-The workbook includes exercises that allow students to consolidate and develop their programming skills, learn algorithms for solving typical problems, and perform creative and research tasks.
-The workbook is designed for computer science lessons in grade 5.
-Printable and downloadable version
- The workbook is a teaching aid.
-It contains .
-The workbook is part of the Computing curriculum for grades 5-6, along with the
-The workbook for the 5th grade is a part of the ATC for the 5th-6th grades
-The workbook for the 6th grade is an integral part of the informatics textbook for grades 5-6, together with
-The workbook for the 6th grade is an integral part of the informatics textbook for grades 5-6 together with the English language curriculum.
-The workbook for the 5th grade is an integral part of the informatics textbook for grades 5-6 together with the 8th grade and the 6th grade
- The workbook for the 4th grade is an integral part of the informatics textbook for 3rd-4th grades, together with
-The workbook for the 5th grade is an integral part of the informatics textbook for grades 5-6, along with
-Grade 2.
-In 2 parts.
-Part 1. FGOS.
-Matveeva N.V.
-FGOS.
-Workbook for grade 3 is part of the workbook on computer science for children.
-Educational literature in the online store Book24.
-Delivery in Kazakhstan.
-The textbook and workbook for 6th grade is part of the "Information science textbook for 5.
- (The textbook, the workbook, the collection of problems, the electronic appendix) and
-For the 6th grade, and also a manual for the teacher.
-The structure of the workbook includes: - a textbook in two parts ("Informatics.
-Bosova); - book for projects and creative works (authors: A.G. Gein, A.I. Senokosov, N.A. Yunerman); - collection of tasks and tests (author: N.A. Rodichev); - teaching aid for teachers (authors: A.V. Goriachev, K.I. Gorina, N.I. Suvorova, T.O. Volkova).
- Informatics textbooks for grades 5-8 by A.G. Gein and A.I. Senokosov are the continuation of the informatics textbooks for the elementary school.
-The 5th grade textbook studies information processes, information systems, information technologies, as well as the theoretical basics of information security.
-The textbook for 6th grade explores the logical, physical, and operational foundations of computers, information technology, and word processing technology.
- The textbook for grade 7 studies the logical foundations of the computer, information technology for processing graphic information and multimedia, computer technology for creating Web pages, network technology for processing text and graphic information, and information modeling technology.
-The grade 8 textbook explores models and structures of information systems, information technologies for numerical information processing, and information processing technologies in spreadsheets.
- The textbook for Grade 9 contains a lot of information on Information and Communication Technologies, Communication Technologies, and Informatics and ICT: Preparing for the Unified State Exam.
-It deals with technology of Web-pages creation, models and structures of different information systems, information and communication technologies, providing creation and processing of text documents by word-processing tools, and technology of information processing in electronic tables.
- In addition, the textbook covers the technologies of working with databases, creating presentations, preparing publications on Web pages, creating and processing audio and video files, information retrieval on the Internet, etc.
-Examples of different technologies and tools for working with information systems and computer networks are given in the textbook.
-Each chapter ends with self-check questions, tasks for self-check, variants of independent and laboratory works.
- For students of higher education institutions on economic specialties.
-Will be useful to students of institutions of general secondary education in preparation for centralized testing in computer science in grades 9 and 11.
-Corresponds to the current requirements of the Federal state educational standard of secondary vocational education and professional requirements.
-For students studying computer science in technical specialties and for teachers 8a78ff9644
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Si Doel Anak Sekolahan Full 14 BEST.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Si Doel Anak Sekolahan Full 14 BEST.md
deleted file mode 100644
index fcbdeebac209cc5ac65e9f428e734747a0b83064..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Si Doel Anak Sekolahan Full 14 BEST.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-Download Film Si Doel Anak Sekolahan Full 14: Lagi Lagi Huru Hara
-
-Si Doel Anak Sekolahan is an Indonesian soap opera (sinetron) first aired by the TV station RCTI in 1994. Directed by and starring Rano Karno as Doel, the series tells the story of Doel and his family, a Betawi family that holds on to traditional values while living amid urban life and modernization.
-The series has many loyal fans who follow the love triangle between Doel, Zaenab, and Sarah. It also offers plenty of funny and touching scenes involving Doel's family and friends.
-
-One of the episodes fans have been waiting for the most is episode 14, titled "Lagi Lagi Huru Hara". In this episode, Doel has to face a string of problems that befall him and the people closest to him.
-
-Doel has to deal with the police after being accused of stealing Pak RT's motorcycle. Meanwhile, Zaenab has to bear the shame of intimate photos of her and Doel spreading on social media. Sarah fares no better, as she has to accept that her father has died of a heart attack.
-
-What will happen to Doel and his family? Can they get through all the trials coming their way? And what about Doel's relationship with Zaenab and Sarah?
-
-
-If you are curious about the answers, you can download film si doel anak sekolahan full 14 here. You can watch this episode for free and with ease, without having to register or pay any fees.
-
-Download film si doel anak sekolahan full 14 right now and enjoy the exciting, entertaining story of Doel and his family. Don't forget to share this download link with friends who also love the Si Doel Anak Sekolahan series.
-
-
-Episode 14 opens with Doel at the police station together with Sabeni and Mandra. They are accused of stealing Pak RT's motorcycle, which is actually Doel's own. Doel has to explain at length that the motorcycle was a gift from Zaenab that he kept at Pak RT's house for fear it would be stolen at his own home.
-
-Meanwhile, Zaenab, who is at the office, runs into big trouble as well. Intimate photos of her and Doel, taken secretly by Sarah, have been spread on social media by her jealous colleagues. Zaenab feels ashamed and angry because her reputation as a respectable woman has been tarnished. She sets out to find whoever spread the photos and intends to report them to the police.
-
-On the other side, Sarah, who is in the Netherlands, receives bad news from her mother. Her father, Hans, has died of a heart attack. Sarah is devastated and unsure of what to do. She wants to return to Indonesia right away to take care of her father's funeral, but she also does not want to leave Doel, whom she still loves.
-
-Will Doel get out of the police station without any trouble? Can Zaenab find the person who spread the intimate photos of her and Doel? And what will become of Sarah, who has to face her father's death? Find the answers by downloading film si doel anak sekolahan full 14 here.
-d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become the Drift King with Drift Clash Online Racing Mod APK Android 1.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become the Drift King with Drift Clash Online Racing Mod APK Android 1.md
deleted file mode 100644
index 31e69dd98cb4197565abac36a61c4534677ccd55..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become the Drift King with Drift Clash Online Racing Mod APK Android 1.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Drift Clash Online Racing Mod APK Android 1: A Review
-
If you are a fan of drift racing games, you might have heard of Drift Clash Online Racing, a game that offers real-time battles and realistic physics. But did you know that you can enjoy this game even more with the modded version from Mod APK Android 1? In this article, we will review Drift Clash Online Racing Mod APK Android 1 and tell you why you should try it.
-
What is Drift Clash Online Racing?
-
Drift Clash Online Racing is a drift racing game that was developed by EasyWays and released in 2018. It is the first drift racing game with real-time battles and realistic physics. You can compete with other players in online multiplayer mode and show off your drifting skills. You can also customize your car with various parts and paint jobs, and collect most wanted cars from different eras.
Free-roam mode where you can explore the map and practice your drifts
-
Retro style graphics and sound effects
-
-
How to play the game
-
The game is easy to play but hard to master. You can control your car with simple touch buttons or tilt your device. You can also adjust the sensitivity and steering angle in the settings. The goal is to drift as much as possible and earn points. The more you drift, the more boost you get. You can use the boost to speed up and overtake your opponents. You can also perform tricks like donuts, spins, and jumps to earn extra points. The player with the most points at the end of the race wins.
-
What is Mod APK Android 1?
-
Mod APK Android 1 is a website that provides modded versions of various Android games and apps. A modded version is a modified version that has some features or functions that are not available in the original version. For example, a modded version may have unlimited money, unlocked items, or no ads.
-
Benefits of using Mod APK Android 1
-
-
You can access premium features or items for free
-
You can enjoy the game without any restrictions or limitations
-
You can save your time and money by not having to spend real money on in-app purchases
-
You can have more fun and challenge by playing with different mods
-
-
How to download and install Mod APK Android 1
-
To download and install Mod APK Android 1, you need to follow these steps:
-
-
Go to the [Mod APK Android 1] website and search for Drift Clash Online Racing Mod APK
-
Click on the download button and wait for the file to be downloaded
-
Go to your device settings and enable unknown sources installation
-
Locate the downloaded file and tap on it to install it
-
Launch the game and enjoy the modded version
-
-
Why you should try Drift Clash Online Racing Mod APK Android 1
-
If you are still not convinced, here are some pros and cons of Drift Clash Online Racing Mod APK Android 1 that may help you decide:
-
Pros of the modded version
-
-
You can get unlimited money to buy any car or part you want
-
You can unlock all the cars and parts without having to complete the missions or achievements
-
You can remove the ads that may interrupt your gameplay
-
You can enjoy the game with better graphics and performance
-
-
Cons of the modded version
-
-
You may face some compatibility or security issues with your device
-
You may lose your progress or data if the modded version is not updated or compatible with the original version
-
You may get banned or penalized by the game developers or Google Play for using a modded version
-
-
Conclusion
-
Drift Clash Online Racing Mod APK Android 1 is a great option for drift racing enthusiasts who want to experience the game with more features and fun. It offers unlimited money, unlocked cars and parts, no ads, and improved graphics and performance. However, it also comes with some risks and drawbacks, such as compatibility, security, and ban issues. Therefore, you should use it at your own discretion and responsibility.
-
FAQs
-
-
What is the difference between drift racing and normal racing?
-
Drift racing is a type of racing where the driver intentionally oversteers the car to make it slide sideways. It requires more skill and technique than normal racing, where the driver tries to maintain traction and speed. Drift racing is more popular in Japan and other Asian countries, where it originated.
-
What are the best cars for drift racing?
-
There is no definitive answer to this question, as different cars may suit different drivers and preferences. However, some of the common factors that make a good drift car are rear-wheel drive, lightweight body, powerful engine, manual transmission, and adjustable suspension. Some of the popular drift cars are Nissan Skyline, Toyota Supra, Mazda RX-7, BMW M3, and Ford Mustang.
-
How can I improve my drift skills?
-
The best way to improve your drift skills is to practice regularly and learn from your mistakes. You can also watch videos of professional drifters and observe their techniques and tips. You can also join online communities and forums where you can interact with other drifters and get feedback and advice.
-
drift clash real-time multiplayer racing mod apk
-drift clash online racing unlimited money mod apk
-drift clash realistic physics racing mod apk
-drift clash online racing hack apk download
-drift clash online racing mod apk latest version
-drift clash online racing free-roam mod apk
-drift clash online racing retro style mod apk
-drift clash online racing mod apk happymod
-drift clash online racing mod apk android 2
-drift clash online racing mod apk android 3
-drift clash online racing mod apk android 4
-drift clash online racing mod apk android 5
-drift clash online racing mod apk android 6
-drift clash online racing mod apk android 7
-drift clash online racing mod apk android 8
-drift clash online racing mod apk android 9
-drift clash online racing mod apk android 10
-drift clash online racing mod apk android 11
-drift clash online racing mod apk android 12
-drift clash online racing mod apk android 13
-drift clash online racing mod apk android 14
-drift clash online racing mod apk android 15
-drift clash online racing mod apk android 16
-drift clash online racing mod apk android 17
-drift clash online racing mod apk android 18
-drift clash online racing mod apk android 19
-drift clash online racing mod apk android 20
-drift clash online racing motorcycles drifting mod apk
-drift clash online racing clipping zones mod apk
-drift clash online racing cars customization mod apk
-drift clash online racing stickers and decals mod apk
-drift clash online racing game with friends mod apk
-drift clash online racing win most wanted cars mod apk
-drift clash online racing burn tyres on track mod apk
-drift clash online racing unique retro style of the game mod apk
-download drift clash online racing mod apk for free
-how to install drift clash online racing mod apk on android
-how to play drift clash online racing mod apk offline
-how to update drift clash online racing mod apk
-how to get unlimited coins in drift clash online racing mod apk
-how to unlock all cars in drift clash online racing mod apk
-how to get rid of ads in drift clash online racing mod apk
-how to fix lag in drift clash online racing mod apk
-how to change language in drift clash online racing mod apk
-how to connect with facebook in drift clash online racing mod apk
-how to record gameplay in drift clash online racing mod apk
-how to share your score in drift clash online racing mod apk
-how to join a clan in drift clash online racing mod apk
-how to chat with other players in drift clash online racing mod apk
-
Is Drift Clash Online Racing Mod APK Android 1 safe to use?
-
Drift Clash Online Racing Mod APK Android 1 is not an official version of the game, so it may not be safe to use. It may contain viruses or malware that can harm your device or steal your personal information. It may also violate the terms and conditions of the game developers or Google Play, which can result in a ban or penalty. Therefore, you should use it at your own risk and discretion.
-
Where can I download Drift Clash Online Racing Mod APK Android 1?
-
You can download Drift Clash Online Racing Mod APK Android 1 from [Mod APK Android 1] website, which provides modded versions of various Android games and apps. However, you should be careful and cautious when downloading any modded version from any website, as they may not be reliable or trustworthy.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for Android The Ultimate Racing Experience.md b/spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for Android The Ultimate Racing Experience.md
deleted file mode 100644
index f669117fb0eff412a21dd6de54ecfe4259d18096..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for Android The Ultimate Racing Experience.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Download Traffic Racer Mod Apk Done: How to Enjoy Unlimited Money and Cars in This Amazing Racing Game
-
Introduction
-
If you are a fan of racing games, you might have heard of Traffic Racer, a popular game that lets you drive your car through highway traffic, earn cash, upgrade your car and buy new ones. It is a fun and addictive game that challenges your reflexes and skills. But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money and cars in the game? Well, there is a way to do that, and it is by downloading Traffic Racer Mod Apk done.
Traffic Racer is a 3D racing game developed by Soner Kara, a Turkish game developer. It was released in 2012 for Android and iOS devices. The game has over 100 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars. The game features 35 different cars, 5 game modes, 4 environments, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements.
-
What is Traffic Racer Mod Apk?
-
Traffic Racer Mod Apk is a modified version of the original game that gives you access to unlimited money and cars in the game. You can use the money to buy any car you want, upgrade it to the max level, and customize it as you like. You can also unlock all the game modes and environments without having to complete any missions or challenges. With Traffic Racer Mod Apk, you can enjoy the game without any ads or interruptions.
-
Why download Traffic Racer Mod Apk?
-
There are many reasons why you might want to download Traffic Racer Mod Apk done. Here are some of them:
-
download traffic racer mod apk unlimited money
-download traffic racer mod apk latest version
-download traffic racer mod apk for android
-download traffic racer mod apk free
-download traffic racer mod apk hack
-download traffic racer mod apk full
-download traffic racer mod apk offline
-download traffic racer mod apk 3.6
-download traffic racer mod apk revdl
-download traffic racer mod apk rexdl
-download traffic racer mod apk no ads
-download traffic racer mod apk android 1
-download traffic racer mod apk 2023
-download traffic racer mod apk apkpure
-download traffic racer mod apk happymod
-download traffic racer mod apk unlimited coins and keys
-download traffic racer mod apk unlocked all cars
-download traffic racer mod apk 3.5
-download traffic racer mod apk 3.4
-download traffic racer mod apk 3.3
-download traffic racer mod apk 3.2
-download traffic racer mod apk 3.1
-download traffic racer mod apk 3.0
-download traffic racer mod apk 2.5
-download traffic racer mod apk 2.4
-download traffic racer mod apk 2.3
-download traffic racer mod apk 2.2
-download traffic racer mod apk 2.1
-download traffic racer mod apk 2.0
-download traffic racer mod apk 1.9
-download traffic racer mod apk 1.8
-download traffic racer mod apk 1.7
-download traffic racer mod apk 1.6
-download traffic racer mod apk 1.5
-download traffic racer mod apk 1.4
-download traffic racer mod apk 1.3
-download traffic racer mod apk 1.2
-download traffic racer mod apk 1.1
-download traffic racer mod apk 1.0
-how to download traffic racer mod apk done
-
-
You can have unlimited money and cars in the game, which means you can buy any car you want, upgrade it to the max level, and customize it as you like.
-
You can unlock all the game modes and environments without having to complete any missions or challenges.
-
You can enjoy the game without any ads or interruptions.
-
You can have more fun and excitement in the game, as you can drive faster, perform more stunts, and crash more cars.
-
You can challenge yourself and your friends by competing on the online leaderboards and achievements.
-
-
How to download Traffic Racer Mod Apk done?
-
If you are interested in downloading Traffic Racer Mod Apk done, you need to follow these simple steps:
-
Step 1: Find a reliable source
-
The first thing you need to do is to find a reliable source that offers the mod apk file for download. There are many websites that claim to provide the mod apk file, but not all of them are trustworthy. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful when choosing a source. One of the sources that we recommend is [AN1.com], which is a reputable website that provides various mod apk files for free.
-
Step 2: Enable unknown sources
-
The next thing you need to do is to enable unknown sources on your device. This is because the mod apk file is not from the official Google Play Store, so you need to allow the installation of apps from unknown sources. To do this, you need to go to your device settings, then security, then enable unknown sources. This will allow you to install the mod apk file without any problems.
-
Step 3: Download and install the mod apk file
-
The third thing you need to do is to download and install the mod apk file on your device. To do this, you need to go to the website that you chose in step 1, then find the download link for the Traffic Racer Mod Apk file. Click on the download link and wait for the file to be downloaded on your device. Once the file is downloaded, you need to locate it in your device storage, then tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to be completed.
-
Step 4: Launch the game and enjoy
-
The last thing you need to do is to launch the game and enjoy it. To do this, you need to find the game icon on your device home screen or app drawer, then tap on it to open the game. You will see that you have unlimited money and cars in the game, and you can access all the game modes and environments. You can also play the game without any ads or interruptions. You can now enjoy the game as much as you want and have fun.
-
Features of Traffic Racer Mod Apk
-
Traffic Racer Mod Apk has many features that make it different from the original game. Here are some of them:
-
Unlimited money
-
One of the main features of Traffic Racer Mod Apk is that it gives you unlimited money in the game. You can use this money to buy any car you want, upgrade it to the max level, and customize it as you like. You can also use this money to unlock all the game modes and environments without having to complete any missions or challenges. You can have as much money as you want and spend it as you wish.
-
Unlimited cars
-
Another feature of Traffic Racer Mod Apk is that it gives you unlimited cars in the game. You can choose from 35 different cars, ranging from sedans, sports cars, trucks, buses, police cars, and more. You can also unlock all the cars without having to earn cash or complete any missions or challenges. You can have as many cars as you want and switch between them as you like.
-
No ads
-
A third feature of Traffic Racer Mod Apk is that it removes all the ads from the game. You can play the game without any ads or interruptions. You can also save your data and battery by not having to watch any ads or videos. You can enjoy the game without any distractions or annoyances.
-
High-quality graphics and sound
-
A fourth feature of Traffic Racer Mod Apk is that it improves the graphics and sound quality of the game. You can experience realistic 3D graphics and smooth animations in the game. You can also hear realistic sound effects and music in the game. You can immerse yourself in the game and feel like you are driving a real car on a real highway.
-
Conclusion
-
Traffic Racer is a fun and addictive racing game that lets you drive your car through highway traffic, earn cash, upgrade your car and buy new ones. But if you want to enjoy the game without any limitations or restrictions, you should download Traffic Racer Mod Apk done. This mod apk file gives you access to unlimited money and cars in the game, as well as removes all the ads from the game. You can also unlock all the game modes and environments without having to complete any missions or challenges. With Traffic Racer Mod Apk, you can have more fun and excitement in the game, as well as challenge yourself and your friends by competing on the online leaderboards and achievements. If you are interested in downloading Traffic Racer Mod Apk done, you can follow the simple steps that we have explained in this article. We hope that you have found this article helpful and informative. Thank you for reading and happy racing!
-
FAQs
-
Here are some frequently asked questions about Traffic Racer Mod Apk:
-
Is Traffic Racer Mod Apk safe to download and install?
-
Yes, Traffic Racer Mod Apk is safe to download and install, as long as you use a reliable source that offers the mod apk file for free. We recommend using [AN1.com], which is a reputable website that provides various mod apk files for free. However, you should always scan the mod apk file with an antivirus or anti-malware program before installing it on your device, just to be on the safe side.
-
Is Traffic Racer Mod Apk compatible with my device?
-
Traffic Racer Mod Apk is compatible with most Android devices that run on Android 4.1 or higher. However, some devices might not support the mod apk file due to different specifications or settings. Therefore, you should always check the compatibility of the mod apk file with your device before downloading and installing it. You can also contact the developer of the mod apk file if you encounter any problems or issues with the compatibility.
-
Will Traffic Racer Mod Apk affect my game progress or account?
-
No, Traffic Racer Mod Apk will not affect your game progress or account, as it does not require any root access or login credentials to work. You can play the game as usual, with or without the mod apk file installed on your device. However, you should be aware that using the mod apk file might violate the terms and conditions of the original game, and you might face some consequences or risks if you use it online or with other players. Therefore, you should use the mod apk file at your own discretion and responsibility.
-
Can I update Traffic Racer Mod Apk to the latest version?
-
Yes, you can update Traffic Racer Mod Apk to the latest version, as long as the developer of the mod apk file releases a new version that matches the original game version. You can check for updates on the website that you used to download the mod apk file, or on other websites that offer similar mod apk files. However, you should always backup your game data before updating the mod apk file, just in case something goes wrong or you lose your game progress.
-
Can I uninstall Traffic Racer Mod Apk if I don't like it?
-
Yes, you can uninstall Traffic Racer Mod Apk if you don't like it or if you want to switch back to the original game. To do this, you need to go to your device settings, then apps, then find and select Traffic Racer Mod Apk, then tap on uninstall. This will remove the mod apk file from your device and restore the original game. You can also delete the mod apk file from your device storage if you want to free up some space.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FIFA Mobile v18.1.01 MOD APK Play in World Cup Stadiums with Official Licenses.md b/spaces/1phancelerku/anime-remove-background/FIFA Mobile v18.1.01 MOD APK Play in World Cup Stadiums with Official Licenses.md
deleted file mode 100644
index 3e1b1d861bff8b5f2ac8be29b30d836b5214c274..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FIFA Mobile v18.1.01 MOD APK Play in World Cup Stadiums with Official Licenses.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
FIFA Mobile v18.1.01 Mod Apk: The Ultimate Guide
-
If you are a fan of soccer games, you probably have heard of FIFA Mobile, the official mobile game of EA Sports that lets you build your ultimate team of soccer stars and compete in various modes, including the FIFA World Cup 2022™. But did you know that there is a way to make your gaming experience even more exciting and rewarding? That's right, we are talking about FIFA Mobile v18.1.01 mod apk, a modified version of the game that gives you access to unlimited money, unlocked features, and more.
-
In this article, we will tell you everything you need to know about FIFA Mobile v18.1.01 mod apk, including its benefits, how to download and install it, and how to use it to dominate the soccer field. Whether you want to build your dream team, relive the world's greatest soccer tournament, score big with soccer icons and heroes, experience immersive next-level soccer simulation, or be the soccer manager of your own dream team, FIFA Mobile v18.1.01 mod apk has something for you.
So what are you waiting for? Read on and discover how FIFA Mobile v18.1.01 mod apk can take your soccer game to the next level.
-
What is FIFA Mobile and what are its features?
-
FIFA Mobile is a free-to-play soccer game for iOS and Android devices that lets you build your ultimate team of over 15,000 authentic soccer stars from over 600 teams across over 30 leagues. You can choose from world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr, and Son Heung-min, as well as legends like Paolo Maldini, Ronaldinho, and more. You can also customize your team's kits, badges, formation, tactics, and chemistry.
-
FIFA Mobile also offers various modes for you to enjoy, such as:
-
-
Head-to-Head: Play real-time 11v11 matches against other players from around the world and climb the leaderboards.
-
VS Attack: Take turns to score goals in fast-paced matches where every attack counts.
-
Manager Mode: Be the soccer manager of your own dream team and plan your strategy and adjust your tactics in real time or choose auto-play.
-
FIFA World Cup 2022™ Mode: Relive the world's greatest soccer tournament with any of the 32 qualified national teams or rewrite history with 15 non-qualified national teams. Play in authentic World Cup stadiums with official kits, badges, and match ball.
-
Events: Participate in live events that correspond with the real-world tournaments throughout the soccer season and earn special rewards.
-
Campaigns: Complete challenges and earn players from different leagues and regions.
-
The Academy: Learn the basics of the game and improve your skills with drills and tutorials.
-
-
FIFA Mobile also features stunning graphics, realistic animations, and immersive sound effects that make you feel like you are on the pitch. You can also chat with your friends, join a league, or create your own league and compete with other players. FIFA Mobile is constantly updated with new content and features to keep you engaged and entertained.
-
What is FIFA Mobile v18.1.01 mod apk and what are its benefits?
-
FIFA Mobile v18.1.01 mod apk is a modified version of the original FIFA Mobile game that gives you some extra advantages and perks that are not available in the official version. Some of the benefits of FIFA Mobile v18.1.01 mod apk are:
-
-
Unlimited money: You can get unlimited coins and points to buy players, upgrade your team, and unlock features without spending real money.
-
Unlocked features: You can access all the features and modes of the game without any restrictions or limitations.
-
No ads: You can enjoy the game without any annoying ads or pop-ups that interrupt your gameplay.
-
No root required: You can install and run FIFA Mobile v18.1.01 mod apk on your device without rooting it or risking its security.
-
Easy to use: You can easily download and install FIFA Mobile v18.1.01 mod apk on your device and start playing right away without any complicated steps or procedures.
-
-
FIFA Mobile v18.1.01 mod apk is a great way to enhance your gaming experience and have more fun with FIFA Mobile. You can enjoy all the features and modes of the game without any limitations or costs, and build your ultimate team of soccer stars with ease.
-
How to download and install FIFA Mobile v18.1.01 mod apk?
-
Downloading and installing FIFA Mobile v18.1.01 mod apk is very simple and straightforward. Just follow these steps:
-
-
Download the FIFA Mobile v18.1.01 mod apk file from a trusted source. You can find many websites that offer the mod apk file for free, but make sure you choose a reliable and safe one. Alternatively, you can use this link to download the file directly: [FIFA Mobile v18.1.01 mod apk].
-
Allow unknown sources on your device. Before you can install the mod apk file, you need to enable the option to allow unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate and install the mod apk file. After you have downloaded the file, go to your file manager and find the folder where you saved it. Tap on the file and follow the instructions to install it on your device.
-
Launch the game and enjoy. Once you have installed the mod apk file, you can launch the game from your app drawer or home screen and start playing with unlimited money, unlocked features, and no ads.
-
-
Congratulations, you have successfully downloaded and installed FIFA Mobile v18.1.01 mod apk on your device. Now you can enjoy all the benefits of the modified version of the game and have more fun with FIFA Mobile.
-
fifa mobile v18.1.01 mod apk unlimited money
-fifa mobile v18.1.01 mod apk unlocked all
-fifa mobile v18.1.01 mod apk menu
-fifa mobile v18.1.01 mod apk download
-fifa mobile v18.1.01 mod apk android
-fifa mobile v18.1.01 mod apk latest version
-fifa mobile v18.1.01 mod apk offline
-fifa mobile v18.1.01 mod apk free
-fifa mobile v18.1.01 mod apk hack
-fifa mobile v18.1.01 mod apk 2023
-fifa mobile v18.1.01 mod apk world cup
-fifa mobile v18.1.01 mod apk mega
-fifa mobile v18.1.01 mod apk obb
-fifa mobile v18.1.01 mod apk data
-fifa mobile v18.1.01 mod apk no root
-fifa mobile v18.1.01 mod apk online
-fifa mobile v18.1.01 mod apk revdl
-fifa mobile v18.1.01 mod apk rexdl
-fifa mobile v18.1.01 mod apk 5play
-fifa mobile v18.1.01 mod apk update
-fifa mobile v18.1.01 mod apk full
-fifa mobile v18.1.01 mod apk premium
-fifa mobile v18.1.01 mod apk pro
-fifa mobile v18.1.01 mod apk cracked
-fifa mobile v18.1.01 mod apk patched
-fifa mobile v18.1.01 mod apk vip
-fifa mobile v18.1.01 mod apk cheat
-fifa mobile v18.1.01 mod apk coins
-fifa mobile v18.1.01 mod apk gems
-fifa mobile v18.1.01 mod apk gold
-fifa mobile v18.1.01 mod apk stars
-fifa mobile v18.1.01 mod apk points
-fifa mobile v18.1.01 mod apk tokens
-fifa mobile v18.1.01 mod apk players
-fifa mobile v18.1.01 mod apk teams
-fifa mobile v18.1.01 mod apk kits
-fifa mobile v18.1.01 mod apk stadiums
-fifa mobile v18.1.01 mod apk icons
-fifa mobile v18.1.01 mod apk heroes
-fifa mobile v18 2023 season 23 update download free unlimited money coins points tokens players teams kits stadiums icons heroes manager mode world cup mode offline online hack cheat menu mega obb data revdl rexdl 5play android latest version full premium pro cracked patched vip.
-
How to build your ultimate team with star players from the biggest leagues and top teams?
test your team's skills and abilities in various modes, such as Head-to-Head, VS Attack, Manager Mode, FIFA World Cup 2022™ Mode, Events, Campaigns, and The Academy. You can also chat with your friends, join a league, or create your own league and compete with other players. FIFA Mobile is the ultimate soccer game for mobile devices.
-
How to relive the world's greatest soccer tournament with FIFA World Cup 2022™ mode?
-
One of the most exciting modes in FIFA Mobile is the FIFA World Cup 2022™ mode, where you can relive the world's greatest soccer tournament with any of the 32 qualified national teams or rewrite history with 15 non-qualified national teams. You can play in authentic World Cup stadiums with official kits, badges, and match ball. You can also earn exclusive rewards and players from the World Cup events and campaigns. But how do you relive the world's greatest soccer tournament with FIFA World Cup 2022™ mode? Here are some steps to help you out:
-
-
Choose your national team. You can choose from any of the 32 qualified national teams or 15 non-qualified national teams to represent in the World Cup. You can also customize your team's kits, badges, formation, tactics, and chemistry.
-
Play the group stage. You can play against other national teams in your group and try to qualify for the knockout stage. You can earn points for winning or drawing matches and advance to the next round based on your ranking.
-
Play the knockout stage. You can play against other national teams that qualified from their groups and try to reach the final. You can win matches by scoring more goals than your opponent or by winning a penalty shootout if the score is tied after extra time.
-
Play the final. You can play against the other finalist and try to win the World Cup trophy. You can celebrate your victory with your team and fans and earn exclusive rewards and players.
-
-
By following these steps, you can relive the world's greatest soccer tournament with FIFA World Cup 2022™ mode in FIFA Mobile. You can also play friendly matches against other national teams or challenge yourself with special scenarios and objectives. FIFA World Cup 2022™ mode is a great way to experience the thrill and excitement of the World Cup on your mobile device.
-
How to score big with soccer icons and heroes?
-
Another amazing feature of FIFA Mobile is the ability to score big with soccer icons and heroes, who are legendary players that have made history in the soccer world. You can choose from over 100 icons and heroes, such as Cristiano Ronaldo, Lionel Messi, Neymar Jr, Zinedine Zidane, David Beckham, Pele, Maradona, and more. You can also unlock their stories and learn about their careers and achievements. But how do you score big with soccer icons and heroes? Here are some tips and tricks to help you out:
-
-
Earn icons and heroes from events and campaigns. You can earn icons and heroes from various events and campaigns that are available throughout the soccer season. You can complete challenges and objectives to earn players or tokens that can be exchanged for players. You can also buy players from the Market or use coins or points to open packs that contain players.
-
Train and rank up your icons and heroes to boost their OVR and stats. You can train and rank up your icons and heroes using Training XP, coins, Rank Up Tokens, and coins. Training XP can be obtained from events, campaigns, rewards, or by using other players as training material. Rank Up Tokens can be obtained from events or by using duplicate players as rank up material.
-
Use skill boosts to enhance your icons' and heroes' attributes. You can use skill boosts to boost specific attributes of your icons and heroes, such as pace, shooting, passing, defending, or physical. You can apply skill boosts using Skill Boosts Tokens and coins. Skill Boosts Tokens can be obtained from events, rewards, or by using other skill boosts as skill boost material.
-
Add icons and heroes to your team to increase chemistry. Icons and heroes have a special ability to increase chemistry among your players. Icons have a base chemistry of 5 with any player regardless of league, team, or nation. Heroes have a base chemistry of 10 with any player from their league or nation. You can also increase chemistry by using players with the same skill boost or position link.
-
Use icons' and heroes' special traits and skills to score goals and win matches. Icons and heroes have special traits and skills that make them stand out from other players. Traits are passive abilities that affect the player's performance, such as finesse shot, speed dribbler, or long shot taker. Skills are active abilities that the player can use during matches, such as rainbow flick, roulette, or heel to heel. You can use these traits and skills to score goals and win matches with your icons and heroes.
-
-
By following these tips and tricks, you can score big with soccer icons and heroes in FIFA Mobile. You can also unlock their stories and learn about their careers and achievements. Icons and heroes are the ultimate players to have in your team.
-
How to experience immersive next-level soccer simulation with upgraded stadiums and realistic audio?
-
FIFA Mobile is not only a game of skills and strategy, but also a game of immersion and realism. You can experience immersive next-level soccer simulation with upgraded stadiums and realistic audio that make you feel like you are on the pitch. You can play in authentic stadiums from around the world, such as Wembley Stadium, Camp Nou, Santiago Bernabéu, Allianz Arena, and more. You can also hear the roar of the crowd, the chants of the fans, the commentary of the announcers, and the sound of the ball hitting the net. But how do you experience immersive next-level soccer simulation with upgraded stadiums and realistic audio? Here are some steps to help you out:
-
-
Choose your preferred stadium. You can choose from various stadiums from different leagues and regions to play in. You can also unlock more stadiums by completing events and campaigns. You can change your stadium by going to Settings > Team > Stadium.
-
Adjust your graphics and sound settings. You can adjust your graphics and sound settings to optimize your gaming experience. You can change your graphics quality by going to Settings > Graphics Quality. You can change your sound settings by going to Settings > Sound Settings. You can also enable or disable music, sound effects, commentary, or crowd noise.
-
Enjoy the game. You can enjoy the game with upgraded stadiums and realistic audio that make you feel like you are on the pitch. You can see the details of the stadiums, such as the grass, the lights, the banners, and the fans. You can also hear the sounds of the game, such as the whistle, the ball, the players, and the crowd.
-
-
By following these steps, you can experience immersive next-level soccer simulation with upgraded stadiums and realistic audio in FIFA Mobile. You can also switch between different camera angles and zoom levels to get a better view of the action. FIFA Mobile is a game that brings you closer to the real soccer world.
-
How to be the soccer manager of your own dream team with manager mode?
-
One of the most challenging and rewarding modes in FIFA Mobile is the manager mode, where you can be the soccer manager of your own dream team and plan your strategy and adjust your tactics in real time or choose auto-play. You can choose from over 600 teams across over 30 leagues or create your own custom team with your favorite players. You can also compete in various tournaments and leagues or play friendly matches against other teams. But how do you be the soccer manager of your own dream team with manager mode? Here are some tips and tricks to help you out:
-
-
Select your team. You can select your team by going to Manager Mode > Select Team. You can choose from any of the available teams or create your own custom team by going to Manager Mode > Create Team. You can also edit your team's name, logo, kit, formation, tactics, chemistry, and players by going to Manager Mode > Edit Team.
-
Play matches. You can play matches by going to Manager Mode > Play Match. You can choose from various tournaments and leagues or play friendly matches against other teams. You can also select your difficulty level, match length, weather condition, stadium, ball type, and referee by going to Manager Mode > Match Settings.
-
Manage your team. You can manage your team by going to Manager Mode > Manage Team. You can plan your strategy and adjust your tactics in real time or choose auto-play. You can also make substitutions, change formations, switch players' positions, or give instructions to your players during matches.
-
Earn rewards. You can earn rewards by playing matches in manager mode. You can earn coins, points , players, skill boosts, rank up tokens, and more by winning matches, completing objectives, and ranking up in the leaderboards. You can also unlock more teams, stadiums, balls, and kits by playing matches in manager mode.
-
-
By following these tips and tricks, you can be the soccer manager of your own dream team with manager mode in FIFA Mobile. You can also compare your team's performance and stats with other teams and players by going to Manager Mode > Stats. Manager mode is a great way to test your soccer knowledge and skills.
-
Conclusion
-
FIFA Mobile v18.1.01 mod apk is a modified version of the original FIFA Mobile game that gives you access to unlimited money, unlocked features, and more. It is a great way to enhance your gaming experience and have more fun with FIFA Mobile. You can build your ultimate team with star players from the biggest leagues and top teams, relive the world's greatest soccer tournament with FIFA World Cup 2022™ mode, score big with soccer icons and heroes, experience immersive next-level soccer simulation with upgraded stadiums and realistic audio, or be the soccer manager of your own dream team with manager mode. FIFA Mobile v18.1.01 mod apk has something for everyone.
-
So what are you waiting for? Download and install FIFA Mobile v18.1.01 mod apk on your device and start playing right away. You will not regret it.
-
FAQs
-
Here are some frequently asked questions about FIFA Mobile v18.1.01 mod apk:
-
What are the requirements for FIFA Mobile v18.1.01 mod apk?
-
FIFA Mobile v18.1.01 mod apk requires Android 4.4 or higher and at least 1 GB of RAM and 100 MB of free storage space on your device.
-
Is FIFA Mobile v18.1.01 mod apk safe and legal?
-
FIFA Mobile v18.1.01 mod apk is safe to use as long as you download it from a trusted source and scan it for viruses before installing it on your device. However, it is not legal to use FIFA Mobile v18.1.01 mod apk as it violates the terms and conditions of EA Sports and Google Play Store. You may face some risks or consequences if you use FIFA Mobile v18.1.01 mod apk, such as account suspension, data loss, or legal action.
-
How to update FIFA Mobile v18.1.01 mod apk?
-
To update FIFA Mobile v18.1.01 mod apk, you need to download the latest version of the mod apk file from a trusted source and install it on your device over the existing version. You may also need to uninstall the original FIFA Mobile game before installing the mod apk file.
-
How to get unlimited coins and points in FIFA Mobile v18.1.01 mod apk?
-
To get unlimited coins and points in FIFA Mobile v18.1.01 mod apk, you just need to launch the game and check your balance. You will see that you have unlimited coins and points to spend on players, upgrades, features, and more.
-
How to contact EA Sports for support or feedback on FIFA Mobile?
-
To contact EA Sports for support or feedback on FIFA Mobile, you can go to Settings > Help & Support > Contact Us and choose your preferred option to reach out to them. You can also visit their official website or social media pages for more information.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/7hao/bingo/Dockerfile b/spaces/7hao/bingo/Dockerfile
deleted file mode 100644
index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM node:18
-
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME
-
-# Switch to the "user" user
-USER user
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY --chown=user package*.json $HOME/app/
-
-RUN npm install
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app/
-
-RUN npm run build
-
-ENV PORT 7860
-EXPOSE 7860
-
-CMD npm start
diff --git a/spaces/801artistry/RVC801/diffq/__init__.py b/spaces/801artistry/RVC801/diffq/__init__.py
deleted file mode 100644
index 2b997ee4ed99a90cc43db7812383927e6fe1a3e8..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/diffq/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-"""
-This package implements different quantization strategies:
-
-- `diffq.uniform.UniformQuantizer`: classic uniform quantization over n bits.
-- `diffq.diffq.DiffQuantizer`: differentiable quantizer based on scaled noise injection.
-
-Also, do check `diffq.base.BaseQuantizer` for the common methods of all Quantizers.
-"""
-
-from .uniform import UniformQuantizer
-from .diffq import DiffQuantizer
diff --git a/spaces/A00001/bingothoo/src/components/header.tsx b/spaces/A00001/bingothoo/src/components/header.tsx
deleted file mode 100644
index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import * as React from 'react'
-import { UserMenu } from './user-menu'
-
-export async function Header() {
- return (
-
-
-
-
-
- )
-}
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/__init__.py b/spaces/AIConsultant/MusicGen/audiocraft/__init__.py
deleted file mode 100644
index 6ab346075f1b35366e7231054513097b87552c6f..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-AudioCraft is a general framework for training audio generative models.
-At the moment we provide the training code for:
-
-- [MusicGen](https://arxiv.org/abs/2306.05284), a state-of-the-art
- text-to-music and melody+text autoregressive generative model.
- For the solver, see `audiocraft.solvers.musicgen.MusicGenSolver`, and for the model,
- `audiocraft.models.musicgen.MusicGen`.
-- [AudioGen](https://arxiv.org/abs/2209.15352), a state-of-the-art
- text-to-general-audio generative model.
-- [EnCodec](https://arxiv.org/abs/2210.13438), efficient and high fidelity
- neural audio codec which provides an excellent tokenizer for autoregressive language models.
- See `audiocraft.solvers.compression.CompressionSolver`, and `audiocraft.models.encodec.EncodecModel`.
-- [MultiBandDiffusion](TODO), alternative diffusion-based decoder compatible with EnCodec that
- improves the perceived quality and reduces the artifacts coming from adversarial decoders.
-"""
-
-# flake8: noqa
-from . import data, modules, models
-
-__version__ = '1.0.0'
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/setup.py b/spaces/AIFILMS/generate_human_motion/pyrender/setup.py
deleted file mode 100644
index c3b5ba0da2b0f17b759e5556597981096a80bda8..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/pyrender/setup.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""
-Setup of pyrender Python codebase.
-
-Author: Matthew Matl
-"""
-import sys
-from setuptools import setup
-
-# load __version__
-exec(open('pyrender/version.py').read())
-
-def get_imageio_dep():
- if sys.version[0] == "2":
- return 'imageio<=2.6.1'
- return 'imageio'
-
-requirements = [
- 'freetype-py', # For font loading
- get_imageio_dep(), # For Image I/O
- 'networkx', # For the scene graph
- 'numpy', # Numpy
- 'Pillow', # For Trimesh texture conversions
- 'pyglet>=1.4.10', # For the pyglet viewer
- 'PyOpenGL~=3.1.0', # For OpenGL
-# 'PyOpenGL_accelerate~=3.1.0', # For OpenGL
- 'scipy', # Because of trimesh missing dep
- 'six', # For Python 2/3 interop
- 'trimesh', # For meshes
-]
-
-dev_requirements = [
- 'flake8', # Code formatting checker
- 'pre-commit', # Pre-commit hooks
- 'pytest', # Code testing
- 'pytest-cov', # Coverage testing
- 'tox', # Automatic virtualenv testing
-]
-
-docs_requirements = [
- 'sphinx', # General doc library
- 'sphinx_rtd_theme', # RTD theme for sphinx
- 'sphinx-automodapi' # For generating nice tables
-]
-
-
-setup(
- name = 'pyrender',
- version=__version__,
- description='Easy-to-use Python renderer for 3D visualization',
- long_description='A simple implementation of Physically-Based Rendering '
- '(PBR) in Python. Compliant with the glTF 2.0 standard.',
- author='Matthew Matl',
- author_email='matthewcmatl@gmail.com',
- license='MIT License',
- url = 'https://github.com/mmatl/pyrender',
- classifiers = [
- 'Development Status :: 4 - Beta',
- 'License :: OSI Approved :: MIT License',
- 'Operating System :: POSIX :: Linux',
- 'Operating System :: MacOS :: MacOS X',
- 'Programming Language :: Python :: 2.7',
- 'Programming Language :: Python :: 3.5',
- 'Programming Language :: Python :: 3.6',
- 'Natural Language :: English',
- 'Topic :: Scientific/Engineering'
- ],
- keywords = 'rendering graphics opengl 3d visualization pbr gltf',
- packages = ['pyrender', 'pyrender.platforms'],
- setup_requires = requirements,
- install_requires = requirements,
- extras_require={
- 'dev': dev_requirements,
- 'docs': docs_requirements,
- },
- include_package_data=True
-)
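One small detail worth calling out in the setup script above is `get_imageio_dep()`, which pins `imageio` on Python 2 by inspecting the first character of `sys.version`. An equivalent, slightly more explicit sketch of the same version gate (illustrative only, not a change to the package) is:

```python
import sys

def get_imageio_dep() -> str:
    # Newer imageio releases dropped Python 2 support, so pin it there.
    if sys.version_info[0] == 2:
        return "imageio<=2.6.1"
    return "imageio"

print(get_imageio_dep())  # e.g. "imageio" on Python 3
```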
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/predict.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/predict.py
deleted file mode 100644
index e9d13f30153cd43a4a8bcfe2da4b9a53846bf1eb..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/predict.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import os
-from torch.utils.data import DataLoader
-import torchvision
-from tqdm import tqdm
-from dataset import VGGSound
-import torch
-import torch.nn as nn
-from metrics import metrics
-from omegaconf import OmegaConf
-from model import VGGishish
-from transforms import Crop, StandardNormalizeAudio, ToTensor
-
-
-if __name__ == '__main__':
- cfg_cli = OmegaConf.from_cli()
- print(cfg_cli.config)
- cfg_yml = OmegaConf.load(cfg_cli.config)
- # the latter arguments are prioritized
- cfg = OmegaConf.merge(cfg_yml, cfg_cli)
- OmegaConf.set_readonly(cfg, True)
- print(OmegaConf.to_yaml(cfg))
-
- # logger = LoggerWithTBoard(cfg)
- transforms = [
- StandardNormalizeAudio(cfg.mels_path),
- ToTensor(),
- ]
- if cfg.cropped_size not in [None, 'None', 'none']:
- transforms.append(Crop(cfg.cropped_size))
- transforms = torchvision.transforms.transforms.Compose(transforms)
-
- datasets = {
- 'test': VGGSound('test', cfg.mels_path, transforms),
- }
-
- loaders = {
- 'test': DataLoader(datasets['test'], batch_size=cfg.batch_size,
- num_workers=cfg.num_workers, pin_memory=True)
- }
-
- device = torch.device(cfg.device if torch.cuda.is_available() else 'cpu')
- model = VGGishish(cfg.conv_layers, cfg.use_bn, num_classes=len(datasets['test'].target2label))
- model = model.to(device)
-
- optimizer = torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
- criterion = nn.CrossEntropyLoss()
-
- # loading the best model
- folder_name = os.path.split(cfg.config)[0].split('/')[-1]
- print(folder_name)
- ckpt = torch.load(f'./logs/{folder_name}/vggishish-{folder_name}.pt', map_location='cpu')
- model.load_state_dict(ckpt['model'])
- print((f'The model was trained for {ckpt["epoch"]} epochs. Loss: {ckpt["loss"]:.4f}'))
-
- # Testing the model
- model.eval()
- running_loss = 0
- preds_from_each_batch = []
- targets_from_each_batch = []
-
- for i, batch in enumerate(tqdm(loaders['test'])):
- inputs = batch['input'].to(device)
- targets = batch['target'].to(device)
-
- # zero the parameter gradients
- optimizer.zero_grad()
-
-        # forward pass only (gradients are disabled at test time)
- with torch.set_grad_enabled(False):
- outputs = model(inputs)
- loss = criterion(outputs, targets)
-
- # loss
- running_loss += loss.item()
-
- # for metrics calculation later on
- preds_from_each_batch += [outputs.detach().cpu()]
- targets_from_each_batch += [targets.cpu()]
-
- # logging metrics
- preds_from_each_batch = torch.cat(preds_from_each_batch)
- targets_from_each_batch = torch.cat(targets_from_each_batch)
- test_metrics_dict = metrics(targets_from_each_batch, preds_from_each_batch)
- test_metrics_dict['avg_loss'] = running_loss / len(loaders['test'])
- test_metrics_dict['param_num'] = sum(p.numel() for p in model.parameters() if p.requires_grad)
-
- # TODO: I have no idea why tboard doesn't keep metrics (hparams) in a tensorboard when
- # I run this experiment from cli: `python main.py config=./configs/vggish.yaml`
- # while when I run it in vscode debugger the metrics are present in the tboard (weird)
- print(test_metrics_dict)
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-coslr-preciseBN_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-coslr-preciseBN_in1k.py
deleted file mode 100644
index 01fefbbf2852eeceddb0ad026fb5098e763e0710..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-coslr-preciseBN_in1k.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = 'resnet50_8xb32-coslr_in1k.py'
-
-# The precise BN hook updates the BN statistics, so it must be executed before
-# CheckpointHook (priority 'VERY_LOW') and EMAHook (priority 'NORMAL').
-# Therefore the priority of PreciseBNHook is set to 'ABOVE_NORMAL' here.
-custom_hooks = [
- dict(
- type='PreciseBNHook',
- num_samples=8192,
- interval=1,
- priority='ABOVE_NORMAL')
-]
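For context on what the hook configured above does: precise BN re-estimates the BatchNorm running statistics by streaming a fixed number of training samples through the model before checkpointing/EMA. The sketch below shows the core idea in plain PyTorch; it is not the MMPretrain `PreciseBNHook` itself, and `model` and `loader` are placeholders.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def recompute_bn_stats(model: nn.Module, loader, num_samples: int = 8192) -> None:
    """Re-estimate BatchNorm running stats from roughly `num_samples` inputs."""
    bn_layers = [m for m in model.modules()
                 if isinstance(m, nn.modules.batchnorm._BatchNorm)]
    for bn in bn_layers:
        bn.reset_running_stats()
        bn.momentum = None      # None => cumulative moving average over all batches
    model.train()               # BN only updates running stats in train mode
    seen = 0
    for inputs, _ in loader:
        model(inputs)
        seen += inputs.size(0)
        if seen >= num_samples:
            break
    model.eval()
```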
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/create_configs.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/create_configs.py
deleted file mode 100644
index cd47aacb01ee07c8bc673ff33daff334fe85d0f2..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/create_configs.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import yaml
-
-fname = "config/gpt-cls-tash-proc.yml"
-
-with open(fname, 'r') as stream:
-    data = yaml.load(stream, Loader=yaml.FullLoader)
-
-for i in range(10):
-    data['n_layer'] = i
-    data['log_directory'] = f'log_dir_cls_{i}_tash_proc'
-    data['max_steps'] = 5000
-    with open(f"config/gpt-cls-{i}-tash-proc.yml", 'w') as yaml_file:
-        yaml_file.write(yaml.dump(data, default_flow_style=False))
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/Factory.js
deleted file mode 100644
index 7b1ce6aefc4c185f293b84af6fe7cdaf6eb1c009..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Chart from './Chart.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('chart', function (x, y, width, height, config) {
- var gameObject = new Chart(this.scene, x, y, width, height, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.Chart', Chart);
-
-export default Chart;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/YAMLMake.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/YAMLMake.js
deleted file mode 100644
index 211c152887986b5d45be7c8469d3e8f9445d1031..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/YAMLMake.js
+++ /dev/null
@@ -1,35 +0,0 @@
-import ParseYAML from './utils/ParseYAML.js';
-import Make from './Make.js';
-
-var YAMLMake = function (scene, data, view, styles, customBuilders) {
- data = ParseYAML(data);
- if (Array.isArray(data)) {
-        // The parsed YAML may be an array; only the last item is used to
-        // create the game object, the earlier items are just references.
- data = data[data.length - 1];
- } else if (data.$root) {
-        // The parsed YAML may be an object with a $root key; data.$root is used
-        // to create the game object, and the remaining keys are default styles.
- var defaultStyles = data;
- data = data.$root;
- delete defaultStyles.$root;
-
- if (styles === undefined) {
- styles = defaultStyles;
- } else {
- for (var key in defaultStyles) {
- if (!styles[key]) {
- styles[key] = defaultStyles[key];
- }
- }
- }
- }
-
- styles = ParseYAML(styles);
-
- var gameObject = Make(scene, data, view, styles, customBuilders);
-
- return gameObject;
-}
-
-export default YAMLMake;
\ No newline at end of file
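Outside of Phaser, the `$root` convention handled above boils down to: a YAML list means "build the last document", and a `$root` key means "build `$root`, treat the sibling keys as default styles that explicit styles override". A small, hedged Python sketch of that resolution logic (the function name and return shape are illustrative, not part of the plugin):

```python
import yaml

def resolve_root(text, styles=None):
    """Mimic YAMLMake's handling of list documents and the $root key."""
    data = yaml.safe_load(text)
    if isinstance(data, list):
        # Only the last item describes the object; earlier items are references.
        data = data[-1]
    elif isinstance(data, dict) and "$root" in data:
        defaults = dict(data)
        data = defaults.pop("$root")
        # Defaults only fill in keys the caller did not provide explicitly.
        styles = {**defaults, **(styles or {})}
    return data, styles

doc = """
$root:
  type: label
  text: hello
labelStyle:
  color: red
"""
print(resolve_root(doc, styles={"titleStyle": {"color": "blue"}}))
```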
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.py
deleted file mode 100644
index 5fa95018f961f1aaa8013befcae7471995eee505..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.py
+++ /dev/null
@@ -1,409 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom PyTorch ops for efficient resampling of 2D images."""
-
-import os
-import warnings
-import numpy as np
-import torch
-import traceback
-
-from .. import custom_ops
-from .. import misc
-from . import conv2d_gradfix
-
-# ----------------------------------------------------------------------------
-
-_inited = False
-_plugin = None
-
-
-def _init():
- global _inited, _plugin
- if not _inited:
- sources = ['upfirdn2d.cpp', 'upfirdn2d.cu']
- sources = [os.path.join(os.path.dirname(__file__), s) for s in sources]
- try:
- _plugin = custom_ops.get_plugin(
- 'upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math'])
- except:
- warnings.warn(
- 'Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc())
- return _plugin is not None
-
-
-def _parse_scaling(scaling):
- if isinstance(scaling, int):
- scaling = [scaling, scaling]
- assert isinstance(scaling, (list, tuple))
- assert all(isinstance(x, int) for x in scaling)
- sx, sy = scaling
- assert sx >= 1 and sy >= 1
- return sx, sy
-
-
-def _parse_padding(padding):
- if isinstance(padding, int):
- padding = [padding, padding]
- assert isinstance(padding, (list, tuple))
- assert all(isinstance(x, int) for x in padding)
- if len(padding) == 2:
- padx, pady = padding
- padding = [padx, padx, pady, pady]
- padx0, padx1, pady0, pady1 = padding
- return padx0, padx1, pady0, pady1
-
-
-def _get_filter_size(f):
- if f is None:
- return 1, 1
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- fw = f.shape[-1]
- fh = f.shape[0]
- with misc.suppress_tracer_warnings():
- fw = int(fw)
- fh = int(fh)
- misc.assert_shape(f, [fh, fw][:f.ndim])
- assert fw >= 1 and fh >= 1
- return fw, fh
-
-# ----------------------------------------------------------------------------
-
-
-def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None):
- r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`.
-
- Args:
- f: Torch tensor, numpy array, or python list of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable),
- `[]` (impulse), or
- `None` (identity).
- device: Result device (default: cpu).
- normalize: Normalize the filter so that it retains the magnitude
- for constant input signal (DC)? (default: True).
- flip_filter: Flip the filter? (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- separable: Return a separable filter? (default: select automatically).
-
- Returns:
- Float32 tensor of the shape
- `[filter_height, filter_width]` (non-separable) or
- `[filter_taps]` (separable).
- """
- # Validate.
- if f is None:
- f = 1
- f = torch.as_tensor(f, dtype=torch.float32)
- assert f.ndim in [0, 1, 2]
- assert f.numel() > 0
- if f.ndim == 0:
- f = f[np.newaxis]
-
- # Separable?
- if separable is None:
- separable = (f.ndim == 1 and f.numel() >= 8)
- if f.ndim == 1 and not separable:
- f = f.ger(f)
- assert f.ndim == (1 if separable else 2)
-
- # Apply normalize, flip, gain, and device.
- if normalize:
- f /= f.sum()
- if flip_filter:
- f = f.flip(list(range(f.ndim)))
- f = f * (gain ** (f.ndim / 2))
- f = f.to(device=device)
- return f
-
-# ----------------------------------------------------------------------------
-
-
-def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Pad, upsample, filter, and downsample a batch of 2D images.
-
- Performs the following sequence of operations for each channel:
-
- 1. Upsample the image by inserting N-1 zeros after each pixel (`up`).
-
- 2. Pad the image with the specified number of zeros on each side (`padding`).
- Negative padding corresponds to cropping the image.
-
- 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it
- so that the footprint of all output pixels lies within the input image.
-
- 4. Downsample the image by keeping every Nth pixel (`down`).
-
- This sequence of operations bears close resemblance to scipy.signal.upfirdn().
- The fused op is considerably more efficient than performing the same calculation
- using standard PyTorch ops. It supports gradients of arbitrary order.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- up: Integer upsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- down: Integer downsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- assert isinstance(x, torch.Tensor)
- assert impl in ['ref', 'cuda']
- if impl == 'cuda' and x.device.type == 'cuda' and _init():
- return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f)
- return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain)
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1):
- """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- if f is None:
- f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- assert f.dtype == torch.float32 and not f.requires_grad
- batch_size, num_channels, in_height, in_width = x.shape
- upx, upy = _parse_scaling(up)
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
-
- # Upsample by inserting zeros.
- x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1])
- x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1])
- x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx])
-
- # Pad or crop.
- x = torch.nn.functional.pad(
- x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)])
- x = x[:, :, max(-pady0, 0): x.shape[2] - max(-pady1, 0),
- max(-padx0, 0): x.shape[3] - max(-padx1, 0)]
-
- # Setup filter.
- f = f * (gain ** (f.ndim / 2))
- f = f.to(x.dtype)
- if not flip_filter:
- f = f.flip(list(range(f.ndim)))
-
- # Convolve with the filter.
- f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim)
- if f.ndim == 4:
- x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels)
- else:
- x = conv2d_gradfix.conv2d(
- input=x, weight=f.unsqueeze(2), groups=num_channels)
- x = conv2d_gradfix.conv2d(
- input=x, weight=f.unsqueeze(3), groups=num_channels)
-
- # Downsample by throwing away pixels.
- x = x[:, :, ::downy, ::downx]
- return x
-
-# ----------------------------------------------------------------------------
-
-
-_upfirdn2d_cuda_cache = dict()
-
-
-def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1):
- """Fast CUDA implementation of `upfirdn2d()` using custom ops.
- """
- # Parse arguments.
- upx, upy = _parse_scaling(up)
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
-
- # Lookup from cache.
- key = (upx, upy, downx, downy, padx0, padx1,
- pady0, pady1, flip_filter, gain)
- if key in _upfirdn2d_cuda_cache:
- return _upfirdn2d_cuda_cache[key]
-
- # Forward op.
- class Upfirdn2dCuda(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x, f): # pylint: disable=arguments-differ
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- if f is None:
- f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- y = x
- if f.ndim == 2:
- y = _plugin.upfirdn2d(
- y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
- else:
- y = _plugin.upfirdn2d(y, f.unsqueeze(
- 0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain))
- y = _plugin.upfirdn2d(y, f.unsqueeze(
- 1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain))
- ctx.save_for_backward(f)
- ctx.x_shape = x.shape
- return y
-
- @staticmethod
- def backward(ctx, dy): # pylint: disable=arguments-differ
- f, = ctx.saved_tensors
- _, _, ih, iw = ctx.x_shape
- _, _, oh, ow = dy.shape
- fw, fh = _get_filter_size(f)
- p = [
- fw - padx0 - 1,
- iw * upx - ow * downx + padx0 - upx + 1,
- fh - pady0 - 1,
- ih * upy - oh * downy + pady0 - upy + 1,
- ]
- dx = None
- df = None
-
- if ctx.needs_input_grad[0]:
- dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(
- not flip_filter), gain=gain).apply(dy, f)
-
- assert not ctx.needs_input_grad[1]
- return dx, df
-
- # Add to cache.
- _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda
- return Upfirdn2dCuda
-
-# ----------------------------------------------------------------------------
-
-
-def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Filter a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape matches the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- padding: Padding with respect to the output. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + fw // 2,
- padx1 + (fw - 1) // 2,
- pady0 + fh // 2,
- pady1 + (fh - 1) // 2,
- ]
- return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)
-
-# ----------------------------------------------------------------------------
-
-
-def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Upsample a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape is a multiple of the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
-        up: Integer upsampling factor. Can be a single int or a list/tuple
-            `[x, y]` (default: 2).
- padding: Padding with respect to the output. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- upx, upy = _parse_scaling(up)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + (fw + upx - 1) // 2,
- padx1 + (fw - upx) // 2,
- pady0 + (fh + upy - 1) // 2,
- pady1 + (fh - upy) // 2,
- ]
- return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl)
-
-# ----------------------------------------------------------------------------
-
-
-def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Downsample a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape is a fraction of the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
-        down: Integer downsampling factor. Can be a single int or a list/tuple
-            `[x, y]` (default: 2).
- padding: Padding with respect to the input. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + (fw - downx + 1) // 2,
- padx1 + (fw - downx) // 2,
- pady0 + (fh - downy + 1) // 2,
- pady1 + (fh - downy) // 2,
- ]
- return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)
-
-# ----------------------------------------------------------------------------
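As a sanity check on the four-step description in the `upfirdn2d()` docstring above, the sketch below reproduces the same pad, upsample, filter, downsample sequence with plain PyTorch ops. It mirrors the slow reference path only; the box filter, shapes, and symmetric padding are illustrative assumptions, not the fused CUDA kernel.

```python
import torch
import torch.nn.functional as F

def upfirdn2d_sketch(x, f, up=2, down=2, pad=1):
    # x: [batch, channels, height, width]; f: [fh, fw] float32 FIR filter
    b, c, h, w = x.shape
    # 1. Upsample by inserting (up - 1) zeros after each pixel.
    x = x.reshape(b, c, h, 1, w, 1)
    x = F.pad(x, [0, up - 1, 0, 0, 0, up - 1])
    x = x.reshape(b, c, h * up, w * up)
    # 2. Pad with zeros on each side (negative padding would crop).
    x = F.pad(x, [pad, pad, pad, pad])
    # 3. Convolve each channel with the 2D FIR filter (flip => true convolution).
    weight = f.flip([0, 1])[None, None].repeat(c, 1, 1, 1).to(x.dtype)
    x = F.conv2d(x, weight, groups=c)
    # 4. Downsample by keeping every `down`-th pixel.
    return x[:, :, ::down, ::down]

x = torch.randn(1, 3, 8, 8)
f = torch.ones(4, 4) / 16.0   # simple box filter
print(upfirdn2d_sketch(x, f).shape)  # torch.Size([1, 3, 8, 8])
```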
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/transformer_temporal.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/transformer_temporal.md
deleted file mode 100644
index d67cf717f92b20791bf00214bdf5627ccc34003f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/transformer_temporal.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Transformer Temporal
-
-A Transformer model for video-like data.
-
-## TransformerTemporalModel
-
-[[autodoc]] models.transformer_temporal.TransformerTemporalModel
-
-## TransformerTemporalModelOutput
-
-[[autodoc]] models.transformer_temporal.TransformerTemporalModelOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/zh/quicktour.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/zh/quicktour.md
deleted file mode 100644
index 68ab56c55a85a53c6b444d7831a059f7bed745f4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/zh/quicktour.md
+++ /dev/null
@@ -1,331 +0,0 @@
-
-
-[[open-in-colab]]
-
-# Quicktour
-
-Diffusion models are trained to progressively denoise random Gaussian noise in order to generate a sample of interest, such as an image or audio.
-
-The rise of diffusion models has sparked enormous interest in generative AI, and you have probably already come across diffusion-generated images on the web. 🧨 Diffusers is a library aimed at making diffusion models broadly accessible to everyone.
-
-Whether you're a developer or an everyday user, this quicktour introduces 🧨 Diffusers and helps you get up and generating quickly! The three main components of the library are:
-
-* The [`DiffusionPipeline`] is a high-level, end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference.
-* Popular pretrained [model](./api/models) architectures and modules that can be used as building blocks for creating diffusion systems.
-* Many different [schedulers](./api/schedulers/overview): algorithms that control how noise is added during training and how denoised samples are generated during inference.
-
-The quicktour shows you how to use the [`DiffusionPipeline`] for inference, and then walks you through combining a model and a scheduler to reproduce what happens inside the [`DiffusionPipeline`].
-
-
-
-This quicktour is a simplified version of the introductory 🧨 [Diffusers notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) to help you get started quickly. If you want to learn more about the goals and design philosophy of 🧨 Diffusers, as well as more detail about its core API, take a look at the notebook.
-
-
-
-Before you begin, make sure you have all the necessary libraries installed:
-
-```bash
-pip install --upgrade diffusers accelerate transformers
-```
-
-- [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) speeds up model loading for inference and training.
-- [🤗 Transformers](https://huggingface.co/docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview).
-
-## DiffusionPipeline
-
-The [`DiffusionPipeline`] is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler, and it can be used out of the box for many tasks. Take a look at the table below for some supported tasks; for the complete list, see the [🧨 Diffusers Summary](./api/pipelines/overview#diffusers-summary).
-
-| **Task**                               | **Description**                                                                                      | **Pipeline**                                                                        |
-|----------------------------------------|------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|
-| Unconditional Image Generation         | generate an image from Gaussian noise                                                                 | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) |
-| Text-Guided Image Generation           | generate an image given a text prompt                                                                 | [conditional_image_generation](./using-diffusers/conditional_image_generation)     |
-| Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt                                                                | [img2img](./using-diffusers/img2img)                                                |
-| Text-Guided Image-Inpainting           | fill the masked part of an image given the image, a mask, and a text prompt                           | [inpaint](./using-diffusers/inpaint)                                                |
-| Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure through depth estimation   | [depth2img](./using-diffusers/depth2img)                                            |
-
-Start by creating an instance of the [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download. You can use any [`DiffusionPipeline`] [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) stored on the Hugging Face Hub. In this quicktour, you'll load the [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint for text-to-image generation.
-
-For the [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) model, please read its [license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) carefully before running the model. 🧨 Diffusers implements a [`safety_checker`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) to prevent offensive or harmful content, but the model's improved image generation capabilities can still produce potentially harmful content.
-
-
-
-Load the model with the [`~DiffusionPipeline.from_pretrained`] method:
-
-```python
->>> from diffusers import DiffusionPipeline
-
->>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-```
-The [`DiffusionPipeline`] downloads and caches all of the modeling, tokenization, and scheduling components. You can see that the Stable Diffusion pipeline is composed of components such as the [`UNet2DConditionModel`] and the [`PNDMScheduler`]:
-
-```py
->>> pipeline
-StableDiffusionPipeline {
- "_class_name": "StableDiffusionPipeline",
- "_diffusers_version": "0.13.1",
- ...,
- "scheduler": [
- "diffusers",
- "PNDMScheduler"
- ],
- ...,
- "unet": [
- "diffusers",
- "UNet2DConditionModel"
- ],
- "vae": [
- "diffusers",
- "AutoencoderKL"
- ]
-}
-```
-
-We strongly recommend running this pipeline on a GPU, because the model consists of roughly 1.4 billion parameters.
-
-You can move the generator object to a GPU just as you would in PyTorch:
-
-```python
->>> pipeline.to("cuda")
-```
-
-Now you can pass a text prompt to the `pipeline` to generate an image, and then access the denoised image. By default, the image output is wrapped in a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
-
-```python
->>> image = pipeline("An image of a squirrel in Picasso style").images[0]
->>> image
-```
-
-
-
-## Next steps
-
-Hopefully you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can:
-
-* Train or finetune a model to generate your own images in the [training](./tutorials/basic_training) tutorial.
-* See examples of official and community [training or finetuning scripts](https://github.com/huggingface/diffusers/tree/main/examples#-diffusers-examples) for a variety of use cases.
-* Learn more about loading, accessing, changing, and comparing schedulers in the [Using different Schedulers](./using-diffusers/schedulers) guide.
-* Explore prompt engineering, speed and memory optimizations, and tips for generating higher-quality images in the [Stable Diffusion](./stable_diffusion) guide.
-* Dive deeper into speeding up 🧨 Diffusers with the [optimized PyTorch on a GPU](./optimization/fp16) guide, and with the tutorials for running [Stable Diffusion on Apple Silicon (M1/M2)](./optimization/mps) and [ONNX Runtime](./optimization/onnx).
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/transformer_temporal.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/transformer_temporal.py
deleted file mode 100644
index cfafdb055bcfedc911b0a19d1e5da8089a18b215..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/transformer_temporal.py
+++ /dev/null
@@ -1,179 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass
-from typing import Optional
-
-import torch
-from torch import nn
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput
-from .attention import BasicTransformerBlock
-from .modeling_utils import ModelMixin
-
-
-@dataclass
-class TransformerTemporalModelOutput(BaseOutput):
- """
- The output of [`TransformerTemporalModel`].
-
- Args:
- sample (`torch.FloatTensor` of shape `(batch_size x num_frames, num_channels, height, width)`):
- The hidden states output conditioned on `encoder_hidden_states` input.
- """
-
- sample: torch.FloatTensor
-
-
-class TransformerTemporalModel(ModelMixin, ConfigMixin):
- """
- A Transformer model for video-like data.
-
- Parameters:
- num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention.
- attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head.
- in_channels (`int`, *optional*):
- The number of channels in the input and output (specify if the input is **continuous**).
- num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use.
- dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
- cross_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use.
- sample_size (`int`, *optional*): The width of the latent images (specify if the input is **discrete**).
- This is fixed during training since it is used to learn a number of position embeddings.
- activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to use in feed-forward.
- attention_bias (`bool`, *optional*):
- Configure if the `TransformerBlock` attention should contain a bias parameter.
- double_self_attention (`bool`, *optional*):
- Configure if each `TransformerBlock` should contain two self-attention layers.
- """
-
- @register_to_config
- def __init__(
- self,
- num_attention_heads: int = 16,
- attention_head_dim: int = 88,
- in_channels: Optional[int] = None,
- out_channels: Optional[int] = None,
- num_layers: int = 1,
- dropout: float = 0.0,
- norm_num_groups: int = 32,
- cross_attention_dim: Optional[int] = None,
- attention_bias: bool = False,
- sample_size: Optional[int] = None,
- activation_fn: str = "geglu",
- norm_elementwise_affine: bool = True,
- double_self_attention: bool = True,
- ):
- super().__init__()
- self.num_attention_heads = num_attention_heads
- self.attention_head_dim = attention_head_dim
- inner_dim = num_attention_heads * attention_head_dim
-
- self.in_channels = in_channels
-
- self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True)
- self.proj_in = nn.Linear(in_channels, inner_dim)
-
- # 3. Define transformers blocks
- self.transformer_blocks = nn.ModuleList(
- [
- BasicTransformerBlock(
- inner_dim,
- num_attention_heads,
- attention_head_dim,
- dropout=dropout,
- cross_attention_dim=cross_attention_dim,
- activation_fn=activation_fn,
- attention_bias=attention_bias,
- double_self_attention=double_self_attention,
- norm_elementwise_affine=norm_elementwise_affine,
- )
- for d in range(num_layers)
- ]
- )
-
- self.proj_out = nn.Linear(inner_dim, in_channels)
-
- def forward(
- self,
- hidden_states,
- encoder_hidden_states=None,
- timestep=None,
- class_labels=None,
- num_frames=1,
- cross_attention_kwargs=None,
- return_dict: bool = True,
- ):
- """
- The [`TransformerTemporal`] forward method.
-
- Args:
- hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous):
- Input hidden_states.
- encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*):
- Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
- self-attention.
- timestep ( `torch.long`, *optional*):
- Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
-            class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*):
-                Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in
-                `AdaLayerNormZero`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain
- tuple.
-
- Returns:
- [`~models.transformer_temporal.TransformerTemporalModelOutput`] or `tuple`:
- If `return_dict` is True, an [`~models.transformer_temporal.TransformerTemporalModelOutput`] is
- returned, otherwise a `tuple` where the first element is the sample tensor.
- """
- # 1. Input
- batch_frames, channel, height, width = hidden_states.shape
- batch_size = batch_frames // num_frames
-
- residual = hidden_states
-
- hidden_states = hidden_states[None, :].reshape(batch_size, num_frames, channel, height, width)
- hidden_states = hidden_states.permute(0, 2, 1, 3, 4)
-
- hidden_states = self.norm(hidden_states)
- hidden_states = hidden_states.permute(0, 3, 4, 2, 1).reshape(batch_size * height * width, num_frames, channel)
-
- hidden_states = self.proj_in(hidden_states)
-
- # 2. Blocks
- for block in self.transformer_blocks:
- hidden_states = block(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- timestep=timestep,
- cross_attention_kwargs=cross_attention_kwargs,
- class_labels=class_labels,
- )
-
- # 3. Output
- hidden_states = self.proj_out(hidden_states)
- hidden_states = (
- hidden_states[None, None, :]
- .reshape(batch_size, height, width, channel, num_frames)
- .permute(0, 3, 4, 1, 2)
- .contiguous()
- )
- hidden_states = hidden_states.reshape(batch_frames, channel, height, width)
-
- output = hidden_states + residual
-
- if not return_dict:
- return (output,)
-
- return TransformerTemporalModelOutput(sample=output)
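A minimal usage sketch for the class above, assuming it is importable as `diffusers.models.transformer_temporal.TransformerTemporalModel` (the module path this file lives at); the tiny head and channel sizes are chosen only so the example runs quickly on CPU and are not the defaults video pipelines use:

```python
import torch
from diffusers.models.transformer_temporal import TransformerTemporalModel

model = TransformerTemporalModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=32,   # must be divisible by norm_num_groups (default: 32)
    num_layers=1,
)

batch_size, num_frames = 2, 4
# Frames are folded into the batch dimension: (batch * frames, channels, h, w).
hidden_states = torch.randn(batch_size * num_frames, 32, 8, 8)
out = model(hidden_states, num_frames=num_frames)
print(out.sample.shape)  # torch.Size([8, 32, 8, 8])
```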
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py
deleted file mode 100644
index 7a69a7908efa96f21ca57d0fed1814147cd72078..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py
+++ /dev/null
@@ -1,1932 +0,0 @@
-from typing import Any, Dict, List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ...configuration_utils import ConfigMixin, register_to_config
-from ...models import ModelMixin
-from ...models.activations import get_activation
-from ...models.attention import Attention
-from ...models.attention_processor import (
- AttentionProcessor,
- AttnAddedKVProcessor,
- AttnAddedKVProcessor2_0,
- AttnProcessor,
-)
-from ...models.dual_transformer_2d import DualTransformer2DModel
-from ...models.embeddings import (
- GaussianFourierProjection,
- ImageHintTimeEmbedding,
- ImageProjection,
- ImageTimeEmbedding,
- TextImageProjection,
- TextImageTimeEmbedding,
- TextTimeEmbedding,
- TimestepEmbedding,
- Timesteps,
-)
-from ...models.transformer_2d import Transformer2DModel
-from ...models.unet_2d_condition import UNet2DConditionOutput
-from ...utils import is_torch_version, logging
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def get_down_block(
- down_block_type,
- num_layers,
- in_channels,
- out_channels,
- temb_channels,
- add_downsample,
- resnet_eps,
- resnet_act_fn,
- num_attention_heads,
- resnet_groups=None,
- cross_attention_dim=None,
- downsample_padding=None,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- resnet_time_scale_shift="default",
- resnet_skip_time_act=False,
- resnet_out_scale_factor=1.0,
- cross_attention_norm=None,
-):
- down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type
- if down_block_type == "DownBlockFlat":
- return DownBlockFlat(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- downsample_padding=downsample_padding,
- resnet_time_scale_shift=resnet_time_scale_shift,
- )
- elif down_block_type == "CrossAttnDownBlockFlat":
- if cross_attention_dim is None:
- raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlockFlat")
- return CrossAttnDownBlockFlat(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- downsample_padding=downsample_padding,
- cross_attention_dim=cross_attention_dim,
- num_attention_heads=num_attention_heads,
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- resnet_time_scale_shift=resnet_time_scale_shift,
- )
- raise ValueError(f"{down_block_type} is not supported.")
-
-
-def get_up_block(
- up_block_type,
- num_layers,
- in_channels,
- out_channels,
- prev_output_channel,
- temb_channels,
- add_upsample,
- resnet_eps,
- resnet_act_fn,
- num_attention_heads,
- resnet_groups=None,
- cross_attention_dim=None,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- resnet_time_scale_shift="default",
- resnet_skip_time_act=False,
- resnet_out_scale_factor=1.0,
- cross_attention_norm=None,
-):
- up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type
- if up_block_type == "UpBlockFlat":
- return UpBlockFlat(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- resnet_time_scale_shift=resnet_time_scale_shift,
- )
- elif up_block_type == "CrossAttnUpBlockFlat":
- if cross_attention_dim is None:
- raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlockFlat")
- return CrossAttnUpBlockFlat(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- cross_attention_dim=cross_attention_dim,
- num_attention_heads=num_attention_heads,
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- resnet_time_scale_shift=resnet_time_scale_shift,
- )
- raise ValueError(f"{up_block_type} is not supported.")
-
-
-# Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel with UNet2DConditionModel->UNetFlatConditionModel, nn.Conv2d->LinearMultiDim, Block2D->BlockFlat
-class UNetFlatConditionModel(ModelMixin, ConfigMixin):
- r"""
- A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
- shaped output.
-
-    This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
- for all models (such as downloading or saving).
-
- Parameters:
- sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`):
- Height and width of input/output sample.
- in_channels (`int`, *optional*, defaults to 4): Number of channels in the input sample.
- out_channels (`int`, *optional*, defaults to 4): Number of channels in the output.
- center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample.
- flip_sin_to_cos (`bool`, *optional*, defaults to `False`):
- Whether to flip the sin to cos in the time embedding.
- freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding.
- down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "DownBlockFlat")`):
- The tuple of downsample blocks to use.
- mid_block_type (`str`, *optional*, defaults to `"UNetMidBlockFlatCrossAttn"`):
- Block type for middle of UNet, it can be either `UNetMidBlockFlatCrossAttn` or
- `UNetMidBlockFlatSimpleCrossAttn`. If `None`, the mid block layer is skipped.
- up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat")`):
- The tuple of upsample blocks to use.
- only_cross_attention(`bool` or `Tuple[bool]`, *optional*, default to `False`):
- Whether to include self-attention in the basic transformer blocks, see
- [`~models.attention.BasicTransformerBlock`].
- block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`):
- The tuple of output channels for each block.
- layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
- downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution.
- mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block.
- act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
- norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization.
-            If `None`, normalization and activation layers are skipped in post-processing.
- norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization.
- cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280):
- The dimension of the cross attention features.
- transformer_layers_per_block (`int` or `Tuple[int]`, *optional*, defaults to 1):
- The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`]. Only relevant for
- [`~models.unet_2d_blocks.CrossAttnDownBlockFlat`], [`~models.unet_2d_blocks.CrossAttnUpBlockFlat`],
- [`~models.unet_2d_blocks.UNetMidBlockFlatCrossAttn`].
- encoder_hid_dim (`int`, *optional*, defaults to None):
- If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim`
- dimension to `cross_attention_dim`.
- encoder_hid_dim_type (`str`, *optional*, defaults to `None`):
-            If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text
-            embeddings of dimension `cross_attention_dim` according to `encoder_hid_dim_type`.
- attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads.
- num_attention_heads (`int`, *optional*):
- The number of attention heads. If not defined, defaults to `attention_head_dim`
- resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config
- for ResNet blocks (see [`~models.resnet.ResnetBlockFlat`]). Choose from `default` or `scale_shift`.
- class_embed_type (`str`, *optional*, defaults to `None`):
- The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`,
- `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- addition_embed_type (`str`, *optional*, defaults to `None`):
- Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or
- "text". "text" will use the `TextTimeEmbedding` layer.
- addition_time_embed_dim: (`int`, *optional*, defaults to `None`):
- Dimension for the timestep embeddings.
- num_class_embeds (`int`, *optional*, defaults to `None`):
- Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing
- class conditioning with `class_embed_type` equal to `None`.
- time_embedding_type (`str`, *optional*, defaults to `positional`):
- The type of position embedding to use for timesteps. Choose from `positional` or `fourier`.
- time_embedding_dim (`int`, *optional*, defaults to `None`):
- An optional override for the dimension of the projected time embedding.
- time_embedding_act_fn (`str`, *optional*, defaults to `None`):
- Optional activation function to use only once on the time embeddings before they are passed to the rest of
- the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`.
- timestep_post_act (`str`, *optional*, defaults to `None`):
- The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`.
- time_cond_proj_dim (`int`, *optional*, defaults to `None`):
- The dimension of `cond_proj` layer in the timestep embedding.
- conv_in_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_in` layer.
- conv_out_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_out` layer.
- projection_class_embeddings_input_dim (`int`, *optional*): The dimension of the `class_labels` input when
- `class_embed_type="projection"`. Required when `class_embed_type="projection"`.
- class_embeddings_concat (`bool`, *optional*, defaults to `False`): Whether to concatenate the time
- embeddings with the class embeddings.
- mid_block_only_cross_attention (`bool`, *optional*, defaults to `None`):
- Whether to use cross attention with the mid block when using the `UNetMidBlockFlatSimpleCrossAttn`. If
- `only_cross_attention` is given as a single boolean and `mid_block_only_cross_attention` is `None`, the
-            `only_cross_attention` value is used as the value for `mid_block_only_cross_attention`. Defaults to `False`
- otherwise.
- """
-
- _supports_gradient_checkpointing = True
-
- @register_to_config
- def __init__(
- self,
- sample_size: Optional[int] = None,
- in_channels: int = 4,
- out_channels: int = 4,
- center_input_sample: bool = False,
- flip_sin_to_cos: bool = True,
- freq_shift: int = 0,
- down_block_types: Tuple[str] = (
- "CrossAttnDownBlockFlat",
- "CrossAttnDownBlockFlat",
- "CrossAttnDownBlockFlat",
- "DownBlockFlat",
- ),
- mid_block_type: Optional[str] = "UNetMidBlockFlatCrossAttn",
- up_block_types: Tuple[str] = (
- "UpBlockFlat",
- "CrossAttnUpBlockFlat",
- "CrossAttnUpBlockFlat",
- "CrossAttnUpBlockFlat",
- ),
- only_cross_attention: Union[bool, Tuple[bool]] = False,
- block_out_channels: Tuple[int] = (320, 640, 1280, 1280),
- layers_per_block: Union[int, Tuple[int]] = 2,
- downsample_padding: int = 1,
- mid_block_scale_factor: float = 1,
- act_fn: str = "silu",
- norm_num_groups: Optional[int] = 32,
- norm_eps: float = 1e-5,
- cross_attention_dim: Union[int, Tuple[int]] = 1280,
- transformer_layers_per_block: Union[int, Tuple[int]] = 1,
- encoder_hid_dim: Optional[int] = None,
- encoder_hid_dim_type: Optional[str] = None,
- attention_head_dim: Union[int, Tuple[int]] = 8,
- num_attention_heads: Optional[Union[int, Tuple[int]]] = None,
- dual_cross_attention: bool = False,
- use_linear_projection: bool = False,
- class_embed_type: Optional[str] = None,
- addition_embed_type: Optional[str] = None,
- addition_time_embed_dim: Optional[int] = None,
- num_class_embeds: Optional[int] = None,
- upcast_attention: bool = False,
- resnet_time_scale_shift: str = "default",
- resnet_skip_time_act: bool = False,
-        resnet_out_scale_factor: float = 1.0,
- time_embedding_type: str = "positional",
- time_embedding_dim: Optional[int] = None,
- time_embedding_act_fn: Optional[str] = None,
- timestep_post_act: Optional[str] = None,
- time_cond_proj_dim: Optional[int] = None,
- conv_in_kernel: int = 3,
- conv_out_kernel: int = 3,
- projection_class_embeddings_input_dim: Optional[int] = None,
- class_embeddings_concat: bool = False,
- mid_block_only_cross_attention: Optional[bool] = None,
- cross_attention_norm: Optional[str] = None,
- addition_embed_type_num_heads=64,
- ):
- super().__init__()
-
- self.sample_size = sample_size
-
- if num_attention_heads is not None:
- raise ValueError(
- "At the moment it is not possible to define the number of attention heads via `num_attention_heads`"
- " because of a naming issue as described in"
- " https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131. Passing"
- " `num_attention_heads` will only be supported in diffusers v0.19."
- )
-
- # If `num_attention_heads` is not defined (which is the case for most models)
- # it will default to `attention_head_dim`. This looks weird upon first reading it and it is.
- # The reason for this behavior is to correct for incorrectly named variables that were introduced
- # when this library was created. The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131
- # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking
- # which is why we correct for the naming here.
- num_attention_heads = num_attention_heads or attention_head_dim
-
- # Check inputs
- if len(down_block_types) != len(up_block_types):
- raise ValueError(
- "Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`:"
- f" {down_block_types}. `up_block_types`: {up_block_types}."
- )
-
- if len(block_out_channels) != len(down_block_types):
- raise ValueError(
- "Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`:"
- f" {block_out_channels}. `down_block_types`: {down_block_types}."
- )
-
- if not isinstance(only_cross_attention, bool) and len(only_cross_attention) != len(down_block_types):
- raise ValueError(
- "Must provide the same number of `only_cross_attention` as `down_block_types`."
- f" `only_cross_attention`: {only_cross_attention}. `down_block_types`: {down_block_types}."
- )
-
- if not isinstance(num_attention_heads, int) and len(num_attention_heads) != len(down_block_types):
- raise ValueError(
- "Must provide the same number of `num_attention_heads` as `down_block_types`. `num_attention_heads`:"
- f" {num_attention_heads}. `down_block_types`: {down_block_types}."
- )
-
- if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types):
- raise ValueError(
- "Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`:"
- f" {attention_head_dim}. `down_block_types`: {down_block_types}."
- )
-
- if isinstance(cross_attention_dim, list) and len(cross_attention_dim) != len(down_block_types):
- raise ValueError(
- "Must provide the same number of `cross_attention_dim` as `down_block_types`. `cross_attention_dim`:"
- f" {cross_attention_dim}. `down_block_types`: {down_block_types}."
- )
-
- if not isinstance(layers_per_block, int) and len(layers_per_block) != len(down_block_types):
- raise ValueError(
- "Must provide the same number of `layers_per_block` as `down_block_types`. `layers_per_block`:"
- f" {layers_per_block}. `down_block_types`: {down_block_types}."
- )
-
- # input
- conv_in_padding = (conv_in_kernel - 1) // 2
- self.conv_in = LinearMultiDim(
- in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding
- )
-
- # time
- if time_embedding_type == "fourier":
- time_embed_dim = time_embedding_dim or block_out_channels[0] * 2
- if time_embed_dim % 2 != 0:
- raise ValueError(f"`time_embed_dim` should be divisible by 2, but is {time_embed_dim}.")
- self.time_proj = GaussianFourierProjection(
- time_embed_dim // 2, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos
- )
- timestep_input_dim = time_embed_dim
- elif time_embedding_type == "positional":
- time_embed_dim = time_embedding_dim or block_out_channels[0] * 4
-
- self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift)
- timestep_input_dim = block_out_channels[0]
- else:
- raise ValueError(
- f"{time_embedding_type} does not exist. Please make sure to use one of `fourier` or `positional`."
- )
-
- self.time_embedding = TimestepEmbedding(
- timestep_input_dim,
- time_embed_dim,
- act_fn=act_fn,
- post_act_fn=timestep_post_act,
- cond_proj_dim=time_cond_proj_dim,
- )
-
- if encoder_hid_dim_type is None and encoder_hid_dim is not None:
- encoder_hid_dim_type = "text_proj"
- self.register_to_config(encoder_hid_dim_type=encoder_hid_dim_type)
- logger.info("encoder_hid_dim_type defaults to 'text_proj' as `encoder_hid_dim` is defined.")
-
- if encoder_hid_dim is None and encoder_hid_dim_type is not None:
- raise ValueError(
- f"`encoder_hid_dim` has to be defined when `encoder_hid_dim_type` is set to {encoder_hid_dim_type}."
- )
-
- if encoder_hid_dim_type == "text_proj":
- self.encoder_hid_proj = nn.Linear(encoder_hid_dim, cross_attention_dim)
- elif encoder_hid_dim_type == "text_image_proj":
- # image_embed_dim DOESN'T have to be `cross_attention_dim`. To not clutter the __init__ too much
- # they are set to `cross_attention_dim` here as this is exactly the required dimension for the currently only use
-            # case when `addition_embed_type == "text_image_proj"` (Kandinsky 2.1).
- self.encoder_hid_proj = TextImageProjection(
- text_embed_dim=encoder_hid_dim,
- image_embed_dim=cross_attention_dim,
- cross_attention_dim=cross_attention_dim,
- )
- elif encoder_hid_dim_type == "image_proj":
- # Kandinsky 2.2
- self.encoder_hid_proj = ImageProjection(
- image_embed_dim=encoder_hid_dim,
- cross_attention_dim=cross_attention_dim,
- )
- elif encoder_hid_dim_type is not None:
- raise ValueError(
- f"encoder_hid_dim_type: {encoder_hid_dim_type} must be None, 'text_proj', 'text_image_proj' or 'image_proj'."
- )
- else:
- self.encoder_hid_proj = None
-
- # class embedding
- if class_embed_type is None and num_class_embeds is not None:
- self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim)
- elif class_embed_type == "timestep":
- self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim, act_fn=act_fn)
- elif class_embed_type == "identity":
- self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim)
- elif class_embed_type == "projection":
- if projection_class_embeddings_input_dim is None:
- raise ValueError(
- "`class_embed_type`: 'projection' requires `projection_class_embeddings_input_dim` be set"
- )
- # The projection `class_embed_type` is the same as the timestep `class_embed_type` except
- # 1. the `class_labels` inputs are not first converted to sinusoidal embeddings
- # 2. it projects from an arbitrary input dimension.
- #
- # Note that `TimestepEmbedding` is quite general, being mainly linear layers and activations.
- # When used for embedding actual timesteps, the timesteps are first converted to sinusoidal embeddings.
- # As a result, `TimestepEmbedding` can be passed arbitrary vectors.
- self.class_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim)
- elif class_embed_type == "simple_projection":
- if projection_class_embeddings_input_dim is None:
- raise ValueError(
- "`class_embed_type`: 'simple_projection' requires `projection_class_embeddings_input_dim` be set"
- )
- self.class_embedding = nn.Linear(projection_class_embeddings_input_dim, time_embed_dim)
- else:
- self.class_embedding = None
-
- if addition_embed_type == "text":
- if encoder_hid_dim is not None:
- text_time_embedding_from_dim = encoder_hid_dim
- else:
- text_time_embedding_from_dim = cross_attention_dim
-
- self.add_embedding = TextTimeEmbedding(
- text_time_embedding_from_dim, time_embed_dim, num_heads=addition_embed_type_num_heads
- )
- elif addition_embed_type == "text_image":
- # text_embed_dim and image_embed_dim DON'T have to be `cross_attention_dim`. To avoid cluttering the __init__
- # too much, they are set to `cross_attention_dim` here, as this is exactly the dimension required by the only
- # current use case, `addition_embed_type == "text_image"` (Kandinsky 2.1).
- self.add_embedding = TextImageTimeEmbedding(
- text_embed_dim=cross_attention_dim, image_embed_dim=cross_attention_dim, time_embed_dim=time_embed_dim
- )
- elif addition_embed_type == "text_time":
- self.add_time_proj = Timesteps(addition_time_embed_dim, flip_sin_to_cos, freq_shift)
- self.add_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim)
- elif addition_embed_type == "image":
- # Kandinsky 2.2
- self.add_embedding = ImageTimeEmbedding(image_embed_dim=encoder_hid_dim, time_embed_dim=time_embed_dim)
- elif addition_embed_type == "image_hint":
- # Kandinsky 2.2 ControlNet
- self.add_embedding = ImageHintTimeEmbedding(image_embed_dim=encoder_hid_dim, time_embed_dim=time_embed_dim)
- elif addition_embed_type is not None:
- raise ValueError(f"addition_embed_type: {addition_embed_type} must be None, 'text', 'text_image', 'text_time', 'image' or 'image_hint'.")
-
- if time_embedding_act_fn is None:
- self.time_embed_act = None
- else:
- self.time_embed_act = get_activation(time_embedding_act_fn)
-
- self.down_blocks = nn.ModuleList([])
- self.up_blocks = nn.ModuleList([])
-
- if isinstance(only_cross_attention, bool):
- if mid_block_only_cross_attention is None:
- mid_block_only_cross_attention = only_cross_attention
-
- only_cross_attention = [only_cross_attention] * len(down_block_types)
-
- if mid_block_only_cross_attention is None:
- mid_block_only_cross_attention = False
-
- if isinstance(num_attention_heads, int):
- num_attention_heads = (num_attention_heads,) * len(down_block_types)
-
- if isinstance(attention_head_dim, int):
- attention_head_dim = (attention_head_dim,) * len(down_block_types)
-
- if isinstance(cross_attention_dim, int):
- cross_attention_dim = (cross_attention_dim,) * len(down_block_types)
-
- if isinstance(layers_per_block, int):
- layers_per_block = [layers_per_block] * len(down_block_types)
-
- if isinstance(transformer_layers_per_block, int):
- transformer_layers_per_block = [transformer_layers_per_block] * len(down_block_types)
-
- if class_embeddings_concat:
- # The time embeddings are concatenated with the class embeddings. The dimension of the
- # time embeddings passed to the down, middle, and up blocks is twice the dimension of the
- # regular time embeddings
- blocks_time_embed_dim = time_embed_dim * 2
- else:
- blocks_time_embed_dim = time_embed_dim
-
- # down
- output_channel = block_out_channels[0]
- for i, down_block_type in enumerate(down_block_types):
- input_channel = output_channel
- output_channel = block_out_channels[i]
- is_final_block = i == len(block_out_channels) - 1
-
- down_block = get_down_block(
- down_block_type,
- num_layers=layers_per_block[i],
- transformer_layers_per_block=transformer_layers_per_block[i],
- in_channels=input_channel,
- out_channels=output_channel,
- temb_channels=blocks_time_embed_dim,
- add_downsample=not is_final_block,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- resnet_groups=norm_num_groups,
- cross_attention_dim=cross_attention_dim[i],
- num_attention_heads=num_attention_heads[i],
- downsample_padding=downsample_padding,
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention[i],
- upcast_attention=upcast_attention,
- resnet_time_scale_shift=resnet_time_scale_shift,
- resnet_skip_time_act=resnet_skip_time_act,
- resnet_out_scale_factor=resnet_out_scale_factor,
- cross_attention_norm=cross_attention_norm,
- attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel,
- )
- self.down_blocks.append(down_block)
-
- # mid
- if mid_block_type == "UNetMidBlockFlatCrossAttn":
- self.mid_block = UNetMidBlockFlatCrossAttn(
- transformer_layers_per_block=transformer_layers_per_block[-1],
- in_channels=block_out_channels[-1],
- temb_channels=blocks_time_embed_dim,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- output_scale_factor=mid_block_scale_factor,
- resnet_time_scale_shift=resnet_time_scale_shift,
- cross_attention_dim=cross_attention_dim[-1],
- num_attention_heads=num_attention_heads[-1],
- resnet_groups=norm_num_groups,
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- upcast_attention=upcast_attention,
- )
- elif mid_block_type == "UNetMidBlockFlatSimpleCrossAttn":
- self.mid_block = UNetMidBlockFlatSimpleCrossAttn(
- in_channels=block_out_channels[-1],
- temb_channels=blocks_time_embed_dim,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- output_scale_factor=mid_block_scale_factor,
- cross_attention_dim=cross_attention_dim[-1],
- attention_head_dim=attention_head_dim[-1],
- resnet_groups=norm_num_groups,
- resnet_time_scale_shift=resnet_time_scale_shift,
- skip_time_act=resnet_skip_time_act,
- only_cross_attention=mid_block_only_cross_attention,
- cross_attention_norm=cross_attention_norm,
- )
- elif mid_block_type is None:
- self.mid_block = None
- else:
- raise ValueError(f"unknown mid_block_type: {mid_block_type}")
-
- # count how many layers upsample the images
- self.num_upsamplers = 0
-
- # up
- reversed_block_out_channels = list(reversed(block_out_channels))
- reversed_num_attention_heads = list(reversed(num_attention_heads))
- reversed_layers_per_block = list(reversed(layers_per_block))
- reversed_cross_attention_dim = list(reversed(cross_attention_dim))
- reversed_transformer_layers_per_block = list(reversed(transformer_layers_per_block))
- only_cross_attention = list(reversed(only_cross_attention))
-
- output_channel = reversed_block_out_channels[0]
- for i, up_block_type in enumerate(up_block_types):
- is_final_block = i == len(block_out_channels) - 1
-
- prev_output_channel = output_channel
- output_channel = reversed_block_out_channels[i]
- input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
-
- # add upsample block for all BUT final layer
- if not is_final_block:
- add_upsample = True
- self.num_upsamplers += 1
- else:
- add_upsample = False
-
- up_block = get_up_block(
- up_block_type,
- num_layers=reversed_layers_per_block[i] + 1,
- transformer_layers_per_block=reversed_transformer_layers_per_block[i],
- in_channels=input_channel,
- out_channels=output_channel,
- prev_output_channel=prev_output_channel,
- temb_channels=blocks_time_embed_dim,
- add_upsample=add_upsample,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- resnet_groups=norm_num_groups,
- cross_attention_dim=reversed_cross_attention_dim[i],
- num_attention_heads=reversed_num_attention_heads[i],
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention[i],
- upcast_attention=upcast_attention,
- resnet_time_scale_shift=resnet_time_scale_shift,
- resnet_skip_time_act=resnet_skip_time_act,
- resnet_out_scale_factor=resnet_out_scale_factor,
- cross_attention_norm=cross_attention_norm,
- attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel,
- )
- self.up_blocks.append(up_block)
- prev_output_channel = output_channel
-
- # out
- if norm_num_groups is not None:
- self.conv_norm_out = nn.GroupNorm(
- num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps
- )
-
- self.conv_act = get_activation(act_fn)
-
- else:
- self.conv_norm_out = None
- self.conv_act = None
-
- conv_out_padding = (conv_out_kernel - 1) // 2
- self.conv_out = LinearMultiDim(
- block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding
- )
-
- @property
- def attn_processors(self) -> Dict[str, AttentionProcessor]:
- r"""
- Returns:
- `dict` of attention processors: A dictionary containing all attention processors used in the model,
- indexed by their weight names.
- """
- # set recursively
- processors = {}
-
- def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]):
- if hasattr(module, "set_processor"):
- processors[f"{name}.processor"] = module.processor
-
- for sub_name, child in module.named_children():
- fn_recursive_add_processors(f"{name}.{sub_name}", child, processors)
-
- return processors
-
- for name, module in self.named_children():
- fn_recursive_add_processors(name, module, processors)
-
- return processors
-
- def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]):
- r"""
- Sets the attention processor to use to compute attention.
-
- Parameters:
- processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`):
- The instantiated processor class or a dictionary of processor classes that will be set as the processor
- for **all** `Attention` layers.
-
- If `processor` is a dict, the key needs to define the path to the corresponding cross attention
- processor. This is strongly recommended when setting trainable attention processors.
-
- """
- count = len(self.attn_processors.keys())
-
- if isinstance(processor, dict) and len(processor) != count:
- raise ValueError(
- f"A dict of processors was passed, but the number of processors {len(processor)} does not match the"
- f" number of attention layers: {count}. Please make sure to pass {count} processor classes."
- )
-
- def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor):
- if hasattr(module, "set_processor"):
- if not isinstance(processor, dict):
- module.set_processor(processor)
- else:
- module.set_processor(processor.pop(f"{name}.processor"))
-
- for sub_name, child in module.named_children():
- fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)
-
- for name, module in self.named_children():
- fn_recursive_attn_processor(name, module, processor)
-
- def set_default_attn_processor(self):
- """
- Disables custom attention processors and sets the default attention implementation.
- """
- self.set_attn_processor(AttnProcessor())
-
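For orientation, a hedged usage sketch of the hooks above; `unet` and `MyProcessor` are illustrative placeholders (any instance following the `AttentionProcessor` interface would do), not names from the deleted file:

processors = unet.attn_processors  # dict keyed by weight name, e.g. "mid_block.attentions.0...processor"
unet.set_attn_processor({name: MyProcessor() for name in processors})  # one processor per attention layer
unet.set_default_attn_processor()  # revert every layer to the stock AttnProcessor()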
- def set_attention_slice(self, slice_size):
- r"""
- Enable sliced attention computation.
-
- When this option is enabled, the attention module splits the input tensor in slices to compute attention in
- several steps. This is useful for saving some memory in exchange for a small decrease in speed.
-
- Args:
- slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
- When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
- `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
- provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
- must be a multiple of `slice_size`.
- """
- sliceable_head_dims = []
-
- def fn_recursive_retrieve_sliceable_dims(module: torch.nn.Module):
- if hasattr(module, "set_attention_slice"):
- sliceable_head_dims.append(module.sliceable_head_dim)
-
- for child in module.children():
- fn_recursive_retrieve_sliceable_dims(child)
-
- # retrieve number of attention layers
- for module in self.children():
- fn_recursive_retrieve_sliceable_dims(module)
-
- num_sliceable_layers = len(sliceable_head_dims)
-
- if slice_size == "auto":
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = [dim // 2 for dim in sliceable_head_dims]
- elif slice_size == "max":
- # make smallest slice possible
- slice_size = num_sliceable_layers * [1]
-
- slice_size = num_sliceable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size
-
- if len(slice_size) != len(sliceable_head_dims):
- raise ValueError(
- f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different"
- f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}."
- )
-
- for i in range(len(slice_size)):
- size = slice_size[i]
- dim = sliceable_head_dims[i]
- if size is not None and size > dim:
- raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
-
- # Recursively walk through all the children.
- # Any children which exposes the set_attention_slice method
- # gets the message
- def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]):
- if hasattr(module, "set_attention_slice"):
- module.set_attention_slice(slice_size.pop())
-
- for child in module.children():
- fn_recursive_set_attention_slice(child, slice_size)
-
- reversed_slice_size = list(reversed(slice_size))
- for module in self.children():
- fn_recursive_set_attention_slice(module, reversed_slice_size)
-
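To make the slicing rule above concrete, a minimal sketch of how `slice_size` is resolved, assuming a single sliceable module with `sliceable_head_dim == 8`:

sliceable_head_dims = [8]
auto_slices = [dim // 2 for dim in sliceable_head_dims]  # "auto" -> [4]: attention runs in two steps
max_slices = [1] * len(sliceable_head_dims)              # "max"  -> [1]: one slice at a time, lowest memory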
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, (CrossAttnDownBlockFlat, DownBlockFlat, CrossAttnUpBlockFlat, UpBlockFlat)):
- module.gradient_checkpointing = value
-
- def forward(
- self,
- sample: torch.FloatTensor,
- timestep: Union[torch.Tensor, float, int],
- encoder_hidden_states: torch.Tensor,
- class_labels: Optional[torch.Tensor] = None,
- timestep_cond: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,
- down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None,
- mid_block_additional_residual: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- return_dict: bool = True,
- ) -> Union[UNet2DConditionOutput, Tuple]:
- r"""
- The [`UNetFlatConditionModel`] forward method.
-
- Args:
- sample (`torch.FloatTensor`):
- The noisy input tensor with the following shape `(batch, channel, height, width)`.
- timestep (`torch.FloatTensor` or `float` or `int`): The timestep at which to denoise the input.
- encoder_hidden_states (`torch.FloatTensor`):
- The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
- encoder_attention_mask (`torch.Tensor`):
- A cross-attention mask of shape `(batch, sequence_length)` is applied to `encoder_hidden_states`. If
- `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias,
- which adds large negative values to the attention scores corresponding to "discard" tokens.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain
- tuple.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the [`AttnProcessor`].
- added_cond_kwargs: (`dict`, *optional*):
- A kwargs dictionary containing additional embeddings that, if specified, are added to the embeddings
- passed along to the UNet blocks.
-
- Returns:
- [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`:
- If `return_dict` is True, an [`~models.unet_2d_condition.UNet2DConditionOutput`] is returned, otherwise
- a `tuple` is returned where the first element is the sample tensor.
- """
- # By default, the sample size has to be a multiple of the overall upsampling factor.
- # The overall upsampling factor is equal to 2 ** (# num of upsampling layers).
- # However, the upsampling interpolation output size can be forced to fit any upsampling size
- # on the fly if necessary.
- default_overall_up_factor = 2**self.num_upsamplers
-
- # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor`
- forward_upsample_size = False
- upsample_size = None
-
- if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
- logger.info("Forward upsample size to force interpolation output size.")
- forward_upsample_size = True
-
- # ensure attention_mask is a bias, and give it a singleton query_tokens dimension
- # expects mask of shape:
- # [batch, key_tokens]
- # adds singleton query_tokens dimension:
- # [batch, 1, key_tokens]
- # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes:
- # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn)
- # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn)
- if attention_mask is not None:
- # assume that mask is expressed as:
- # (1 = keep, 0 = discard)
- # convert mask into a bias that can be added to attention scores:
- # (keep = +0, discard = -10000.0)
- attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0
- attention_mask = attention_mask.unsqueeze(1)
-
- # convert encoder_attention_mask to a bias the same way we do for attention_mask
- if encoder_attention_mask is not None:
- encoder_attention_mask = (1 - encoder_attention_mask.to(sample.dtype)) * -10000.0
- encoder_attention_mask = encoder_attention_mask.unsqueeze(1)
-
- # 0. center input if necessary
- if self.config.center_input_sample:
- sample = 2 * sample - 1.0
-
- # 1. time
- timesteps = timestep
- if not torch.is_tensor(timesteps):
- # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can
- # This would be a good case for the `match` statement (Python 3.10+)
- is_mps = sample.device.type == "mps"
- if isinstance(timestep, float):
- dtype = torch.float32 if is_mps else torch.float64
- else:
- dtype = torch.int32 if is_mps else torch.int64
- timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device)
- elif len(timesteps.shape) == 0:
- timesteps = timesteps[None].to(sample.device)
-
- # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
- timesteps = timesteps.expand(sample.shape[0])
-
- t_emb = self.time_proj(timesteps)
-
- # `Timesteps` does not contain any weights and will always return f32 tensors
- # but time_embedding might actually be running in fp16. so we need to cast here.
- # there might be better ways to encapsulate this.
- t_emb = t_emb.to(dtype=sample.dtype)
-
- emb = self.time_embedding(t_emb, timestep_cond)
- aug_emb = None
-
- if self.class_embedding is not None:
- if class_labels is None:
- raise ValueError("class_labels should be provided when num_class_embeds > 0")
-
- if self.config.class_embed_type == "timestep":
- class_labels = self.time_proj(class_labels)
-
- # `Timesteps` does not contain any weights and will always return f32 tensors
- # there might be better ways to encapsulate this.
- class_labels = class_labels.to(dtype=sample.dtype)
-
- class_emb = self.class_embedding(class_labels).to(dtype=sample.dtype)
-
- if self.config.class_embeddings_concat:
- emb = torch.cat([emb, class_emb], dim=-1)
- else:
- emb = emb + class_emb
-
- if self.config.addition_embed_type == "text":
- aug_emb = self.add_embedding(encoder_hidden_states)
- elif self.config.addition_embed_type == "text_image":
- # Kandinsky 2.1 - style
- if "image_embeds" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `addition_embed_type` set to 'text_image' which requires"
- " the keyword argument `image_embeds` to be passed in `added_cond_kwargs`"
- )
-
- image_embs = added_cond_kwargs.get("image_embeds")
- text_embs = added_cond_kwargs.get("text_embeds", encoder_hidden_states)
- aug_emb = self.add_embedding(text_embs, image_embs)
- elif self.config.addition_embed_type == "text_time":
- # SDXL - style
- if "text_embeds" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires"
- " the keyword argument `text_embeds` to be passed in `added_cond_kwargs`"
- )
- text_embeds = added_cond_kwargs.get("text_embeds")
- if "time_ids" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires"
- " the keyword argument `time_ids` to be passed in `added_cond_kwargs`"
- )
- time_ids = added_cond_kwargs.get("time_ids")
- time_embeds = self.add_time_proj(time_ids.flatten())
- time_embeds = time_embeds.reshape((text_embeds.shape[0], -1))
-
- add_embeds = torch.concat([text_embeds, time_embeds], dim=-1)
- add_embeds = add_embeds.to(emb.dtype)
- aug_emb = self.add_embedding(add_embeds)
- elif self.config.addition_embed_type == "image":
- # Kandinsky 2.2 - style
- if "image_embeds" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `addition_embed_type` set to 'image' which requires the"
- " keyword argument `image_embeds` to be passed in `added_cond_kwargs`"
- )
- image_embs = added_cond_kwargs.get("image_embeds")
- aug_emb = self.add_embedding(image_embs)
- elif self.config.addition_embed_type == "image_hint":
- # Kandinsky 2.2 - style
- if "image_embeds" not in added_cond_kwargs or "hint" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `addition_embed_type` set to 'image_hint' which requires"
- " the keyword arguments `image_embeds` and `hint` to be passed in `added_cond_kwargs`"
- )
- image_embs = added_cond_kwargs.get("image_embeds")
- hint = added_cond_kwargs.get("hint")
- aug_emb, hint = self.add_embedding(image_embs, hint)
- sample = torch.cat([sample, hint], dim=1)
-
- emb = emb + aug_emb if aug_emb is not None else emb
-
- if self.time_embed_act is not None:
- emb = self.time_embed_act(emb)
-
- if self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_proj":
- encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states)
- elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_image_proj":
- # Kandinsky 2.1 - style
- if "image_embeds" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'text_image_proj' which"
- " requires the keyword argument `image_embeds` to be passed in `added_cond_kwargs`"
- )
-
- image_embeds = added_cond_kwargs.get("image_embeds")
- encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states, image_embeds)
- elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "image_proj":
- # Kandinsky 2.2 - style
- if "image_embeds" not in added_cond_kwargs:
- raise ValueError(
- f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'image_proj' which requires"
- " the keyword argument `image_embeds` to be passed in `added_cond_kwargs`"
- )
- image_embeds = added_cond_kwargs.get("image_embeds")
- encoder_hidden_states = self.encoder_hid_proj(image_embeds)
- # 2. pre-process
- sample = self.conv_in(sample)
-
- # 3. down
-
- is_controlnet = mid_block_additional_residual is not None and down_block_additional_residuals is not None
- is_adapter = mid_block_additional_residual is None and down_block_additional_residuals is not None
-
- down_block_res_samples = (sample,)
- for downsample_block in self.down_blocks:
- if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention:
- # For t2i-adapter CrossAttnDownBlockFlat
- additional_residuals = {}
- if is_adapter and len(down_block_additional_residuals) > 0:
- additional_residuals["additional_residuals"] = down_block_additional_residuals.pop(0)
-
- sample, res_samples = downsample_block(
- hidden_states=sample,
- temb=emb,
- encoder_hidden_states=encoder_hidden_states,
- attention_mask=attention_mask,
- cross_attention_kwargs=cross_attention_kwargs,
- encoder_attention_mask=encoder_attention_mask,
- **additional_residuals,
- )
- else:
- sample, res_samples = downsample_block(hidden_states=sample, temb=emb)
-
- if is_adapter and len(down_block_additional_residuals) > 0:
- sample += down_block_additional_residuals.pop(0)
-
- down_block_res_samples += res_samples
-
- if is_controlnet:
- new_down_block_res_samples = ()
-
- for down_block_res_sample, down_block_additional_residual in zip(
- down_block_res_samples, down_block_additional_residuals
- ):
- down_block_res_sample = down_block_res_sample + down_block_additional_residual
- new_down_block_res_samples = new_down_block_res_samples + (down_block_res_sample,)
-
- down_block_res_samples = new_down_block_res_samples
-
- # 4. mid
- if self.mid_block is not None:
- sample = self.mid_block(
- sample,
- emb,
- encoder_hidden_states=encoder_hidden_states,
- attention_mask=attention_mask,
- cross_attention_kwargs=cross_attention_kwargs,
- encoder_attention_mask=encoder_attention_mask,
- )
-
- if is_controlnet:
- sample = sample + mid_block_additional_residual
-
- # 5. up
- for i, upsample_block in enumerate(self.up_blocks):
- is_final_block = i == len(self.up_blocks) - 1
-
- res_samples = down_block_res_samples[-len(upsample_block.resnets) :]
- down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)]
-
- # if we have not reached the final block and need to forward the
- # upsample size, we do it here
- if not is_final_block and forward_upsample_size:
- upsample_size = down_block_res_samples[-1].shape[2:]
-
- if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention:
- sample = upsample_block(
- hidden_states=sample,
- temb=emb,
- res_hidden_states_tuple=res_samples,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- upsample_size=upsample_size,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- )
- else:
- sample = upsample_block(
- hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size
- )
-
- # 6. post-process
- if self.conv_norm_out:
- sample = self.conv_norm_out(sample)
- sample = self.conv_act(sample)
- sample = self.conv_out(sample)
-
- if not return_dict:
- return (sample,)
-
- return UNet2DConditionOutput(sample=sample)
-
-
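The forward pass above converts boolean attention masks into additive biases before they reach the attention layers; a standalone sketch of that conversion, using plain torch only:

import torch

def mask_to_bias(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # (1 = keep, 0 = discard) -> (keep = 0.0, discard = -10000.0), with a singleton
    # query-token dimension so the bias broadcasts over the attention scores
    return ((1 - mask.to(dtype)) * -10000.0).unsqueeze(1)

print(mask_to_bias(torch.tensor([[1, 1, 0]]), torch.float32))  # tensor of shape [1, 1, 3]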
-class LinearMultiDim(nn.Linear):
- def __init__(self, in_features, out_features=None, second_dim=4, *args, **kwargs):
- in_features = [in_features, second_dim, 1] if isinstance(in_features, int) else list(in_features)
- if out_features is None:
- out_features = in_features
- out_features = [out_features, second_dim, 1] if isinstance(out_features, int) else list(out_features)
- self.in_features_multidim = in_features
- self.out_features_multidim = out_features
- super().__init__(np.array(in_features).prod(), np.array(out_features).prod())
-
- def forward(self, input_tensor, *args, **kwargs):
- shape = input_tensor.shape
- n_dim = len(self.in_features_multidim)
- input_tensor = input_tensor.reshape(*shape[0:-n_dim], self.in_features)
- output_tensor = super().forward(input_tensor)
- output_tensor = output_tensor.view(*shape[0:-n_dim], *self.out_features_multidim)
- return output_tensor
-
-
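A hedged shape sketch of what `LinearMultiDim` above does: flatten the trailing `[channels, second_dim, 1]` dimensions, apply an ordinary linear layer, and restore the multi-dim layout. The concrete sizes below are illustrative:

import numpy as np
import torch

in_features, out_features = [320, 4, 1], [640, 4, 1]
linear = torch.nn.Linear(int(np.prod(in_features)), int(np.prod(out_features)))

x = torch.randn(2, 77, *in_features)                          # [batch, tokens, 320, 4, 1]
y = linear(x.reshape(2, 77, -1)).view(2, 77, *out_features)   # [batch, tokens, 640, 4, 1]
print(y.shape)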
-class ResnetBlockFlat(nn.Module):
- def __init__(
- self,
- *,
- in_channels,
- out_channels=None,
- dropout=0.0,
- temb_channels=512,
- groups=32,
- groups_out=None,
- pre_norm=True,
- eps=1e-6,
- time_embedding_norm="default",
- use_in_shortcut=None,
- second_dim=4,
- **kwargs,
- ):
- super().__init__()
- self.pre_norm = pre_norm
- self.pre_norm = True
-
- in_channels = [in_channels, second_dim, 1] if isinstance(in_channels, int) else list(in_channels)
- self.in_channels_prod = np.array(in_channels).prod()
- self.channels_multidim = in_channels
-
- if out_channels is not None:
- out_channels = [out_channels, second_dim, 1] if isinstance(out_channels, int) else list(out_channels)
- out_channels_prod = np.array(out_channels).prod()
- self.out_channels_multidim = out_channels
- else:
- out_channels_prod = self.in_channels_prod
- self.out_channels_multidim = self.channels_multidim
- self.time_embedding_norm = time_embedding_norm
-
- if groups_out is None:
- groups_out = groups
-
- self.norm1 = torch.nn.GroupNorm(num_groups=groups, num_channels=self.in_channels_prod, eps=eps, affine=True)
- self.conv1 = torch.nn.Conv2d(self.in_channels_prod, out_channels_prod, kernel_size=1, padding=0)
-
- if temb_channels is not None:
- self.time_emb_proj = torch.nn.Linear(temb_channels, out_channels_prod)
- else:
- self.time_emb_proj = None
-
- self.norm2 = torch.nn.GroupNorm(num_groups=groups_out, num_channels=out_channels_prod, eps=eps, affine=True)
- self.dropout = torch.nn.Dropout(dropout)
- self.conv2 = torch.nn.Conv2d(out_channels_prod, out_channels_prod, kernel_size=1, padding=0)
-
- self.nonlinearity = nn.SiLU()
-
- self.use_in_shortcut = (
- self.in_channels_prod != out_channels_prod if use_in_shortcut is None else use_in_shortcut
- )
-
- self.conv_shortcut = None
- if self.use_in_shortcut:
- self.conv_shortcut = torch.nn.Conv2d(
- self.in_channels_prod, out_channels_prod, kernel_size=1, stride=1, padding=0
- )
-
- def forward(self, input_tensor, temb):
- shape = input_tensor.shape
- n_dim = len(self.channels_multidim)
- input_tensor = input_tensor.reshape(*shape[0:-n_dim], self.in_channels_prod, 1, 1)
- input_tensor = input_tensor.view(-1, self.in_channels_prod, 1, 1)
-
- hidden_states = input_tensor
-
- hidden_states = self.norm1(hidden_states)
- hidden_states = self.nonlinearity(hidden_states)
- hidden_states = self.conv1(hidden_states)
-
- if temb is not None:
- temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None]
- hidden_states = hidden_states + temb
-
- hidden_states = self.norm2(hidden_states)
- hidden_states = self.nonlinearity(hidden_states)
-
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.conv2(hidden_states)
-
- if self.conv_shortcut is not None:
- input_tensor = self.conv_shortcut(input_tensor)
-
- output_tensor = input_tensor + hidden_states
-
- output_tensor = output_tensor.view(*shape[0:-n_dim], -1)
- output_tensor = output_tensor.view(*shape[0:-n_dim], *self.out_channels_multidim)
-
- return output_tensor
-
-
-# Copied from diffusers.models.unet_2d_blocks.DownBlock2D with DownBlock2D->DownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim
-class DownBlockFlat(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_downsample=True,
- downsample_padding=1,
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlockFlat(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- LinearMultiDim(
- out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(self, hidden_states, temb=None):
- output_states = ()
-
- for resnet in self.resnets:
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs)
-
- return custom_forward
-
- if is_torch_version(">=", "1.11.0"):
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb, use_reentrant=False
- )
- else:
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- else:
- hidden_states = resnet(hidden_states, temb)
-
- output_states = output_states + (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states = output_states + (hidden_states,)
-
- return hidden_states, output_states
-
-
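The training branch above wraps each resnet in the usual gradient-checkpointing pattern; a minimal generic sketch of that pattern in plain torch (assuming torch >= 1.11 for `use_reentrant`):

import torch

def checkpointed(module, *inputs):
    def custom_forward(*args):
        return module(*args)
    # recompute activations during backward instead of storing them, trading compute for memory
    return torch.utils.checkpoint.checkpoint(custom_forward, *inputs, use_reentrant=False)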
-# Copied from diffusers.models.unet_2d_blocks.CrossAttnDownBlock2D with CrossAttnDownBlock2D->CrossAttnDownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim
-class CrossAttnDownBlockFlat(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- transformer_layers_per_block: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- num_attention_heads=1,
- cross_attention_dim=1280,
- output_scale_factor=1.0,
- downsample_padding=1,
- add_downsample=True,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.has_cross_attention = True
- self.num_attention_heads = num_attention_heads
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlockFlat(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- if not dual_cross_attention:
- attentions.append(
- Transformer2DModel(
- num_attention_heads,
- out_channels // num_attention_heads,
- in_channels=out_channels,
- num_layers=transformer_layers_per_block,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
- )
- )
- else:
- attentions.append(
- DualTransformer2DModel(
- num_attention_heads,
- out_channels // num_attention_heads,
- in_channels=out_channels,
- num_layers=1,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- )
- )
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- LinearMultiDim(
- out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- additional_residuals=None,
- ):
- output_states = ()
-
- blocks = list(zip(self.resnets, self.attentions))
-
- for i, (resnet, attn) in enumerate(blocks):
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module, return_dict=None):
- def custom_forward(*inputs):
- if return_dict is not None:
- return module(*inputs, return_dict=return_dict)
- else:
- return module(*inputs)
-
- return custom_forward
-
- ckpt_kwargs: Dict[str, Any] = {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet),
- hidden_states,
- temb,
- **ckpt_kwargs,
- )
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(attn, return_dict=False),
- hidden_states,
- encoder_hidden_states,
- None, # timestep
- None, # class_labels
- cross_attention_kwargs,
- attention_mask,
- encoder_attention_mask,
- **ckpt_kwargs,
- )[0]
- else:
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- return_dict=False,
- )[0]
-
- # apply additional residuals to the output of the last pair of resnet and attention blocks
- if i == len(blocks) - 1 and additional_residuals is not None:
- hidden_states = hidden_states + additional_residuals
-
- output_states = output_states + (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states = output_states + (hidden_states,)
-
- return hidden_states, output_states
-
-
-# Copied from diffusers.models.unet_2d_blocks.UpBlock2D with UpBlock2D->UpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim
-class UpBlockFlat(nn.Module):
- def __init__(
- self,
- in_channels: int,
- prev_output_channel: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_upsample=True,
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlockFlat(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs)
-
- return custom_forward
-
- if is_torch_version(">=", "1.11.0"):
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb, use_reentrant=False
- )
- else:
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- else:
- hidden_states = resnet(hidden_states, temb)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
-
-# Copied from diffusers.models.unet_2d_blocks.CrossAttnUpBlock2D with CrossAttnUpBlock2D->CrossAttnUpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim
-class CrossAttnUpBlockFlat(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- prev_output_channel: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- transformer_layers_per_block: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- num_attention_heads=1,
- cross_attention_dim=1280,
- output_scale_factor=1.0,
- add_upsample=True,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.has_cross_attention = True
- self.num_attention_heads = num_attention_heads
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlockFlat(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- if not dual_cross_attention:
- attentions.append(
- Transformer2DModel(
- num_attention_heads,
- out_channels // num_attention_heads,
- in_channels=out_channels,
- num_layers=transformer_layers_per_block,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
- )
- )
- else:
- attentions.append(
- DualTransformer2DModel(
- num_attention_heads,
- out_channels // num_attention_heads,
- in_channels=out_channels,
- num_layers=1,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- )
- )
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- upsample_size: Optional[int] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ):
- for resnet, attn in zip(self.resnets, self.attentions):
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module, return_dict=None):
- def custom_forward(*inputs):
- if return_dict is not None:
- return module(*inputs, return_dict=return_dict)
- else:
- return module(*inputs)
-
- return custom_forward
-
- ckpt_kwargs: Dict[str, Any] = {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet),
- hidden_states,
- temb,
- **ckpt_kwargs,
- )
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(attn, return_dict=False),
- hidden_states,
- encoder_hidden_states,
- None, # timestep
- None, # class_labels
- cross_attention_kwargs,
- attention_mask,
- encoder_attention_mask,
- **ckpt_kwargs,
- )[0]
- else:
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- return_dict=False,
- )[0]
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
-
-# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DCrossAttn with UNetMidBlock2DCrossAttn->UNetMidBlockFlatCrossAttn, ResnetBlock2D->ResnetBlockFlat
-class UNetMidBlockFlatCrossAttn(nn.Module):
- def __init__(
- self,
- in_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- transformer_layers_per_block: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- num_attention_heads=1,
- output_scale_factor=1.0,
- cross_attention_dim=1280,
- dual_cross_attention=False,
- use_linear_projection=False,
- upcast_attention=False,
- ):
- super().__init__()
-
- self.has_cross_attention = True
- self.num_attention_heads = num_attention_heads
- resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
-
- # there is always at least one resnet
- resnets = [
- ResnetBlockFlat(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- ]
- attentions = []
-
- for _ in range(num_layers):
- if not dual_cross_attention:
- attentions.append(
- Transformer2DModel(
- num_attention_heads,
- in_channels // num_attention_heads,
- in_channels=in_channels,
- num_layers=transformer_layers_per_block,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- use_linear_projection=use_linear_projection,
- upcast_attention=upcast_attention,
- )
- )
- else:
- attentions.append(
- DualTransformer2DModel(
- num_attention_heads,
- in_channels // num_attention_heads,
- in_channels=in_channels,
- num_layers=1,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- )
- )
- resnets.append(
- ResnetBlockFlat(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ) -> torch.FloatTensor:
- hidden_states = self.resnets[0](hidden_states, temb)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- return_dict=False,
- )[0]
- hidden_states = resnet(hidden_states, temb)
-
- return hidden_states
-
-
-# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DSimpleCrossAttn with UNetMidBlock2DSimpleCrossAttn->UNetMidBlockFlatSimpleCrossAttn, ResnetBlock2D->ResnetBlockFlat
-class UNetMidBlockFlatSimpleCrossAttn(nn.Module):
- def __init__(
- self,
- in_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attention_head_dim=1,
- output_scale_factor=1.0,
- cross_attention_dim=1280,
- skip_time_act=False,
- only_cross_attention=False,
- cross_attention_norm=None,
- ):
- super().__init__()
-
- self.has_cross_attention = True
-
- self.attention_head_dim = attention_head_dim
- resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
-
- self.num_heads = in_channels // self.attention_head_dim
-
- # there is always at least one resnet
- resnets = [
- ResnetBlockFlat(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- skip_time_act=skip_time_act,
- )
- ]
- attentions = []
-
- for _ in range(num_layers):
- processor = (
- AttnAddedKVProcessor2_0() if hasattr(F, "scaled_dot_product_attention") else AttnAddedKVProcessor()
- )
-
- attentions.append(
- Attention(
- query_dim=in_channels,
- cross_attention_dim=in_channels,
- heads=self.num_heads,
- dim_head=self.attention_head_dim,
- added_kv_proj_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- bias=True,
- upcast_softmax=True,
- only_cross_attention=only_cross_attention,
- cross_attention_norm=cross_attention_norm,
- processor=processor,
- )
- )
- resnets.append(
- ResnetBlockFlat(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- skip_time_act=skip_time_act,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ):
- cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {}
-
- if attention_mask is None:
- # if encoder_hidden_states is defined: we are doing cross-attn, so we should use cross-attn mask.
- mask = None if encoder_hidden_states is None else encoder_attention_mask
- else:
- # when attention_mask is defined: we don't even check for encoder_attention_mask.
- # this is to maintain compatibility with UnCLIP, which uses 'attention_mask' param for cross-attn masks.
- # TODO: UnCLIP should express cross-attn mask via encoder_attention_mask param instead of via attention_mask.
- # then we can simplify this whole if/else block to:
- # mask = attention_mask if encoder_hidden_states is None else encoder_attention_mask
- mask = attention_mask
-
- hidden_states = self.resnets[0](hidden_states, temb)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- # attn
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- attention_mask=mask,
- **cross_attention_kwargs,
- )
-
- # resnet
- hidden_states = resnet(hidden_states, temb)
-
- return hidden_states
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_1x_coco.py
deleted file mode 100644
index 76566bdb0fe827f222924142c22c846a86fd1d32..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,108 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-# model settings
-model = dict(
- type='VFNet',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs=True,
- extra_convs_on_inputs=False, # use P5
- num_outs=5,
- relu_before_extra_convs=True),
- bbox_head=dict(
- type='VFNetHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=3,
- feat_channels=256,
- strides=[8, 16, 32, 64, 128],
- center_sampling=False,
- dcn_on_last_conv=False,
- use_atss=True,
- use_vfl=True,
- loss_cls=dict(
- type='VarifocalLoss',
- use_sigmoid=True,
- alpha=0.75,
- gamma=2.0,
- iou_weighted=True,
- loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=1.5),
- loss_bbox_refine=dict(type='GIoULoss', loss_weight=2.0)),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(type='ATSSAssigner', topk=9),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.6),
- max_per_img=100))
-
-# data setting
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-
-# optimizer
-optimizer = dict(
- lr=0.01, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.))
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=0.1,
- step=[8, 11])
-runner = dict(type='EpochBasedRunner', max_epochs=12)
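A hedged sketch of how a config like the one above is typically consumed with MMDetection 2.x; the path is illustrative and mmcv/mmdet are assumed to be installed:

from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/vfnet/vfnet_r50_fpn_1x_coco.py')
model = build_detector(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))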
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/voc.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/voc.py
deleted file mode 100644
index abd4cb8947238936faff48fc92c093c8ae06daff..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/voc.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from collections import OrderedDict
-
-from mmcv.utils import print_log
-
-from mmdet.core import eval_map, eval_recalls
-from .builder import DATASETS
-from .xml_style import XMLDataset
-
-
-@DATASETS.register_module()
-class VOCDataset(XMLDataset):
-
- CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',
- 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',
- 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',
- 'tvmonitor')
-
- def __init__(self, **kwargs):
- super(VOCDataset, self).__init__(**kwargs)
- if 'VOC2007' in self.img_prefix:
- self.year = 2007
- elif 'VOC2012' in self.img_prefix:
- self.year = 2012
- else:
- raise ValueError('Cannot infer dataset year from img_prefix')
-
- def evaluate(self,
- results,
- metric='mAP',
- logger=None,
- proposal_nums=(100, 300, 1000),
- iou_thr=0.5,
- scale_ranges=None):
- """Evaluate in VOC protocol.
-
- Args:
- results (list[list | tuple]): Testing results of the dataset.
- metric (str | list[str]): Metrics to be evaluated. Options are
- 'mAP', 'recall'.
- logger (logging.Logger | str, optional): Logger used for printing
- related information during evaluation. Default: None.
- proposal_nums (Sequence[int]): Proposal number used for evaluating
- recalls, such as recall@100, recall@1000.
- Default: (100, 300, 1000).
- iou_thr (float | list[float]): IoU threshold. Default: 0.5.
- scale_ranges (list[tuple], optional): Scale ranges for evaluating
- mAP. If not specified, all bounding boxes would be included in
- evaluation. Default: None.
-
- Returns:
- dict[str, float]: AP/recall metrics.
- """
-
- if not isinstance(metric, str):
- assert len(metric) == 1
- metric = metric[0]
- allowed_metrics = ['mAP', 'recall']
- if metric not in allowed_metrics:
- raise KeyError(f'metric {metric} is not supported')
- annotations = [self.get_ann_info(i) for i in range(len(self))]
- eval_results = OrderedDict()
- iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr
- if metric == 'mAP':
- assert isinstance(iou_thrs, list)
- if self.year == 2007:
- ds_name = 'voc07'
- else:
- ds_name = self.CLASSES
- mean_aps = []
- for iou_thr in iou_thrs:
- print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}')
- mean_ap, _ = eval_map(
- results,
- annotations,
- scale_ranges=None,
- iou_thr=iou_thr,
- dataset=ds_name,
- logger=logger)
- mean_aps.append(mean_ap)
- eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3)
- eval_results['mAP'] = sum(mean_aps) / len(mean_aps)
- elif metric == 'recall':
- gt_bboxes = [ann['bboxes'] for ann in annotations]
- recalls = eval_recalls(
- gt_bboxes, results, proposal_nums, iou_thr, logger=logger)
- for i, num in enumerate(proposal_nums):
- for j, iou in enumerate(iou_thr):
- eval_results[f'recall@{num}@{iou}'] = recalls[i, j]
- if recalls.shape[1] > 1:
- ar = recalls.mean(axis=1)
- for i, num in enumerate(proposal_nums):
- eval_results[f'AR@{num}'] = ar[i]
- return eval_results
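
A brief sketch of how the `evaluate` method above is typically driven; `dataset` (an already-built `VOCDataset`) and `results` (per-image detections from MMDetection's test loop) are assumed inputs and are not constructed here:

```python
# Hypothetical call: `dataset` is a built VOCDataset, `results` comes from
# mmdet's single_gpu_test() / multi_gpu_test().
metrics = dataset.evaluate(results, metric='mAP', iou_thr=[0.5, 0.75])
# Keys follow the f'AP{int(iou_thr * 100):02d}' pattern used above.
print(metrics['AP50'], metrics['AP75'], metrics['mAP'])
```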
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 22aaf857c3212d0b36b0b04e7990616025a3ef9b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index d6ade67b76ce04e1ede3ff99aab4863705cff446..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './encnet_r50-d8_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformerv2_demo/transforms.py b/spaces/Andy1621/uniformerv2_demo/transforms.py
deleted file mode 100644
index 2483fdf8569e25978b922774e84cc2244315fe61..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformerv2_demo/transforms.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import torchvision
-import random
-from PIL import Image, ImageOps
-import numpy as np
-import numbers
-import math
-import torch
-
-
-class GroupRandomCrop(object):
- def __init__(self, size):
- if isinstance(size, numbers.Number):
- self.size = (int(size), int(size))
- else:
- self.size = size
-
- def __call__(self, img_group):
-
- w, h = img_group[0].size
- th, tw = self.size
-
- out_images = list()
-
- x1 = random.randint(0, w - tw)
- y1 = random.randint(0, h - th)
-
- for img in img_group:
- assert(img.size[0] == w and img.size[1] == h)
- if w == tw and h == th:
- out_images.append(img)
- else:
- out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
-
- return out_images
-
-
-class MultiGroupRandomCrop(object):
- def __init__(self, size, groups=1):
- if isinstance(size, numbers.Number):
- self.size = (int(size), int(size))
- else:
- self.size = size
- self.groups = groups
-
- def __call__(self, img_group):
-
- w, h = img_group[0].size
- th, tw = self.size
-
- out_images = list()
-
- for i in range(self.groups):
- x1 = random.randint(0, w - tw)
- y1 = random.randint(0, h - th)
-
- for img in img_group:
- assert(img.size[0] == w and img.size[1] == h)
- if w == tw and h == th:
- out_images.append(img)
- else:
- out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
-
- return out_images
-
-
-class GroupCenterCrop(object):
- def __init__(self, size):
- self.worker = torchvision.transforms.CenterCrop(size)
-
- def __call__(self, img_group):
- return [self.worker(img) for img in img_group]
-
-
-class GroupRandomHorizontalFlip(object):
- """Randomly horizontally flips the given PIL.Image with a probability of 0.5
- """
-
- def __init__(self, is_flow=False):
- self.is_flow = is_flow
-
- def __call__(self, img_group, is_flow=False):
- v = random.random()
- if v < 0.5:
- ret = [img.transpose(Image.FLIP_LEFT_RIGHT) for img in img_group]
- if self.is_flow:
- for i in range(0, len(ret), 2):
- # invert flow pixel values when flipping
- ret[i] = ImageOps.invert(ret[i])
- return ret
- else:
- return img_group
-
-
-class GroupNormalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, tensor):
- rep_mean = self.mean * (tensor.size()[0] // len(self.mean))
- rep_std = self.std * (tensor.size()[0] // len(self.std))
-
- # TODO: make efficient
- for t, m, s in zip(tensor, rep_mean, rep_std):
- t.sub_(m).div_(s)
-
- return tensor
-
-
-class GroupScale(object):
- """ Rescales the input PIL.Image to the given 'size'.
- 'size' will be the size of the smaller edge.
- For example, if height > width, then image will be
- rescaled to (size * height / width, size)
- size: size of the smaller edge
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, interpolation=Image.BILINEAR):
- self.worker = torchvision.transforms.Resize(size, interpolation)
-
- def __call__(self, img_group):
- return [self.worker(img) for img in img_group]
-
-
-class GroupOverSample(object):
- def __init__(self, crop_size, scale_size=None, flip=True):
- self.crop_size = crop_size if not isinstance(
- crop_size, int) else (crop_size, crop_size)
-
- if scale_size is not None:
- self.scale_worker = GroupScale(scale_size)
- else:
- self.scale_worker = None
- self.flip = flip
-
- def __call__(self, img_group):
-
- if self.scale_worker is not None:
- img_group = self.scale_worker(img_group)
-
- image_w, image_h = img_group[0].size
- crop_w, crop_h = self.crop_size
-
- offsets = GroupMultiScaleCrop.fill_fix_offset(
- False, image_w, image_h, crop_w, crop_h)
- oversample_group = list()
- for o_w, o_h in offsets:
- normal_group = list()
- flip_group = list()
- for i, img in enumerate(img_group):
- crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h))
- normal_group.append(crop)
- flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT)
-
- if img.mode == 'L' and i % 2 == 0:
- flip_group.append(ImageOps.invert(flip_crop))
- else:
- flip_group.append(flip_crop)
-
- oversample_group.extend(normal_group)
- if self.flip:
- oversample_group.extend(flip_group)
- return oversample_group
-
-
-class GroupFullResSample(object):
- def __init__(self, crop_size, scale_size=None, flip=True):
- self.crop_size = crop_size if not isinstance(
- crop_size, int) else (crop_size, crop_size)
-
- if scale_size is not None:
- self.scale_worker = GroupScale(scale_size)
- else:
- self.scale_worker = None
- self.flip = flip
-
- def __call__(self, img_group):
-
- if self.scale_worker is not None:
- img_group = self.scale_worker(img_group)
-
- image_w, image_h = img_group[0].size
- crop_w, crop_h = self.crop_size
-
- w_step = (image_w - crop_w) // 4
- h_step = (image_h - crop_h) // 4
-
- offsets = list()
- offsets.append((0 * w_step, 2 * h_step)) # left
- offsets.append((4 * w_step, 2 * h_step)) # right
- offsets.append((2 * w_step, 2 * h_step)) # center
-
- oversample_group = list()
- for o_w, o_h in offsets:
- normal_group = list()
- flip_group = list()
- for i, img in enumerate(img_group):
- crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h))
- normal_group.append(crop)
- if self.flip:
- flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT)
-
- if img.mode == 'L' and i % 2 == 0:
- flip_group.append(ImageOps.invert(flip_crop))
- else:
- flip_group.append(flip_crop)
-
- oversample_group.extend(normal_group)
- oversample_group.extend(flip_group)
- return oversample_group
-
-
-class GroupMultiScaleCrop(object):
-
- def __init__(self, input_size, scales=None, max_distort=1,
- fix_crop=True, more_fix_crop=True):
- self.scales = scales if scales is not None else [1, .875, .75, .66]
- self.max_distort = max_distort
- self.fix_crop = fix_crop
- self.more_fix_crop = more_fix_crop
- self.input_size = input_size if not isinstance(input_size, int) else [
- input_size, input_size]
- self.interpolation = Image.BILINEAR
-
- def __call__(self, img_group):
-
- im_size = img_group[0].size
-
- crop_w, crop_h, offset_w, offset_h = self._sample_crop_size(im_size)
- crop_img_group = [
- img.crop(
- (offset_w,
- offset_h,
- offset_w +
- crop_w,
- offset_h +
- crop_h)) for img in img_group]
- ret_img_group = [img.resize((self.input_size[0], self.input_size[1]), self.interpolation)
- for img in crop_img_group]
- return ret_img_group
-
- def _sample_crop_size(self, im_size):
- image_w, image_h = im_size[0], im_size[1]
-
- # find a crop size
- base_size = min(image_w, image_h)
- crop_sizes = [int(base_size * x) for x in self.scales]
- crop_h = [
- self.input_size[1] if abs(
- x - self.input_size[1]) < 3 else x for x in crop_sizes]
- crop_w = [
- self.input_size[0] if abs(
- x - self.input_size[0]) < 3 else x for x in crop_sizes]
-
- pairs = []
- for i, h in enumerate(crop_h):
- for j, w in enumerate(crop_w):
- if abs(i - j) <= self.max_distort:
- pairs.append((w, h))
-
- crop_pair = random.choice(pairs)
- if not self.fix_crop:
- w_offset = random.randint(0, image_w - crop_pair[0])
- h_offset = random.randint(0, image_h - crop_pair[1])
- else:
- w_offset, h_offset = self._sample_fix_offset(
- image_w, image_h, crop_pair[0], crop_pair[1])
-
- return crop_pair[0], crop_pair[1], w_offset, h_offset
-
- def _sample_fix_offset(self, image_w, image_h, crop_w, crop_h):
- offsets = self.fill_fix_offset(
- self.more_fix_crop, image_w, image_h, crop_w, crop_h)
- return random.choice(offsets)
-
- @staticmethod
- def fill_fix_offset(more_fix_crop, image_w, image_h, crop_w, crop_h):
- w_step = (image_w - crop_w) // 4
- h_step = (image_h - crop_h) // 4
-
- ret = list()
- ret.append((0, 0)) # upper left
- ret.append((4 * w_step, 0)) # upper right
- ret.append((0, 4 * h_step)) # lower left
- ret.append((4 * w_step, 4 * h_step)) # lower right
- ret.append((2 * w_step, 2 * h_step)) # center
-
- if more_fix_crop:
- ret.append((0, 2 * h_step)) # center left
- ret.append((4 * w_step, 2 * h_step)) # center right
- ret.append((2 * w_step, 4 * h_step)) # lower center
- ret.append((2 * w_step, 0 * h_step)) # upper center
-
- ret.append((1 * w_step, 1 * h_step)) # upper left quarter
- ret.append((3 * w_step, 1 * h_step)) # upper right quarter
- ret.append((1 * w_step, 3 * h_step)) # lower left quarter
-            ret.append((3 * w_step, 3 * h_step))  # lower right quarter
-
- return ret
-
-
-class GroupRandomSizedCrop(object):
- """Random crop the given PIL.Image to a random size of (0.08 to 1.0) of the original size
-    and a random aspect ratio of 3/4 to 4/3 of the original aspect ratio
- This is popularly used to train the Inception networks
- size: size of the smaller edge
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, interpolation=Image.BILINEAR):
- self.size = size
- self.interpolation = interpolation
-
- def __call__(self, img_group):
- for attempt in range(10):
- area = img_group[0].size[0] * img_group[0].size[1]
- target_area = random.uniform(0.08, 1.0) * area
- aspect_ratio = random.uniform(3. / 4, 4. / 3)
-
- w = int(round(math.sqrt(target_area * aspect_ratio)))
- h = int(round(math.sqrt(target_area / aspect_ratio)))
-
- if random.random() < 0.5:
- w, h = h, w
-
- if w <= img_group[0].size[0] and h <= img_group[0].size[1]:
- x1 = random.randint(0, img_group[0].size[0] - w)
- y1 = random.randint(0, img_group[0].size[1] - h)
- found = True
- break
- else:
- found = False
- x1 = 0
- y1 = 0
-
- if found:
- out_group = list()
- for img in img_group:
- img = img.crop((x1, y1, x1 + w, y1 + h))
- assert(img.size == (w, h))
- out_group.append(
- img.resize(
- (self.size, self.size), self.interpolation))
- return out_group
- else:
- # Fallback
- scale = GroupScale(self.size, interpolation=self.interpolation)
- crop = GroupRandomCrop(self.size)
- return crop(scale(img_group))
-
-
-class ConvertDataFormat(object):
- def __init__(self, model_type):
- self.model_type = model_type
-
- def __call__(self, images):
- if self.model_type == '2D':
- return images
- tc, h, w = images.size()
- t = tc // 3
- images = images.view(t, 3, h, w)
- images = images.permute(1, 0, 2, 3)
- return images
-
-
-class Stack(object):
-
- def __init__(self, roll=False):
- self.roll = roll
-
- def __call__(self, img_group):
- if img_group[0].mode == 'L':
- return np.concatenate([np.expand_dims(x, 2)
- for x in img_group], axis=2)
- elif img_group[0].mode == 'RGB':
- if self.roll:
- return np.concatenate([np.array(x)[:, :, ::-1]
- for x in img_group], axis=2)
- else:
- #print(np.concatenate(img_group, axis=2).shape)
- # print(img_group[0].shape)
- return np.concatenate(img_group, axis=2)
-
-
-class ToTorchFormatTensor(object):
- """ Converts a PIL.Image (RGB) or numpy.ndarray (H x W x C) in the range [0, 255]
- to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] """
-
- def __init__(self, div=True):
- self.div = div
-
- def __call__(self, pic):
- if isinstance(pic, np.ndarray):
- # handle numpy array
- img = torch.from_numpy(pic).permute(2, 0, 1).contiguous()
- else:
- # handle PIL Image
- img = torch.ByteTensor(
- torch.ByteStorage.from_buffer(
- pic.tobytes()))
- img = img.view(pic.size[1], pic.size[0], len(pic.mode))
- # put it from HWC to CHW format
- # yikes, this transpose takes 80% of the loading time/CPU
- img = img.transpose(0, 1).transpose(0, 2).contiguous()
- return img.float().div(255) if self.div else img.float()
-
-
-class IdentityTransform(object):
-
- def __call__(self, data):
- return data
-
-
-if __name__ == "__main__":
- trans = torchvision.transforms.Compose([
- GroupScale(256),
- GroupRandomCrop(224),
- Stack(),
- ToTorchFormatTensor(),
- GroupNormalize(
- mean=[.485, .456, .406],
- std=[.229, .224, .225]
- )]
- )
-
- im = Image.open('../tensorflow-model-zoo.torch/lena_299.png')
-
- color_group = [im] * 3
- rst = trans(color_group)
-
- gray_group = [im.convert('L')] * 9
- gray_rst = trans(gray_group)
-
- trans2 = torchvision.transforms.Compose([
- GroupRandomSizedCrop(256),
- Stack(),
- ToTorchFormatTensor(),
- GroupNormalize(
- mean=[.485, .456, .406],
- std=[.229, .224, .225])
- ])
- print(trans2(color_group))
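
The `__main__` block above only exercises the 2D path. Below is a minimal sketch of a clip-level pipeline for a 3D backbone, assuming the classes defined in this file are in scope; the synthetic frames, crop size and normalization constants are illustrative:

```python
# Sketch: group transforms for an 8-frame RGB clip, ending in (C, T, H, W) layout.
import torchvision
from PIL import Image

frames = [Image.new('RGB', (256, 256)) for _ in range(8)]   # stand-in for decoded frames
pipeline = torchvision.transforms.Compose([
    GroupScale(224),                       # shorter edge -> 224
    GroupCenterCrop(224),
    Stack(),                               # (H, W, T*3) numpy array
    ToTorchFormatTensor(),                 # (T*3, H, W) float tensor in [0, 1]
    GroupNormalize(mean=[.485, .456, .406], std=[.229, .224, .225]),
    ConvertDataFormat('3D'),               # reshape to (3, T, H, W)
])
clip = pipeline(frames)
print(clip.shape)  # torch.Size([3, 8, 224, 224])
```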
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_wsl.bat b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_wsl.bat
deleted file mode 100644
index d7bacead6b0ea94656ecacd8bccede01d7d53cc8..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_wsl.bat
+++ /dev/null
@@ -1,11 +0,0 @@
-@echo off
-
-cd /D "%~dp0"
-
-set PATH=%PATH%;%SystemRoot%\system32
-
-@rem sed -i 's/\x0D$//' ./wsl.sh converts newlines to unix format in the wsl script
-call wsl -e bash -lic "sed -i 's/\x0D$//' ./wsl.sh; source ./wsl.sh %*"
-
-:end
-pause
diff --git a/spaces/Arnx/MusicGenXvAKN/tests/common_utils/wav_utils.py b/spaces/Arnx/MusicGenXvAKN/tests/common_utils/wav_utils.py
deleted file mode 100644
index d3a563ee1749a58217ece55c9a08b8d93c0fc386..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/tests/common_utils/wav_utils.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from pathlib import Path
-import typing as tp
-
-import torch
-import torchaudio
-
-
-def get_white_noise(chs: int = 1, num_frames: int = 1):
- wav = torch.randn(chs, num_frames)
- return wav
-
-
-def get_batch_white_noise(bs: int = 1, chs: int = 1, num_frames: int = 1):
- wav = torch.randn(bs, chs, num_frames)
- return wav
-
-
-def save_wav(path: str, wav: torch.Tensor, sample_rate: int):
- fp = Path(path)
- kwargs: tp.Dict[str, tp.Any] = {}
- if fp.suffix == '.wav':
- kwargs['encoding'] = 'PCM_S'
- kwargs['bits_per_sample'] = 16
- elif fp.suffix == '.mp3':
- kwargs['compression'] = 320
- torchaudio.save(str(fp), wav, sample_rate, **kwargs)
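
A short usage sketch for the helpers above (assumes `torch` and `torchaudio` are installed; the output file name and sample rate are illustrative):

```python
# Two seconds of stereo white noise at 16 kHz, written as 16-bit PCM by save_wav().
wav = get_white_noise(chs=2, num_frames=32000)
save_wav('noise.wav', wav, sample_rate=16000)
batch = get_batch_white_noise(bs=4, chs=1, num_frames=16000)
print(wav.shape, batch.shape)  # torch.Size([2, 32000]) torch.Size([4, 1, 16000])
```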
diff --git a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/tuneavideo/tuneavideo_text2video.py b/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/tuneavideo/tuneavideo_text2video.py
deleted file mode 100644
index 774f7ee6d259953ea8716b05b1f6b99c92b0e9bb..0000000000000000000000000000000000000000
--- a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/tuneavideo/tuneavideo_text2video.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import gradio as gr
-import torch
-
-from video_diffusion.tuneavideo.models.unet import UNet3DConditionModel
-from video_diffusion.tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
-from video_diffusion.tuneavideo.util import save_videos_grid
-from video_diffusion.utils.model_list import stable_model_list
-
-video_diffusion_model_list = [
- "Tune-A-Video-library/a-man-is-surfing",
- "Tune-A-Video-library/mo-di-bear-guitar",
- "Tune-A-Video-library/redshift-man-skiing",
-]
-
-
-class TunaVideoText2VideoGenerator:
- def __init__(self):
- self.pipe = None
- self.unet = None
-
- def load_model(self, video_diffusion_model_list, stable_model_list):
- if self.pipe is None:
- if self.unet is None:
- self.unet = UNet3DConditionModel.from_pretrained(
- video_diffusion_model_list, subfolder="unet", torch_dtype=torch.float16
- ).to("cuda")
-
- self.pipe = TuneAVideoPipeline.from_pretrained(
- stable_model_list, unet=self.unet, torch_dtype=torch.float16
- )
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
-
- return self.pipe
-
- def generate_video(
- self,
- video_diffusion_model: str,
- stable_model_list: str,
- prompt: str,
- negative_prompt: str,
- video_length: int,
- height: int,
- width: int,
- num_inference_steps: int,
- guidance_scale: int,
- fps: int,
- ):
- pipe = self.load_model(video_diffusion_model, stable_model_list)
- video = pipe(
- prompt,
- negative_prompt=negative_prompt,
- video_length=video_length,
- height=height,
- width=width,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- ).videos
-
- save_videos_grid(videos=video, path="output.gif", fps=fps)
- return "output.gif"
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- tunevideo_video_diffusion_model_list = gr.Dropdown(
- choices=video_diffusion_model_list,
- label="Video Diffusion Model",
- value=video_diffusion_model_list[0],
- )
- tunevideo_stable_model_list = gr.Dropdown(
- choices=stable_model_list,
- label="Stable Model List",
- value=stable_model_list[0],
- )
- with gr.Row():
- with gr.Column():
- tunevideo_prompt = gr.Textbox(
- lines=1,
- placeholder="Prompt",
- show_label=False,
- )
- tunevideo_video_length = gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=10,
- label="Video Length",
- )
- tunevideo_num_inference_steps = gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=50,
- label="Num Inference Steps",
- )
- tunevideo_fps = gr.Slider(
- minimum=1,
- maximum=60,
- step=1,
- value=5,
- label="Fps",
- )
- with gr.Row():
- with gr.Column():
- tunevideo_negative_prompt = gr.Textbox(
- lines=1,
- placeholder="Negative Prompt",
- show_label=False,
- )
- tunevideo_guidance_scale = gr.Slider(
- minimum=1,
- maximum=15,
- step=1,
- value=7.5,
- label="Guidance Scale",
- )
- tunevideo_height = gr.Slider(
- minimum=1,
- maximum=1280,
- step=32,
- value=512,
- label="Height",
- )
- tunevideo_width = gr.Slider(
- minimum=1,
- maximum=1280,
- step=32,
- value=512,
- label="Width",
- )
- tunevideo_generate = gr.Button(value="Generator")
-
- with gr.Column():
- tunevideo_output = gr.Video(label="Output")
-
- tunevideo_generate.click(
- fn=TunaVideoText2VideoGenerator().generate_video,
- inputs=[
- tunevideo_video_diffusion_model_list,
- tunevideo_stable_model_list,
- tunevideo_prompt,
- tunevideo_negative_prompt,
- tunevideo_video_length,
- tunevideo_height,
- tunevideo_width,
- tunevideo_num_inference_steps,
- tunevideo_guidance_scale,
- tunevideo_fps,
- ],
- outputs=tunevideo_output,
- )
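
Outside the Gradio UI, the generator class above can also be driven directly. A hypothetical sketch (requires a CUDA GPU and downloads the referenced checkpoints; the Stable Diffusion model id and all argument values are assumptions for illustration):

```python
# Hypothetical direct call; argument values are illustrative only.
generator = TunaVideoText2VideoGenerator()
gif_path = generator.generate_video(
    video_diffusion_model="Tune-A-Video-library/a-man-is-surfing",
    stable_model_list="runwayml/stable-diffusion-v1-5",  # assumed SD 1.x base model
    prompt="a man is surfing at sunset",
    negative_prompt="low quality, blurry",
    video_length=8,
    height=512,
    width=512,
    num_inference_steps=50,
    guidance_scale=7.5,
    fps=8,
)
print(gif_path)  # "output.gif"
```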
diff --git a/spaces/AsakuraMizu/moe-tts/text/mandarin.py b/spaces/AsakuraMizu/moe-tts/text/mandarin.py
deleted file mode 100644
index ff71de9788e4f20c897b971a775d1ecfbfe1c7b7..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/text/mandarin.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import os
-import sys
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-import logging
-
-logging.getLogger('jieba').setLevel(logging.WARNING)
-jieba.initialize()
-
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ʃy', 'ʃ'),
- ('ʧʰy', 'ʧʰ'),
- ('ʧ⁼y', 'ʧ⁼'),
- ('NN', 'n'),
- ('Ng', 'ŋ'),
- ('y', 'j'),
- ('h', 'x')
-]]
-
-# List of (bopomofo, ipa) pairs:
-_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'x'),
- ('ㄐ', 'tʃ⁼'),
- ('ㄑ', 'tʃʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ts`⁼'),
- ('ㄔ', 'ts`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ts⁼'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'ɥæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'ɥn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'əŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (bopomofo, ipa2) pairs:
-_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'pwo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'tɕ'),
- ('ㄑ', 'tɕʰ'),
- ('ㄒ', 'ɕ'),
- ('ㄓ', 'tʂ'),
- ('ㄔ', 'tʂʰ'),
- ('ㄕ', 'ʂ'),
- ('ㄖ', 'ɻ'),
- ('ㄗ', 'ts'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ɤ'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'yæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'yn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'ɤŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'y'),
- ('ˉ', '˥'),
- ('ˊ', '˧˥'),
- ('ˇ', '˨˩˦'),
- ('ˋ', '˥˩'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def number_to_chinese(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- return text
-
-
-def chinese_to_bopomofo(text):
- text = text.replace('、', ',').replace(';', ',').replace(':', ',')
- words = jieba.lcut(text, cut_all=False)
- text = ''
- for word in words:
- bopomofos = lazy_pinyin(word, BOPOMOFO)
- if not re.search('[\u4e00-\u9fff]', word):
- text += word
- continue
- for i in range(len(bopomofos)):
- bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
- if text != '':
- text += ' '
- text += ''.join(bopomofos)
- return text
-
-
-def latin_to_bopomofo(text):
- for regex, replacement in _latin_to_bopomofo:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_romaji(text):
- for regex, replacement in _bopomofo_to_romaji:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa(text):
- for regex, replacement in _bopomofo_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa2(text):
- for regex, replacement in _bopomofo_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_romaji(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_romaji(text)
- text = re.sub('i([aoe])', r'y\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_lazy_ipa(text):
- text = chinese_to_romaji(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_ipa(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa(text)
- text = re.sub('i([aoe])', r'j\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_ipa2(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa2(text)
- text = re.sub(r'i([aoe])', r'j\1', text)
- text = re.sub(r'u([aoəe])', r'w\1', text)
- text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
- text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
- return text
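
An illustrative run of the conversion chain above (requires `pypinyin`, `jieba` and `cn2an`; the sample sentence is arbitrary and the exact phonetic output can vary with library versions):

```python
# Each call layers number conversion, bopomofo lookup, and the phoneme mappings above.
sample = '你好,世界123'
print(chinese_to_bopomofo(number_to_chinese(sample)))
print(chinese_to_romaji(sample))
print(chinese_to_ipa(sample))
print(chinese_to_ipa2(sample))
```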
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/glibc.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/glibc.py
deleted file mode 100644
index 7bd3c20681d865cb4fa42617cf939b5512c7663f..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/glibc.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import os
-import sys
-from typing import Optional, Tuple
-
-
-def glibc_version_string() -> Optional[str]:
- "Returns glibc version string, or None if not using glibc."
- return glibc_version_string_confstr() or glibc_version_string_ctypes()
-
-
-def glibc_version_string_confstr() -> Optional[str]:
- "Primary implementation of glibc_version_string using os.confstr."
- # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely
- # to be broken or missing. This strategy is used in the standard library
- # platform module:
- # https://github.com/python/cpython/blob/fcf1d003bf4f0100c9d0921ff3d70e1127ca1b71/Lib/platform.py#L175-L183
- if sys.platform == "win32":
- return None
- try:
- # os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17":
- _, version = os.confstr("CS_GNU_LIBC_VERSION").split()
- except (AttributeError, OSError, ValueError):
- # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)...
- return None
- return version
-
-
-def glibc_version_string_ctypes() -> Optional[str]:
- "Fallback implementation of glibc_version_string using ctypes."
-
- try:
- import ctypes
- except ImportError:
- return None
-
- # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen
- # manpage says, "If filename is NULL, then the returned handle is for the
- # main program". This way we can let the linker do the work to figure out
- # which libc our process is actually using.
- process_namespace = ctypes.CDLL(None)
- try:
- gnu_get_libc_version = process_namespace.gnu_get_libc_version
- except AttributeError:
- # Symbol doesn't exist -> therefore, we are not linked to
- # glibc.
- return None
-
- # Call gnu_get_libc_version, which returns a string like "2.5"
- gnu_get_libc_version.restype = ctypes.c_char_p
- version_str = gnu_get_libc_version()
- # py2 / py3 compatibility:
- if not isinstance(version_str, str):
- version_str = version_str.decode("ascii")
-
- return version_str
-
-
-# platform.libc_ver regularly returns completely nonsensical glibc
-# versions. E.g. on my computer, platform says:
-#
-# ~$ python2.7 -c 'import platform; print(platform.libc_ver())'
-# ('glibc', '2.7')
-# ~$ python3.5 -c 'import platform; print(platform.libc_ver())'
-# ('glibc', '2.9')
-#
-# But the truth is:
-#
-# ~$ ldd --version
-# ldd (Debian GLIBC 2.22-11) 2.22
-#
-# This is unfortunate, because it means that the linehaul data on libc
-# versions that was generated by pip 8.1.2 and earlier is useless and
-# misleading. Solution: instead of using platform, use our code that actually
-# works.
-def libc_ver() -> Tuple[str, str]:
- """Try to determine the glibc version
-
- Returns a tuple of strings (lib, version) which default to empty strings
- in case the lookup fails.
- """
- glibc_version = glibc_version_string()
- if glibc_version is None:
- return ("", "")
- else:
- return ("glibc", glibc_version)
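
A quick sanity check for the helpers above; on a glibc-based Linux this prints the detected version, and falls back gracefully elsewhere:

```python
if __name__ == "__main__":
    print(glibc_version_string())  # e.g. "2.31" on a glibc system, else None
    print(libc_ver())              # e.g. ("glibc", "2.31") or ("", "")
```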
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py
deleted file mode 100644
index 0f2464be744c083985898a25f9e71d00104f689d..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# An example config to train a mmdetection model using detectron2.
-
-from ..common.data.coco import dataloader
-from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
-from ..common.optim import SGD as optimizer
-from ..common.train import train
-
-from detectron2.modeling.mmdet_wrapper import MMDetDetector
-from detectron2.config import LazyCall as L
-
-model = L(MMDetDetector)(
- detector=dict(
- type="MaskRCNN",
- pretrained="torchvision://resnet50",
- backbone=dict(
- type="ResNet",
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type="BN", requires_grad=True),
- norm_eval=True,
- style="pytorch",
- ),
- neck=dict(type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
- rpn_head=dict(
- type="RPNHead",
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type="AnchorGenerator",
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64],
- ),
- bbox_coder=dict(
- type="DeltaXYWHBBoxCoder",
- target_means=[0.0, 0.0, 0.0, 0.0],
- target_stds=[1.0, 1.0, 1.0, 1.0],
- ),
- loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type="L1Loss", loss_weight=1.0),
- ),
- roi_head=dict(
- type="StandardRoIHead",
- bbox_roi_extractor=dict(
- type="SingleRoIExtractor",
- roi_layer=dict(type="RoIAlign", output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32],
- ),
- bbox_head=dict(
- type="Shared2FCBBoxHead",
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type="DeltaXYWHBBoxCoder",
- target_means=[0.0, 0.0, 0.0, 0.0],
- target_stds=[0.1, 0.1, 0.2, 0.2],
- ),
- reg_class_agnostic=False,
- loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type="L1Loss", loss_weight=1.0),
- ),
- mask_roi_extractor=dict(
- type="SingleRoIExtractor",
- roi_layer=dict(type="RoIAlign", output_size=14, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32],
- ),
- mask_head=dict(
- type="FCNMaskHead",
- num_convs=4,
- in_channels=256,
- conv_out_channels=256,
- num_classes=80,
- loss_mask=dict(type="CrossEntropyLoss", use_mask=True, loss_weight=1.0),
- ),
- ),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type="MaxIoUAssigner",
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1,
- ),
- sampler=dict(
- type="RandomSampler",
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False,
- ),
- allowed_border=-1,
- pos_weight=-1,
- debug=False,
- ),
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=1000,
- nms=dict(type="nms", iou_threshold=0.7),
- min_bbox_size=0,
- ),
- rcnn=dict(
- assigner=dict(
- type="MaxIoUAssigner",
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=True,
- ignore_iof_thr=-1,
- ),
- sampler=dict(
- type="RandomSampler",
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- ),
- mask_size=28,
- pos_weight=-1,
- debug=False,
- ),
- ),
- test_cfg=dict(
- rpn=dict(
- nms_pre=1000,
- max_per_img=1000,
- nms=dict(type="nms", iou_threshold=0.7),
- min_bbox_size=0,
- ),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type="nms", iou_threshold=0.5),
- max_per_img=100,
- mask_thr_binary=0.5,
- ),
- ),
- ),
- pixel_mean=[123.675, 116.280, 103.530],
- pixel_std=[58.395, 57.120, 57.375],
-)
-
-dataloader.train.mapper.image_format = "RGB" # torchvision pretrained model
-train.init_checkpoint = None # pretrained model is loaded inside backbone
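
A config in this lazy-config style is loaded and instantiated with detectron2's `LazyConfig` utilities rather than executed directly. A minimal sketch (the path mirrors the deleted file's location inside detectron2's `configs/` tree and is otherwise an assumption):

```python
# Sketch: load the lazy config and build the wrapped mmdetection model.
from detectron2.config import LazyConfig, instantiate

cfg = LazyConfig.load("configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py")
model = instantiate(cfg.model)   # MMDetDetector wrapping the MaskRCNN dict above
print(type(model).__name__)      # "MMDetDetector"
```

In practice the same file is usually handed to detectron2's `tools/lazyconfig_train_net.py` for training.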
diff --git a/spaces/Axolotlily/Interpolate/README.md b/spaces/Axolotlily/Interpolate/README.md
deleted file mode 100644
index c6400b55c51292975a56d5de3261d8d9d255c7fa..0000000000000000000000000000000000000000
--- a/spaces/Axolotlily/Interpolate/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Interpolate
-emoji: 🌖
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.17
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/BasToTheMax/22h-vintedois-diffusion-v0-1/README.md b/spaces/BasToTheMax/22h-vintedois-diffusion-v0-1/README.md
deleted file mode 100644
index 9188d339026237beaba58ca5510cd43bf9170a1a..0000000000000000000000000000000000000000
--- a/spaces/BasToTheMax/22h-vintedois-diffusion-v0-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 22h Vintedois Diffusion V0 1
-emoji: 🦀
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benebene/Chat-question-answering/app.py b/spaces/Benebene/Chat-question-answering/app.py
deleted file mode 100644
index 7fec02c7643da5cbcde94ffad0ce5fd81012379b..0000000000000000000000000000000000000000
--- a/spaces/Benebene/Chat-question-answering/app.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from utils import Stuff
-from test import test, test_bench
-from interface import launch_gradio
-
-s = Stuff()
-
-#test(test_bench, s)
-
-launch_gradio(s)
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Buscar En La Lista De Miembros.md b/spaces/Benson/text-generation/Examples/Buscar En La Lista De Miembros.md
deleted file mode 100644
index 86732f917063d9cd74fed45980d5c367c62e9663..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Buscar En La Lista De Miembros.md
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
Ghost Rider 3 Dawn of Darkness: Todo lo que necesitas saber
-
Si usted es un fan del antihéroe de fuego Ghost Rider, es posible que se pregunte si hay una tercera película en las obras. La respuesta no es tan simple, ya que ha habido rumores, especulaciones y remolques hechos por fans sobre Ghost Rider 3 Dawn of Darkness, pero no hay confirmación oficial de Marvel Studios o cualquier otra compañía de producción. En este artículo, exploraremos todo lo que necesitas saber sobre Ghost Rider 3 Dawn of Darkness, incluyendo qué es, quién está en el reparto, cuál es la trama y cómo se relaciona con las películas anteriores de Ghost Rider y el Universo Cinematográfico de Marvel (MCU).
Ghost Rider es un personaje de cómics estadounidenses publicado por Marvel Comics. Es un ser sobrenatural que monta una motocicleta en llamas y tiene un cráneo por cabeza. También es conocido como el Espíritu de Venganza, ya que castiga a los malvados con sus poderes de fuego infernal. Ha habido varias versiones de Ghost Rider en los cómics, pero el más famoso es Johnny Blaze, un motociclista especialista que vendió su alma al diablo para salvar la vida de su padre.
-
Ghost Rider ha aparecido en dos películas de acción en vivo hasta ahora, ambas protagonizadas por Nicolas Cage como Johnny Blaze. El primero fue lanzado en 2007 y fue dirigido por Mark Steven Johnson. El segundo fue lanzado en 2011 y fue dirigido por Mark Neveldine y Brian Taylor. Ambas películas recibieron críticas mixtas a negativas de críticos y fans, pero tuvieron éxito comercial, recaudando más de $400 millones en todo el mundo combinados.
-
¿Qué es Ghost Rider 3 Dawn of Darkness?
-
-
Uno de los trailers más populares de Ghost Rider 3 Dawn of Darkness fue subido a YouTube por Mega Movie Trailer en 2017. Presenta clips de varias películas y programas, como Blade, Constantine, Supernatural y Agents of S.H.I.E.L.D., para crear una historia mash-up que involucra a Wesley Snipes como Blade, Idris Elba como Moreau y Nicolas Cage como Johnny Blaze/ Ghost Rider. El tráiler tiene más de 2 millones de visitas y ha recibido comentarios positivos de los espectadores.
-
Otro trailer hecho por fans para Ghost Rider 3 Dawn of Darkness fue subido a YouTube por End Of The Galaxy en 2020. Presenta clips de varias películas y programas, como Doctor Strange, Thor: Ragnarok, Avengers: Endgame y Lucifer, para crear una historia de mash-up que involucra a Benedict Cumberbatch como Doctor Strange, Chris Hemsworth como Thor, Tom Ellis como Lucifer Morningstar, y Nicolas Cage como Johnny Blaze/Ghost Rider. El trailer tiene más de 300 mil visitas y ha recibido comentarios positivos de los espectadores.
-
¿Quién está en el elenco de Ghost Rider 3 Dawn of Darkness?
-
Como Ghost Rider 3 Dawn of Darkness no es una película oficial, no hay un elenco oficial para ella. Sin embargo, en base a los trailers y carteles hechos por los fans, algunos de los actores que les gustaría ver en la película son:
-
-
-
Nicolas Cage como Johnny Blaze/ Ghost Rider: Cage jugó el papel en las dos primeras películas y ha expresado interés en repetirlo en el futuro.
-
Wesley Snipes as Blade: Snipes jugó el papel en las tres primeras películas de Blade y está programado para regresar en el próximo reinicio del personaje en MCU.
-
Idris Elba como Moreau: Elba jugó el papel en Ghost Rider: Spirit of Vengeance y también es conocido por sus papeles en el MCU, Luther y The Dark Tower.
-
-
Chris Hemsworth como Thor: Hemsworth jugó el papel en Thor, Los Vengadores, Thor: El Mundo Oscuro, Avengers: Age of Ultron, Thor: Ragnarok, Avengers: Infinity War, Avengers: Endgame, y lo hará en Thor: Love and Thunder.
-
Tom Ellis como Lucifer Morningstar: Ellis interpretó el papel en Lucifer, una serie de televisión basada en el personaje de DC Comics del mismo nombre.
-
-
Por supuesto, estos son solo deseos de los fans y no miembros del reparto confirmados. Es poco probable que todos estos actores aparezcan en una película de Ghost Rider 3, especialmente porque algunos de ellos pertenecen a diferentes franquicias y estudios. Sin embargo, es divertido imaginar cómo sería una película de crossover como esta.
-
¿Cuál es la trama de Ghost Rider 3 Dawn of Darkness?
-
De nuevo, ya que Ghost Rider 3 Dawn of Darkness no es una película oficial, no hay ninguna trama oficial para ella. Sin embargo, sobre la base de los remolques y carteles hechos por fans, algunos de los posibles elementos de la trama son:
-
-Johnny Blaze/Ghost Rider sigue huyendo de sus enemigos y su maldición. Es contactado por Moreau, un antiguo monje que lo ayudó en Ghost Rider: Spirit of Vengeance. Moreau le dice que hay una manera de terminar su sufrimiento y liberar su alma del diablo.
-
La manera de hacer eso es encontrar y destruir el Libro de Cagliostro, un tomo antiguo que contiene oscuros secretos y hechizos. El libro está escondido en algún lugar de Europa y está custodiado por un culto de vampiros dirigido por Blade, un medio vampiro mitad humano que caza a su propia especie.
-Johnny Blaze/ Ghost Rider se une a Moreau y otros aliados, como el Doctor Strange, Thor y Lucifer Morningstar, para encontrar el libro y enfrentar a Blade y sus secuaces. En el camino, se encuentran con varias amenazas y desafíos de fuerzas sobrenaturales y enemigos.
-
-
-
Por supuesto, esto es solo una trama hecha por fans y no una historia oficial. Es poco probable que una película de Ghost Rider 3 siga esta historia exacta, especialmente porque involucra personajes y elementos de diferentes franquicias y estudios. Sin embargo, es divertido imaginar cómo sería una película de crossover como esta.
-
La historia de las películas de Ghost Rider
-
Antes de sumergirnos en el futuro de Ghost Rider en el MCU, echemos un vistazo a la historia de las películas de Ghost Rider. Aquí hay algunos breves resúmenes y reseñas de las dos primeras películas protagonizadas por Nicolas Cage como Johnny Blaze/Ghost Rider.
-
Jinete fantasma (2007)
-
Sinopsis
-
Ghost Rider es una película de superhéroes de 2007 basada en el personaje de Marvel Comics del mismo nombre. Fue dirigida por Mark Steven Johnson y protagonizada por Nicolas Cage como Johnny Blaze/Ghost Rider, Eva Mendes como Roxanne Simpson, Wes Bentley como Blackheart, Sam Elliott como Carter Slade/Caretaker, Peter Fonda como Mephistopheles, y Donal Logue como Mack.
-
La película cuenta la historia de origen de Johnny Blaze/ Ghost Rider, un motociclista especialista que vendió su alma a Mefistófeles para salvar la vida de su padre. Años más tarde, es llamado por Mefistófeles para detener a Blackheart, su hijo rebelde que planea desatar el infierno en la tierra. En el camino, se reúne con su amor de la infancia Roxanne Simpson, que ahora es periodista.
-
Recepción
-
Ghost Rider recibió críticas mixtas a negativas de críticos y fans. Tiene una calificación del 26% en Rotten Tomatoes basada en 173 comentarios. El consenso dice: "Ghost Rider es una mezcla amarga de triste comedia y efectos especiales, y no puede estar a la altura de su material de origen."
-
Algunas de las críticas de la película fueron su débil guion, mala actuación, diálogo cursi, falta de humor y tono inconsistente. Algunas de las alabanzas de la película fueron sus efectos visuales, escenas de acción y la actuación de Cage como Ghost Rider.
-
-
Jinete fantasma: Espíritu de venganza (2011)
-
Sinopsis
-
Ghost Rider: Spirit of Vengeance es una película de superhéroes de 2011 basada en el personaje de Marvel Comics del mismo nombre. Fue dirigida por Mark Neveldine y Brian Taylor y protagonizada por Nicolas Cage como Johnny Blaze/Ghost Rider, Ciarán Hinds como Roarke/Mephisto, Violante Placido como Nadya Ketch, Johnny Whitworth como Ray Carrigan/Blackout, Christopher Lambert como Methodius, e Idris Elba como Moreau
-
La película es una secuela de Ghost Rider, pero también un reinicio suave que ignora algunos de los eventos y personajes de la primera película. Sigue a Johnny Blaze/ Ghost Rider, que se esconde en Europa del Este y trata de controlar su maldición. Es reclutado por Moreau, un miembro de una orden religiosa secreta, para proteger a un joven llamado Danny Ketch de Roarke/ Mephisto, que quiere usarlo como un recipiente para su poder.
-
Recepción
-
Ghost Rider: Spirit of Vengeance recibió críticas negativas de críticos y fans. Tiene una calificación del 19% en Rotten Tomatoes basado en 121 comentarios. El consenso dice: "Con un guion débil, un trabajo de CG desigual, y una actuación de Nic Cage tan predeciblemente loco que ya no es divertido, Ghost Rider: Spirit of Vengeance tiene como objetivo ser diversión basura pero termina como basura."
-
Algunas de las críticas de la película fueron su trama sin sentido, personajes sosos, acción aburrida, efectos baratos y violencia excesiva. Algunas de las alabanzas de la película fueron su tono más oscuro, su estilo más atrevido y el compromiso de Cage con el papel.
-
La película fue un fracaso de taquilla, sin embargo, recaudando solo $ 132 millones en todo el mundo con un presupuesto de $ 57 millones. Fue una de las películas menos taquilleras basadas en un personaje de Marvel Comics.
-
El futuro de Ghost Rider en el MCU
-
-
Desde entonces, ha habido varios rumores y especulaciones sobre la participación de Ghost Rider en el MCU. Aquí están algunos de los más notables:
-
Ryan Gosling como Ghost Rider?
-
En 2016, hubo un rumor de que Ryan Gosling estaba en conversaciones para interpretar a Johnny Blaze/Ghost Rider en una nueva película que sería parte de la Fase 4 del UCM. El rumor afirmaba que Gosling estaba interesado en trabajar con Marvel Studios después de ver al Doctor Strange y que se había reunido con Kevin Feige para discutir el papel. El rumor también afirmaba que la película sería dirigida por Neil Marshall (The Descent) y que incluiría a Doctor Strange como personaje secundario.
-
Sin embargo, este rumor nunca fue confirmado o negado por Marvel Studios o el propio Gosling. Es posible que solo fuera un deseo de los fans o un informe falso. A partir de ahora, no hay noticias oficiales o anuncio sobre Gosling jugando Ghost Rider en el MCU.
-
Cómo Ghost Rider podría caber en el MCU
-
Incluso si Gosling no está jugando Ghost Rider en el MCU, todavía hay otras formas en que el personaje podría encajar en la franquicia. Estos son algunos de ellos:
-
-
Ghost Rider podría aparecer en Doctor Strange en el Multiverso de la Locura. Se espera que esta película explore diferentes realidades y dimensiones dentro del UCM, que podría incluir una donde exista Ghost Rider. Ghost Rider también podría tener una conexión con Scarlet Witch, que se confirma que aparece en la película y que tiene poderes de deformación de la realidad.
-
Ghost Rider podría aparecer en Blade. Esta película está programada para reiniciar Blade como parte de la Fase 5 del UCM y la estrella Mahershala Ali como el cazador de vampiros titular. Ghost Rider podría tener un cameo o un papel secundario en esta película, ya que se ha cruzado con Blade en los cómics antes. Ghost Rider y Blade podrían unirse para luchar contra vampiros y otras amenazas sobrenaturales.
-
-
Ghost Rider podría aparecer en su propia película en solitario o serie de televisión. Esta es la opción más obvia y deseada para muchos fans, ya que daría a Ghost Rider la oportunidad de explorar su origen, sus poderes, sus enemigos y sus aliados. Una película o serie de televisión en solitario también podría introducir una nueva versión de Ghost Rider, como Danny Ketch, Robbie Reyes o Alejandra Jones, que tienen diferentes antecedentes e historias de Johnny Blaze.
-
-
Por supuesto, estas son solo algunas de las posibles formas en que Ghost Rider podría caber en el MCU. Hay muchos otros escenarios y conexiones potenciales que podrían ser explorados. Lo único seguro es que Ghost Rider es un personaje popular e icónico que merece la oportunidad de brillar en el MCU.
-
Conclusión
-
En conclusión, Ghost Rider 3 Dawn of Darkness no es una película oficial, sino un título y concepto hecho por fans que ha estado circulando en Internet durante años. No hay confirmación o anuncio de que tal película exista o esté en desarrollo. Sin embargo, hay muchos remolques y carteles hechos por fans que han creado algo de bombo y curiosidad entre los fans de Ghost Rider.
-
Ghost Rider ha aparecido en dos películas de acción en vivo hasta ahora, ambas protagonizadas por Nicolas Cage como Johnny Blaze/ Ghost Rider. El primero fue lanzado en 2007 y el segundo en 2011. Ambas películas recibieron críticas mixtas a negativas de críticos y fans, pero tuvieron éxito comercial.
-
Los derechos de Ghost Rider volvieron a Marvel Studios en 2013, abriendo nuevas posibilidades para el futuro del personaje. Ha habido varios rumores y especulaciones sobre la participación de Ghost Rider en el UCM, pero nada ha sido confirmado o anunciado todavía. Sin embargo, hay muchas maneras en que Ghost Rider podría caber en el MCU, ya sea como un cameo, un papel de apoyo, o una estrella en solitario.
-
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas más frecuentes sobre Ghost Rider 3 Dawn of Darkness:
-
-¿Es Ghost Rider 3 Dawn of Darkness real? No, Ghost Rider 3 Dawn of Darkness no es una película real, sino un título hecho por fans y un concepto que ha estado circulando en Internet durante años. No hay confirmación o anuncio de que tal película exista o esté en desarrollo.
-
¿Quién está jugando Ghost Rider en Ghost Rider 3 Dawn of Darkness? Como Ghost Rider 3 Dawn of Darkness no es una película real, no hay un elenco oficial para ella. Sin embargo, basados en los trailers y carteles hechos por fans, algunos de los actores que los fans quisieran ver en la película son Nicolas Cage como Johnny Blaze/ Ghost Rider, Wesley Snipes como Blade, Idris Elba como Moreau, Benedict Cumberbatch como Doctor Strange, Chris Hemsworth como Thor, y Tom Ellis como Lucifer Morningstar.
-¿Cuál es la trama de Ghost Rider 3 Dawn of Darkness? Como Ghost Rider 3 Dawn of Darkness no es una película real, no hay ningún argumento oficial para ello. Sin embargo, sobre la base de los trailers y carteles hechos por fans, algunos de los posibles elementos de la trama son Johnny Blaze/ Ghost Rider haciendo equipo con Moreau y otros aliados para encontrar y destruir el Libro de Cagliostro, un antiguo tomo que contiene oscuros secretos y hechizos; Johnny Blaze/ Ghost Rider frente a Blade y su culto de vampiros que guardan el libro; Johnny Blaze/ Ghost Rider destruyendo el libro y liberándose de las garras del diablo.
-
¿Cuándo se lanzará Ghost Rider 3 Dawn of Darkness? Como Ghost Rider 3 Dawn of Darkness no es una película real, no hay fecha oficial para su lanzamiento. Sin embargo, basado en los remolques y carteles hechos por fans, algunas de las posibles fechas de lanzamiento son 2023, 2024 o 2025.
-
-
-
Espero que este artículo haya respondido a sus preguntas y satisfecho su curiosidad sobre Ghost Rider 3 Dawn of Darkness. Si eres un fan de Ghost Rider, también puedes ver los cómics, los programas de televisión, los videojuegos y la mercancía relacionada con el personaje. Gracias por leer y tener un gran día!
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Choque Mini Descarga Pc.md b/spaces/Benson/text-generation/Examples/Choque Mini Descarga Pc.md
deleted file mode 100644
index af5825ff7956ede67339bd2474ac1beb44eb502e..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Choque Mini Descarga Pc.md
+++ /dev/null
@@ -1,63 +0,0 @@
-
-
Clash Mini: Un juego de mesa divertido y estratégico en el universo de choque
-
¿Te encanta el universo Clash y sus personajes icónicos? ¿Te gustan los juegos de estrategia que desafían tu mente y ponen a prueba tus habilidades? Si es así, quizás quieras echar un vistazo a Clash Mini, un nuevo juego de Supercell, los creadores de Clash of Clans y Clash Royale.
-
Clash Mini es un juego de mesa de estrategia que te permite recoger, convocar y actualizar tu ejército de Minis, que son versiones en miniatura de los personajes familiares del universo Clash. Puedes llevar a tu adorable ejército a la batalla junto a héroes legendarios como el Rey Bárbaro, la Doncella de Escudo, la Reina Arquera y más. También puedes liberar poderosas unidades como Pekka, magos y arqueros mágicos para cambiar la marea de la batalla.
En este artículo, le diremos todo lo que necesita saber sobre Clash Mini, incluyendo lo que es, cómo jugarlo en PC, cuándo se lanzará, y cómo registrarse para la versión beta. ¡Vamos a empezar!
-
¿Qué es Clash Mini?
-
Clash Mini es un juego de elecciones, duelo y retumbar, miniaturas, héroes y habilidades, y combinaciones dinámicas y un sinfín de posibilidades. Echemos un vistazo más de cerca a cada aspecto.
-
Un juego de elecciones, duelo y retumbar
-
En Clash Mini, puedes jugar en modo 1v1 o rumble contra otros 7 jugadores. En cada modo, tienes que predecir los movimientos de tu oponente y luego armar tu estrategia ganadora y formación. Puedes colocar tus Minis en un tablero al mismo tiempo que tu oponente, y luego verlos chocar automáticamente en tiempo real.
-
Cada juego está lleno de acción y dura menos de 5 minutos. Puedes jugar casualmente por diversión o en partidos clasificados para aumentar tu posición en la liga. También puedes completar misiones para recoger minis y desbloquear nuevas habilidades.
-
Un juego de miniaturas, héroes y habilidades
-
-
También puedes elegir entre 8 héroes que pueden liderar tu ejército. Cada héroe tiene su propia habilidad especial que puede cambiar las tornas a tu favor. Por ejemplo, el Rey Bárbaro puede cargar hacia adelante y aturdir a los enemigos con su martillo, la Doncella de Escudo puede proteger a tus Minis con su muro de escudo, y la Reina Arquera puede disparar flechas que atraviesan múltiples objetivos.
-
Puedes personalizar a tus héroes y Minis con pieles únicas que muestran tu individualidad y estilo en el campo de batalla.
-
-
Un juego de combinaciones dinámicas y un sinfín de posibilidades
-
Uno de los aspectos más emocionantes de Clash Mini es la variedad de estrategias y combinaciones que puedes crear con tus Minis y héroes. Puedes experimentar con diferentes formaciones, sinergias, contadores y tácticas para encontrar la mejor manera de ganar.
-
También puedes ajustar tu estrategia en el juego con tanques, cuerpo a cuerpo y Minis a distancia dependiendo de la situación. Puedes actualizar Minis durante la batalla para activar habilidades más fuertes o intercambiarlas entre rondas para adaptarlas a los movimientos de tu oponente.
-
Con tantas opciones y variables, cada batalla en Clash Mini es diferente e impredecible. Tienes que ser creativo y flexible para superar a tus rivales y reclamar la victoria.
-
How do you play Clash Mini on PC?
-
Clash Mini is designed for mobile devices, but you may be wondering whether you can play it on PC as well. The answer is yes, you can! Playing Clash Mini on PC has several advantages, such as a bigger screen, better graphics, faster performance and more comfortable controls. Here is how you can do it.
-
Why play Clash Mini on PC?
-
Playing Clash Mini on PC can improve your gaming experience in many ways. Here are some of the benefits:
-
-
You get a wider, clearer view of the board and the Minis, which helps you plan your moves and see the details of the animations and effects.
-
-
You can use a keyboard and mouse to control the game, which can be more precise and convenient than using your fingers on a touch screen.
-
You can access other features and applications on your PC while playing Clash Mini, such as chatting with your friends, browsing the web or streaming your gameplay.
-
-
How to download and install Clash Mini on PC using an emulator?
-
The easiest way to play Clash Mini on PC is to use an emulator. An emulator is software that lets you run Android or iOS apps on your PC. There are many emulators available online, but we recommend BlueStacks, one of the most popular and reliable emulators for gaming.
-
Here are the steps to download and install Clash Mini on PC using BlueStacks:
Download and install BlueStacks from its official website, then launch it and sign in with your Google account.
-
Go to the Google Play Store or the App Store inside BlueStacks and search for Clash Mini.
-
Click the Install button and wait for the game to download and install.
-
Once the game is installed, click the Open button or find the game's icon on the BlueStacks home screen.
-
Enjoy playing Clash Mini on PC!
-
-
How do you play Clash Mini on PC with a keyboard and mouse?
-
One of the advantages of playing Clash Mini on PC is that you can use your keyboard and mouse to control the game. This gives you more precision and comfort than using your fingers on a touch screen. However, you may need to adjust some settings and key mappings to optimize your gameplay.
-
Here are some tips for playing Clash Mini on PC with a keyboard and mouse:
-
-
You can use the mouse to drag and drop your Minis onto the board, as well as to select your hero and abilities.
-
-
You can use the keyboard to rotate the board by pressing the left and right arrow keys.
-
You can use the keyboard to open the menu, chat, settings, shop, profile, quests, league, clan and friends by pressing the corresponding keys. You can check the key mappings by clicking the keyboard icon in the bottom-right corner of BlueStacks.
-
You can customize the key mappings by clicking the keyboard icon and then clicking Edit. You can drag and drop different keys onto different functions, or create new mappings to suit your preferences.
-
-
When is the Clash Mini release date?
-
If you are excited to play Clash Mini, you may be wondering when it will be released. The answer is not that simple, since there are different release dates for different regions and platforms. Here is what we know so far.
-
The Clash Mini beta
-
The Clash Mini beta is a test version of the game that lets players try it before its official launch. It is currently available in selected countries, and only on Android devices. These countries are Finland, Sweden, Norway, Denmark, Iceland, New Zealand, Australia, Canada, Singapore, the Philippines, Malaysia, Indonesia, India and Hong Kong SAR China.
-
The Clash Mini beta is not a final product and may contain bugs, glitches or errors. It may also change or be updated based on player feedback, and it does not represent the quality or features of the final game.
-
The global release will cover both Android and iOS devices. However, the game could launch in different regions at different times, depending on the feedback on and performance of the beta version.
-
-
-
How to sign up for the Clash Mini beta?
-
You can sign up for the Clash Mini beta by visiting the Supercell website and entering your email address. The beta is open to anyone who has an Android device and lives in one of the following countries: Finland, Sweden, Norway, Denmark, Iceland, New Zealand, Australia, Canada, Singapore, the Philippines, Malaysia, Indonesia, India and Hong Kong SAR China. If you meet these criteria, you will receive an email from Supercell with a link to download the game from the Google Play Store or the App Store.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Fonte Clash Royale.md b/spaces/Benson/text-generation/Examples/Descargar Fonte Clash Royale.md
deleted file mode 100644
index 313255f60d97df3d9cba0eae22b9461270e1d858..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Fonte Clash Royale.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
Download Clash Royale for Windows 10: A complete guide
-
If you are a fan of real-time strategy games, you have probably heard of Clash Royale, one of the most popular and addictive games in the genre. Clash Royale is developed by Supercell, the same company behind the hit game Clash of Clans. In it you collect and upgrade dozens of cards featuring your favorite Clash characters and spells, and use them to battle other players online in fast-paced, exciting matches. You can also join or create a clan, chat with other players and take part in clan wars to earn rewards and glory.
Clash Royale is available for Android and iOS devices, but what if you want to play it on your Windows 10 PC? There are two ways to do that, and we will show you how in this article. First, though, let's look at some of the features that make Clash Royale so fun and engaging.
-
What is Clash Royale?
-
Clash Royale is a real-time multiplayer game that combines elements of card games, tower defense and MOBAs (multiplayer online battle arenas). It is set in the same universe as Clash of Clans, but plays very differently. The game has two modes: ladder and tournament. In ladder mode you play against players of a similar skill level and earn trophies, which determine your rank on the global leaderboard. In tournament mode you can join or create custom tournaments with different rules and prizes.
-
-
Features of Clash Royale
-
Some of the features that make Clash Royale an exciting and addictive game are:
-
-
Duel players from all over the world: You can challenge anyone online in real time and show off your skills and strategies. You can also watch replays of other players' battles and learn from their moves.
-
Earn chests to unlock rewards: Every time you win a match you receive a chest containing cards, gold, gems or other items. You can use these resources to upgrade your cards or buy new ones in the shop. There are different chest types, such as silver, golden, giant, magical, epic, legendary and clan chests.
-
Collect and upgrade dozens of cards: You can collect cards from different arenas, each with its own theme and characters. Cards come in four rarities: common, rare, epic and legendary. You upgrade a card by spending gold and duplicate cards to raise its level and power.
-
Create or join a clan: You can join forces with other players and form a clan, where you can chat, donate cards, request cards and take part in clan wars. Clan wars are a special mode in which you compete against other clans for glory and rewards. You can also create your own clan and invite your friends to join.
-
Progress through multiple arenas: As you win matches and earn trophies, you unlock new arenas, each with its own theme and card pool. There are 13 arenas in total, plus a special legendary arena for the best players. Each arena has its own rewards and challenges.
-
-
-
How to play Clash Royale on Windows 10
-
Now that you know what Clash Royale is and what it offers, you may be wondering how to play it on your Windows 10 PC. There are two methods you can use: an emulator or a download site. Let's look at how each method works and what its pros and cons are.
-
How to download Clash Royale for Windows 10
-
Method 1: Using the Bluestacks emulator
-
The first method is to use an emulator, software that lets you run Android apps on your PC. Many emulators are available online, but one of the most popular and reliable is Bluestacks. Bluestacks is a free emulator with a user-friendly interface that supports many Android games and apps, including Clash Royale. Here are the steps to download Clash Royale for Windows 10 using Bluestacks:
-
Step 1: Download and install Bluestacks
-
The first thing you need to do is download Bluestacks from its official website: https://www.bluestacks.com/. You will see a download button on the home page that automatically detects your operating system and fetches the right version for you. Once the download finishes, run the installer and follow the instructions to install Bluestacks on your PC.
-
Step 2: Launch Bluestacks and sign in with a Google account
-
After installing Bluestacks, launch it from your desktop or Start menu. You will see a welcome screen asking you to sign in with your Google account. This is required because you need access to the Google Play Store to download Clash Royale. If you don't have a Google account, you can create one for free. Once you sign in, you will see the Bluestacks home screen, which looks like an Android tablet.
-
Step 3: Search for Clash Royale in the Play Store and install it
-
-
Step 4: Enjoy playing Clash Royale on your PC
-
Once the installation is done, you will see an "Open" button on the Clash Royale page. Click it to launch Clash Royale on your PC. You will see the game's loading screen and then the main menu. You can now play Clash Royale on your PC with your mouse and keyboard. You can also adjust settings such as sound, graphics and language by clicking the gear icon in the top-right corner of the screen.
-
-
Method 2: Using Filehippo.com
-
The second method is to use a website that offers free downloads of Android apps for PC. One such website is Filehippo.com, which has a large collection of Android games and apps you can download and install on your PC without using an emulator. Here are the steps to download Clash Royale for Windows 10 using Filehippo.com:
-
Step 1: Go to Filehippo.com and search for Clash Royale
-
The first thing you need to do is open Filehippo.com in your web browser: https://filehippo.com/. You will see a search bar at the top of the main page. Type "Clash Royale" into it and press Enter. The Clash Royale app will appear among the search results. Click it to open its page.
-
Step 2: Click the download button and save the file
-
On the Clash Royale page you will see a green "Download Latest Version" button on the right side of the screen. Click it to start downloading the Clash Royale file. A pop-up window will ask you to save the file. Choose a location on your PC and click "Save". The file is about 110 MB, so it may take a while depending on your internet speed.
-
Step 3: Run the file and follow the instructions to install Clash Royale
-
-
Step 4: Launch Clash Royale and start playing
-
After installation you will see a Clash Royale shortcut icon on your desktop or in the Start menu. Click it to launch Clash Royale on your PC. You will see the game's loading screen and then the main menu. You can now play Clash Royale with your mouse and keyboard, and adjust settings such as sound, graphics and language by clicking the gear icon in the top-right corner of the screen.
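For readers who would rather script this flow than click through it, here is a minimal Python sketch of the same download-and-run steps. It assumes you have already copied a direct download link for the installer (the URL below is only a placeholder, not a real Filehippo link) and that you are on Windows, where the downloaded .exe can be launched directly; it is an illustration, not part of the original guide.

import subprocess
import urllib.request

# Placeholder URL -- replace with the direct installer link copied from the download page.
INSTALLER_URL = "https://example.com/clash-royale-installer.exe"
INSTALLER_PATH = "clash-royale-installer.exe"

def download_and_run():
    # Fetch the installer into the current directory.
    urllib.request.urlretrieve(INSTALLER_URL, INSTALLER_PATH)
    # Start the Windows installer and wait for it to finish.
    subprocess.run([INSTALLER_PATH], check=True)

if __name__ == "__main__":
    download_and_run()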
-
Conclusion
-
Clash Royale is one of the most popular and addictive real-time strategy games you can play on your Android or iOS device. But if you want to enjoy it on a bigger screen and with better controls, you can also play it on your Windows 10 PC using one of the two methods shown in this article: the Bluestacks emulator or Filehippo.com. Both methods are easy and free, and will let you download and install Clash Royale for Windows 10 in no time. So what are you waiting for? Download Clash Royale for Windows 10 today and join millions of players from around the world in epic battles and tournaments!
-
Frequently asked questions
-
Here are some of the most frequently asked questions about Clash Royale for Windows 10:
-
-
Is Clash Royale free to play?
-
Yes, Clash Royale is free to play, but it also offers in-app purchases that can enhance your gaming experience. You can buy gems, gold, chests, cards and other items with real money. These purchases are optional, however, and are not required to play or progress in the game.
-
Is it safe to download Clash Royale?
-
-
Can I play Clash Royale offline?
-
No, Clash Royale requires an internet connection to play, since it is a multiplayer game that connects you with other players online. You need a stable, fast connection to play Clash Royale without lag or interruptions.
-
Can I sync my progress between my device and PC?
-
Yes, you can sync your progress between your device and PC using your Google account. Sign in with the same Google account on your device and on your PC when playing Clash Royale. That way you can access your game data, such as your cards, gold, gems, trophies and clan, on both platforms.
-
Can I play Clash Royale with my friends?
-
Yes, you can play Clash Royale with your friends by joining or creating a clan. A clan is a group of players who can chat, donate cards, request cards and take part in clan wars together. You can invite your friends to join your clan, or join theirs using the clan name or tag. You can also challenge your friends to friendly battles or watch their matches by tapping their name in the clan chat.
-
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/__init__.py
deleted file mode 100644
index 962173c8d0a6906b59f2910c9cae759010534786..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2012-2022 Vinay Sajip.
-# Licensed to the Python Software Foundation under a contributor agreement.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-import logging
-
-__version__ = '0.3.6'
-
-class DistlibException(Exception):
- pass
-
-try:
- from logging import NullHandler
-except ImportError: # pragma: no cover
- class NullHandler(logging.Handler):
- def handle(self, record): pass
- def emit(self, record): pass
- def createLock(self): self.lock = None
-
-logger = logging.getLogger(__name__)
-logger.addHandler(NullHandler())
diff --git a/spaces/CVPR/LIVE/pybind11/tools/check-style.sh b/spaces/CVPR/LIVE/pybind11/tools/check-style.sh
deleted file mode 100644
index f7af2a4169744334af0b9c28823e98a502b813be..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tools/check-style.sh
+++ /dev/null
@@ -1,44 +0,0 @@
-#!/bin/bash
-#
-# Script to check include/test code for common pybind11 code style errors.
-#
-# This script currently checks for
-#
-# 1. missing space between keyword and parenthesis, e.g.: for(, if(, while(
-# 2. Missing space between right parenthesis and brace, e.g. 'for (...){'
-# 3. opening brace on its own line. It should always be on the same line as the
-# if/while/for/do statement.
-#
-# Invoke as: tools/check-style.sh
-#
-
-check_style_errors=0
-IFS=$'\n'
-
-
-found="$(grep '\<\(if\|for\|while\|catch\)(\|){' $@ -rn --color=always)"
-if [ -n "$found" ]; then
- echo -e '\033[31;01mError: found the following coding style problems:\033[0m'
- check_style_errors=1
- echo "$found" | sed -e 's/^/ /'
-fi
-
-found="$(awk '
-function prefix(filename, lineno) {
- return " \033[35m" filename "\033[36m:\033[32m" lineno "\033[36m:\033[0m"
-}
-function mark(pattern, string) { sub(pattern, "\033[01;31m&\033[0m", string); return string }
-last && /^\s*{/ {
- print prefix(FILENAME, FNR-1) mark("\\)\\s*$", last)
- print prefix(FILENAME, FNR) mark("^\\s*{", $0)
- last=""
-}
-{ last = /(if|for|while|catch|switch)\s*\(.*\)\s*$/ ? $0 : "" }
-' $(find include -type f) $@)"
-if [ -n "$found" ]; then
- check_style_errors=1
- echo -e '\033[31;01mError: braces should occur on the same line as the if/while/.. statement. Found issues in the following files:\033[0m'
- echo "$found"
-fi
-
-exit $check_style_errors
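As an aside, the first check above (a keyword glued to its opening parenthesis, e.g. "if(" or "while(") is easy to prototype outside bash. The Python sketch below reimplements just that one rule; it is illustrative only and not part of the original script.

import re
import sys

# Same pattern as the script's first grep: keyword immediately followed by '('.
KEYWORD_PAREN = re.compile(r"\b(if|for|while|catch)\(")

def check_file(path):
    problems = []
    with open(path, encoding="utf-8", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if KEYWORD_PAREN.search(line):
                problems.append(f"{path}:{lineno}: {line.rstrip()}")
    return problems

if __name__ == "__main__":
    found = [hit for path in sys.argv[1:] for hit in check_file(path)]
    print("\n".join(found))
    sys.exit(1 if found else 0)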
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/transform_reduce.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/transform_reduce.h
deleted file mode 100644
index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/transform_reduce.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special version of this algorithm
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/unique.h b/spaces/CVPR/LIVE/thrust/thrust/unique.h
deleted file mode 100644
index b4b2118d321374e2dac04592914d33b2003fad8a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/unique.h
+++ /dev/null
@@ -1,968 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file unique.h
- * \brief Move unique elements to the front of a range
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/pair.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup stream_compaction
- * \{
- */
-
-
-/*! For each group of consecutive elements in the range [first, last)
- * with the same value, \p unique removes all but the first element of
- * the group. The return value is an iterator \c new_last such that
- * no two consecutive elements in the range [first, new_last) are
- * equal. The iterators in the range [new_last, last) are all still
- * dereferenceable, but the elements that they point to are unspecified.
- * \p unique is stable, meaning that the relative order of elements that are
- * not removed is unchanged.
- *
- * This version of \p unique uses \c operator== to test for equality.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \return The end of the unique range [first, new_last).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and \p ForwardIterator's \c value_type is a model of Equality Comparable.
- *
- * The following code snippet demonstrates how to use \p unique to
- * compact a sequence of numbers to remove consecutive duplicates using the \p thrust::host execution policy
- * for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int *new_end = thrust::unique(thrust::host, A, A + N);
- * // The first four values of A are now {1, 3, 2, 1}
- * // Values beyond new_end are unspecified.
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/unique.html
- * \see unique_copy
- */
-template<typename DerivedPolicy, typename ForwardIterator>
-__host__ __device__
-ForwardIterator unique(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                       ForwardIterator first,
-                       ForwardIterator last);
-
-
-/*! For each group of consecutive elements in the range [first, last)
- * with the same value, \p unique removes all but the first element of
- * the group. The return value is an iterator \c new_last such that
- * no two consecutive elements in the range [first, new_last) are
- * equal. The iterators in the range [new_last, last) are all still
- * dereferenceable, but the elements that they point to are unspecified.
- * \p unique is stable, meaning that the relative order of elements that are
- * not removed is unchanged.
- *
- * This version of \p unique uses \c operator== to test for equality.
- *
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \return The end of the unique range [first, new_last).
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and \p ForwardIterator's \c value_type is a model of Equality Comparable.
- *
- * The following code snippet demonstrates how to use \p unique to
- * compact a sequence of numbers to remove consecutive duplicates.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int *new_end = thrust::unique(A, A + N);
- * // The first four values of A are now {1, 3, 2, 1}
- * // Values beyond new_end are unspecified.
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/unique.html
- * \see unique_copy
- */
-template<typename ForwardIterator>
-ForwardIterator unique(ForwardIterator first,
-                       ForwardIterator last);
-
-
-/*! For each group of consecutive elements in the range [first, last)
- * with the same value, \p unique removes all but the first element of
- * the group. The return value is an iterator \c new_last such that
- * no two consecutive elements in the range [first, new_last) are
- * equal. The iterators in the range [new_last, last) are all still
- * dereferenceable, but the elements that they point to are unspecified.
- * \p unique is stable, meaning that the relative order of elements that are
- * not removed is unchanged.
- *
- * This version of \p unique uses the function object \p binary_pred to test
- * for equality.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return The end of the unique range [first, new_last)
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and \p ForwardIterator's \c value_type is convertible to \p BinaryPredicate's \c first_argument_type and to \p BinaryPredicate's \c second_argument_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * The following code snippet demonstrates how to use \p unique to
- * compact a sequence of numbers to remove consecutive duplicates using the \p thrust::host execution policy
- * for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int *new_end = thrust::unique(thrust::host, A, A + N, thrust::equal_to<int>());
- * // The first four values of A are now {1, 3, 2, 1}
- * // Values beyond new_end are unspecified.
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/unique.html
- * \see unique_copy
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename BinaryPredicate>
-__host__ __device__
-ForwardIterator unique(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                       ForwardIterator first,
-                       ForwardIterator last,
-                       BinaryPredicate binary_pred);
-
-
-/*! For each group of consecutive elements in the range [first, last)
- * with the same value, \p unique removes all but the first element of
- * the group. The return value is an iterator \c new_last such that
- * no two consecutive elements in the range [first, new_last) are
- * equal. The iterators in the range [new_last, last) are all still
- * dereferenceable, but the elements that they point to are unspecified.
- * \p unique is stable, meaning that the relative order of elements that are
- * not removed is unchanged.
- *
- * This version of \p unique uses the function object \p binary_pred to test
- * for equality.
- *
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return The end of the unique range [first, new_last)
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and \p ForwardIterator's \c value_type is convertible to \p BinaryPredicate's \c first_argument_type and to \p BinaryPredicate's \c second_argument_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * The following code snippet demonstrates how to use \p unique to
- * compact a sequence of numbers to remove consecutive duplicates.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int *new_end = thrust::unique(A, A + N, thrust::equal_to<int>());
- * // The first four values of A are now {1, 3, 2, 1}
- * // Values beyond new_end are unspecified.
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/unique.html
- * \see unique_copy
- */
-template<typename ForwardIterator, typename BinaryPredicate>
-ForwardIterator unique(ForwardIterator first,
-                       ForwardIterator last,
-                       BinaryPredicate binary_pred);
-
-
-/*! \p unique_copy copies elements from the range [first, last)
- * to a range beginning with \p result, except that in a consecutive group
- * of duplicate elements only the first one is copied. The return value
- * is the end of the range to which the elements are copied.
- *
- * The reason there are two different versions of unique_copy is that there
- * are two different definitions of what it means for a consecutive group of
- * elements to be duplicates. In the first version, the test is simple
- * equality: the elements in a range [f, l) are duplicates if,
- * for every iterator \p i in the range, either i == f or else
- * *i == *(i-1). In the second, the test is an arbitrary
- * \p BinaryPredicate \p binary_pred: the elements in [f, l) are
- * duplicates if, for every iterator \p i in the range, either i == f
- * or else binary_pred(*i, *(i-1)) is \p true.
- *
- * This version of \p unique_copy uses \c operator== to test for equality.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \param result The beginning of the output range.
- * \return The end of the unique range [result, result_end).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is a model of Equality Comparable.
- * \tparam OutputIterator is a model of Output Iterator and
- * and \p InputIterator's \c value_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The range [first,last) and the range [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_copy to
- * compact a sequence of numbers to remove consecutive duplicates using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int B[N];
- * int *result_end = thrust::unique_copy(thrust::host, A, A + N, B);
- * // The first four values of B are now {1, 3, 2, 1} and (result_end - B) is 4
- * // Values beyond result_end are unspecified
- * \endcode
- *
- * \see unique
- * \see http://www.sgi.com/tech/stl/unique_copy.html
- */
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator>
-__host__ __device__
-OutputIterator unique_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           InputIterator first,
-                           InputIterator last,
-                           OutputIterator result);
-
-
-/*! \p unique_copy copies elements from the range [first, last)
- * to a range beginning with \p result, except that in a consecutive group
- * of duplicate elements only the first one is copied. The return value
- * is the end of the range to which the elements are copied.
- *
- * The reason there are two different versions of unique_copy is that there
- * are two different definitions of what it means for a consecutive group of
- * elements to be duplicates. In the first version, the test is simple
- * equality: the elements in a range [f, l) are duplicates if,
- * for every iterator \p i in the range, either i == f or else
- * *i == *(i-1). In the second, the test is an arbitrary
- * \p BinaryPredicate \p binary_pred: the elements in [f, l) are
- * duplicates if, for every iterator \p i in the range, either i == f
- * or else binary_pred(*i, *(i-1)) is \p true.
- *
- * This version of \p unique_copy uses \c operator== to test for equality.
- *
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \param result The beginning of the output range.
- * \return The end of the unique range [result, result_end).
- *
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is a model of Equality Comparable.
- * \tparam OutputIterator is a model of Output Iterator and
- * and \p InputIterator's \c value_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The range [first,last) and the range [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_copy to
- * compact a sequence of numbers to remove consecutive duplicates.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int B[N];
- * int *result_end = thrust::unique_copy(A, A + N, B);
- * // The first four values of B are now {1, 3, 2, 1} and (result_end - B) is 4
- * // Values beyond result_end are unspecified
- * \endcode
- *
- * \see unique
- * \see http://www.sgi.com/tech/stl/unique_copy.html
- */
-template<typename InputIterator, typename OutputIterator>
-OutputIterator unique_copy(InputIterator first,
-                           InputIterator last,
-                           OutputIterator result);
-
-
-/*! \p unique_copy copies elements from the range [first, last)
- * to a range beginning with \p result, except that in a consecutive group
- * of duplicate elements only the first one is copied. The return value
- * is the end of the range to which the elements are copied.
- *
- * This version of \p unique_copy uses the function object \c binary_pred
- * to test for equality.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \param result The beginning of the output range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return The end of the unique range [result, result_end).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is a model of Equality Comparable.
- * \tparam OutputIterator is a model of Output Iterator and
- * and \p InputIterator's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * \pre The range [first,last) and the range [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_copy to
- * compact a sequence of numbers to remove consecutive duplicates using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int B[N];
- * int *result_end = thrust::unique_copy(thrust::host, A, A + N, B, thrust::equal_to<int>());
- * // The first four values of B are now {1, 3, 2, 1} and (result_end - B) is 4
- * // Values beyond result_end are unspecified.
- * \endcode
- *
- * \see unique
- * \see http://www.sgi.com/tech/stl/unique_copy.html
- */
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator, typename BinaryPredicate>
-__host__ __device__
-OutputIterator unique_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           InputIterator first,
-                           InputIterator last,
-                           OutputIterator result,
-                           BinaryPredicate binary_pred);
-
-
-/*! \p unique_copy copies elements from the range [first, last)
- * to a range beginning with \p result, except that in a consecutive group
- * of duplicate elements only the first one is copied. The return value
- * is the end of the range to which the elements are copied.
- *
- * This version of \p unique_copy uses the function object \c binary_pred
- * to test for equality.
- *
- * \param first The beginning of the input range.
- * \param last The end of the input range.
- * \param result The beginning of the output range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return The end of the unique range [result, result_end).
- *
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is a model of Equality Comparable.
- * \tparam OutputIterator is a model of Output Iterator and
- * and \p InputIterator's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * \pre The range [first,last) and the range [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_copy to
- * compact a sequence of numbers to remove consecutive duplicates.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1};
- * int B[N];
- * int *result_end = thrust::unique_copy(A, A + N, B, thrust::equal_to<int>());
- * // The first four values of B are now {1, 3, 2, 1} and (result_end - B) is 4
- * // Values beyond result_end are unspecified.
- * \endcode
- *
- * \see unique
- * \see http://www.sgi.com/tech/stl/unique_copy.html
- */
-template<typename InputIterator, typename OutputIterator, typename BinaryPredicate>
-OutputIterator unique_copy(InputIterator first,
-                           InputIterator last,
-                           OutputIterator result,
-                           BinaryPredicate binary_pred);
-
-
-/*! \p unique_by_key is a generalization of \p unique to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key removes all but the first element of
- * the group. Similarly, the corresponding values in the range
- * [values_first, values_first + (keys_last - keys_first))
- * are also removed.
- *
- * The return value is a \p pair of iterators (new_keys_last,new_values_last)
- * such that no two consecutive elements in the range [keys_first, new_keys_last)
- * are equal.
- *
- * This version of \p unique_by_key uses \c operator== to test for equality and
- * \c project1st to reduce values with equal keys.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first The beginning of the key range.
- * \param keys_last The end of the key range.
- * \param values_first The beginning of the value range.
- * \return A pair of iterators at end of the ranges [key_first, keys_new_last) and [values_first, values_new_last).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator1 is a model of Forward Iterator,
- * and \p ForwardIterator1 is mutable,
- * and \p ForwardIterator's \c value_type is a model of Equality Comparable.
- * \tparam ForwardIterator2 is a model of Forward Iterator,
- * and \p ForwardIterator2 is mutable.
- *
- * \pre The range [keys_first, keys_last) and the range [values_first, values_first + (keys_last - keys_first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_by_key to
- * compact a sequence of key/value pairs to remove consecutive duplicates using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // values
- *
- * thrust::pair<int*,int*> new_end;
- * new_end = thrust::unique_by_key(thrust::host, A, A + N, B);
- *
- * // The first four keys in A are now {1, 3, 2, 1} and new_end.first - A is 4.
- * // The first four values in B are now {9, 8, 5, 3} and new_end.second - B is 4.
- * \endcode
- *
- * \see unique
- * \see unique_by_key_copy
- * \see reduce_by_key
- */
-template<typename DerivedPolicy, typename ForwardIterator1, typename ForwardIterator2>
-__host__ __device__
-  thrust::pair<ForwardIterator1,ForwardIterator2>
-  unique_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                ForwardIterator1 keys_first,
-                ForwardIterator1 keys_last,
-                ForwardIterator2 values_first);
-
-
-/*! \p unique_by_key is a generalization of \p unique to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key removes all but the first element of
- * the group. Similarly, the corresponding values in the range
- * [values_first, values_first + (keys_last - keys_first))
- * are also removed.
- *
- * The return value is a \p pair of iterators (new_keys_last,new_values_last)
- * such that no two consecutive elements in the range [keys_first, new_keys_last)
- * are equal.
- *
- * This version of \p unique_by_key uses \c operator== to test for equality and
- * \c project1st to reduce values with equal keys.
- *
- * \param keys_first The beginning of the key range.
- * \param keys_last The end of the key range.
- * \param values_first The beginning of the value range.
- * \return A pair of iterators at end of the ranges [key_first, keys_new_last) and [values_first, values_new_last).
- *
- * \tparam ForwardIterator1 is a model of Forward Iterator,
- * and \p ForwardIterator1 is mutable,
- * and \p ForwardIterator's \c value_type is a model of Equality Comparable.
- * \tparam ForwardIterator2 is a model of Forward Iterator,
- * and \p ForwardIterator2 is mutable.
- *
- * \pre The range [keys_first, keys_last) and the range [values_first, values_first + (keys_last - keys_first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_by_key to
- * compact a sequence of key/value pairs to remove consecutive duplicates.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // values
- *
- * thrust::pair<int*,int*> new_end;
- * new_end = thrust::unique_by_key(A, A + N, B);
- *
- * // The first four keys in A are now {1, 3, 2, 1} and new_end.first - A is 4.
- * // The first four values in B are now {9, 8, 5, 3} and new_end.second - B is 4.
- * \endcode
- *
- * \see unique
- * \see unique_by_key_copy
- * \see reduce_by_key
- */
-template<typename ForwardIterator1, typename ForwardIterator2>
-  thrust::pair<ForwardIterator1,ForwardIterator2>
-  unique_by_key(ForwardIterator1 keys_first,
-                ForwardIterator1 keys_last,
-                ForwardIterator2 values_first);
-
-
-/*! \p unique_by_key is a generalization of \p unique to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key removes all but the first element of
- * the group. Similarly, the corresponding values in the range
- * [values_first, values_first + (keys_last - keys_first))
- * are also removed.
- *
- * This version of \p unique_by_key uses the function object \c binary_pred
- * to test for equality and \c project1st to reduce values with equal keys.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first The beginning of the key range.
- * \param keys_last The end of the key range.
- * \param values_first The beginning of the value range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return The end of the unique range [first, new_last).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator1 is a model of Forward Iterator,
- * and \p ForwardIterator1 is mutable,
- * and \p ForwardIterator's \c value_type is a model of Equality Comparable.
- * \tparam ForwardIterator2 is a model of Forward Iterator,
- * and \p ForwardIterator2 is mutable.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * \pre The range [keys_first, keys_last) and the range [values_first, values_first + (keys_last - keys_first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_by_key to
- * compact a sequence of key/value pairs to remove consecutive duplicates using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // values
- *
- * thrust::pair<int*,int*> new_end;
- * thrust::equal_to<int> binary_pred;
- * new_end = thrust::unique_by_key(thrust::host, A, A + N, B, binary_pred);
- *
- * // The first four keys in A are now {1, 3, 2, 1} and new_end.first - A is 4.
- * // The first four values in B are now {9, 8, 5, 3} and new_end.second - B is 4.
- * \endcode
- *
- * \see unique
- * \see unique_by_key_copy
- * \see reduce_by_key
- */
-template<typename DerivedPolicy, typename ForwardIterator1, typename ForwardIterator2, typename BinaryPredicate>
-__host__ __device__
-  thrust::pair<ForwardIterator1,ForwardIterator2>
-  unique_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                ForwardIterator1 keys_first,
-                ForwardIterator1 keys_last,
-                ForwardIterator2 values_first,
-                BinaryPredicate binary_pred);
-
-
-/*! \p unique_by_key is a generalization of \p unique to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key removes all but the first element of
- * the group. Similarly, the corresponding values in the range
- * [values_first, values_first + (keys_last - keys_first))
- * are also removed.
- *
- * This version of \p unique_by_key uses the function object \c binary_pred
- * to test for equality and \c project1st to reduce values with equal keys.
- *
- * \param keys_first The beginning of the key range.
- * \param keys_last The end of the key range.
- * \param values_first The beginning of the value range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return The end of the unique range [first, new_last).
- *
- * \tparam ForwardIterator1 is a model of Forward Iterator,
- * and \p ForwardIterator1 is mutable,
- * and \p ForwardIterator's \c value_type is a model of Equality Comparable.
- * \tparam ForwardIterator2 is a model of Forward Iterator,
- * and \p ForwardIterator2 is mutable.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * \pre The range [keys_first, keys_last) and the range [values_first, values_first + (keys_last - keys_first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p unique_by_key to
- * compact a sequence of key/value pairs to remove consecutive duplicates.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // values
- *
- * thrust::pair<int*,int*> new_end;
- * thrust::equal_to<int> binary_pred;
- * new_end = thrust::unique_by_key(A, A + N, B, binary_pred);
- *
- * // The first four keys in A are now {1, 3, 2, 1} and new_end.first - A is 4.
- * // The first four values in B are now {9, 8, 5, 3} and new_end.second - B is 4.
- * \endcode
- *
- * \see unique
- * \see unique_by_key_copy
- * \see reduce_by_key
- */
-template<typename ForwardIterator1, typename ForwardIterator2, typename BinaryPredicate>
-  thrust::pair<ForwardIterator1,ForwardIterator2>
-  unique_by_key(ForwardIterator1 keys_first,
-                ForwardIterator1 keys_last,
-                ForwardIterator2 values_first,
-                BinaryPredicate binary_pred);
-
-
-/*! \p unique_by_key_copy is a generalization of \p unique_copy to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key_copy copies the first element of the group to
- * a range beginning with \c keys_result and the corresponding values from the range
- * [values_first, values_first + (keys_last - keys_first)) are copied to a range
- * beginning with \c values_result.
- *
- * This version of \p unique_by_key_copy uses \c operator== to test for equality and
- * \c project1st to reduce values with equal keys.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first The beginning of the input key range.
- * \param keys_last The end of the input key range.
- * \param values_first The beginning of the input value range.
- * \param keys_result The beginning of the output key range.
- * \param values_result The beginning of the output value range.
- * \return A pair of iterators at end of the ranges [keys_result, keys_result_last) and [values_result, values_result_last).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \tparam InputIterator2 is a model of Input Iterator,
- * \tparam OutputIterator1 is a model of Output Iterator and
- * and \p InputIterator1's \c value_type is convertible to \c OutputIterator1's \c value_type.
- * \tparam OutputIterator2 is a model of Output Iterator and
- * and \p InputIterator2's \c value_type is convertible to \c OutputIterator2's \c value_type.
- *
- * \pre The input ranges shall not overlap either output range.
- *
- * The following code snippet demonstrates how to use \p unique_by_key_copy to
- * compact a sequence of key/value pairs and with equal keys using the \p thrust::host execution policy
- * for parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // input keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // input values
- * int C[N]; // output keys
- * int D[N]; // output values
- *
- * thrust::pair<int*,int*> new_end;
- * new_end = thrust::unique_by_key_copy(thrust::host, A, A + N, B, C, D);
- *
- * // The first four keys in C are now {1, 3, 2, 1} and new_end.first - C is 4.
- * // The first four values in D are now {9, 8, 5, 3} and new_end.second - D is 4.
- * \endcode
- *
- * \see unique_copy
- * \see unique_by_key
- * \see reduce_by_key
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-  unique_by_key_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                     InputIterator1 keys_first,
-                     InputIterator1 keys_last,
-                     InputIterator2 values_first,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result);
-
-
-/*! \p unique_by_key_copy is a generalization of \p unique_copy to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key_copy copies the first element of the group to
- * a range beginning with \c keys_result and the corresponding values from the range
- * [values_first, values_first + (keys_last - keys_first)) are copied to a range
- * beginning with \c values_result.
- *
- * This version of \p unique_by_key_copy uses \c operator== to test for equality and
- * \c project1st to reduce values with equal keys.
- *
- * \param keys_first The beginning of the input key range.
- * \param keys_last The end of the input key range.
- * \param values_first The beginning of the input value range.
- * \param keys_result The beginning of the output key range.
- * \param values_result The beginning of the output value range.
- * \return A pair of iterators at end of the ranges [keys_result, keys_result_last) and [values_result, values_result_last).
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \tparam InputIterator2 is a model of Input Iterator,
- * \tparam OutputIterator1 is a model of Output Iterator and
- * and \p InputIterator1's \c value_type is convertible to \c OutputIterator1's \c value_type.
- * \tparam OutputIterator2 is a model of Output Iterator and
- * and \p InputIterator2's \c value_type is convertible to \c OutputIterator2's \c value_type.
- *
- * \pre The input ranges shall not overlap either output range.
- *
- * The following code snippet demonstrates how to use \p unique_by_key_copy to
- * compact a sequence of key/value pairs and with equal keys.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // input keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // input values
- * int C[N]; // output keys
- * int D[N]; // output values
- *
- * thrust::pair<int*,int*> new_end;
- * new_end = thrust::unique_by_key_copy(A, A + N, B, C, D);
- *
- * // The first four keys in C are now {1, 3, 2, 1} and new_end.first - C is 4.
- * // The first four values in D are now {9, 8, 5, 3} and new_end.second - D is 4.
- * \endcode
- *
- * \see unique_copy
- * \see unique_by_key
- * \see reduce_by_key
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2>
-  thrust::pair<OutputIterator1,OutputIterator2>
-  unique_by_key_copy(InputIterator1 keys_first,
-                     InputIterator1 keys_last,
-                     InputIterator2 values_first,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result);
-
-
-/*! \p unique_by_key_copy is a generalization of \p unique_copy to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key_copy copies the first element of the group to
- * a range beginning with \c keys_result and the corresponding values from the range
- * [values_first, values_first + (keys_last - keys_first)) are copied to a range
- * beginning with \c values_result.
- *
- * This version of \p unique_by_key_copy uses the function object \c binary_pred
- * to test for equality and \c project1st to reduce values with equal keys.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first The beginning of the input key range.
- * \param keys_last The end of the input key range.
- * \param values_first The beginning of the input value range.
- * \param keys_result The beginning of the output key range.
- * \param values_result The beginning of the output value range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return A pair of iterators at end of the ranges [keys_result, keys_result_last) and [values_result, values_result_last).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \tparam InputIterator2 is a model of Input Iterator,
- * \tparam OutputIterator1 is a model of Output Iterator and
- * and \p InputIterator1's \c value_type is convertible to \c OutputIterator1's \c value_type.
- * \tparam OutputIterator2 is a model of Output Iterator and
- * and \p InputIterator2's \c value_type is convertible to \c OutputIterator2's \c value_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * \pre The input ranges shall not overlap either output range.
- *
- * The following code snippet demonstrates how to use \p unique_by_key_copy to
- * compact a sequence of key/value pairs and with equal keys using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/unique.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // input keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // input values
- * int C[N]; // output keys
- * int D[N]; // output values
- *
- * thrust::pair<int*,int*> new_end;
- * thrust::equal_to<int> binary_pred;
- * new_end = thrust::unique_by_key_copy(thrust::host, A, A + N, B, C, D, binary_pred);
- *
- * // The first four keys in C are now {1, 3, 2, 1} and new_end.first - C is 4.
- * // The first four values in D are now {9, 8, 5, 3} and new_end.second - D is 4.
- * \endcode
- *
- * \see unique_copy
- * \see unique_by_key
- * \see reduce_by_key
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename BinaryPredicate>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-  unique_by_key_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                     InputIterator1 keys_first,
-                     InputIterator1 keys_last,
-                     InputIterator2 values_first,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result,
-                     BinaryPredicate binary_pred);
-
-
-/*! \p unique_by_key_copy is a generalization of \p unique_copy to key-value pairs.
- * For each group of consecutive keys in the range [keys_first, keys_last)
- * that are equal, \p unique_by_key_copy copies the first element of the group to
- * a range beginning with \c keys_result and the corresponding values from the range
- * [values_first, values_first + (keys_last - keys_first)) are copied to a range
- * beginning with \c values_result.
- *
- * This version of \p unique_by_key_copy uses the function object \c binary_pred
- * to test for equality and \c project1st to reduce values with equal keys.
- *
- * \param keys_first The beginning of the input key range.
- * \param keys_last The end of the input key range.
- * \param values_first The beginning of the input value range.
- * \param keys_result The beginning of the output key range.
- * \param values_result The beginning of the output value range.
- * \param binary_pred The binary predicate used to determine equality.
- * \return A pair of iterators at end of the ranges [keys_result, keys_result_last) and [values_result, values_result_last).
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \tparam InputIterator2 is a model of Input Iterator,
- * \tparam OutputIterator1 is a model of Output Iterator and
- * and \p InputIterator1's \c value_type is convertible to \c OutputIterator1's \c value_type.
- * \tparam OutputIterator2 is a model of Output Iterator and
- * and \p InputIterator2's \c value_type is convertible to \c OutputIterator2's \c value_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * \pre The input ranges shall not overlap either output range.
- *
- * The following code snippet demonstrates how to use \p unique_by_key_copy to
- * compact a sequence of key/value pairs and with equal keys.
- *
- * \code
- * #include <thrust/unique.h>
- * ...
- * const int N = 7;
- * int A[N] = {1, 3, 3, 3, 2, 2, 1}; // input keys
- * int B[N] = {9, 8, 7, 6, 5, 4, 3}; // input values
- * int C[N]; // output keys
- * int D[N]; // output values
- *
- * thrust::pair<int*,int*> new_end;
- * thrust::equal_to<int> binary_pred;
- * new_end = thrust::unique_by_key_copy(A, A + N, B, C, D, binary_pred);
- *
- * // The first four keys in C are now {1, 3, 2, 1} and new_end.first - C is 4.
- * // The first four values in D are now {9, 8, 5, 3} and new_end.second - D is 4.
- * \endcode
- *
- * \see unique_copy
- * \see unique_by_key
- * \see reduce_by_key
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename BinaryPredicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
-  unique_by_key_copy(InputIterator1 keys_first,
-                     InputIterator1 keys_last,
-                     InputIterator2 values_first,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result,
-                     BinaryPredicate binary_pred);
-
-
-/*! \} // end stream_compaction
- */
-
-
-} // end namespace thrust
-
-#include <thrust/detail/unique.inl>
-
diff --git a/spaces/Cat125/text-generator-v2/utils.py b/spaces/Cat125/text-generator-v2/utils.py
deleted file mode 100644
index b71c6d736a53d0d3bed081b87931edc1e485dadb..0000000000000000000000000000000000000000
--- a/spaces/Cat125/text-generator-v2/utils.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from termcolor import colored
-
-def log(text):
- '''The function logs a given text to a file named 'runtime.log'.
-
- Parameters
- ----------
- text
- The text that will be written to the log file.
-
- '''
-    # use a context manager so the log file handle is closed after each write
-    with open('runtime.log', 'a+') as log_file:
-        print(text, file=log_file)
-
-# Print iterations progress
-
-
-def progressbar(iteration, total, prefix='', suffix='', decimals=1, length=100, fill=colored('█', 'green'), print_end="\r"):
- """
- Call in a loop to create terminal progress bar
- @params:
- iteration - Required : current iteration (Int)
- total - Required : total iterations (Int)
- prefix - Optional : prefix string (Str)
- suffix - Optional : suffix string (Str)
- decimals - Optional : positive number of decimals in percent complete (Int)
- length - Optional : character length of bar (Int)
- fill - Optional : bar fill character (Str)
-        print_end - Optional : end character (e.g. "\r", "\r\n") (Str)
- """
- percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
- filled_length = int(length * iteration // total)
- bar = fill * filled_length + colored('-', 'red') * (length - filled_length)
- print(f'\r{prefix} [{bar}] {percent}% ({iteration}/{total}) {suffix}', end = print_end)
- # Print New Line on Complete
- if iteration == total:
- print()
\ No newline at end of file
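A quick usage sketch for the progress-bar helper deleted above, since its docstring only says to call it in a loop. This is a minimal example, assuming the module is importable as `utils` (as it was inside this Space); the 25-item workload is made up for illustration.

import time

from utils import progressbar  # import path assumed from the deleted Space layout

total = 25
for i in range(1, total + 1):
    time.sleep(0.05)  # stand-in for real work
    progressbar(i, total, prefix="Processing", suffix="done", length=40)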
diff --git a/spaces/Celestinian/Topic-Detection/app.py b/spaces/Celestinian/Topic-Detection/app.py
deleted file mode 100644
index 9a0d67fe57bd941c185e1f9e39bc06f579fe8c24..0000000000000000000000000000000000000000
--- a/spaces/Celestinian/Topic-Detection/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from transformers import AutoTokenizer, AutoModelForCausalLM
-import gradio as gr
-import torch
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-tokenizer = AutoTokenizer.from_pretrained("Celestinian/TopicGPT")
-model = AutoModelForCausalLM.from_pretrained("Celestinian/TopicGPT")
-
-def generate_text(prompt, temperature, max_size):
- input_ids = tokenizer.encode("#CONTEXT# " + prompt + " #TOPIC#", return_tensors='pt')
- input_ids = input_ids.to(device)
- model.eval()
- model.to(device)
-
- output_tokens = []
- eos_token_id = tokenizer.encode('#')[0]
-
- for _ in range(max_size):
- with torch.no_grad():
- outputs = model(input_ids)
- logits = outputs.logits[:, -1, :] / temperature
- next_token = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
- if next_token.item() == eos_token_id:
- break
- input_ids = torch.cat((input_ids, next_token), dim=-1)
- output_tokens.append(next_token.item())
-
- output = tokenizer.decode(output_tokens)
-    clean_output = output.replace('\n', ' ').strip()  # collapse newlines/whitespace in the decoded topic
- print(prompt + clean_output)
- return clean_output
-
-input_text = gr.inputs.Textbox(lines=5, label="Input Text")
-temperature_input = gr.inputs.Slider(minimum=0.01, maximum=2, step=0.01, default=0.01, label="Temperature")
-max_size_input = gr.inputs.Slider(minimum=1, maximum=250, step=1, default=30, label="Max Size")
-output_text = gr.outputs.Textbox(label="Generated Text")
-
-gr.Interface(generate_text, inputs=[input_text, temperature_input, max_size_input], outputs=output_text).launch()
\ No newline at end of file
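The decoding loop in the deleted app above is plain temperature-scaled multinomial sampling with a '#' token acting as an end marker. A self-contained sketch of just that sampling step (the toy logits below are illustrative, not taken from the model):

import torch

def sample_next_token(logits: torch.Tensor, temperature: float) -> int:
    # Lower temperature sharpens the distribution, higher temperature flattens it.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

toy_logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])  # pretend 5-token vocabulary
print(sample_next_token(toy_logits, temperature=0.7))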
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/speech/base.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/speech/base.py
deleted file mode 100644
index d74fa51be75b5078134c510b393a06deb0267b2a..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/speech/base.py
+++ /dev/null
@@ -1,50 +0,0 @@
-"""Base class for all voice classes."""
-import abc
-from threading import Lock
-
-from autogpt.config import AbstractSingleton
-
-
-class VoiceBase(AbstractSingleton):
- """
- Base class for all voice classes.
- """
-
- def __init__(self):
- """
- Initialize the voice class.
- """
- self._url = None
- self._headers = None
- self._api_key = None
- self._voices = []
- self._mutex = Lock()
- self._setup()
-
- def say(self, text: str, voice_index: int = 0) -> bool:
- """
- Say the given text.
-
- Args:
- text (str): The text to say.
- voice_index (int): The index of the voice to use.
- """
- with self._mutex:
- return self._speech(text, voice_index)
-
- @abc.abstractmethod
- def _setup(self) -> None:
- """
-        Set up the voices, API key, etc.
- """
- pass
-
- @abc.abstractmethod
- def _speech(self, text: str, voice_index: int = 0) -> bool:
- """
- Play the given text.
-
- Args:
- text (str): The text to play.
- """
- pass
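To show how the abstract interface deleted above is meant to be filled in, here is a minimal sketch. It swaps AutoGPT's AbstractSingleton for plain abc so the snippet is self-contained, and PrintVoice is an invented toy backend, not part of AutoGPT.

import abc
from threading import Lock


class VoiceSketch(abc.ABC):
    """Stand-in for VoiceBase without the AutoGPT singleton machinery."""

    def __init__(self):
        self._mutex = Lock()
        self._setup()

    def say(self, text: str, voice_index: int = 0) -> bool:
        with self._mutex:  # serialize playback, as in the original class
            return self._speech(text, voice_index)

    @abc.abstractmethod
    def _setup(self) -> None: ...

    @abc.abstractmethod
    def _speech(self, text: str, voice_index: int = 0) -> bool: ...


class PrintVoice(VoiceSketch):
    """Toy backend that prints instead of synthesizing audio."""

    def _setup(self) -> None:
        pass

    def _speech(self, text: str, voice_index: int = 0) -> bool:
        print(f"[voice {voice_index}] {text}")
        return True


print(PrintVoice().say("hello"))  # prints the text, then True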
diff --git a/spaces/ChongCJ/fish/README.md b/spaces/ChongCJ/fish/README.md
deleted file mode 100644
index 13495f81ed3c597071872098d12467459e55f2b4..0000000000000000000000000000000000000000
--- a/spaces/ChongCJ/fish/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Fish
-emoji: 👁
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 sequence (fill unvoiced frames).
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
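A short usage sketch for the predictor deleted above, assuming praat-parselmouth is installed and the snippet runs from the Space root so the import path matches; the one-second noise buffer is only a stand-in for real speech.

import numpy as np

from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor

predictor = PMF0Predictor(hop_length=512, sampling_rate=44100)
wav = np.random.uniform(-1.0, 1.0, 44100).astype(np.float32)  # 1 s stand-in signal
f0, uv = predictor.compute_f0_uv(wav)  # one F0 value and one voiced/unvoiced flag per hop
print(f0.shape, uv.shape)  # (86,) (86,)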
diff --git a/spaces/CuriousDolphin/MobileSAM/app.py b/spaces/CuriousDolphin/MobileSAM/app.py
deleted file mode 100644
index abc8bbcbcf62db8ecdbd65d182ecb0860ea1e8e6..0000000000000000000000000000000000000000
--- a/spaces/CuriousDolphin/MobileSAM/app.py
+++ /dev/null
@@ -1,319 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-import os
-from mobile_sam import SamAutomaticMaskGenerator, SamPredictor, sam_model_registry
-from PIL import ImageDraw
-from utils.tools import box_prompt, format_results, point_prompt
-from utils.tools_gradio import fast_process
-
-# Most of our demo code is from the [FastSAM Demo](https://huggingface.co/spaces/An-619/FastSAM). Huge thanks to An-619.
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-# Load the pre-trained model
-sam_checkpoint = "./mobile_sam.pt"
-model_type = "vit_t"
-
-mobile_sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
-mobile_sam = mobile_sam.to(device=device)
-mobile_sam.eval()
-
-mask_generator = SamAutomaticMaskGenerator(mobile_sam)
-predictor = SamPredictor(mobile_sam)
-
-# Description
-title = "Faster Segment Anything (MobileSAM)"
-
-description_e = """This is a demo of [Faster Segment Anything(MobileSAM) Model](https://github.com/ChaoningZhang/MobileSAM).
-
- We will provide box mode soon.
-
- Enjoy!
-
- """
-
-description_p = """ # Instructions for point mode
-
- 0. Restart by clicking the Restart button
- 1. Select a point with Add Mask for the foreground (required)
- 2. Select a point with Remove Area for the background (optional)
- 3. Click the Start Segmenting button
-
- """
-
-examples = [
- ["assets/picture3.jpg"],
- ["assets/picture4.jpg"],
- ["assets/picture5.jpg"],
- ["assets/picture6.jpg"],
- ["assets/picture1.jpg"],
- ["assets/picture2.jpg"],
-]
-
-default_example = examples[0]
-
-css = "h1 { text-align: center } .about { text-align: justify; padding-left: 10%; padding-right: 10%; }"
-
-
-@torch.no_grad()
-def segment_everything(
- image,
- input_size=1024,
- better_quality=False,
- withContours=True,
- use_retina=True,
- mask_random_color=True,
-):
- global mask_generator
-
- input_size = int(input_size)
- w, h = image.size
- scale = input_size / max(w, h)
- new_w = int(w * scale)
- new_h = int(h * scale)
- image = image.resize((new_w, new_h))
-
- nd_image = np.array(image)
- annotations = mask_generator.generate(nd_image)
-
- fig = fast_process(
- annotations=annotations,
- image=image,
- device=device,
- scale=(1024 // input_size),
- better_quality=better_quality,
- mask_random_color=mask_random_color,
- bbox=None,
- use_retina=use_retina,
- withContours=withContours,
- )
- return fig
-
-
-def segment_with_points(
- image,
- input_size=1024,
- better_quality=False,
- withContours=True,
- use_retina=True,
- mask_random_color=True,
-):
- global global_points
- global global_point_label
-
- input_size = int(input_size)
- w, h = image.size
- scale = input_size / max(w, h)
- new_w = int(w * scale)
- new_h = int(h * scale)
- image = image.resize((new_w, new_h))
-
- scaled_points = np.array([[int(x * scale) for x in point] for point in global_points])
- scaled_point_label = np.array(global_point_label)
-
- nd_image = np.array(image)
- predictor.set_image(nd_image)
- masks, scores, logits = predictor.predict(
- point_coords=scaled_points,
- point_labels=scaled_point_label,
- multimask_output=True,
- )
-
- results = format_results(masks, scores, logits, 0)
-
- annotations, _ = point_prompt(
- results, scaled_points, scaled_point_label, new_h, new_w
- )
- annotations = np.array([annotations])
-
- fig = fast_process(
- annotations=annotations,
- image=image,
- device=device,
- scale=(1024 // input_size),
- better_quality=better_quality,
- mask_random_color=mask_random_color,
- bbox=None,
- use_retina=use_retina,
- withContours=withContours,
- )
-
- global_points = []
- global_point_label = []
- # return fig, None
- return fig, image
-
-
-def get_points_with_draw(image, label, evt: gr.SelectData):
- global global_points
- global global_point_label
-
- x, y = evt.index[0], evt.index[1]
- point_radius, point_color = 15, (255, 255, 0) if label == "Add Mask" else (
- 255,
- 0,
- 255,
- )
- global_points.append([x, y])
- global_point_label.append(1 if label == "Add Mask" else 0)
-
- print(x, y, label == "Add Mask")
-
-    # create a drawing object for the image
- draw = ImageDraw.Draw(image)
- draw.ellipse(
- [(x - point_radius, y - point_radius), (x + point_radius, y + point_radius)],
- fill=point_color,
- )
- return image
-
-
-cond_img_e = gr.Image(label="Input", value=default_example[0], type="pil")
-cond_img_p = gr.Image(label="Input with points", value=default_example[0], type="pil")
-
-segm_img_e = gr.Image(label="Segmented Image", interactive=False, type="pil")
-segm_img_p = gr.Image(
- label="Segmented Image with points", interactive=False, type="pil"
-)
-
-global_points = []
-global_point_label = []
-
-input_size_slider = gr.components.Slider(
- minimum=512,
- maximum=1024,
- value=1024,
- step=64,
- label="Input_size",
- info="Our model was trained on a size of 1024",
-)
-
-with gr.Blocks(css=css, title="Faster Segment Anything(MobileSAM)") as demo:
- with gr.Row():
- with gr.Column(scale=1):
- # Title
- gr.Markdown(title)
-
- # with gr.Tab("Everything mode"):
- # # Images
- # with gr.Row(variant="panel"):
- # with gr.Column(scale=1):
- # cond_img_e.render()
- #
- # with gr.Column(scale=1):
- # segm_img_e.render()
- #
- # # Submit & Clear
- # with gr.Row():
- # with gr.Column():
- # input_size_slider.render()
- #
- # with gr.Row():
- # contour_check = gr.Checkbox(
- # value=True,
- # label="withContours",
- # info="draw the edges of the masks",
- # )
- #
- # with gr.Column():
- # segment_btn_e = gr.Button(
- # "Segment Everything", variant="primary"
- # )
- # clear_btn_e = gr.Button("Clear", variant="secondary")
- #
- # gr.Markdown("Try some of the examples below ⬇️")
- # gr.Examples(
- # examples=examples,
- # inputs=[cond_img_e],
- # outputs=segm_img_e,
- # fn=segment_everything,
- # cache_examples=True,
- # examples_per_page=4,
- # )
- #
- # with gr.Column():
- # with gr.Accordion("Advanced options", open=False):
- # # text_box = gr.Textbox(label="text prompt")
- # with gr.Row():
- # mor_check = gr.Checkbox(
- # value=False,
- # label="better_visual_quality",
- # info="better quality using morphologyEx",
- # )
- # with gr.Column():
- # retina_check = gr.Checkbox(
- # value=True,
- # label="use_retina",
- # info="draw high-resolution segmentation masks",
- # )
- # # Description
- # gr.Markdown(description_e)
- #
- with gr.Tab("Point mode"):
- # Images
- with gr.Row(variant="panel"):
- with gr.Column(scale=1):
- cond_img_p.render()
-
- with gr.Column(scale=1):
- segm_img_p.render()
-
- # Submit & Clear
- with gr.Row():
- with gr.Column():
- with gr.Row():
- add_or_remove = gr.Radio(
- ["Add Mask", "Remove Area"],
- value="Add Mask",
- )
-
- with gr.Column():
- segment_btn_p = gr.Button(
- "Start segmenting!", variant="primary"
- )
- clear_btn_p = gr.Button("Restart", variant="secondary")
-
- gr.Markdown("Try some of the examples below ⬇️")
- gr.Examples(
- examples=examples,
- inputs=[cond_img_p],
- # outputs=segm_img_p,
- # fn=segment_with_points,
- # cache_examples=True,
- examples_per_page=4,
- )
-
- with gr.Column():
- # Description
- gr.Markdown(description_p)
-
- cond_img_p.select(get_points_with_draw, [cond_img_p, add_or_remove], cond_img_p)
-
- # segment_btn_e.click(
- # segment_everything,
- # inputs=[
- # cond_img_e,
- # input_size_slider,
- # mor_check,
- # contour_check,
- # retina_check,
- # ],
- # outputs=segm_img_e,
- # )
-
- segment_btn_p.click(
- segment_with_points, inputs=[cond_img_p], outputs=[segm_img_p, cond_img_p]
- )
-
- def clear():
- return None, None
-
- def clear_text():
- return None, None, None
-
- # clear_btn_e.click(clear, outputs=[cond_img_e, segm_img_e])
- clear_btn_p.click(clear, outputs=[cond_img_p, segm_img_p])
-
-demo.queue()
-demo.launch()
diff --git a/spaces/Cybsechuman/Consistency_analysis/README.md b/spaces/Cybsechuman/Consistency_analysis/README.md
deleted file mode 100644
index e71df6f03de85112c83e913b0f1ed39da1be79da..0000000000000000000000000000000000000000
--- a/spaces/Cybsechuman/Consistency_analysis/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Consistency Analysis
-emoji: 🏢
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/keypoint.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/keypoint.py
deleted file mode 100644
index a6881f72f4f757855105638f2f7a9fca81760bb7..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/keypoint.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import torch
-
-
-# transpose
-FLIP_LEFT_RIGHT = 0
-FLIP_TOP_BOTTOM = 1
-
-class Keypoints(object):
- def __init__(self, keypoints, size, mode=None):
- # FIXME remove check once we have better integration with device
- # in my version this would consistently return a CPU tensor
- device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device('cpu')
- keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device)
- num_keypoints = keypoints.shape[0]
- if num_keypoints:
- keypoints = keypoints.view(num_keypoints, -1, 3)
-
- # TODO should I split them?
- # self.visibility = keypoints[..., 2]
- self.keypoints = keypoints# [..., :2]
-
- self.size = size
- self.mode = mode
- self.extra_fields = {}
-
- def crop(self, box):
- raise NotImplementedError()
-
- def resize(self, size, *args, **kwargs):
- ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(size, self.size))
- ratio_w, ratio_h = ratios
- resized_data = self.keypoints.clone()
- resized_data[..., 0] *= ratio_w
- resized_data[..., 1] *= ratio_h
- keypoints = type(self)(resized_data, size, self.mode)
- for k, v in self.extra_fields.items():
- keypoints.add_field(k, v)
- return keypoints
-
- def transpose(self, method):
- if method not in (FLIP_LEFT_RIGHT,):
- raise NotImplementedError(
- "Only FLIP_LEFT_RIGHT implemented")
-
- flip_inds = type(self).FLIP_INDS
- flipped_data = self.keypoints[:, flip_inds]
- width = self.size[0]
- TO_REMOVE = 1
- # Flip x coordinates
- flipped_data[..., 0] = width - flipped_data[..., 0] - TO_REMOVE
-
- # Maintain COCO convention that if visibility == 0, then x, y = 0
- inds = flipped_data[..., 2] == 0
- flipped_data[inds] = 0
-
- keypoints = type(self)(flipped_data, self.size, self.mode)
- for k, v in self.extra_fields.items():
- keypoints.add_field(k, v)
- return keypoints
-
- def to(self, *args, **kwargs):
- keypoints = type(self)(self.keypoints.to(*args, **kwargs), self.size, self.mode)
- for k, v in self.extra_fields.items():
- if hasattr(v, "to"):
- v = v.to(*args, **kwargs)
- keypoints.add_field(k, v)
- return keypoints
-
- def __getitem__(self, item):
- keypoints = type(self)(self.keypoints[item], self.size, self.mode)
- for k, v in self.extra_fields.items():
- keypoints.add_field(k, v[item])
- return keypoints
-
- def add_field(self, field, field_data):
- self.extra_fields[field] = field_data
-
- def get_field(self, field):
- return self.extra_fields[field]
-
- def __repr__(self):
- s = self.__class__.__name__ + '('
- s += 'num_instances={}, '.format(len(self.keypoints))
- s += 'image_width={}, '.format(self.size[0])
- s += 'image_height={})'.format(self.size[1])
- return s
-
-
-def _create_flip_indices(names, flip_map):
- full_flip_map = flip_map.copy()
- full_flip_map.update({v: k for k, v in flip_map.items()})
- flipped_names = [i if i not in full_flip_map else full_flip_map[i] for i in names]
- flip_indices = [names.index(i) for i in flipped_names]
- return torch.tensor(flip_indices)
-
-
-class PersonKeypoints(Keypoints):
- NAMES = [
- 'nose',
- 'left_eye',
- 'right_eye',
- 'left_ear',
- 'right_ear',
- 'left_shoulder',
- 'right_shoulder',
- 'left_elbow',
- 'right_elbow',
- 'left_wrist',
- 'right_wrist',
- 'left_hip',
- 'right_hip',
- 'left_knee',
- 'right_knee',
- 'left_ankle',
- 'right_ankle'
- ]
- FLIP_MAP = {
- 'left_eye': 'right_eye',
- 'left_ear': 'right_ear',
- 'left_shoulder': 'right_shoulder',
- 'left_elbow': 'right_elbow',
- 'left_wrist': 'right_wrist',
- 'left_hip': 'right_hip',
- 'left_knee': 'right_knee',
- 'left_ankle': 'right_ankle'
- }
-
-
-# TODO this doesn't look great
-PersonKeypoints.FLIP_INDS = _create_flip_indices(PersonKeypoints.NAMES, PersonKeypoints.FLIP_MAP)
-def kp_connections(keypoints):
- kp_lines = [
- [keypoints.index('left_eye'), keypoints.index('right_eye')],
- [keypoints.index('left_eye'), keypoints.index('nose')],
- [keypoints.index('right_eye'), keypoints.index('nose')],
- [keypoints.index('right_eye'), keypoints.index('right_ear')],
- [keypoints.index('left_eye'), keypoints.index('left_ear')],
- [keypoints.index('right_shoulder'), keypoints.index('right_elbow')],
- [keypoints.index('right_elbow'), keypoints.index('right_wrist')],
- [keypoints.index('left_shoulder'), keypoints.index('left_elbow')],
- [keypoints.index('left_elbow'), keypoints.index('left_wrist')],
- [keypoints.index('right_hip'), keypoints.index('right_knee')],
- [keypoints.index('right_knee'), keypoints.index('right_ankle')],
- [keypoints.index('left_hip'), keypoints.index('left_knee')],
- [keypoints.index('left_knee'), keypoints.index('left_ankle')],
- [keypoints.index('right_shoulder'), keypoints.index('left_shoulder')],
- [keypoints.index('right_hip'), keypoints.index('left_hip')],
- ]
- return kp_lines
-PersonKeypoints.CONNECTIONS = kp_connections(PersonKeypoints.NAMES)
-
-
-# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop)
-def keypoints_to_heat_map(keypoints, rois, heatmap_size):
- if rois.numel() == 0:
- return rois.new().long(), rois.new().long()
- offset_x = rois[:, 0]
- offset_y = rois[:, 1]
- scale_x = heatmap_size / (rois[:, 2] - rois[:, 0])
- scale_y = heatmap_size / (rois[:, 3] - rois[:, 1])
-
- offset_x = offset_x[:, None]
- offset_y = offset_y[:, None]
- scale_x = scale_x[:, None]
- scale_y = scale_y[:, None]
-
- x = keypoints[..., 0]
- y = keypoints[..., 1]
-
- x_boundary_inds = x == rois[:, 2][:, None]
- y_boundary_inds = y == rois[:, 3][:, None]
-
- x = (x - offset_x) * scale_x
- x = x.floor().long()
- y = (y - offset_y) * scale_y
- y = y.floor().long()
-
- x[x_boundary_inds] = heatmap_size - 1
- y[y_boundary_inds] = heatmap_size - 1
-
- valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size)
- vis = keypoints[..., 2] > 0
- valid = (valid_loc & vis).long()
-
- lin_ind = y * heatmap_size + x
- heatmaps = lin_ind * valid
-
- return heatmaps, valid
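keypoints_to_heat_map above packs every visible keypoint into a single linear bin index of an ROI-aligned heatmap. A small worked example (import path taken from the deleted file; the box and keypoint values are made up):

import torch

from maskrcnn_benchmark.structures.keypoint import keypoints_to_heat_map

rois = torch.tensor([[0.0, 0.0, 100.0, 100.0]])    # one 100x100 box
kps = torch.tensor([[[50.0, 25.0, 2.0]]])          # one visible keypoint at (50, 25)
heatmaps, valid = keypoints_to_heat_map(kps, rois, heatmap_size=56)
# bin coordinates: x = floor(50 * 56/100) = 28, y = floor(25 * 56/100) = 14
print(heatmaps, valid)  # tensor([[812]]) tensor([[1]]) since 14 * 56 + 28 = 812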
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/__init__.py
deleted file mode 100644
index 123a3fb5f048408f59a80cc0fa80097b652ceebb..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# ruff: noqa
-from .core import *
-from .channels import *
-SCHEMA_VERSION = 'v5.8.0'
-SCHEMA_URL = 'https://vega.github.io/schema/vega-lite/v5.8.0.json'
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/utils.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/utils.py
deleted file mode 100644
index dd2d245a0bebcd5fc37ac20526aabbd5358dab0e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/utils.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-This module offers general convenience and utility functions for dealing with
-datetimes.
-
-.. versionadded:: 2.7.0
-"""
-from __future__ import unicode_literals
-
-from datetime import datetime, time
-
-
-def today(tzinfo=None):
- """
- Returns a :py:class:`datetime` representing the current day at midnight
-
- :param tzinfo:
- The time zone to attach (also used to determine the current day).
-
- :return:
- A :py:class:`datetime.datetime` object representing the current day
- at midnight.
- """
-
- dt = datetime.now(tzinfo)
- return datetime.combine(dt.date(), time(0, tzinfo=tzinfo))
-
-
-def default_tzinfo(dt, tzinfo):
- """
- Sets the ``tzinfo`` parameter on naive datetimes only
-
- This is useful for example when you are provided a datetime that may have
- either an implicit or explicit time zone, such as when parsing a time zone
- string.
-
- .. doctest::
-
- >>> from dateutil.tz import tzoffset
- >>> from dateutil.parser import parse
- >>> from dateutil.utils import default_tzinfo
- >>> dflt_tz = tzoffset("EST", -18000)
- >>> print(default_tzinfo(parse('2014-01-01 12:30 UTC'), dflt_tz))
- 2014-01-01 12:30:00+00:00
- >>> print(default_tzinfo(parse('2014-01-01 12:30'), dflt_tz))
- 2014-01-01 12:30:00-05:00
-
- :param dt:
- The datetime on which to replace the time zone
-
- :param tzinfo:
- The :py:class:`datetime.tzinfo` subclass instance to assign to
- ``dt`` if (and only if) it is naive.
-
- :return:
- Returns an aware :py:class:`datetime.datetime`.
- """
- if dt.tzinfo is not None:
- return dt
- else:
- return dt.replace(tzinfo=tzinfo)
-
-
-def within_delta(dt1, dt2, delta):
- """
- Useful for comparing two datetimes that may have a negligible difference
- to be considered equal.
- """
- delta = abs(delta)
- difference = dt1 - dt2
- return -delta <= difference <= delta
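Since within_delta above has no doctest of its own, a small usage sketch covering the three helpers (the dates are arbitrary):

from datetime import datetime, timedelta

from dateutil import tz
from dateutil.utils import default_tzinfo, today, within_delta

naive = datetime(2024, 1, 1, 12, 30)
print(default_tzinfo(naive, tz.UTC).isoformat())   # 2024-01-01T12:30:00+00:00

a = datetime(2024, 1, 1, 12, 0, 0)
b = a + timedelta(milliseconds=3)
print(within_delta(a, b, timedelta(milliseconds=5)))  # True

print(today(tz.UTC))  # current UTC day at midnight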
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/__init__.py
deleted file mode 100644
index f4cba26bf6ecaf18e96a62db69f70078498451e3..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/__init__.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# DON'T EDIT! This file is generated by MetaTools/buildTableList.py.
-def _moduleFinderHint():
- """Dummy function to let modulefinder know what tables may be
- dynamically imported. Generated by MetaTools/buildTableList.py.
-
- >>> _moduleFinderHint()
- """
- from . import B_A_S_E_
- from . import C_B_D_T_
- from . import C_B_L_C_
- from . import C_F_F_
- from . import C_F_F__2
- from . import C_O_L_R_
- from . import C_P_A_L_
- from . import D_S_I_G_
- from . import D__e_b_g
- from . import E_B_D_T_
- from . import E_B_L_C_
- from . import F_F_T_M_
- from . import F__e_a_t
- from . import G_D_E_F_
- from . import G_M_A_P_
- from . import G_P_K_G_
- from . import G_P_O_S_
- from . import G_S_U_B_
- from . import G__l_a_t
- from . import G__l_o_c
- from . import H_V_A_R_
- from . import J_S_T_F_
- from . import L_T_S_H_
- from . import M_A_T_H_
- from . import M_E_T_A_
- from . import M_V_A_R_
- from . import O_S_2f_2
- from . import S_I_N_G_
- from . import S_T_A_T_
- from . import S_V_G_
- from . import S__i_l_f
- from . import S__i_l_l
- from . import T_S_I_B_
- from . import T_S_I_C_
- from . import T_S_I_D_
- from . import T_S_I_J_
- from . import T_S_I_P_
- from . import T_S_I_S_
- from . import T_S_I_V_
- from . import T_S_I__0
- from . import T_S_I__1
- from . import T_S_I__2
- from . import T_S_I__3
- from . import T_S_I__5
- from . import T_T_F_A_
- from . import V_D_M_X_
- from . import V_O_R_G_
- from . import V_V_A_R_
- from . import _a_n_k_r
- from . import _a_v_a_r
- from . import _b_s_l_n
- from . import _c_i_d_g
- from . import _c_m_a_p
- from . import _c_v_a_r
- from . import _c_v_t
- from . import _f_e_a_t
- from . import _f_p_g_m
- from . import _f_v_a_r
- from . import _g_a_s_p
- from . import _g_c_i_d
- from . import _g_l_y_f
- from . import _g_v_a_r
- from . import _h_d_m_x
- from . import _h_e_a_d
- from . import _h_h_e_a
- from . import _h_m_t_x
- from . import _k_e_r_n
- from . import _l_c_a_r
- from . import _l_o_c_a
- from . import _l_t_a_g
- from . import _m_a_x_p
- from . import _m_e_t_a
- from . import _m_o_r_t
- from . import _m_o_r_x
- from . import _n_a_m_e
- from . import _o_p_b_d
- from . import _p_o_s_t
- from . import _p_r_e_p
- from . import _p_r_o_p
- from . import _s_b_i_x
- from . import _t_r_a_k
- from . import _v_h_e_a
- from . import _v_m_t_x
-
-
-if __name__ == "__main__":
- import doctest, sys
-
- sys.exit(doctest.testmod().failed)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dockerfile-d67bbd50.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dockerfile-d67bbd50.js
deleted file mode 100644
index 5405cd3af19be5d8cb56dbb55aefa442653e888a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dockerfile-d67bbd50.js
+++ /dev/null
@@ -1,2 +0,0 @@
-function c(n){a(n,"start");var t={},e=n.languageData||{},s=!1;for(var l in n)if(l!=e&&n.hasOwnProperty(l))for(var u=t[l]=[],o=n[l],r=0;r2&&o.token&&typeof o.token!="string"){e.pending=[];for(var g=2;g-1)return null;var l=e.indent.length-1,u=n[e.state];n:for(;;){for(var o=0;o msg.id === messageId);
-
- if (messageIndex === -1) {
- throw error(404, "Message not found");
- }
-
- const model = models.find((m) => m.id === conv.model);
-
- if (!model) {
- throw error(404, "Conversation model not found");
- }
-
- const prompt = buildPrompt(conv.messages.slice(0, messageIndex + 1), model);
-
- return new Response(
- JSON.stringify(
- {
- note: "This is a preview of the prompt that will be sent to the model when retrying the message. It may differ from what was sent in the past if the parameters have been updated since",
- prompt,
- model: model.name,
- parameters: {
- ...model.parameters,
- return_full_text: false,
- },
- },
- null,
- 2
- ),
- { headers: { "Content-Type": "application/json" } }
- );
-}
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/misc.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/misc.py
deleted file mode 100644
index d262d86e2c9a12e22bad7266dba429ce09c9a036..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/misc.py
+++ /dev/null
@@ -1,23 +0,0 @@
-"""
-@date: 2021/8/4
-@description:
-"""
-import numpy as np
-import torch
-
-
-def tensor2np(t: torch.Tensor) -> np.ndarray:
-    if isinstance(t, torch.Tensor):
-        if t.device.type == 'cpu':
-            return t.detach().numpy()
- else:
- return t.detach().cpu().numpy()
- else:
- return t
-
-
-def tensor2np_d(d: dict) -> dict:
- output = {}
- for k in d.keys():
- output[k] = tensor2np(d[k])
- return output
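A quick usage sketch for the two converters above (import path assumed from this Space's utils/misc.py location):

import torch

from utils.misc import tensor2np, tensor2np_d

t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
print(type(tensor2np(t)))  # <class 'numpy.ndarray'>, moved off the GPU if necessary

batch = {"layout": t, "depth": torch.zeros(4)}
print({k: v.shape for k, v in tensor2np_d(batch).items()})  # {'layout': (2, 3), 'depth': (4,)}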
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/latent_mappers.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/latent_mappers.py
deleted file mode 100644
index 63637adc9646986a3546edd19f4555a2f75a379f..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/latent_mappers.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import Module
-
-from models.StyleCLIP.models.stylegan2.model import EqualLinear, PixelNorm
-
-
-class Mapper(Module):
-
- def __init__(self, opts):
- super(Mapper, self).__init__()
-
- self.opts = opts
- layers = [PixelNorm()]
-
- for i in range(4):
- layers.append(
- EqualLinear(
- 512, 512, lr_mul=0.01, activation='fused_lrelu'
- )
- )
-
- self.mapping = nn.Sequential(*layers)
-
-
- def forward(self, x):
- x = self.mapping(x)
- return x
-
-
-class SingleMapper(Module):
-
- def __init__(self, opts):
- super(SingleMapper, self).__init__()
-
- self.opts = opts
-
- self.mapping = Mapper(opts)
-
- def forward(self, x):
- out = self.mapping(x)
- return out
-
-
-class LevelsMapper(Module):
-
- def __init__(self, opts):
- super(LevelsMapper, self).__init__()
-
- self.opts = opts
-
- if not opts.no_coarse_mapper:
- self.course_mapping = Mapper(opts)
- if not opts.no_medium_mapper:
- self.medium_mapping = Mapper(opts)
- if not opts.no_fine_mapper:
- self.fine_mapping = Mapper(opts)
-
- def forward(self, x):
- x_coarse = x[:, :4, :]
- x_medium = x[:, 4:8, :]
- x_fine = x[:, 8:, :]
-
- if not self.opts.no_coarse_mapper:
- x_coarse = self.course_mapping(x_coarse)
- else:
- x_coarse = torch.zeros_like(x_coarse)
- if not self.opts.no_medium_mapper:
- x_medium = self.medium_mapping(x_medium)
- else:
- x_medium = torch.zeros_like(x_medium)
- if not self.opts.no_fine_mapper:
- x_fine = self.fine_mapping(x_fine)
- else:
- x_fine = torch.zeros_like(x_fine)
-
-
- out = torch.cat([x_coarse, x_medium, x_fine], dim=1)
-
- return out
-
diff --git a/spaces/DragGan/DragGan/legacy.py b/spaces/DragGan/DragGan/legacy.py
deleted file mode 100644
index 8cf53cb9396a639261bbcadb4e264e39415c1a56..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/legacy.py
+++ /dev/null
@@ -1,323 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Converting legacy network pickle into the new format."""
-
-import click
-import pickle
-import re
-import copy
-import numpy as np
-import torch
-import dnnlib
-from torch_utils import misc
-
-#----------------------------------------------------------------------------
-
-def load_network_pkl(f, force_fp16=False):
- data = _LegacyUnpickler(f).load()
-
- # Legacy TensorFlow pickle => convert.
- if isinstance(data, tuple) and len(data) == 3 and all(isinstance(net, _TFNetworkStub) for net in data):
- tf_G, tf_D, tf_Gs = data
- G = convert_tf_generator(tf_G)
- D = convert_tf_discriminator(tf_D)
- G_ema = convert_tf_generator(tf_Gs)
- data = dict(G=G, D=D, G_ema=G_ema)
-
- # Add missing fields.
- if 'training_set_kwargs' not in data:
- data['training_set_kwargs'] = None
- if 'augment_pipe' not in data:
- data['augment_pipe'] = None
-
- # Validate contents.
- assert isinstance(data['G'], torch.nn.Module)
- assert isinstance(data['D'], torch.nn.Module)
- assert isinstance(data['G_ema'], torch.nn.Module)
- assert isinstance(data['training_set_kwargs'], (dict, type(None)))
- assert isinstance(data['augment_pipe'], (torch.nn.Module, type(None)))
-
- # Force FP16.
- if force_fp16:
- for key in ['G', 'D', 'G_ema']:
- old = data[key]
- kwargs = copy.deepcopy(old.init_kwargs)
- fp16_kwargs = kwargs.get('synthesis_kwargs', kwargs)
- fp16_kwargs.num_fp16_res = 4
- fp16_kwargs.conv_clamp = 256
- if kwargs != old.init_kwargs:
- new = type(old)(**kwargs).eval().requires_grad_(False)
- misc.copy_params_and_buffers(old, new, require_all=True)
- data[key] = new
- return data
-
-#----------------------------------------------------------------------------
-
-class _TFNetworkStub(dnnlib.EasyDict):
- pass
-
-class _LegacyUnpickler(pickle.Unpickler):
- def find_class(self, module, name):
- if module == 'dnnlib.tflib.network' and name == 'Network':
- return _TFNetworkStub
- return super().find_class(module, name)
-
-#----------------------------------------------------------------------------
-
-def _collect_tf_params(tf_net):
- # pylint: disable=protected-access
- tf_params = dict()
- def recurse(prefix, tf_net):
- for name, value in tf_net.variables:
- tf_params[prefix + name] = value
- for name, comp in tf_net.components.items():
- recurse(prefix + name + '/', comp)
- recurse('', tf_net)
- return tf_params
-
-#----------------------------------------------------------------------------
-
-def _populate_module_params(module, *patterns):
- for name, tensor in misc.named_params_and_buffers(module):
- found = False
- value = None
- for pattern, value_fn in zip(patterns[0::2], patterns[1::2]):
- match = re.fullmatch(pattern, name)
- if match:
- found = True
- if value_fn is not None:
- value = value_fn(*match.groups())
- break
- try:
- assert found
- if value is not None:
- tensor.copy_(torch.from_numpy(np.array(value)))
- except:
- print(name, list(tensor.shape))
- raise
-
-#----------------------------------------------------------------------------
-
-def convert_tf_generator(tf_G):
- if tf_G.version < 4:
- raise ValueError('TensorFlow pickle version too low')
-
- # Collect kwargs.
- tf_kwargs = tf_G.static_kwargs
- known_kwargs = set()
- def kwarg(tf_name, default=None, none=None):
- known_kwargs.add(tf_name)
- val = tf_kwargs.get(tf_name, default)
- return val if val is not None else none
-
- # Convert kwargs.
- from training import networks_stylegan2
- network_class = networks_stylegan2.Generator
- kwargs = dnnlib.EasyDict(
- z_dim = kwarg('latent_size', 512),
- c_dim = kwarg('label_size', 0),
- w_dim = kwarg('dlatent_size', 512),
- img_resolution = kwarg('resolution', 1024),
- img_channels = kwarg('num_channels', 3),
- channel_base = kwarg('fmap_base', 16384) * 2,
- channel_max = kwarg('fmap_max', 512),
- num_fp16_res = kwarg('num_fp16_res', 0),
- conv_clamp = kwarg('conv_clamp', None),
- architecture = kwarg('architecture', 'skip'),
- resample_filter = kwarg('resample_kernel', [1,3,3,1]),
- use_noise = kwarg('use_noise', True),
- activation = kwarg('nonlinearity', 'lrelu'),
- mapping_kwargs = dnnlib.EasyDict(
- num_layers = kwarg('mapping_layers', 8),
- embed_features = kwarg('label_fmaps', None),
- layer_features = kwarg('mapping_fmaps', None),
- activation = kwarg('mapping_nonlinearity', 'lrelu'),
- lr_multiplier = kwarg('mapping_lrmul', 0.01),
- w_avg_beta = kwarg('w_avg_beta', 0.995, none=1),
- ),
- )
-
- # Check for unknown kwargs.
- kwarg('truncation_psi')
- kwarg('truncation_cutoff')
- kwarg('style_mixing_prob')
- kwarg('structure')
- kwarg('conditioning')
- kwarg('fused_modconv')
- unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs)
- if len(unknown_kwargs) > 0:
- raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0])
-
- # Collect params.
- tf_params = _collect_tf_params(tf_G)
- for name, value in list(tf_params.items()):
- match = re.fullmatch(r'ToRGB_lod(\d+)/(.*)', name)
- if match:
- r = kwargs.img_resolution // (2 ** int(match.group(1)))
- tf_params[f'{r}x{r}/ToRGB/{match.group(2)}'] = value
- kwargs.synthesis.kwargs.architecture = 'orig'
- #for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}')
-
- # Convert params.
- G = network_class(**kwargs).eval().requires_grad_(False)
- # pylint: disable=unnecessary-lambda
- # pylint: disable=f-string-without-interpolation
- _populate_module_params(G,
- r'mapping\.w_avg', lambda: tf_params[f'dlatent_avg'],
- r'mapping\.embed\.weight', lambda: tf_params[f'mapping/LabelEmbed/weight'].transpose(),
- r'mapping\.embed\.bias', lambda: tf_params[f'mapping/LabelEmbed/bias'],
- r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'mapping/Dense{i}/weight'].transpose(),
- r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'mapping/Dense{i}/bias'],
- r'synthesis\.b4\.const', lambda: tf_params[f'synthesis/4x4/Const/const'][0],
- r'synthesis\.b4\.conv1\.weight', lambda: tf_params[f'synthesis/4x4/Conv/weight'].transpose(3, 2, 0, 1),
- r'synthesis\.b4\.conv1\.bias', lambda: tf_params[f'synthesis/4x4/Conv/bias'],
- r'synthesis\.b4\.conv1\.noise_const', lambda: tf_params[f'synthesis/noise0'][0, 0],
- r'synthesis\.b4\.conv1\.noise_strength', lambda: tf_params[f'synthesis/4x4/Conv/noise_strength'],
- r'synthesis\.b4\.conv1\.affine\.weight', lambda: tf_params[f'synthesis/4x4/Conv/mod_weight'].transpose(),
- r'synthesis\.b4\.conv1\.affine\.bias', lambda: tf_params[f'synthesis/4x4/Conv/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.conv0\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/weight'][::-1, ::-1].transpose(3, 2, 0, 1),
- r'synthesis\.b(\d+)\.conv0\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/bias'],
- r'synthesis\.b(\d+)\.conv0\.noise_const', lambda r: tf_params[f'synthesis/noise{int(np.log2(int(r)))*2-5}'][0, 0],
- r'synthesis\.b(\d+)\.conv0\.noise_strength', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/noise_strength'],
- r'synthesis\.b(\d+)\.conv0\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_weight'].transpose(),
- r'synthesis\.b(\d+)\.conv0\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.conv1\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/weight'].transpose(3, 2, 0, 1),
- r'synthesis\.b(\d+)\.conv1\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/bias'],
- r'synthesis\.b(\d+)\.conv1\.noise_const', lambda r: tf_params[f'synthesis/noise{int(np.log2(int(r)))*2-4}'][0, 0],
- r'synthesis\.b(\d+)\.conv1\.noise_strength', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/noise_strength'],
- r'synthesis\.b(\d+)\.conv1\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_weight'].transpose(),
- r'synthesis\.b(\d+)\.conv1\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.torgb\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/weight'].transpose(3, 2, 0, 1),
- r'synthesis\.b(\d+)\.torgb\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/bias'],
- r'synthesis\.b(\d+)\.torgb\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_weight'].transpose(),
- r'synthesis\.b(\d+)\.torgb\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.skip\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Skip/weight'][::-1, ::-1].transpose(3, 2, 0, 1),
- r'.*\.resample_filter', None,
- r'.*\.act_filter', None,
- )
- return G
-
-#----------------------------------------------------------------------------
-
-def convert_tf_discriminator(tf_D):
- if tf_D.version < 4:
- raise ValueError('TensorFlow pickle version too low')
-
- # Collect kwargs.
- tf_kwargs = tf_D.static_kwargs
- known_kwargs = set()
- def kwarg(tf_name, default=None):
- known_kwargs.add(tf_name)
- return tf_kwargs.get(tf_name, default)
-
- # Convert kwargs.
- kwargs = dnnlib.EasyDict(
- c_dim = kwarg('label_size', 0),
- img_resolution = kwarg('resolution', 1024),
- img_channels = kwarg('num_channels', 3),
- architecture = kwarg('architecture', 'resnet'),
- channel_base = kwarg('fmap_base', 16384) * 2,
- channel_max = kwarg('fmap_max', 512),
- num_fp16_res = kwarg('num_fp16_res', 0),
- conv_clamp = kwarg('conv_clamp', None),
- cmap_dim = kwarg('mapping_fmaps', None),
- block_kwargs = dnnlib.EasyDict(
- activation = kwarg('nonlinearity', 'lrelu'),
- resample_filter = kwarg('resample_kernel', [1,3,3,1]),
- freeze_layers = kwarg('freeze_layers', 0),
- ),
- mapping_kwargs = dnnlib.EasyDict(
- num_layers = kwarg('mapping_layers', 0),
- embed_features = kwarg('mapping_fmaps', None),
- layer_features = kwarg('mapping_fmaps', None),
- activation = kwarg('nonlinearity', 'lrelu'),
- lr_multiplier = kwarg('mapping_lrmul', 0.1),
- ),
- epilogue_kwargs = dnnlib.EasyDict(
- mbstd_group_size = kwarg('mbstd_group_size', None),
- mbstd_num_channels = kwarg('mbstd_num_features', 1),
- activation = kwarg('nonlinearity', 'lrelu'),
- ),
- )
-
- # Check for unknown kwargs.
- kwarg('structure')
- kwarg('conditioning')
- unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs)
- if len(unknown_kwargs) > 0:
- raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0])
-
- # Collect params.
- tf_params = _collect_tf_params(tf_D)
- for name, value in list(tf_params.items()):
- match = re.fullmatch(r'FromRGB_lod(\d+)/(.*)', name)
- if match:
- r = kwargs.img_resolution // (2 ** int(match.group(1)))
- tf_params[f'{r}x{r}/FromRGB/{match.group(2)}'] = value
- kwargs.architecture = 'orig'
- #for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}')
-
- # Convert params.
- from training import networks_stylegan2
- D = networks_stylegan2.Discriminator(**kwargs).eval().requires_grad_(False)
- # pylint: disable=unnecessary-lambda
- # pylint: disable=f-string-without-interpolation
- _populate_module_params(D,
- r'b(\d+)\.fromrgb\.weight', lambda r: tf_params[f'{r}x{r}/FromRGB/weight'].transpose(3, 2, 0, 1),
- r'b(\d+)\.fromrgb\.bias', lambda r: tf_params[f'{r}x{r}/FromRGB/bias'],
- r'b(\d+)\.conv(\d+)\.weight', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/weight'].transpose(3, 2, 0, 1),
- r'b(\d+)\.conv(\d+)\.bias', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/bias'],
- r'b(\d+)\.skip\.weight', lambda r: tf_params[f'{r}x{r}/Skip/weight'].transpose(3, 2, 0, 1),
- r'mapping\.embed\.weight', lambda: tf_params[f'LabelEmbed/weight'].transpose(),
- r'mapping\.embed\.bias', lambda: tf_params[f'LabelEmbed/bias'],
- r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'Mapping{i}/weight'].transpose(),
- r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'Mapping{i}/bias'],
- r'b4\.conv\.weight', lambda: tf_params[f'4x4/Conv/weight'].transpose(3, 2, 0, 1),
- r'b4\.conv\.bias', lambda: tf_params[f'4x4/Conv/bias'],
- r'b4\.fc\.weight', lambda: tf_params[f'4x4/Dense0/weight'].transpose(),
- r'b4\.fc\.bias', lambda: tf_params[f'4x4/Dense0/bias'],
- r'b4\.out\.weight', lambda: tf_params[f'Output/weight'].transpose(),
- r'b4\.out\.bias', lambda: tf_params[f'Output/bias'],
- r'.*\.resample_filter', None,
- )
- return D
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.option('--source', help='Input pickle', required=True, metavar='PATH')
-@click.option('--dest', help='Output pickle', required=True, metavar='PATH')
-@click.option('--force-fp16', help='Force the networks to use FP16', type=bool, default=False, metavar='BOOL', show_default=True)
-def convert_network_pickle(source, dest, force_fp16):
- """Convert legacy network pickle into the native PyTorch format.
-
- The tool is able to load the main network configurations exported using the TensorFlow version of StyleGAN2 or StyleGAN2-ADA.
- It does not support e.g. StyleGAN2-ADA comparison methods, StyleGAN2 configs A-D, or StyleGAN1 networks.
-
- Example:
-
- \b
- python legacy.py \\
- --source=https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-cat-config-f.pkl \\
- --dest=stylegan2-cat-config-f.pkl
- """
- print(f'Loading "{source}"...')
- with dnnlib.util.open_url(source) as f:
- data = load_network_pkl(f, force_fp16=force_fp16)
- print(f'Saving "{dest}"...')
- with open(dest, 'wb') as f:
- pickle.dump(data, f)
- print('Done.')
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- convert_network_pickle() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
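Besides the CLI shown in the docstring, the converter above can also be used programmatically. A minimal sketch, assuming it runs inside the same repo so dnnlib and this legacy module are importable (the URL is the one from the docstring):

import dnnlib
from legacy import load_network_pkl

url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-cat-config-f.pkl'
with dnnlib.util.open_url(url) as f:
    data = load_network_pkl(f)

G_ema = data['G_ema']  # converted PyTorch generator (exponential moving average weights)
print(type(G_ema))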
diff --git a/spaces/DragGan/DragGan/stylegan_human/training/augment.py b/spaces/DragGan/DragGan/stylegan_human/training/augment.py
deleted file mode 100644
index d68e35c96ef9fa9c18bbb6668f03b9463098710e..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/training/augment.py
+++ /dev/null
@@ -1,436 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Augmentation pipeline from the paper
-"Training Generative Adversarial Networks with Limited Data".
-Matches the original implementation by Karras et al. at
-https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py"""
-
-import numpy as np
-import scipy.signal
-import torch
-from torch_utils import persistence
-from torch_utils import misc
-from torch_utils.ops import upfirdn2d
-from torch_utils.ops import grid_sample_gradfix
-from torch_utils.ops import conv2d_gradfix
-
-#----------------------------------------------------------------------------
-# Coefficients of various wavelet decomposition low-pass filters.
-
-wavelets = {
- 'haar': [0.7071067811865476, 0.7071067811865476],
- 'db1': [0.7071067811865476, 0.7071067811865476],
- 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025],
- 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569],
- 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523],
- 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125],
- 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017],
- 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236],
- 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161],
- 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025],
- 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569],
- 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427],
- 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728],
- 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148],
- 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255],
- 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609],
-}
-
-#----------------------------------------------------------------------------
-# Helpers for constructing transformation matrices.
-
-def matrix(*rows, device=None):
- assert all(len(row) == len(rows[0]) for row in rows)
- elems = [x for row in rows for x in row]
- ref = [x for x in elems if isinstance(x, torch.Tensor)]
- if len(ref) == 0:
- return misc.constant(np.asarray(rows), device=device)
- assert device is None or device == ref[0].device
- elems = [x if isinstance(x, torch.Tensor) else misc.constant(x, shape=ref[0].shape, device=ref[0].device) for x in elems]
- return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1))
-
-def translate2d(tx, ty, **kwargs):
- return matrix(
- [1, 0, tx],
- [0, 1, ty],
- [0, 0, 1],
- **kwargs)
-
-def translate3d(tx, ty, tz, **kwargs):
- return matrix(
- [1, 0, 0, tx],
- [0, 1, 0, ty],
- [0, 0, 1, tz],
- [0, 0, 0, 1],
- **kwargs)
-
-def scale2d(sx, sy, **kwargs):
- return matrix(
- [sx, 0, 0],
- [0, sy, 0],
- [0, 0, 1],
- **kwargs)
-
-def scale3d(sx, sy, sz, **kwargs):
- return matrix(
- [sx, 0, 0, 0],
- [0, sy, 0, 0],
- [0, 0, sz, 0],
- [0, 0, 0, 1],
- **kwargs)
-
-def rotate2d(theta, **kwargs):
- return matrix(
- [torch.cos(theta), torch.sin(-theta), 0],
- [torch.sin(theta), torch.cos(theta), 0],
- [0, 0, 1],
- **kwargs)
-
-def rotate3d(v, theta, **kwargs):
- vx = v[..., 0]; vy = v[..., 1]; vz = v[..., 2]
- s = torch.sin(theta); c = torch.cos(theta); cc = 1 - c
- return matrix(
- [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0],
- [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0],
- [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0],
- [0, 0, 0, 1],
- **kwargs)
-
-def translate2d_inv(tx, ty, **kwargs):
- return translate2d(-tx, -ty, **kwargs)
-
-def scale2d_inv(sx, sy, **kwargs):
- return scale2d(1 / sx, 1 / sy, **kwargs)
-
-def rotate2d_inv(theta, **kwargs):
- return rotate2d(-theta, **kwargs)
-
-#----------------------------------------------------------------------------
-# Versatile image augmentation pipeline from the paper
-# "Training Generative Adversarial Networks with Limited Data".
-#
-# All augmentations are disabled by default; individual augmentations can
-# be enabled by setting their probability multipliers to 1.
-
-@persistence.persistent_class
-class AugmentPipe(torch.nn.Module):
- def __init__(self,
- xflip=0, rotate90=0, xint=0, xint_max=0.125,
- scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125,
- brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1,
- imgfilter=0, imgfilter_bands=[1,1,1,1], imgfilter_std=1,
- noise=0, cutout=0, noise_std=0.1, cutout_size=0.5,
- ):
- super().__init__()
- self.register_buffer('p', torch.ones([])) # Overall multiplier for augmentation probability.
-
- # Pixel blitting.
- self.xflip = float(xflip) # Probability multiplier for x-flip.
- self.rotate90 = float(rotate90) # Probability multiplier for 90 degree rotations.
- self.xint = float(xint) # Probability multiplier for integer translation.
- self.xint_max = float(xint_max) # Range of integer translation, relative to image dimensions.
-
- # General geometric transformations.
- self.scale = float(scale) # Probability multiplier for isotropic scaling.
- self.rotate = float(rotate) # Probability multiplier for arbitrary rotation.
- self.aniso = float(aniso) # Probability multiplier for anisotropic scaling.
- self.xfrac = float(xfrac) # Probability multiplier for fractional translation.
- self.scale_std = float(scale_std) # Log2 standard deviation of isotropic scaling.
- self.rotate_max = float(rotate_max) # Range of arbitrary rotation, 1 = full circle.
- self.aniso_std = float(aniso_std) # Log2 standard deviation of anisotropic scaling.
- self.xfrac_std = float(xfrac_std) # Standard deviation of frational translation, relative to image dimensions.
-
- # Color transformations.
- self.brightness = float(brightness) # Probability multiplier for brightness.
- self.contrast = float(contrast) # Probability multiplier for contrast.
- self.lumaflip = float(lumaflip) # Probability multiplier for luma flip.
- self.hue = float(hue) # Probability multiplier for hue rotation.
- self.saturation = float(saturation) # Probability multiplier for saturation.
- self.brightness_std = float(brightness_std) # Standard deviation of brightness.
- self.contrast_std = float(contrast_std) # Log2 standard deviation of contrast.
- self.hue_max = float(hue_max) # Range of hue rotation, 1 = full circle.
- self.saturation_std = float(saturation_std) # Log2 standard deviation of saturation.
-
- # Image-space filtering.
- self.imgfilter = float(imgfilter) # Probability multiplier for image-space filtering.
- self.imgfilter_bands = list(imgfilter_bands) # Probability multipliers for individual frequency bands.
- self.imgfilter_std = float(imgfilter_std) # Log2 standard deviation of image-space filter amplification.
-
- # Image-space corruptions.
- self.noise = float(noise) # Probability multiplier for additive RGB noise.
- self.cutout = float(cutout) # Probability multiplier for cutout.
- self.noise_std = float(noise_std) # Standard deviation of additive RGB noise.
- self.cutout_size = float(cutout_size) # Size of the cutout rectangle, relative to image dimensions.
-
- # Setup orthogonal lowpass filter for geometric augmentations.
- self.register_buffer('Hz_geom', upfirdn2d.setup_filter(wavelets['sym6']))
-
- # Construct filter bank for image-space filtering.
- Hz_lo = np.asarray(wavelets['sym2']) # H(z)
- Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z)
- Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2
- Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2
- Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i)
- for i in range(1, Hz_fbank.shape[0]):
- Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(Hz_fbank.shape[0], -1)[:, :-1]
- Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2])
- Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // 2 : (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2
- self.register_buffer('Hz_fbank', torch.as_tensor(Hz_fbank, dtype=torch.float32))
-
- def forward(self, images, debug_percentile=None):
- assert isinstance(images, torch.Tensor) and images.ndim == 4
- batch_size, num_channels, height, width = images.shape
- device = images.device
- if debug_percentile is not None:
- debug_percentile = torch.as_tensor(debug_percentile, dtype=torch.float32, device=device)
-
- # -------------------------------------
- # Select parameters for pixel blitting.
- # -------------------------------------
-
- # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in
- I_3 = torch.eye(3, device=device)
- G_inv = I_3
-
- # Apply x-flip with probability (xflip * strength).
- if self.xflip > 0:
- i = torch.floor(torch.rand([batch_size], device=device) * 2)
- i = torch.where(torch.rand([batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i))
- if debug_percentile is not None:
- i = torch.full_like(i, torch.floor(debug_percentile * 2))
- G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1)
-
- # Apply 90 degree rotations with probability (rotate90 * strength).
- if self.rotate90 > 0:
- i = torch.floor(torch.rand([batch_size], device=device) * 4)
- i = torch.where(torch.rand([batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i))
- if debug_percentile is not None:
- i = torch.full_like(i, torch.floor(debug_percentile * 4))
- G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i)
-
- # Apply integer translation with probability (xint * strength).
- if self.xint > 0:
- t = (torch.rand([batch_size, 2], device=device) * 2 - 1) * self.xint_max
- t = torch.where(torch.rand([batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t))
- if debug_percentile is not None:
- t = torch.full_like(t, (debug_percentile * 2 - 1) * self.xint_max)
- G_inv = G_inv @ translate2d_inv(torch.round(t[:,0] * width), torch.round(t[:,1] * height))
-
- # --------------------------------------------------------
- # Select parameters for general geometric transformations.
- # --------------------------------------------------------
-
- # Apply isotropic scaling with probability (scale * strength).
- if self.scale > 0:
- s = torch.exp2(torch.randn([batch_size], device=device) * self.scale_std)
- s = torch.where(torch.rand([batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s))
- if debug_percentile is not None:
- s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.scale_std))
- G_inv = G_inv @ scale2d_inv(s, s)
-
- # Apply pre-rotation with probability p_rot.
- p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) # P(pre OR post) = p
- if self.rotate > 0:
- theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max
- theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta))
- if debug_percentile is not None:
- theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max)
- G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling.
-
- # Apply anisotropic scaling with probability (aniso * strength).
- if self.aniso > 0:
- s = torch.exp2(torch.randn([batch_size], device=device) * self.aniso_std)
- s = torch.where(torch.rand([batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s))
- if debug_percentile is not None:
- s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.aniso_std))
- G_inv = G_inv @ scale2d_inv(s, 1 / s)
-
- # Apply post-rotation with probability p_rot.
- if self.rotate > 0:
- theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max
- theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta))
- if debug_percentile is not None:
- theta = torch.zeros_like(theta)
- G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling.
-
- # Apply fractional translation with probability (xfrac * strength).
- if self.xfrac > 0:
- t = torch.randn([batch_size, 2], device=device) * self.xfrac_std
- t = torch.where(torch.rand([batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t))
- if debug_percentile is not None:
- t = torch.full_like(t, torch.erfinv(debug_percentile * 2 - 1) * self.xfrac_std)
- G_inv = G_inv @ translate2d_inv(t[:,0] * width, t[:,1] * height)
-
- # ----------------------------------
- # Execute geometric transformations.
- # ----------------------------------
-
- # Execute if the transform is not identity.
- if G_inv is not I_3:
-
- # Calculate padding.
- cx = (width - 1) / 2
- cy = (height - 1) / 2
- cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], [-cx, cy, 1], device=device) # [idx, xyz]
- cp = G_inv @ cp.t() # [batch, xyz, idx]
- Hz_pad = self.Hz_geom.shape[0] // 4
- margin = cp[:, :2, :].permute(1, 0, 2).flatten(1) # [xy, batch * idx]
- margin = torch.cat([-margin, margin]).max(dim=1).values # [x0, y0, x1, y1]
- margin = margin + misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] * 2, device=device)
- margin = margin.max(misc.constant([0, 0] * 2, device=device))
- margin = margin.min(misc.constant([width-1, height-1] * 2, device=device))
- mx0, my0, mx1, my1 = margin.ceil().to(torch.int32)
-
- # Pad image and adjust origin.
- images = torch.nn.functional.pad(input=images, pad=[mx0,mx1,my0,my1], mode='reflect')
- G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv
-
- # Upsample.
- images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2)
- G_inv = scale2d(2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device)
- G_inv = translate2d(-0.5, -0.5, device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device)
-
- # Execute transformation.
- shape = [batch_size, num_channels, (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2]
- G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(2 / shape[3], 2 / shape[2], device=device)
- grid = torch.nn.functional.affine_grid(theta=G_inv[:,:2,:], size=shape, align_corners=False)
- images = grid_sample_gradfix.grid_sample(images, grid)
-
- # Downsample and crop.
- images = upfirdn2d.downsample2d(x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True)
-
- # --------------------------------------------
- # Select parameters for color transformations.
- # --------------------------------------------
-
- # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out
- I_4 = torch.eye(4, device=device)
- C = I_4
-
- # Apply brightness with probability (brightness * strength).
- if self.brightness > 0:
- b = torch.randn([batch_size], device=device) * self.brightness_std
- b = torch.where(torch.rand([batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b))
- if debug_percentile is not None:
- b = torch.full_like(b, torch.erfinv(debug_percentile * 2 - 1) * self.brightness_std)
- C = translate3d(b, b, b) @ C
-
- # Apply contrast with probability (contrast * strength).
- if self.contrast > 0:
- c = torch.exp2(torch.randn([batch_size], device=device) * self.contrast_std)
- c = torch.where(torch.rand([batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c))
- if debug_percentile is not None:
- c = torch.full_like(c, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.contrast_std))
- C = scale3d(c, c, c) @ C
-
- # Apply luma flip with probability (lumaflip * strength).
- v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) # Luma axis.
- if self.lumaflip > 0:
- i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2)
- i = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i))
- if debug_percentile is not None:
- i = torch.full_like(i, torch.floor(debug_percentile * 2))
- C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection.
-
- # Apply hue rotation with probability (hue * strength).
- if self.hue > 0 and num_channels > 1:
- theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.hue_max
- theta = torch.where(torch.rand([batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta))
- if debug_percentile is not None:
- theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max)
- C = rotate3d(v, theta) @ C # Rotate around v.
-
- # Apply saturation with probability (saturation * strength).
- if self.saturation > 0 and num_channels > 1:
- s = torch.exp2(torch.randn([batch_size, 1, 1], device=device) * self.saturation_std)
- s = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s))
- if debug_percentile is not None:
- s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.saturation_std))
- C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C
-
- # ------------------------------
- # Execute color transformations.
- # ------------------------------
-
- # Execute if the transform is not identity.
- if C is not I_4:
- images = images.reshape([batch_size, num_channels, height * width])
- if num_channels == 3:
- images = C[:, :3, :3] @ images + C[:, :3, 3:]
- elif num_channels == 1:
- C = C[:, :3, :].mean(dim=1, keepdims=True)
- images = images * C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:]
- else:
- raise ValueError('Image must be RGB (3 channels) or L (1 channel)')
- images = images.reshape([batch_size, num_channels, height, width])
-
- # ----------------------
- # Image-space filtering.
- # ----------------------
-
- if self.imgfilter > 0:
- num_bands = self.Hz_fbank.shape[0]
- assert len(self.imgfilter_bands) == num_bands
- expected_power = misc.constant(np.array([10, 1, 1, 1]) / 13, device=device) # Expected power spectrum (1/f).
-
- # Apply amplification for each band with probability (imgfilter * strength * band_strength).
- g = torch.ones([batch_size, num_bands], device=device) # Global gain vector (identity).
- for i, band_strength in enumerate(self.imgfilter_bands):
- t_i = torch.exp2(torch.randn([batch_size], device=device) * self.imgfilter_std)
- t_i = torch.where(torch.rand([batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i))
- if debug_percentile is not None:
- t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i)
- t = torch.ones([batch_size, num_bands], device=device) # Temporary gain vector.
- t[:, i] = t_i # Replace i'th element.
- t = t / (expected_power * t.square()).sum(dim=-1, keepdims=True).sqrt() # Normalize power.
- g = g * t # Accumulate into global gain.
-
- # Construct combined amplification filter.
- Hz_prime = g @ self.Hz_fbank # [batch, tap]
- Hz_prime = Hz_prime.unsqueeze(1).repeat([1, num_channels, 1]) # [batch, channels, tap]
- Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) # [batch * channels, 1, tap]
-
- # Apply filter.
- p = self.Hz_fbank.shape[1] // 2
- images = images.reshape([1, batch_size * num_channels, height, width])
- images = torch.nn.functional.pad(input=images, pad=[p,p,p,p], mode='reflect')
- images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels)
- images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels)
- images = images.reshape([batch_size, num_channels, height, width])
-
- # ------------------------
- # Image-space corruptions.
- # ------------------------
-
- # Apply additive RGB noise with probability (noise * strength).
- if self.noise > 0:
- sigma = torch.randn([batch_size, 1, 1, 1], device=device).abs() * self.noise_std
- sigma = torch.where(torch.rand([batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma))
- if debug_percentile is not None:
- sigma = torch.full_like(sigma, torch.erfinv(debug_percentile) * self.noise_std)
- images = images + torch.randn([batch_size, num_channels, height, width], device=device) * sigma
-
- # Apply cutout with probability (cutout * strength).
- if self.cutout > 0:
- size = torch.full([batch_size, 2, 1, 1, 1], self.cutout_size, device=device)
- size = torch.where(torch.rand([batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size))
- center = torch.rand([batch_size, 2, 1, 1, 1], device=device)
- if debug_percentile is not None:
- size = torch.full_like(size, self.cutout_size)
- center = torch.full_like(center, debug_percentile)
- coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1])
- coord_y = torch.arange(height, device=device).reshape([1, 1, -1, 1])
- mask_x = (((coord_x + 0.5) / width - center[:, 0]).abs() >= size[:, 0] / 2)
- mask_y = (((coord_y + 0.5) / height - center[:, 1]).abs() >= size[:, 1] / 2)
- mask = torch.logical_or(mask_x, mask_y).to(torch.float32)
- images = images * mask
-
- return images
-
-#----------------------------------------------------------------------------
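Note on the deleted augment.py above: every augmentation stays off until its probability multiplier is set, and the overall strength lives in the registered `p` buffer. A minimal usage sketch, assuming the file is importable as `training.augment` (the usual StyleGAN2-ADA-PyTorch layout; the import path is an assumption, not taken from this diff):

import torch
from training.augment import AugmentPipe  # assumed import path

pipe = AugmentPipe(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1,
                   brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1)
pipe.p.copy_(torch.as_tensor(0.2))     # overall augmentation probability (ADA adjusts this during training)
images = torch.randn(4, 3, 64, 64)     # NCHW float images, roughly in [-1, 1]
augmented = pipe(images)               # same shape as the input batch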
diff --git a/spaces/ECCV2022/bytetrack/tutorials/ctracker/test_byte.py b/spaces/ECCV2022/bytetrack/tutorials/ctracker/test_byte.py
deleted file mode 100644
index bbb8a53b7a98de5e1c4c5fcffa1a546cc36f0e4b..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tutorials/ctracker/test_byte.py
+++ /dev/null
@@ -1,156 +0,0 @@
-import numpy as np
-import torchvision
-import time
-import math
-import os
-import copy
-import pdb
-import argparse
-import sys
-import cv2
-import skimage.io
-import skimage.transform
-import skimage.color
-import skimage
-import torch
-import model
-
-from torch.utils.data import Dataset, DataLoader
-from torchvision import datasets, models, transforms
-from dataloader import CSVDataset, collater, Resizer, AspectRatioBasedSampler, Augmenter, UnNormalizer, Normalizer, RGB_MEAN, RGB_STD
-from scipy.optimize import linear_sum_assignment
-from tracker import BYTETracker
-
-
-def write_results(filename, results):
- save_format = '{frame},{id},{x1},{y1},{w},{h},{s},-1,-1,-1\n'
- with open(filename, 'w') as f:
- for frame_id, tlwhs, track_ids, scores in results:
- for tlwh, track_id, score in zip(tlwhs, track_ids, scores):
- if track_id < 0:
- continue
- x1, y1, w, h = tlwh
- line = save_format.format(frame=frame_id, id=track_id, x1=round(x1, 1), y1=round(y1, 1), w=round(w, 1), h=round(h, 1), s=round(score, 2))
- f.write(line)
-
-def write_results_no_score(filename, results):
- save_format = '{frame},{id},{x1},{y1},{w},{h},-1,-1,-1,-1\n'
- with open(filename, 'w') as f:
- for frame_id, tlwhs, track_ids in results:
- for tlwh, track_id in zip(tlwhs, track_ids):
- if track_id < 0:
- continue
- x1, y1, w, h = tlwh
- line = save_format.format(frame=frame_id, id=track_id, x1=round(x1, 1), y1=round(y1, 1), w=round(w, 1), h=round(h, 1))
- f.write(line)
-
-def run_each_dataset(model_dir, retinanet, dataset_path, subset, cur_dataset):
- print(cur_dataset)
-
- img_list = os.listdir(os.path.join(dataset_path, subset, cur_dataset, 'img1'))
- img_list = [os.path.join(dataset_path, subset, cur_dataset, 'img1', _) for _ in img_list if ('jpg' in _) or ('png' in _)]
- img_list = sorted(img_list)
-
- img_len = len(img_list)
- last_feat = None
-
- confidence_threshold = 0.6
- IOU_threshold = 0.5
- retention_threshold = 10
-
- det_list_all = []
- tracklet_all = []
- results = []
- max_id = 0
- max_draw_len = 100
- draw_interval = 5
- img_width = 1920
- img_height = 1080
- fps = 30
-
- tracker = BYTETracker()
-
- for idx in range((int(img_len / 2)), img_len + 1):
- i = idx - 1
- print('tracking: ', i)
- with torch.no_grad():
- data_path1 = img_list[min(idx, img_len - 1)]
- img_origin1 = skimage.io.imread(data_path1)
- img_h, img_w, _ = img_origin1.shape
- img_height, img_width = img_h, img_w
- resize_h, resize_w = math.ceil(img_h / 32) * 32, math.ceil(img_w / 32) * 32
- img1 = np.zeros((resize_h, resize_w, 3), dtype=img_origin1.dtype)
- img1[:img_h, :img_w, :] = img_origin1
- img1 = (img1.astype(np.float32) / 255.0 - np.array([[RGB_MEAN]])) / np.array([[RGB_STD]])
- img1 = torch.from_numpy(img1).permute(2, 0, 1).view(1, 3, resize_h, resize_w)
- scores, transformed_anchors, last_feat = retinanet(img1.cuda().float(), last_feat=last_feat)
-
- if idx > (int(img_len / 2)):
- idxs = np.where(scores > 0.1)
- # run tracking
- online_targets = tracker.update(transformed_anchors[idxs[0], :4], scores[idxs[0]])
- online_tlwhs = []
- online_ids = []
- online_scores = []
- for t in online_targets:
- tlwh = t.tlwh
- tid = t.track_id
- online_tlwhs.append(tlwh)
- online_ids.append(tid)
- online_scores.append(t.score)
- results.append((idx, online_tlwhs, online_ids, online_scores))
-
- fout_tracking = os.path.join(model_dir, 'results', cur_dataset + '.txt')
- write_results(fout_tracking, results)
-
-
-
-def main(args=None):
-    parser = argparse.ArgumentParser(description='Simple script for running BYTETracker on top of a CTracker detection network.')
- parser.add_argument('--dataset_path', default='/dockerdata/home/jeromepeng/data/MOT/MOT17/', type=str,
- help='Dataset path, location of the images sequence.')
- parser.add_argument('--model_dir', default='./trained_model/', help='Path to model (.pt) file.')
- parser.add_argument('--model_path', default='./trained_model/model_final.pth', help='Path to model (.pt) file.')
- parser.add_argument('--seq_nums', default=0, type=int)
-
- parser = parser.parse_args(args)
-
- if not os.path.exists(os.path.join(parser.model_dir, 'results')):
- os.makedirs(os.path.join(parser.model_dir, 'results'))
-
- retinanet = model.resnet50(num_classes=1, pretrained=True)
- # retinanet_save = torch.load(os.path.join(parser.model_dir, 'model_final.pth'))
- retinanet_save = torch.load(os.path.join(parser.model_path))
-
- # rename moco pre-trained keys
- state_dict = retinanet_save.state_dict()
- for k in list(state_dict.keys()):
- # retain only encoder up to before the embedding layer
- if k.startswith('module.'):
- # remove prefix
- state_dict[k[len("module."):]] = state_dict[k]
- # delete renamed or unused k
- del state_dict[k]
-
- retinanet.load_state_dict(state_dict)
-
- use_gpu = True
-
- if use_gpu: retinanet = retinanet.cuda()
-
- retinanet.eval()
- seq_nums = []
- if parser.seq_nums > 0:
- seq_nums.append(parser.seq_nums)
- else:
- seq_nums = [2, 4, 5, 9, 10, 11, 13]
-
- for seq_num in seq_nums:
- run_each_dataset(parser.model_dir, retinanet, parser.dataset_path, 'train', 'MOT17-{:02d}'.format(seq_num))
-
-
-# for seq_num in [1, 3, 6, 7, 8, 12, 14]:
-# run_each_dataset(parser.model_dir, retinanet, parser.dataset_path, 'test', 'MOT17-{:02d}'.format(seq_num))
-
-if __name__ == '__main__':
- main()
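The deleted test_byte.py above assembles a `results` list of (frame, tlwh boxes, track ids, scores) tuples and writes them with write_results in the MOTChallenge text format, one row per track per frame: frame,id,x1,y1,w,h,score,-1,-1,-1. A self-contained sketch of that format with illustrative values only, mirroring the helper rather than importing it:

results = [
    (1, [(100.0, 150.0, 40.0, 80.0)], [1], [0.92]),   # frame 1: one track, tlwh box
    (2, [(104.0, 152.0, 40.0, 80.0)], [1], [0.90]),   # frame 2: the same track, shifted
]
save_format = '{frame},{id},{x1},{y1},{w},{h},{s},-1,-1,-1\n'
with open('MOT17-02.txt', 'w') as f:
    for frame_id, tlwhs, track_ids, scores in results:
        for (x1, y1, w, h), track_id, score in zip(tlwhs, track_ids, scores):
            f.write(save_format.format(frame=frame_id, id=track_id,
                                       x1=round(x1, 1), y1=round(y1, 1),
                                       w=round(w, 1), h=round(h, 1), s=round(score, 2)))
# first row written: 1,1,100.0,150.0,40.0,80.0,0.92,-1,-1,-1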
diff --git a/spaces/ECCV2022/bytetrack/yolox/__init__.py b/spaces/ECCV2022/bytetrack/yolox/__init__.py
deleted file mode 100644
index 1cbc411d419c55098e7d4e24ff0f21caaaf10a1f..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-
-from .utils import configure_module
-
-configure_module()
-
-__version__ = "0.1.0"
diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/models.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/models.py
deleted file mode 100644
index 44c08d361bcb13b84b38dc29beff5cdaddad4ea2..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/models.py
+++ /dev/null
@@ -1,1124 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # taking % 1 here means the per-harmonic products cannot be optimized away in later processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # taking % 1 here would prevent the following cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
-    sampling_rate: sampling rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the 1 is the time axis, to be broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the 1 is the time axis, to be broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the 1 is the time axis, to be broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the 1 is the time axis, to be broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
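The SineGen class deleted above builds its excitation by converting F0 into a per-sample normalised frequency (f0 / sampling_rate), integrating it into phase with a cumulative sum, stacking harmonic multiples, and masking unvoiced samples. A stripped-down illustration of that idea, with the upsampling, random initial phase, and noise mixing omitted (this is a sketch, not the deleted class):

import numpy as np
import torch

def toy_sine_excitation(f0, sampling_rate=16000, harmonic_num=2, sine_amp=0.1):
    # f0: [B, T] fundamental frequency in Hz, already at sample rate, 0 where unvoiced.
    harmonics = torch.arange(1, harmonic_num + 2, dtype=f0.dtype)  # multipliers 1 .. harmonic_num + 1
    rad = (f0.unsqueeze(-1) * harmonics / sampling_rate) % 1       # normalised frequency per harmonic
    phase = torch.cumsum(rad, dim=1)                               # integrate frequency into phase
    sines = sine_amp * torch.sin(2 * np.pi * phase)                # [B, T, harmonic_num + 1]
    uv = (f0 > 0).to(f0.dtype).unsqueeze(-1)                       # voiced/unvoiced mask, as in _f02uv
    return sines * uv

f0 = torch.full((1, 16000), 220.0)    # one second of a steady 220 Hz tone
excitation = toy_sine_excitation(f0)  # [1, 16000, 3]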
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/detection/yolov5face/utils/general.py b/spaces/FelixLuoX/codeformer/CodeFormer/facelib/detection/yolov5face/utils/general.py
deleted file mode 100644
index 1c8e14f56a107ec3a4269c382cfc5168ad780ffc..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/detection/yolov5face/utils/general.py
+++ /dev/null
@@ -1,271 +0,0 @@
-import math
-import time
-
-import numpy as np
-import torch
-import torchvision
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- # if new_size != img_size:
- # print(f"WARNING: --img-size {img_size:g} must be multiple of max stride {s:g}, updating to {new_size:g}")
- return new_size
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
- return math.ceil(x / divisor) * divisor
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
-    # Clip xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter)
-
-
-def non_max_suppression_face(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()):
- """Performs Non-Maximum Suppression (NMS) on inference results
- Returns:
- detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
- """
-
- nc = prediction.shape[2] - 15 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- # (pixels) maximum box width and height
- max_wh = 4096
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 16), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- label = labels[xi]
- v = torch.zeros((len(label), nc + 15), device=x.device)
- v[:, :4] = label[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(label)), label[:, 0].long() + 15] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 15:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, landmarks, cls)
- if multi_label:
- i, j = (x[:, 15:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 15, None], x[:, 5:15], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 15:].max(1, keepdim=True)
- x = torch.cat((box, conf, x[:, 5:15], j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # If none remain process next image
- n = x.shape[0] # number of boxes
- if not n:
- continue
-
- # Batched NMS
- c = x[:, 15:16] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
-
- if merge and (1 < n < 3e3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- break # time limit exceeded
-
- return output
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()):
- """Performs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- # (pixels) maximum box width and height
- max_wh = 4096
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- label_id = labels[xi]
- v = torch.zeros((len(label_id), nc + 5), device=x.device)
- v[:, :4] = label_id[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(label_id)), label_id[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
-
- x = x[x[:, 4].argsort(descending=True)] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if merge and (1 < n < 3e3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f"WARNING: NMS time limit {time_limit}s exceeded")
- break # time limit exceeded
-
- return output
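-
-# Illustrative usage sketch (shapes and thresholds are examples; assumes a YOLO-style head
-# whose raw output is (batch, n_candidates, 5 + nc) in xywh + objectness + class scores):
-#   dets = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)
-#   det = dets[0]  # (n, 6) tensor for the first image: x1, y1, x2, y2, conf, cls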
-
-
-def scale_coords_landmarks(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale landmark coords (x1, y1, ..., x5, y5) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2, 4, 6, 8]] -= pad[0] # x padding
- coords[:, [1, 3, 5, 7, 9]] -= pad[1] # y padding
- coords[:, :10] /= gain
- coords[:, 0].clamp_(0, img0_shape[1]) # x1
- coords[:, 1].clamp_(0, img0_shape[0]) # y1
- coords[:, 2].clamp_(0, img0_shape[1]) # x2
- coords[:, 3].clamp_(0, img0_shape[0]) # y2
- coords[:, 4].clamp_(0, img0_shape[1]) # x3
- coords[:, 5].clamp_(0, img0_shape[0]) # y3
- coords[:, 6].clamp_(0, img0_shape[1]) # x4
- coords[:, 7].clamp_(0, img0_shape[0]) # y4
- coords[:, 8].clamp_(0, img0_shape[1]) # x5
- coords[:, 9].clamp_(0, img0_shape[0]) # y5
- return coords
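-
-# Illustrative example (shapes are assumptions): map 5-point landmarks predicted on a
-# 640x640 letterboxed input back to the original 720x1280 frame; `landmarks` is an (n, 10)
-# tensor of x1, y1, ..., x5, y5 in model-input pixels and the result is clipped to the image.
-#   landmarks_orig = scale_coords_landmarks((640, 640), landmarks.clone(), (720, 1280))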
diff --git a/spaces/Fengbinbin/gpt-academic/.github/ISSUE_TEMPLATE/bug_report.md b/spaces/Fengbinbin/gpt-academic/.github/ISSUE_TEMPLATE/bug_report.md
deleted file mode 100644
index ac668766a39892be5bc9e03f3ea626f8b3bf4b57..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/.github/ISSUE_TEMPLATE/bug_report.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: ''
-labels: ''
-assignees: ''
-
----
-
-- **(1) Describe the bug**
-
-
-- **(2) Screenshot**
-
-
-- **(3) Terminal traceback (if any)**
-
-
-- **(4) Material to help reproduce the bug (if any)**
-
-
-
-Before submitting an issue, please:
-- Make sure your code is up to date; if it is not, try updating it first.
-- Check the project [wiki](https://github.com/binary-husky/chatgpt_academic/wiki) for solutions to common problems.
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/utils.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern) # get checkpoint paths
- cp_list = sorted(cp_list)  # sort by iteration
- if len(cp_list) > n_models:  # if more than n_models checkpoints are found
- for cp in cp_list[:-n_models]:  # delete the oldest checkpoints, keeping the latest n_models
- open(cp, 'w').close()  # empty file contents first
- os.unlink(cp)  # then delete the file (moves it to trash when running on Colab)
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/align_rope_along_line.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/align_rope_along_line.py
deleted file mode 100644
index bab4c7a326cc97c490e5b4cfba2531cd2823e967..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/align_rope_along_line.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-
-class AlignRopeAlongLine(Task):
- """Align a deformable rope along a straight line marked on the tabletop."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "align the rope along the line"
- self.task_completed_desc = "done aligning."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add line.
- length = np.random.uniform(0.18, 0.25)
- line_size = (length, 0.01, 0.01)
- line_pose = self.get_random_pose(env, line_size)
- line_template = 'line/line-template.urdf'
- replace = {'DIM': line_size, 'HALF': (line_size[0] / 2, line_size[1] / 2, line_size[2] / 2)}
- line_urdf = self.fill_template(line_template, replace)
- env.add_object(line_urdf, line_pose, 'fixed')
-
- # Add rope.
- rope_size = (length, 0.01, 0.01)
- rope_pose = self.get_random_pose(env, rope_size)
- corner1_pose = utils.apply(line_pose, (length / 2, 0.01, 0.01))
- corner2_pose = utils.apply(line_pose, (-length / 2, 0.01, 0.01))
- rope_id, targets, matches = self.make_rope(env, (corner1_pose, corner2_pose), n_parts=15)
-
- # Goal: rope is aligned with the line.
- self.add_goal(objs=rope_id, matches=matches, targ_poses=targets, replace=False,
- rotations=False, metric='pose', params=None, step_max_reward=1, language_goal=self.lang_template)
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_unet_lat.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_unet_lat.py
deleted file mode 100644
index 811414569b33609353ed4eae5708aab1c6251025..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_unet_lat.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-import cliport.utils.utils as utils
-from cliport.models.resnet import IdentityBlock, ConvBlock
-from cliport.models.core.unet import Up
-from cliport.models.core.fusion import FusionConvLat
-from cliport.models.clip_lingunet_lat import CLIPLingUNetLat
-
-
-class CLIPUNetLat(CLIPLingUNetLat):
- """ CLIP RN50 with U-Net skip connections and lateral connections without language """
-
- def __init__(self, input_shape, output_dim, cfg, device, preprocess):
- super().__init__(input_shape, output_dim, cfg, device, preprocess)
-
- def _build_decoder(self):
- self.conv1 = nn.Sequential(
- nn.Conv2d(self.input_dim, 1024, kernel_size=3, stride=1, padding=1, bias=False),
- nn.ReLU(True)
- )
-
- self.up1 = Up(2048, 1024 // self.up_factor, self.bilinear)
- self.lat_fusion1 = FusionConvLat(input_dim=1024+512, output_dim=512)
-
- self.up2 = Up(1024, 512 // self.up_factor, self.bilinear)
- self.lat_fusion2 = FusionConvLat(input_dim=512+256, output_dim=256)
-
- self.up3 = Up(512, 256 // self.up_factor, self.bilinear)
- self.lat_fusion3 = FusionConvLat(input_dim=256+128, output_dim=128)
-
- self.layer1 = nn.Sequential(
- ConvBlock(128, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- IdentityBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- nn.UpsamplingBilinear2d(scale_factor=2),
- )
- self.lat_fusion4 = FusionConvLat(input_dim=128+64, output_dim=64)
-
- self.layer2 = nn.Sequential(
- ConvBlock(64, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- IdentityBlock(32, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- nn.UpsamplingBilinear2d(scale_factor=2),
- )
- self.lat_fusion5 = FusionConvLat(input_dim=64+32, output_dim=32)
-
- self.layer3 = nn.Sequential(
- ConvBlock(32, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- IdentityBlock(16, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- nn.UpsamplingBilinear2d(scale_factor=2),
- )
- self.lat_fusion6 = FusionConvLat(input_dim=32+16, output_dim=16)
-
- self.conv2 = nn.Sequential(
- nn.Conv2d(16, self.output_dim, kernel_size=1)
- )
-
- def forward(self, x, lat):
- x = self.preprocess(x, dist='clip')
-
- in_type = x.dtype
- in_shape = x.shape
- x = x[:, :3]  # select RGB
- x, im = self.encode_image(x)
- x = x.to(in_type)
-
- x = self.conv1(x)
-
- x = self.up1(x, im[-2])
- x = self.lat_fusion1(x, lat[-6])
-
- x = self.up2(x, im[-3])
- x = self.lat_fusion2(x, lat[-5])
-
- x = self.up3(x, im[-4])
- x = self.lat_fusion3(x, lat[-4])
-
- x = self.layer1(x)
- x = self.lat_fusion4(x, lat[-3])
-
- x = self.layer2(x)
- x = self.lat_fusion5(x, lat[-2])
-
- x = self.layer3(x)
- x = self.lat_fusion6(x, lat[-1])
-
- x = self.conv2(x)
-
- x = F.interpolate(x, size=(in_shape[-2], in_shape[-1]), mode='bilinear')
- return x
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/packing_google_objects.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/packing_google_objects.py
deleted file mode 100644
index 3716d33e5dbbdff3a313642eacd3fcbb672a9295..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/packing_google_objects.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import os
-
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-import pybullet as p
-
-
-class PackingSeenGoogleObjectsSeq(Task):
- """: Place the specified objects in the brown box following the order prescribed in the language
-instruction at each timestep."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 6
- self.lang_template = "pack the {obj} in the brown box"
- self.task_completed_desc = "done packing objects."
- self.object_names = self.get_object_names()
- self.additional_reset()
-
- def get_object_names(self):
- return utils.google_all_shapes
-
- def reset(self, env):
- super().reset(env)
-
- # object names
- object_names = self.object_names[self.mode]
-
- # Add container box.
- zone_size = self.get_random_size(0.2, 0.35, 0.2, 0.35, 0.05, 0.05)
- zone_pose = self.get_random_pose(env, zone_size)
- container_template = 'container/container-template_DIM_HALF.urdf'
- replace = {'DIM': zone_size, 'HALF': (zone_size[0] / 2, zone_size[1] / 2, zone_size[2] / 2)}
- container_urdf = self.fill_template(container_template, replace)
- env.add_object(container_urdf, zone_pose, 'fixed')
-
- margin = 0.01
- min_object_dim = 0.08
- bboxes = []
-
- # Split container space with KD trees.
- stack_size = np.array(zone_size)
- stack_size[0] -= 0.01
- stack_size[1] -= 0.01
- root_size = (0.01, 0.01, 0) + tuple(stack_size)
- root = utils.TreeNode(None, [], bbox=np.array(root_size))
- utils.KDTree(root, min_object_dim, margin, bboxes)
-
- # Add Google Scanned Objects to scene.
- object_ids = []
- bboxes = np.array(bboxes)
- scale_factor = 5
- object_template = 'google/object-template_FNAME_COLOR_SCALE.urdf'
- chosen_objs, repeat_category = self.choose_objects(object_names, len(bboxes))
- object_descs = []
- for i, bbox in enumerate(bboxes):
- size = bbox[3:] - bbox[:3]
- max_size = size.max()
- position = size / 2. + bbox[:3]
- position[0] += -zone_size[0] / 2
- position[1] += -zone_size[1] / 2
- shape_size = max_size * scale_factor
- pose = self.get_random_pose(env, size)
-
- # Add object only if valid pose found.
- if pose[0] is not None:
- # Initialize with a slightly tilted pose so that the objects aren't always erect.
- slight_tilt = utils.q_mult(pose[1], (-0.1736482, 0, 0, 0.9848078))
- ps = ((pose[0][0], pose[0][1], pose[0][2]+0.05), slight_tilt)
-
- object_name = chosen_objs[i]
- object_name_with_underscore = object_name.replace(" ", "_")
- mesh_file = os.path.join(self.assets_root,
- 'google',
- 'meshes_fixed',
- f'{object_name_with_underscore}.obj')
- texture_file = os.path.join(self.assets_root,
- 'google',
- 'textures',
- f'{object_name_with_underscore}.png')
-
- try:
- replace = {'FNAME': (mesh_file,),
- 'SCALE': [shape_size, shape_size, shape_size],
- 'COLOR': (0.2, 0.2, 0.2)}
- urdf = self.fill_template(object_template, replace)
- box_id = env.add_object(urdf, ps)
- object_ids.append((box_id, (0, None)))
-
- texture_id = p.loadTexture(texture_file)
- p.changeVisualShape(box_id, -1, textureUniqueId=texture_id)
- p.changeVisualShape(box_id, -1, rgbaColor=[1, 1, 1, 1])
-
- object_descs.append(object_name)
-
- except Exception as e:
- print("Failed to load Google Scanned Object in PyBullet")
- print(object_name_with_underscore, mesh_file, texture_file)
- print(f"Exception: {e}")
-
- self.set_goals(object_descs, object_ids, repeat_category, zone_pose, zone_size)
-
- for i in range(480):
- p.stepSimulation()
-
- def choose_objects(self, object_names, k):
- repeat_category = None
- return np.random.choice(object_names, k, replace=False), repeat_category
-
- def set_goals(self, object_descs, object_ids, repeat_category, zone_pose, zone_size):
- # Random picking sequence.
- num_pack_objs = np.random.randint(1, len(object_ids))
-
- object_ids = object_ids[:num_pack_objs]
- true_poses = []
- for obj_idx, (object_id, _) in enumerate(object_ids):
- true_poses.append(zone_pose)
- language_goal = self.lang_template.format(obj=object_descs[obj_idx])
- self.add_goal(objs=[object_id], matches=np.int32([[1]]), targ_poses=[zone_pose], replace=False,
- rotations=True, metric='zone', params=[(zone_pose, zone_size)], step_max_reward=1 / len(object_ids),
- language_goal=language_goal)
-
- # Only mistake allowed.
- self.max_steps = len(object_ids)+1
-
diff --git "a/spaces/Gmq-x/gpt-academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/Gmq-x/gpt-academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index e57f80f1d45bd3ec23837253848f7b32a5ccd751..0000000000000000000000000000000000000000
--- "a/spaces/Gmq-x/gpt-academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,138 +0,0 @@
-import threading
-from request_llm.bridge_all import predict_no_ui_long_connection
-from toolbox import update_ui
-from toolbox import CatchException, write_results_to_file, report_execption
-from .crazy_utils import breakdown_txt_to_satisfy_token_limit
-
-def extract_code_block_carefully(txt):
- splitted = txt.split('```')
- n_code_block_seg = len(splitted) - 1
- if n_code_block_seg <= 1: return txt
- # 剩下的情况都开头除去 ``` 结尾除去一次 ```
- txt_out = '```'.join(splitted[1:-1])
- return txt_out
-
-
-
-def break_txt_into_half_at_some_linebreak(txt):
- lines = txt.split('\n')
- n_lines = len(lines)
- pre = lines[:(n_lines//2)]
- post = lines[(n_lines//2):]
- return "\n".join(pre), "\n".join(post)
-
-
-@CatchException
-def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- # 第1步:清空历史,以免输入溢出
- history = []
-
- # 第2步:尝试导入依赖,如果缺少依赖,则给出安装建议
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # 第3步:集合文件
- import time, glob, os, shutil, re
- os.makedirs('gpt_log/generated_english_version', exist_ok=True)
- os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- # file_manifest = ['./toolbox.py']
- i_say_show_user_buffer = []
-
- # 第4步:随便显示点什么防止卡顿的感觉
- for index, fp in enumerate(file_manifest):
- # if 'test_project' in fp: continue
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
- i_say_show_user_buffer.append(i_say_show_user)
- chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-
- # 第5步:Token限制下的截断与处理
- MAX_TOKEN = 3000
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
-
-
- # 第6步:任务函数
- mutable_return = [None for _ in file_manifest]
- observe_window = [[""] for _ in file_manifest]
- def thread_worker(fp,index):
- if index > 10:
- time.sleep(60)
- print('Openai 限制免费用户每分钟20次请求,降低请求频率中。')
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
- try:
- gpt_say = ""
- # 分解代码文件
- file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN)
- for file_content_partial in file_content_breakdown:
- i_say = i_say_template(fp, file_content_partial)
- # # ** gpt request **
- gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
- gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
- gpt_say += gpt_say_partial
- mutable_return[index] = gpt_say
- except ConnectionAbortedError as token_exceed_err:
- print('至少一个线程任务Token溢出而失败', token_exceed_err)
- except Exception as e:
- print('至少一个线程任务意外失败', e)
-
- # 第7步:所有线程同时开始执行任务函数
- handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
- for h in handles:
- h.daemon = True
- h.start()
- chatbot.append(('开始了吗?', f'多线程操作已经开始'))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # 第8步:循环轮询各个线程是否执行完毕
- cnt = 0
- while True:
- cnt += 1
- time.sleep(0.2)
- th_alive = [h.is_alive() for h in handles]
- if not any(th_alive): break
- # 更好的UI视觉效果
- observe_win = []
- for thread_index, alive in enumerate(th_alive):
- observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace(' ','.....').replace('$','.')+"... ]")
- stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
- stat_str = ''.join(stat)
- chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # 第9步:把结果写入文件
- for index, h in enumerate(handles):
- h.join() # 这里其实不需要join了,肯定已经都结束了
- fp = file_manifest[index]
- gpt_say = mutable_return[index]
- i_say_show_user = i_say_show_user_buffer[index]
-
- where_to_relocate = f'gpt_log/generated_english_version/{fp}'
- if gpt_say is not None:
- with open(where_to_relocate, 'w+', encoding='utf-8') as f:
- f.write(gpt_say)
- else: # 失败
- shutil.copyfile(file_manifest[index], where_to_relocate)
- chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- time.sleep(1)
-
- # 第10步:备份一个文件
- res = write_results_to_file(history)
- chatbot.append(("生成一份任务执行报告", res))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 86584573a3d1afac73041b85516112ac21f1f17c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/split_train_valid_docs.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/split_train_valid_docs.py
deleted file mode 100644
index ff159785284a13b44626b207d84430c592acaf8f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/split_train_valid_docs.py
+++ /dev/null
@@ -1,86 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Split a large file into a train and valid set while respecting document
-boundaries. Documents should be separated by a single empty line.
-"""
-
-import argparse
-import random
-import sys
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("input")
- parser.add_argument("sample_output", help="train output file")
- parser.add_argument("remainder_output", help="valid output file")
- parser.add_argument("-k", type=int, help="remainder size")
- parser.add_argument(
- "--lines", action="store_true", help="split lines instead of docs"
- )
- args = parser.parse_args()
-
- assert args.k is not None
-
- sample = []
- remainder = []
- num_docs = [0]
-
- def update_sample(doc):
- if len(sample) < args.k:
- sample.append(doc.copy())
- else:
- i = num_docs[0]
- j = random.randrange(i + 1)
- if j < args.k:
- remainder.append(sample[j])
- sample[j] = doc.copy()
- else:
- remainder.append(doc.copy())
- num_docs[0] += 1
- doc.clear()
-
- with open(args.input, "r", encoding="utf-8") as h:
- doc = []
- for i, line in enumerate(h):
- if line.strip() == "": # empty line indicates new document
- update_sample(doc)
- else:
- doc.append(line)
- if args.lines:
- update_sample(doc)
- if i % 1000000 == 0:
- print(i, file=sys.stderr, end="", flush=True)
- elif i % 100000 == 0:
- print(".", file=sys.stderr, end="", flush=True)
- if len(doc) > 0:
- update_sample(doc)
- print(file=sys.stderr, flush=True)
-
- assert len(sample) == args.k
-
- with open(args.sample_output, "w", encoding="utf-8") as out:
- first = True
- for doc in sample:
- if not first and not args.lines:
- out.write("\n")
- first = False
- for line in doc:
- out.write(line)
-
- with open(args.remainder_output, "w", encoding="utf-8") as out:
- first = True
- for doc in remainder:
- if not first and not args.lines:
- out.write("\n")
- first = False
- for line in doc:
- out.write(line)
-
-
-if __name__ == "__main__":
- main()
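-
-# Illustrative invocation (file names are placeholders):
-#   python split_train_valid_docs.py corpus.txt sample.txt remainder.txt -k 1000
-# writes a uniform reservoir sample of 1000 documents to sample.txt and all remaining
-# documents to remainder.txt; pass --lines to sample individual lines instead of documents.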
diff --git a/spaces/HighCWu/GPEN/face_model/op/__init__.py b/spaces/HighCWu/GPEN/face_model/op/__init__.py
deleted file mode 100644
index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GPEN/face_model/op/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .fused_act import FusedLeakyReLU, fused_leaky_relu
-from .upfirdn2d import upfirdn2d
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates.py
deleted file mode 100644
index 561a07d38de21f362ea8871549ad8a80926dc375..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates.py
+++ /dev/null
@@ -1,563 +0,0 @@
-from __future__ import annotations
-
-import typing
-from typing import Any, Callable, Tuple
-
-import numpy as np
-from PIL.Image import Image
-
-from gradio import components
-
-
-class TextArea(components.Textbox):
- """
- Sets: lines=7
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Callable | None = "",
- *,
- lines: int = 7,
- max_lines: int = 20,
- placeholder: str | None = None,
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- lines=lines,
- max_lines=max_lines,
- placeholder=placeholder,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- **kwargs,
- )
-
-
-class Webcam(components.Image):
- """
- Sets: source="webcam", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: Tuple[int, int] | None = None,
- image_mode: str = "RGB",
- invert_colors: bool = False,
- source: str = "webcam",
- tool: str | None = None,
- type: str = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- **kwargs,
- )
-
-
-class Sketchpad(components.Image):
- """
- Sets: image_mode="L", source="canvas", shape=(28, 28), invert_colors=True, interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: Tuple[int, int] = (28, 28),
- image_mode: str = "L",
- invert_colors: bool = True,
- source: str = "canvas",
- tool: str | None = None,
- type: str = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- **kwargs,
- )
-
-
-class Paint(components.Image):
- """
- Sets: source="canvas", tool="color-sketch", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: Tuple[int, int] | None = None,
- image_mode: str = "RGB",
- invert_colors: bool = False,
- source: str = "canvas",
- tool: str = "color-sketch",
- type: str = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- **kwargs,
- )
-
-
-class ImageMask(components.Image):
- """
- Sets: source="upload", tool="sketch", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: Tuple[int, int] | None = None,
- image_mode: str = "RGB",
- invert_colors: bool = False,
- source: str = "upload",
- tool: str = "sketch",
- type: str = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- **kwargs,
- )
-
-
-class ImagePaint(components.Image):
- """
- Sets: source="upload", tool="color-sketch", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: Tuple[int, int] | None = None,
- image_mode: str = "RGB",
- invert_colors: bool = False,
- source: str = "upload",
- tool: str = "color-sketch",
- type: str = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- **kwargs,
- )
-
-
-class Pil(components.Image):
- """
- Sets: type="pil"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: Tuple[int, int] | None = None,
- image_mode: str = "RGB",
- invert_colors: bool = False,
- source: str = "upload",
- tool: str | None = None,
- type: str = "pil",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- **kwargs,
- )
-
-
-class PlayableVideo(components.Video):
- """
- Sets: format="mp4"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Callable | None = None,
- *,
- format: str | None = "mp4",
- source: str = "upload",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- include_audio: bool | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- format=format,
- source=source,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- include_audio=include_audio,
- **kwargs,
- )
-
-
-class Microphone(components.Audio):
- """
- Sets: source="microphone"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Tuple[int, np.ndarray] | Callable | None = None,
- *,
- source: str = "microphone",
- type: str = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- source=source,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- **kwargs,
- )
-
-
-class Files(components.File):
- """
- Sets: file_count="multiple"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | typing.List[str] | Callable | None = None,
- *,
- file_count: str = "multiple",
- type: str = "file",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- file_count=file_count,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- **kwargs,
- )
-
-
-class Numpy(components.Dataframe):
- """
- Sets: type="numpy"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: typing.List[typing.List[Any]] | Callable | None = None,
- *,
- headers: typing.List[str] | None = None,
- row_count: int | Tuple[int, str] = (1, "dynamic"),
- col_count: int | Tuple[int, str] | None = None,
- datatype: str | typing.List[str] = "str",
- type: str = "numpy",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: str = "paginate",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- super().__init__(
- value=value,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- type=type,
- max_rows=max_rows,
- max_cols=max_cols,
- overflow_row_behaviour=overflow_row_behaviour,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- wrap=wrap,
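-# Illustrative usage (paths and prefixes are assumptions): with generator checkpoints saved
-# as e.g. g_00012345 inside cp_dir, scan_checkpoint(cp_dir, 'g_') returns the newest matching
-# path (or None if none exist), which load_checkpoint can then restore:
-#   cp_g = scan_checkpoint(cp_dir, 'g_')
-#   state_dict_g = load_checkpoint(cp_g, device) if cp_g else None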
- **kwargs,
- )
-
-
-class Matrix(components.Dataframe):
- """
- Sets: type="array"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: typing.List[typing.List[Any]] | Callable | None = None,
- *,
- headers: typing.List[str] | None = None,
- row_count: int | Tuple[int, str] = (1, "dynamic"),
- col_count: int | Tuple[int, str] | None = None,
- datatype: str | typing.List[str] = "str",
- type: str = "array",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: str = "paginate",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- super().__init__(
- value=value,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- type=type,
- max_rows=max_rows,
- max_cols=max_cols,
- overflow_row_behaviour=overflow_row_behaviour,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- wrap=wrap,
- **kwargs,
- )
-
-
-class List(components.Dataframe):
- """
- Sets: type="array", col_count=1
- """
-
- is_template = True
-
- def __init__(
- self,
- value: typing.List[typing.List[Any]] | Callable | None = None,
- *,
- headers: typing.List[str] | None = None,
- row_count: int | Tuple[int, str] = (1, "dynamic"),
- col_count: int | Tuple[int, str] = 1,
- datatype: str | typing.List[str] = "str",
- type: str = "array",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: str = "paginate",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- super().__init__(
- value=value,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- type=type,
- max_rows=max_rows,
- max_cols=max_cols,
- overflow_row_behaviour=overflow_row_behaviour,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- wrap=wrap,
- **kwargs,
- )
-
-
-Mic = Microphone
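-
-# Usage sketch (assumes the standard gradio Interface API; the echo function is a placeholder):
-#   import gradio as gr
-#   demo = gr.Interface(fn=lambda img: img, inputs=gr.Sketchpad(), outputs="image")
-#   demo.launch()
-# Each template class only pre-fills constructor defaults of its parent component
-# (e.g. Sketchpad is an Image with source="canvas", image_mode="L", shape=(28, 28)).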
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/utils.py b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/utils.py
deleted file mode 100644
index 2c7b03733d2290d3834d2c68a16034198daa1e69..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/utils.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from scipy.interpolate import interp1d
-import torchaudio
-
-from fairseq.tasks.text_to_speech import (
- batch_compute_distortion, compute_rms_dist
-)
-
-
-def batch_mel_spectral_distortion(
- y1, y2, sr, normalize_type="path", mel_fn=None
-):
- """
- https://arxiv.org/pdf/2011.03568.pdf
-
- Same as Mel Cepstral Distortion, but computed on log-mel spectrograms.
- """
- if mel_fn is None or mel_fn.sample_rate != sr:
- mel_fn = torchaudio.transforms.MelSpectrogram(
- sr, n_fft=int(0.05 * sr), win_length=int(0.05 * sr),
- hop_length=int(0.0125 * sr), f_min=20, n_mels=80,
- window_fn=torch.hann_window
- ).to(y1[0].device)
- offset = 1e-6
- return batch_compute_distortion(
- y1, y2, sr, lambda y: torch.log(mel_fn(y) + offset).transpose(-1, -2),
- compute_rms_dist, normalize_type
- )
-
-
-# This code is based on
-# "https://github.com/bastibe/MAPS-Scripts/blob/master/helper.py"
-def _same_t_in_true_and_est(func):
- def new_func(true_t, true_f, est_t, est_f):
- assert type(true_t) is np.ndarray
- assert type(true_f) is np.ndarray
- assert type(est_t) is np.ndarray
- assert type(est_f) is np.ndarray
-
- interpolated_f = interp1d(
- est_t, est_f, bounds_error=False, kind='nearest', fill_value=0
- )(true_t)
- return func(true_t, true_f, true_t, interpolated_f)
-
- return new_func
-
-
-@_same_t_in_true_and_est
-def gross_pitch_error(true_t, true_f, est_t, est_f):
- """The relative frequency in percent of pitch estimates that are
- outside a threshold around the true pitch. Only frames that are
- considered pitched by both the ground truth and the estimator (if
- applicable) are considered.
- """
-
- correct_frames = _true_voiced_frames(true_t, true_f, est_t, est_f)
- gross_pitch_error_frames = _gross_pitch_error_frames(
- true_t, true_f, est_t, est_f
- )
- return np.sum(gross_pitch_error_frames) / np.sum(correct_frames)
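-
-# Illustrative numeric check (arbitrary values): with true_f = [100, 0, 200, 300] Hz and
-# est_f = [130, 0, 205, 0] on the same time axis, frames 0 and 2 are voiced in both tracks;
-# frame 0 deviates by 30% (> 20%) and frame 2 by 2.5%, so the gross pitch error is 1/2 = 0.5.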
-
-
-def _gross_pitch_error_frames(true_t, true_f, est_t, est_f, eps=1e-8):
- voiced_frames = _true_voiced_frames(true_t, true_f, est_t, est_f)
- true_f_p_eps = [x + eps for x in true_f]
- pitch_error_frames = np.abs(est_f / true_f_p_eps - 1) > 0.2
- return voiced_frames & pitch_error_frames
-
-
-def _true_voiced_frames(true_t, true_f, est_t, est_f):
- return (est_f != 0) & (true_f != 0)
-
-
-def _voicing_decision_error_frames(true_t, true_f, est_t, est_f):
- return (est_f != 0) != (true_f != 0)
-
-
-@_same_t_in_true_and_est
-def f0_frame_error(true_t, true_f, est_t, est_f):
- gross_pitch_error_frames = _gross_pitch_error_frames(
- true_t, true_f, est_t, est_f
- )
- voicing_decision_error_frames = _voicing_decision_error_frames(
- true_t, true_f, est_t, est_f
- )
- return (np.sum(gross_pitch_error_frames) +
- np.sum(voicing_decision_error_frames)) / (len(true_t))
-
-
-@_same_t_in_true_and_est
-def voicing_decision_error(true_t, true_f, est_t, est_f):
- voicing_decision_error_frames = _voicing_decision_error_frames(
- true_t, true_f, est_t, est_f
- )
- return np.sum(voicing_decision_error_frames) / (len(true_t))
diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py
deleted file mode 100644
index 2c87445d810cd790f887d1a135287a334cbdf223..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-
-import numpy as np
-
-import joblib
-from examples.textless_nlp.gslm.speech2unit.clustering.utils import (
- get_audio_files,
-)
-from examples.textless_nlp.gslm.speech2unit.pretrained.utils import (
- get_features,
-)
-
-
-def get_logger():
- log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
- logging.basicConfig(format=log_format, level=logging.INFO)
- logger = logging.getLogger(__name__)
- return logger
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="Quantize using K-means clustering over acoustic features."
- )
- parser.add_argument(
- "--feature_type",
- type=str,
- choices=["logmel", "hubert", "w2v2", "cpc"],
- default=None,
- required=True,
- help="Acoustic feature type",
- )
- parser.add_argument(
- "--acoustic_model_path",
- type=str,
- help="Pretrained acoustic model checkpoint"
- )
- parser.add_argument(
- "--layer",
- type=int,
- help="The layer of the pretrained model to extract features from",
- default=-1,
- )
- parser.add_argument(
- "--kmeans_model_path",
- type=str,
- required=True,
- help="K-means model file path to use for inference",
- )
- parser.add_argument(
- "--features_path",
- type=str,
- default=None,
- help="Features file path. You don't need to enter acoustic model details if you have dumped features",
- )
- parser.add_argument(
- "--manifest_path",
- type=str,
- default=None,
- help="Manifest file containing the root dir and file names",
- )
- parser.add_argument(
- "--out_quantized_file_path",
- required=True,
- type=str,
- help="File path of quantized output.",
- )
- parser.add_argument(
- "--extension", type=str, default=".flac", help="Features file path"
- )
- return parser
-
-
-def main(args, logger):
- # Feature extraction
- if args.features_path is not None:
- logger.info(f"Loading acoustic features from {args.features_path}...")
- features_batch = np.load(args.features_path)
- else:
- logger.info(f"Extracting {args.feature_type} acoustic features...")
- features_batch = get_features(
- feature_type=args.feature_type,
- checkpoint_path=args.acoustic_model_path,
- layer=args.layer,
- manifest_path=args.manifest_path,
- sample_pct=1.0,
- flatten=False,
- )
- logger.info(
- f"Features extracted for {len(features_batch)} utterances.\n"
- )
- logger.info(
- f"Dimensionality of representation = {features_batch[0].shape[1]}"
- )
-
- # K-means model
- logger.info(f"Loading K-means model from {args.kmeans_model_path} ...")
- kmeans_model = joblib.load(open(args.kmeans_model_path, "rb"))
- kmeans_model.verbose = False
-
- _, fnames, _ = get_audio_files(args.manifest_path)
-
- os.makedirs(os.path.dirname(args.out_quantized_file_path), exist_ok=True)
- print(f"Writing quantized predictions to {args.out_quantized_file_path}")
- with open(args.out_quantized_file_path, "w") as fout:
- for i, feats in enumerate(features_batch):
- pred = kmeans_model.predict(feats)
- pred_str = " ".join(str(p) for p in pred)
- # str.rstrip strips a character set, not a suffix, so remove the extension explicitly
- base_fname = os.path.basename(fnames[i])
- if base_fname.endswith(args.extension):
- base_fname = base_fname[: -len(args.extension)]
- fout.write(f"{base_fname}|{pred_str}\n")
-
-
-if __name__ == "__main__":
- parser = get_parser()
- args = parser.parse_args()
- logger = get_logger()
- logger.info(args)
- main(args, logger)
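-
-# Illustrative invocation (all paths are placeholders):
-#   python quantize_with_kmeans.py --feature_type hubert \
-#       --acoustic_model_path /path/to/hubert.pt --layer 6 \
-#       --kmeans_model_path /path/to/km.bin --manifest_path /path/to/dev.tsv \
-#       --out_quantized_file_path /path/to/dev_quantized.txt
-# Each output line has the form "<file basename>|<space-separated cluster ids>".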
diff --git a/spaces/IDEA-CCNL/Erlangshen-UniMC-Zero-Shot/modeling_albert.py b/spaces/IDEA-CCNL/Erlangshen-UniMC-Zero-Shot/modeling_albert.py
deleted file mode 100644
index 7c5298825fb471e0575dabaefb2b8514e5bedcd8..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/Erlangshen-UniMC-Zero-Shot/modeling_albert.py
+++ /dev/null
@@ -1,1363 +0,0 @@
-# coding=utf-8
-# Copyright 2018 Google AI, Google Brain and the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""PyTorch ALBERT model. """
-
-import math
-import os
-from dataclasses import dataclass
-from typing import Optional, Tuple
-
-import torch
-from packaging import version
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from transformers.activations import ACT2FN
-from transformers.file_utils import (
- ModelOutput,
- add_code_sample_docstrings,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- replace_return_docstrings,
-)
-from transformers.modeling_outputs import (
- BaseModelOutput,
- BaseModelOutputWithPooling,
- MaskedLMOutput,
- MultipleChoiceModelOutput,
- QuestionAnsweringModelOutput,
- SequenceClassifierOutput,
- TokenClassifierOutput,
-)
-from transformers.modeling_utils import (
- PreTrainedModel,
- apply_chunking_to_forward,
- find_pruneable_heads_and_indices,
- prune_linear_layer,
-)
-from transformers.utils import logging
-from transformers import AlbertConfig
-
-
-
-logger = logging.get_logger(__name__)
-
-_CHECKPOINT_FOR_DOC = "albert-base-v2"
-_CONFIG_FOR_DOC = "AlbertConfig"
-_TOKENIZER_FOR_DOC = "AlbertTokenizer"
-
-
-ALBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [
- "albert-base-v1",
- "albert-large-v1",
- "albert-xlarge-v1",
- "albert-xxlarge-v1",
- "albert-base-v2",
- "albert-large-v2",
- "albert-xlarge-v2",
- "albert-xxlarge-v2",
- # See all ALBERT models at https://huggingface.co/models?filter=albert
-]
-
-
-def load_tf_weights_in_albert(model, config, tf_checkpoint_path):
- """Load tf checkpoints in a pytorch model."""
- try:
- import re
-
- import numpy as np
- import tensorflow as tf
- except ImportError:
- logger.error(
- "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
- "https://www.tensorflow.org/install/ for installation instructions."
- )
- raise
- tf_path = os.path.abspath(tf_checkpoint_path)
- logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
- # Load weights from TF model
- init_vars = tf.train.list_variables(tf_path)
- names = []
- arrays = []
- for name, shape in init_vars:
- logger.info(f"Loading TF weight {name} with shape {shape}")
- array = tf.train.load_variable(tf_path, name)
- names.append(name)
- arrays.append(array)
-
- for name, array in zip(names, arrays):
- print(name)
-
- for name, array in zip(names, arrays):
- original_name = name
-
- # If saved from the TF HUB module
- name = name.replace("module/", "")
-
- # Renaming and simplifying
- name = name.replace("ffn_1", "ffn")
- name = name.replace("bert/", "albert/")
- name = name.replace("attention_1", "attention")
- name = name.replace("transform/", "")
- name = name.replace("LayerNorm_1", "full_layer_layer_norm")
- name = name.replace("LayerNorm", "attention/LayerNorm")
- name = name.replace("transformer/", "")
-
- # The feed forward layer had an 'intermediate' step which has been abstracted away
- name = name.replace("intermediate/dense/", "")
- name = name.replace("ffn/intermediate/output/dense/", "ffn_output/")
-
- # ALBERT attention was split between self and output which have been abstracted away
- name = name.replace("/output/", "/")
- name = name.replace("/self/", "/")
-
- # The pooler is a linear layer
- name = name.replace("pooler/dense", "pooler")
-
- # The classifier was simplified to predictions from cls/predictions
- name = name.replace("cls/predictions", "predictions")
- name = name.replace("predictions/attention", "predictions")
-
- # Naming was changed to be more explicit
- name = name.replace("embeddings/attention", "embeddings")
- name = name.replace("inner_group_", "albert_layers/")
- name = name.replace("group_", "albert_layer_groups/")
-
- # Classifier
- if len(name.split("/")) == 1 and ("output_bias" in name or "output_weights" in name):
- name = "classifier/" + name
-
- # No ALBERT model currently handles the next sentence prediction task
- if "seq_relationship" in name:
- name = name.replace("seq_relationship/output_", "sop_classifier/classifier/")
- name = name.replace("weights", "weight")
-
- name = name.split("/")
-
- # Ignore the gradients applied by the LAMB/ADAM optimizers.
- if (
- "adam_m" in name
- or "adam_v" in name
- or "AdamWeightDecayOptimizer" in name
- or "AdamWeightDecayOptimizer_1" in name
- or "global_step" in name
- ):
- logger.info(f"Skipping {'/'.join(name)}")
- continue
-
- pointer = model
- for m_name in name:
- if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
- scope_names = re.split(r"_(\d+)", m_name)
- else:
- scope_names = [m_name]
-
- if scope_names[0] == "kernel" or scope_names[0] == "gamma":
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
- pointer = getattr(pointer, "bias")
- elif scope_names[0] == "output_weights":
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "squad":
- pointer = getattr(pointer, "classifier")
- else:
- try:
- pointer = getattr(pointer, scope_names[0])
- except AttributeError:
- logger.info(f"Skipping {'/'.join(name)}")
- continue
- if len(scope_names) >= 2:
- num = int(scope_names[1])
- pointer = pointer[num]
-
- if m_name[-11:] == "_embeddings":
- pointer = getattr(pointer, "weight")
- elif m_name == "kernel":
- array = np.transpose(array)
- try:
- if pointer.shape != array.shape:
- raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched")
- except ValueError as e:
- e.args += (pointer.shape, array.shape)
- raise
- print(f"Initialize PyTorch weight {name} from {original_name}")
- pointer.data = torch.from_numpy(array)
-
- return model
-
-
-class AlbertEmbeddings(nn.Module):
- """
- Construct the embeddings from word, position and token_type embeddings.
- """
-
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id)
- self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size)
- self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- if version.parse(torch.__version__) > version.parse("1.6.0"):
- self.register_buffer(
- "token_type_ids",
- torch.zeros(self.position_ids.size(), dtype=torch.long, device=self.position_ids.device),
- persistent=False,
- )
-
- # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.forward
- def forward(
- self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
- ):
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- input_shape = inputs_embeds.size()[:-1]
-
- seq_length = input_shape[1]
-
- if position_ids is None:
- position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
-
- # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs
- # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves
- # issue #5664
- if token_type_ids is None:
- if hasattr(self, "token_type_ids"):
- buffered_token_type_ids = self.token_type_ids[:, :seq_length]
- buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length)
- token_type_ids = buffered_token_type_ids_expanded
- else:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
-
- if inputs_embeds is None:
- inputs_embeds = self.word_embeddings(input_ids)
- token_type_embeddings = self.token_type_embeddings(token_type_ids)
-
- embeddings = inputs_embeds + token_type_embeddings
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings += position_embeddings
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class AlbertAttention(nn.Module):
- def __init__(self, config):
- super().__init__()
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
- raise ValueError(
- f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
- f"heads ({config.num_attention_heads}"
- )
-
- self.num_attention_heads = config.num_attention_heads
- self.hidden_size = config.hidden_size
- self.attention_head_size = config.hidden_size // config.num_attention_heads
- self.all_head_size = self.num_attention_heads * self.attention_head_size
-
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
-
- self.attention_dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.output_dropout = nn.Dropout(config.hidden_dropout_prob)
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.pruned_heads = set()
-
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- self.max_position_embeddings = config.max_position_embeddings
- self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
-
- # Copied from transformers.models.bert.modeling_bert.BertSelfAttention.transpose_for_scores
- def transpose_for_scores(self, x):
- new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
- x = x.view(*new_x_shape)
- return x.permute(0, 2, 1, 3)
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.num_attention_heads, self.attention_head_size, self.pruned_heads
- )
-
- # Prune linear layers
- self.query = prune_linear_layer(self.query, index)
- self.key = prune_linear_layer(self.key, index)
- self.value = prune_linear_layer(self.value, index)
- self.dense = prune_linear_layer(self.dense, index, dim=1)
-
- # Update hyper params and store pruned heads
- self.num_attention_heads = self.num_attention_heads - len(heads)
- self.all_head_size = self.attention_head_size * self.num_attention_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False):
- mixed_query_layer = self.query(hidden_states)
- mixed_key_layer = self.key(hidden_states)
- mixed_value_layer = self.value(hidden_states)
-
- query_layer = self.transpose_for_scores(mixed_query_layer)
- key_layer = self.transpose_for_scores(mixed_key_layer)
- value_layer = self.transpose_for_scores(mixed_value_layer)
-
- # Take the dot product between "query" and "key" to get the raw attention scores.
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
-
- if attention_mask is not None:
- # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
- attention_scores = attention_scores + attention_mask
-
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- seq_length = hidden_states.size()[1]
- position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
- position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
- distance = position_ids_l - position_ids_r
- positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
- positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility
-
- if self.position_embedding_type == "relative_key":
- relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores
- elif self.position_embedding_type == "relative_key_query":
- relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key
-
- # Normalize the attention scores to probabilities.
- attention_probs = nn.Softmax(dim=-1)(attention_scores)
-
- # This is actually dropping out entire tokens to attend to, which might
- # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs = self.attention_dropout(attention_probs)
-
- # Mask heads if we want to
- if head_mask is not None:
- attention_probs = attention_probs * head_mask
-
- context_layer = torch.matmul(attention_probs, value_layer)
- context_layer = context_layer.transpose(2, 1).flatten(2)
-
- projected_context_layer = self.dense(context_layer)
- projected_context_layer_dropout = self.output_dropout(projected_context_layer)
- layernormed_context_layer = self.LayerNorm(hidden_states + projected_context_layer_dropout)
- return (layernormed_context_layer, attention_probs) if output_attentions else (layernormed_context_layer,)
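
A minimal illustrative sketch (not part of the deleted file above) of the additive attention-mask trick referenced in AlbertAttention.forward: the 0/1 padding mask is turned into a large negative bias elsewhere in the model and simply added to the raw attention scores, so padded key positions get essentially zero probability after the softmax. All shapes and values here are toy assumptions.

import torch

# toy scores: 1 batch, 1 head, 4 query positions, 4 key positions
scores = torch.zeros(1, 1, 4, 4)

# 0/1 padding mask: the last key position is padding
attention_mask = torch.tensor([[1.0, 1.0, 1.0, 0.0]])

# broadcast to (batch, 1, 1, seq_len) and map 1 -> 0, 0 -> -10000
extended_mask = (1.0 - attention_mask[:, None, None, :]) * -10000.0

probs = torch.softmax(scores + extended_mask, dim=-1)
print(probs[0, 0, 0])  # ~[0.333, 0.333, 0.333, 0.000]: the padded key is ignored
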
-
-
-class AlbertLayer(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.config = config
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
- self.full_layer_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.attention = AlbertAttention(config)
- self.ffn = nn.Linear(config.hidden_size, config.intermediate_size)
- self.ffn_output = nn.Linear(config.intermediate_size, config.hidden_size)
- self.activation = ACT2FN[config.hidden_act]
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(
- self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False, output_hidden_states=False
- ):
- attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
-
- ffn_output = apply_chunking_to_forward(
- self.ff_chunk,
- self.chunk_size_feed_forward,
- self.seq_len_dim,
- attention_output[0],
- )
- hidden_states = self.full_layer_layer_norm(ffn_output + attention_output[0])
-
- return (hidden_states,) + attention_output[1:] # add attentions if we output them
-
- def ff_chunk(self, attention_output):
- ffn_output = self.ffn(attention_output)
- ffn_output = self.activation(ffn_output)
- ffn_output = self.ffn_output(ffn_output)
- return ffn_output
-
-
-class AlbertLayerGroup(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.albert_layers = nn.ModuleList([AlbertLayer(config) for _ in range(config.inner_group_num)])
-
- def forward(
- self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False, output_hidden_states=False
- ):
- layer_hidden_states = ()
- layer_attentions = ()
-
- for layer_index, albert_layer in enumerate(self.albert_layers):
- layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)
- hidden_states = layer_output[0]
-
- if output_attentions:
- layer_attentions = layer_attentions + (layer_output[1],)
-
- if output_hidden_states:
- layer_hidden_states = layer_hidden_states + (hidden_states,)
-
- outputs = (hidden_states,)
- if output_hidden_states:
- outputs = outputs + (layer_hidden_states,)
- if output_attentions:
- outputs = outputs + (layer_attentions,)
- return outputs # last-layer hidden state, (layer hidden states), (layer attentions)
-
-
-class AlbertTransformer(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.config = config
- self.embedding_hidden_mapping_in = nn.Linear(config.embedding_size, config.hidden_size)
- self.albert_layer_groups = nn.ModuleList([AlbertLayerGroup(config) for _ in range(config.num_hidden_groups)])
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- output_attentions=False,
- output_hidden_states=False,
- return_dict=True,
- ):
- hidden_states = self.embedding_hidden_mapping_in(hidden_states)
-
- all_hidden_states = (hidden_states,) if output_hidden_states else None
- all_attentions = () if output_attentions else None
-
- head_mask = [None] * self.config.num_hidden_layers if head_mask is None else head_mask
-
- for i in range(self.config.num_hidden_layers):
- # Number of layers in a hidden group
- layers_per_group = int(self.config.num_hidden_layers / self.config.num_hidden_groups)
-
- # Index of the hidden group
- group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups))
-
- layer_group_output = self.albert_layer_groups[group_idx](
- hidden_states,
- attention_mask,
- head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group],
- output_attentions,
- output_hidden_states,
- )
- hidden_states = layer_group_output[0]
-
- if output_attentions:
- all_attentions = all_attentions + layer_group_output[-1]
-
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None)
- return BaseModelOutput(
- last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions
- )
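
A rough sketch of the cross-layer parameter sharing implemented in AlbertTransformer.forward above: the model runs `num_hidden_layers` forward passes but only instantiates `num_hidden_groups` parameter groups, so consecutive passes reuse the same weights. The config values below are assumptions chosen for illustration.

# assumed example values
num_hidden_layers = 12
num_hidden_groups = 2

layers_per_group = num_hidden_layers // num_hidden_groups  # 6
for i in range(num_hidden_layers):
    group_idx = i // layers_per_group
    print(f"forward pass {i:2d} -> shared layer group {group_idx}")
# passes 0-5 reuse group 0, passes 6-11 reuse group 1
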
-
-
-class AlbertPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = AlbertConfig
- load_tf_weights = load_tf_weights_in_albert
- base_model_prefix = "albert"
- _keys_to_ignore_on_load_missing = [r"position_ids"]
-
- def _init_weights(self, module):
- """Initialize the weights."""
- if isinstance(module, nn.Linear):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
-
-@dataclass
-class AlbertForPreTrainingOutput(ModelOutput):
- """
- Output type of :class:`~transformers.AlbertForPreTraining`.
-
- Args:
- loss (`optional`, returned when ``labels`` is provided, ``torch.FloatTensor`` of shape :obj:`(1,)`):
- Total loss as the sum of the masked language modeling loss and the next sequence prediction
- (classification) loss.
- prediction_logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- sop_logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, 2)`):
- Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
- before SoftMax).
- hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
- Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
- of shape :obj:`(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
- Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
- sequence_length, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[torch.FloatTensor] = None
- prediction_logits: torch.FloatTensor = None
- sop_logits: torch.FloatTensor = None
- hidden_states: Optional[Tuple[torch.FloatTensor]] = None
- attentions: Optional[Tuple[torch.FloatTensor]] = None
-
-
-ALBERT_START_DOCSTRING = r"""
-
- This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the generic
- methods the library implements for all its model (such as downloading or saving, resizing the input embeddings,
- pruning heads etc.)
-
-    This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__
- subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to
- general usage and behavior.
-
- Args:
- config (:class:`~transformers.AlbertConfig`): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model
- weights.
-"""
-
-ALBERT_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`):
- Indices of input sequence tokens in the vocabulary.
-
- Indices can be obtained using :class:`~transformers.AlbertTokenizer`. See
- :meth:`transformers.PreTrainedTokenizer.__call__` and :meth:`transformers.PreTrainedTokenizer.encode` for
- details.
-
- `What are input IDs? <../glossary.html#input-ids>`__
- attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`):
- Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- `What are attention masks? <../glossary.html#attention-mask>`__
- token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
- Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0,
- 1]``:
-
- - 0 corresponds to a `sentence A` token,
- - 1 corresponds to a `sentence B` token.
-
- `What are token type IDs? <../glossary.html#token-type-ids>`_
- position_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0,
- config.max_position_embeddings - 1]``.
-
- `What are position IDs? <../glossary.html#position-ids>`_
- head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
- Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`({0}, hidden_size)`, `optional`):
- Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
- This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
- vectors than the model's internal embedding lookup matrix.
- output_attentions (:obj:`bool`, `optional`):
- Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
- tensors for more detail.
- output_hidden_states (:obj:`bool`, `optional`):
- Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
- more detail.
- return_dict (:obj:`bool`, `optional`):
- Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top.",
- ALBERT_START_DOCSTRING,
-)
-class AlbertModel(AlbertPreTrainedModel):
-
- config_class = AlbertConfig
- base_model_prefix = "albert"
-
- def __init__(self, config, add_pooling_layer=True):
- super().__init__(config)
-
- self.config = config
- self.embeddings = AlbertEmbeddings(config)
- self.encoder = AlbertTransformer(config)
- if add_pooling_layer:
- self.pooler = nn.Linear(config.hidden_size, config.hidden_size)
- self.pooler_activation = nn.Tanh()
- else:
- self.pooler = None
- self.pooler_activation = None
-
- self.init_weights()
-
- def get_input_embeddings(self):
- return self.embeddings.word_embeddings
-
- def set_input_embeddings(self, value):
- self.embeddings.word_embeddings = value
-
- def _prune_heads(self, heads_to_prune):
- """
-        Prunes heads of the model. ``heads_to_prune`` is a dict of ``{layer_num: list of heads to prune in this layer}``.
-        ALBERT has a different architecture in that its layers are shared across groups, which in turn contain inner
-        groups. If an ALBERT model has 12 hidden layers and 2 hidden groups with two inner groups each, there is a
-        total of 4 different layers.
-
-        These layers are flattened: the indices [0,1] correspond to the two inner groups of the first hidden group,
-        while [2,3] correspond to the two inner groups of the second hidden group.
-
-        Any layer with an index other than [0,1,2,3] will result in an error. See the base class PreTrainedModel for
-        more information about head pruning.
- """
- for layer, heads in heads_to_prune.items():
- group_idx = int(layer / self.config.inner_group_num)
- inner_group_idx = int(layer - group_idx * self.config.inner_group_num)
- self.encoder.albert_layer_groups[group_idx].albert_layers[inner_group_idx].attention.prune_heads(heads)
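
A small worked example (values assumed) of the flattened-layer indexing that `_prune_heads` describes: with `inner_group_num = 2`, a flattened layer index is split into a hidden-group index and an inner-layer index before pruning.

inner_group_num = 2  # assumed, as in the docstring's 2-groups / 2-inner-groups example

heads_to_prune = {0: [1, 3], 3: [0]}  # hypothetical pruning request
for layer, heads in heads_to_prune.items():
    group_idx = layer // inner_group_num
    inner_group_idx = layer - group_idx * inner_group_num
    print(f"flattened layer {layer} -> group {group_idx}, inner layer {inner_group_idx}, prune heads {heads}")
# flattened layer 0 -> group 0 / inner 0; flattened layer 3 -> group 1 / inner 1
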
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=BaseModelOutputWithPooling,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- batch_size, seq_length = input_shape
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- if attention_mask is None:
- attention_mask = torch.ones(input_shape, device=device)
- if token_type_ids is None:
- if hasattr(self.embeddings, "token_type_ids"):
- buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
- buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
- token_type_ids = buffered_token_type_ids_expanded
- else:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
-        # Broadcast the (batch_size, seq_length) padding mask to (batch_size, 1, 1, seq_length) for the attention scores
-        extended_attention_mask = attention_mask[:, None, None, :]
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
- )
- encoder_outputs = self.encoder(
- embedding_output,
- extended_attention_mask,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = encoder_outputs[0]
-
- pooled_output = self.pooler_activation(self.pooler(sequence_output[:, 0])) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPooling(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Albert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a
- `sentence order prediction (classification)` head.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForPreTraining(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.albert = AlbertModel(config)
- self.predictions = AlbertMLMHead(config)
- self.sop_classifier = AlbertSOPHead(config)
-
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.predictions.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.predictions.decoder = new_embeddings
-
- def get_input_embeddings(self):
- return self.albert.embeddings.word_embeddings
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @replace_return_docstrings(output_type=AlbertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- sentence_order_label=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (``torch.LongTensor`` of shape ``(batch_size, sequence_length)``, `optional`):
- Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
- config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
- (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
- sentence_order_label (``torch.LongTensor`` of shape ``(batch_size,)``, `optional`):
- Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
- (see :obj:`input_ids` docstring) Indices should be in ``[0, 1]``. ``0`` indicates original order (sequence
- A, then sequence B), ``1`` indicates switched order (sequence B, then sequence A).
-
- Returns:
-
- Example::
-
- >>> from transformers import AlbertTokenizer, AlbertForPreTraining
- >>> import torch
-
- >>> tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
- >>> model = AlbertForPreTraining.from_pretrained('albert-base-v2')
-
- >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
- >>> outputs = model(input_ids)
-
- >>> prediction_logits = outputs.prediction_logits
- >>> sop_logits = outputs.sop_logits
-
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output, pooled_output = outputs[:2]
-
- prediction_scores = self.predictions(sequence_output)
- sop_scores = self.sop_classifier(pooled_output)
-
- total_loss = None
- if labels is not None and sentence_order_label is not None:
- loss_fct = CrossEntropyLoss()
- masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
- sentence_order_loss = loss_fct(sop_scores.view(-1, 2), sentence_order_label.view(-1))
- total_loss = masked_lm_loss + sentence_order_loss
-
- if not return_dict:
- output = (prediction_scores, sop_scores) + outputs[2:]
- return ((total_loss,) + output) if total_loss is not None else output
-
- return AlbertForPreTrainingOutput(
- loss=total_loss,
- prediction_logits=prediction_scores,
- sop_logits=sop_scores,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-class AlbertMLMHead(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.LayerNorm = nn.LayerNorm(config.embedding_size)
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
- self.dense = nn.Linear(config.hidden_size, config.embedding_size)
- self.decoder = nn.Linear(config.embedding_size, config.vocab_size)
- self.activation = ACT2FN[config.hidden_act]
- self.decoder.bias = self.bias
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.activation(hidden_states)
- hidden_states = self.LayerNorm(hidden_states)
- hidden_states = self.decoder(hidden_states)
-
- prediction_scores = hidden_states
-
- return prediction_scores
-
- def _tie_weights(self):
- # To tie those two weights if they get disconnected (on TPU or when the bias is resized)
- self.bias = self.decoder.bias
-
-
-class AlbertSOPHead(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, config.num_labels)
-
- def forward(self, pooled_output):
- dropout_pooled_output = self.dropout(pooled_output)
- logits = self.classifier(dropout_pooled_output)
- return logits
-
-
-@add_start_docstrings(
- "Albert Model with a `language modeling` head on top.",
- ALBERT_START_DOCSTRING,
-)
-class AlbertForMaskedLM(AlbertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
- def __init__(self, config):
- super().__init__(config)
-
- self.albert = AlbertModel(config, add_pooling_layer=False)
- self.predictions = AlbertMLMHead(config)
-
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.predictions.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.predictions.decoder = new_embeddings
-
- def get_input_embeddings(self):
- return self.albert.embeddings.word_embeddings
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=MaskedLMOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
- config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
- (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_outputs = outputs[0]
-
- prediction_scores = self.predictions(sequence_outputs)
-
- masked_lm_loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- output = (prediction_scores,) + outputs[2:]
- return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
-
- return MaskedLMOutput(
- loss=masked_lm_loss,
- logits=prediction_scores,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
- output) e.g. for GLUE tasks.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForSequenceClassification(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
- self.config = config
-
- self.albert = AlbertModel(config)
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=SequenceClassifierOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the sequence classification/regression loss. Indices should be in ``[0, ...,
- config.num_labels - 1]``. If ``config.num_labels == 1`` a regression loss is computed (Mean-Square loss),
- If ``config.num_labels > 1`` a classification loss is computed (Cross-Entropy).
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooled_output = outputs[1]
-
- pooled_output = self.dropout(pooled_output)
- logits = self.classifier(pooled_output)
-
- loss = None
- if labels is not None:
- if self.config.problem_type is None:
- if self.num_labels == 1:
- self.config.problem_type = "regression"
- elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
- self.config.problem_type = "single_label_classification"
- else:
- self.config.problem_type = "multi_label_classification"
-
- if self.config.problem_type == "regression":
- loss_fct = MSELoss()
- if self.num_labels == 1:
- loss = loss_fct(logits.squeeze(), labels.squeeze())
- else:
- loss = loss_fct(logits, labels)
- elif self.config.problem_type == "single_label_classification":
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
- elif self.config.problem_type == "multi_label_classification":
- loss_fct = BCEWithLogitsLoss()
- loss = loss_fct(logits, labels)
-
- if not return_dict:
- output = (logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return SequenceClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
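
The `problem_type` branching above decides between regression, single-label and multi-label losses from `num_labels` and the label dtype. A minimal sketch of that decision, with toy tensors assumed for illustration:

import torch

def pick_problem_type(num_labels: int, labels: torch.Tensor) -> str:
    # mirrors the branching in AlbertForSequenceClassification.forward
    if num_labels == 1:
        return "regression"
    if labels.dtype in (torch.long, torch.int):
        return "single_label_classification"
    return "multi_label_classification"

print(pick_problem_type(1, torch.tensor([0.7])))              # regression (MSELoss)
print(pick_problem_type(3, torch.tensor([2])))                # single_label_classification (CrossEntropyLoss)
print(pick_problem_type(3, torch.tensor([[1.0, 0.0, 1.0]])))  # multi_label_classification (BCEWithLogitsLoss)
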
-
-
-@add_start_docstrings(
- """
- Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
- Named-Entity-Recognition (NER) tasks.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForTokenClassification(AlbertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.albert = AlbertModel(config, add_pooling_layer=False)
- classifier_dropout_prob = (
- config.classifier_dropout_prob
- if config.classifier_dropout_prob is not None
- else config.hidden_dropout_prob
- )
- self.dropout = nn.Dropout(classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=TokenClassifierOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels -
- 1]``.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- sequence_output = self.dropout(sequence_output)
- logits = self.classifier(sequence_output)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- # Only keep active parts of the loss
- if attention_mask is not None:
- active_loss = attention_mask.view(-1) == 1
- active_logits = logits.view(-1, self.num_labels)
- active_labels = torch.where(
- active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
- )
- loss = loss_fct(active_logits, active_labels)
- else:
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
-
- if not return_dict:
- output = (logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return TokenClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
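
A brief sketch of the `active_loss` masking used in the token-classification loss above: padded positions are mapped to the loss function's `ignore_index` so they contribute nothing. The tensors below are toy assumptions.

import torch
from torch.nn import CrossEntropyLoss

loss_fct = CrossEntropyLoss()
num_labels = 3

logits = torch.randn(1, 4, num_labels)         # toy per-token scores
labels = torch.tensor([[0, 2, 1, 1]])
attention_mask = torch.tensor([[1, 1, 1, 0]])  # last token is padding

active_loss = attention_mask.view(-1) == 1
active_labels = torch.where(
    active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss = loss_fct(logits.view(-1, num_labels), active_labels)
print(loss)  # the padded position carries label -100 and is ignored
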
-
-
-@add_start_docstrings(
- """
- Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
- layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForQuestionAnswering(AlbertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.albert = AlbertModel(config, add_pooling_layer=False)
- self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=QuestionAnsweringModelOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- start_positions=None,
- end_positions=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for position (index) of the start of the labelled span for computing the token classification loss.
- Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the
- sequence are not taken into account for computing the loss.
- end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for position (index) of the end of the labelled span for computing the token classification loss.
- Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the
- sequence are not taken into account for computing the loss.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- logits = self.qa_outputs(sequence_output)
- start_logits, end_logits = logits.split(1, dim=-1)
- start_logits = start_logits.squeeze(-1).contiguous()
- end_logits = end_logits.squeeze(-1).contiguous()
-
- total_loss = None
- if start_positions is not None and end_positions is not None:
- # If we are on multi-GPU, split add a dimension
- if len(start_positions.size()) > 1:
- start_positions = start_positions.squeeze(-1)
- if len(end_positions.size()) > 1:
- end_positions = end_positions.squeeze(-1)
- # sometimes the start/end positions are outside our model inputs, we ignore these terms
- ignored_index = start_logits.size(1)
- start_positions = start_positions.clamp(0, ignored_index)
- end_positions = end_positions.clamp(0, ignored_index)
-
- loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
- start_loss = loss_fct(start_logits, start_positions)
- end_loss = loss_fct(end_logits, end_positions)
- total_loss = (start_loss + end_loss) / 2
-
- if not return_dict:
- output = (start_logits, end_logits) + outputs[2:]
- return ((total_loss,) + output) if total_loss is not None else output
-
- return QuestionAnsweringModelOutput(
- loss=total_loss,
- start_logits=start_logits,
- end_logits=end_logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
- softmax) e.g. for RocStories/SWAG tasks.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForMultipleChoice(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.albert = AlbertModel(config)
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, 1)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=MultipleChoiceModelOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the multiple choice classification loss. Indices should be in ``[0, ...,
- num_choices-1]`` where `num_choices` is the size of the second dimension of the input tensors. (see
- `input_ids` above)
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
- num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
-
- input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
- attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
- token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
- position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
- inputs_embeds = (
- inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
- if inputs_embeds is not None
- else None
- )
- outputs = self.albert(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooled_output = outputs[1]
-
- pooled_output = self.dropout(pooled_output)
- logits = self.classifier(pooled_output)
- reshaped_logits = logits.view(-1, num_choices)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(reshaped_logits, labels)
-
- if not return_dict:
- output = (reshaped_logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return MultipleChoiceModelOutput(
- loss=loss,
- logits=reshaped_logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
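
A short sketch (toy shapes assumed) of the reshaping pattern AlbertForMultipleChoice.forward relies on: choices are folded into the batch dimension before the encoder, and the per-sequence scores are regrouped per example before the cross-entropy over choices.

import torch

batch_size, num_choices, seq_len = 2, 4, 8

input_ids = torch.randint(0, 100, (batch_size, num_choices, seq_len))

# fold choices into the batch dimension before running the encoder
flat_input_ids = input_ids.view(-1, input_ids.size(-1))  # (8, 8)

# pretend classifier output: one score per (example, choice) sequence
logits = torch.randn(batch_size * num_choices, 1)

# regroup scores per example for CrossEntropyLoss over the choices
reshaped_logits = logits.view(-1, num_choices)            # (2, 4)
print(flat_input_ids.shape, reshaped_logits.shape)
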
diff --git a/spaces/Iceclear/StableSR/StableSR/taming/models/cond_transformer.py b/spaces/Iceclear/StableSR/StableSR/taming/models/cond_transformer.py
deleted file mode 100644
index e4c63730fa86ac1b92b37af14c14fb696595b1ab..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/taming/models/cond_transformer.py
+++ /dev/null
@@ -1,352 +0,0 @@
-import os, math
-import torch
-import torch.nn.functional as F
-import pytorch_lightning as pl
-
-from main import instantiate_from_config
-from taming.modules.util import SOSProvider
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-class Net2NetTransformer(pl.LightningModule):
- def __init__(self,
- transformer_config,
- first_stage_config,
- cond_stage_config,
- permuter_config=None,
- ckpt_path=None,
- ignore_keys=[],
- first_stage_key="image",
- cond_stage_key="depth",
- downsample_cond_size=-1,
- pkeep=1.0,
- sos_token=0,
- unconditional=False,
- ):
- super().__init__()
- self.be_unconditional = unconditional
- self.sos_token = sos_token
- self.first_stage_key = first_stage_key
- self.cond_stage_key = cond_stage_key
- self.init_first_stage_from_ckpt(first_stage_config)
- self.init_cond_stage_from_ckpt(cond_stage_config)
- if permuter_config is None:
- permuter_config = {"target": "taming.modules.transformer.permuter.Identity"}
- self.permuter = instantiate_from_config(config=permuter_config)
- self.transformer = instantiate_from_config(config=transformer_config)
-
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
- self.downsample_cond_size = downsample_cond_size
- self.pkeep = pkeep
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- for k in sd.keys():
- for ik in ignore_keys:
- if k.startswith(ik):
- self.print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- self.load_state_dict(sd, strict=False)
- print(f"Restored from {path}")
-
- def init_first_stage_from_ckpt(self, config):
- model = instantiate_from_config(config)
- model = model.eval()
- model.train = disabled_train
- self.first_stage_model = model
-
- def init_cond_stage_from_ckpt(self, config):
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__" or self.be_unconditional:
- print(f"Using no cond stage. Assuming the training is intended to be unconditional. "
- f"Prepending {self.sos_token} as a sos token.")
- self.be_unconditional = True
- self.cond_stage_key = self.first_stage_key
- self.cond_stage_model = SOSProvider(self.sos_token)
- else:
- model = instantiate_from_config(config)
- model = model.eval()
- model.train = disabled_train
- self.cond_stage_model = model
-
- def forward(self, x, c):
- # one step to produce the logits
- _, z_indices = self.encode_to_z(x)
- _, c_indices = self.encode_to_c(c)
-
- if self.training and self.pkeep < 1.0:
- mask = torch.bernoulli(self.pkeep*torch.ones(z_indices.shape,
- device=z_indices.device))
- mask = mask.round().to(dtype=torch.int64)
- r_indices = torch.randint_like(z_indices, self.transformer.config.vocab_size)
- a_indices = mask*z_indices+(1-mask)*r_indices
- else:
- a_indices = z_indices
-
- cz_indices = torch.cat((c_indices, a_indices), dim=1)
-
- # target includes all sequence elements (no need to handle first one
- # differently because we are conditioning)
- target = z_indices
- # make the prediction
- logits, _ = self.transformer(cz_indices[:, :-1])
-        # cut off conditioning outputs - output i corresponds to p(z_i | z_{<i}, c)
-        logits = logits[:, c_indices.shape[1]-1:]
-
-        return logits, target
-
-    @torch.no_grad()
-    def encode_to_z(self, x):
-        quant_z, _, info = self.first_stage_model.encode(x)
-        indices = info[2].view(quant_z.shape[0], -1)
-        indices = self.permuter(indices)
-        return quant_z, indices
-
-    @torch.no_grad()
-    def encode_to_c(self, c):
-        if self.downsample_cond_size > -1:
- c = F.interpolate(c, size=(self.downsample_cond_size, self.downsample_cond_size))
- quant_c, _, [_,_,indices] = self.cond_stage_model.encode(c)
- if len(indices.shape) > 2:
- indices = indices.view(c.shape[0], -1)
- return quant_c, indices
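
A minimal sketch of the `pkeep` corruption step in Net2NetTransformer.forward above: during training each codebook index is kept with probability `pkeep` and otherwise replaced by a random index, which regularizes the autoregressive transformer. Values below are assumptions for illustration.

import torch

torch.manual_seed(0)
pkeep = 0.9
vocab_size = 1024

z_indices = torch.randint(0, vocab_size, (2, 16))  # toy codebook indices

# keep each index with probability pkeep, otherwise swap in a random index
mask = torch.bernoulli(pkeep * torch.ones(z_indices.shape, device=z_indices.device))
mask = mask.round().to(dtype=torch.int64)
r_indices = torch.randint_like(z_indices, vocab_size)
a_indices = mask * z_indices + (1 - mask) * r_indices

print((a_indices != z_indices).float().mean())  # roughly 1 - pkeep of positions get corrupted
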
-
- @torch.no_grad()
- def decode_to_img(self, index, zshape):
- index = self.permuter(index, reverse=True)
- bhwc = (zshape[0],zshape[2],zshape[3],zshape[1])
- quant_z = self.first_stage_model.quantize.get_codebook_entry(
- index.reshape(-1), shape=bhwc)
- x = self.first_stage_model.decode(quant_z)
- return x
-
- @torch.no_grad()
- def log_images(self, batch, temperature=None, top_k=None, callback=None, lr_interface=False, **kwargs):
- log = dict()
-
- N = 4
- if lr_interface:
- x, c = self.get_xc(batch, N, diffuse=False, upsample_factor=8)
- else:
- x, c = self.get_xc(batch, N)
- x = x.to(device=self.device)
- c = c.to(device=self.device)
-
- quant_z, z_indices = self.encode_to_z(x)
- quant_c, c_indices = self.encode_to_c(c)
-
-        # create a "half" sample
- z_start_indices = z_indices[:,:z_indices.shape[1]//2]
- index_sample = self.sample(z_start_indices, c_indices,
- steps=z_indices.shape[1]-z_start_indices.shape[1],
- temperature=temperature if temperature is not None else 1.0,
- sample=True,
- top_k=top_k if top_k is not None else 100,
- callback=callback if callback is not None else lambda k: None)
- x_sample = self.decode_to_img(index_sample, quant_z.shape)
-
- # sample
- z_start_indices = z_indices[:, :0]
- index_sample = self.sample(z_start_indices, c_indices,
- steps=z_indices.shape[1],
- temperature=temperature if temperature is not None else 1.0,
- sample=True,
- top_k=top_k if top_k is not None else 100,
- callback=callback if callback is not None else lambda k: None)
- x_sample_nopix = self.decode_to_img(index_sample, quant_z.shape)
-
- # det sample
- z_start_indices = z_indices[:, :0]
- index_sample = self.sample(z_start_indices, c_indices,
- steps=z_indices.shape[1],
- sample=False,
- callback=callback if callback is not None else lambda k: None)
- x_sample_det = self.decode_to_img(index_sample, quant_z.shape)
-
- # reconstruction
- x_rec = self.decode_to_img(z_indices, quant_z.shape)
-
- log["inputs"] = x
- log["reconstructions"] = x_rec
-
- if self.cond_stage_key in ["objects_bbox", "objects_center_points"]:
- figure_size = (x_rec.shape[2], x_rec.shape[3])
- dataset = kwargs["pl_module"].trainer.datamodule.datasets["validation"]
- label_for_category_no = dataset.get_textual_label_for_category_no
- plotter = dataset.conditional_builders[self.cond_stage_key].plot
- log["conditioning"] = torch.zeros_like(log["reconstructions"])
- for i in range(quant_c.shape[0]):
- log["conditioning"][i] = plotter(quant_c[i], label_for_category_no, figure_size)
- log["conditioning_rec"] = log["conditioning"]
- elif self.cond_stage_key != "image":
- cond_rec = self.cond_stage_model.decode(quant_c)
- if self.cond_stage_key == "segmentation":
- # get image from segmentation mask
- num_classes = cond_rec.shape[1]
-
- c = torch.argmax(c, dim=1, keepdim=True)
- c = F.one_hot(c, num_classes=num_classes)
- c = c.squeeze(1).permute(0, 3, 1, 2).float()
- c = self.cond_stage_model.to_rgb(c)
-
- cond_rec = torch.argmax(cond_rec, dim=1, keepdim=True)
- cond_rec = F.one_hot(cond_rec, num_classes=num_classes)
- cond_rec = cond_rec.squeeze(1).permute(0, 3, 1, 2).float()
- cond_rec = self.cond_stage_model.to_rgb(cond_rec)
- log["conditioning_rec"] = cond_rec
- log["conditioning"] = c
-
- log["samples_half"] = x_sample
- log["samples_nopix"] = x_sample_nopix
- log["samples_det"] = x_sample_det
- return log
-
- def get_input(self, key, batch):
- x = batch[key]
- if len(x.shape) == 3:
- x = x[..., None]
- if len(x.shape) == 4:
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format)
- if x.dtype == torch.double:
- x = x.float()
- return x
-
- def get_xc(self, batch, N=None):
- x = self.get_input(self.first_stage_key, batch)
- c = self.get_input(self.cond_stage_key, batch)
- if N is not None:
- x = x[:N]
- c = c[:N]
- return x, c
-
- def shared_step(self, batch, batch_idx):
- x, c = self.get_xc(batch)
- logits, target = self(x, c)
- loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
- return loss
-
- def training_step(self, batch, batch_idx):
- loss = self.shared_step(batch, batch_idx)
- self.log("train/loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- return loss
-
- def validation_step(self, batch, batch_idx):
- loss = self.shared_step(batch, batch_idx)
- self.log("val/loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- return loss
-
- def configure_optimizers(self):
- """
- Following minGPT:
- This long function is unfortunately doing something very simple and is being very defensive:
- We are separating out all parameters of the model into two buckets: those that will experience
- weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
- We are then returning the PyTorch optimizer object.
- """
- # separate out all parameters to those that will and won't experience regularizing weight decay
- decay = set()
- no_decay = set()
- whitelist_weight_modules = (torch.nn.Linear, )
- blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
- for mn, m in self.transformer.named_modules():
- for pn, p in m.named_parameters():
- fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
-
- if pn.endswith('bias'):
- # all biases will not be decayed
- no_decay.add(fpn)
- elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
- # weights of whitelist modules will be weight decayed
- decay.add(fpn)
- elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
- # weights of blacklist modules will NOT be weight decayed
- no_decay.add(fpn)
-
- # special case the position embedding parameter in the root GPT module as not decayed
- no_decay.add('pos_emb')
-
- # validate that we considered every parameter
- param_dict = {pn: p for pn, p in self.transformer.named_parameters()}
- inter_params = decay & no_decay
- union_params = decay | no_decay
- assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), )
- assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
- % (str(param_dict.keys() - union_params), )
-
- # create the pytorch optimizer object
- optim_groups = [
- {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": 0.01},
- {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0},
- ]
- optimizer = torch.optim.AdamW(optim_groups, lr=self.learning_rate, betas=(0.9, 0.95))
- return optimizer
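
A condensed sketch of the decay/no-decay split that configure_optimizers performs, using a toy module rather than the GPT transformer; module names and hyperparameters here are assumptions, not values from the file above.

import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.LayerNorm(8))

decay, no_decay = [], []
for name, param in model.named_parameters():
    # biases and LayerNorm weights are exempt from weight decay
    if name.endswith("bias") or name.startswith("1."):  # index 1 is the LayerNorm above
        no_decay.append(param)
    else:
        decay.append(param)

optimizer = torch.optim.AdamW(
    [{"params": decay, "weight_decay": 0.01},
     {"params": no_decay, "weight_decay": 0.0}],
    lr=4.5e-6, betas=(0.9, 0.95),
)
print(len(decay), len(no_decay))  # 1 decayed tensor, 3 exempt tensors
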
diff --git a/spaces/Illumotion/Koboldcpp/README.md b/spaces/Illumotion/Koboldcpp/README.md
deleted file mode 100644
index 2707d9b4cbe6ab9d89e858bab8f215a57e244ba0..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: Koboldcpp
-sdk: docker
-emoji: 💻
-colorFrom: blue
-colorTo: blue
----
\ No newline at end of file
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/prod.js b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/prod.js
deleted file mode 100644
index b598f486b642bda9df05d0fa51b0ba7eaf3a8974..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/configs/webpack/prod.js
+++ /dev/null
@@ -1,22 +0,0 @@
-// Copyright (c) Meta Platforms, Inc. and affiliates.
-// All rights reserved.
-
-// This source code is licensed under the license found in the
-// LICENSE file in the root directory of this source tree.
-
-// production config
-const { merge } = require("webpack-merge");
-const { resolve } = require("path");
-const Dotenv = require("dotenv-webpack");
-const commonConfig = require("./common");
-
-module.exports = merge(commonConfig, {
- mode: "production",
- output: {
- filename: "js/bundle.[contenthash].min.js",
- path: resolve(__dirname, "../../dist"),
- publicPath: "/",
- },
- devtool: "source-map",
- plugins: [new Dotenv()],
-});
diff --git a/spaces/Jamkonams/AutoGPT/scripts/check_requirements.py b/spaces/Jamkonams/AutoGPT/scripts/check_requirements.py
deleted file mode 100644
index e4eab024a6280c0d54110c69b2e03de639325fa6..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/scripts/check_requirements.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import sys
-
-import pkg_resources
-
-
-def main():
- requirements_file = sys.argv[1]
- with open(requirements_file, "r") as f:
- required_packages = [
- line.strip().split("#")[0].strip() for line in f.readlines()
- ]
-
- installed_packages = [package.key for package in pkg_resources.working_set]
-
- missing_packages = []
- for package in required_packages:
- if not package: # Skip empty lines
- continue
- package_name = package.strip().split("==")[0]
- if package_name.lower() not in installed_packages:
- missing_packages.append(package_name)
-
- if missing_packages:
- print("Missing packages:")
- print(", ".join(missing_packages))
- sys.exit(1)
- else:
- print("All packages are installed.")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/JerynC/catloaf/README.md b/spaces/JerynC/catloaf/README.md
deleted file mode 100644
index 0dac7daf6fde3a9845e5a7a5c684ff586708a490..0000000000000000000000000000000000000000
--- a/spaces/JerynC/catloaf/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Catloaf
-emoji: 🐱
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Joeythemonster/Text-To-image-AllModels/README.md b/spaces/Joeythemonster/Text-To-image-AllModels/README.md
deleted file mode 100644
index fd88f124f1e902b8d98b1cad1a16b7150fffc935..0000000000000000000000000000000000000000
--- a/spaces/Joeythemonster/Text-To-image-AllModels/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Text To Image AllModels
-emoji: 🐠
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: openrail
-duplicated_from: BilalSardar/Text-To-image-AllModels
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kevin676/AutoGPT/autogpt/speech/brian.py b/spaces/Kevin676/AutoGPT/autogpt/speech/brian.py
deleted file mode 100644
index 821fdf2f482a9cfa928e5c9680152ad6766d8326..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/speech/brian.py
+++ /dev/null
@@ -1,40 +0,0 @@
-""" Brian speech module for autogpt """
-import os
-
-import requests
-from playsound import playsound
-
-from autogpt.speech.base import VoiceBase
-
-
-class BrianSpeech(VoiceBase):
- """Brian speech module for autogpt"""
-
- def _setup(self) -> None:
- """Setup the voices, API key, etc."""
- pass
-
- def _speech(self, text: str, _: int = 0) -> bool:
- """Speak text using Brian with the streamelements API
-
- Args:
- text (str): The text to speak
-
- Returns:
- bool: True if the request was successful, False otherwise
- """
- tts_url = (
- f"https://api.streamelements.com/kappa/v2/speech?voice=Brian&text={text}"
- )
- response = requests.get(tts_url)
-
- if response.status_code == 200:
- with open("speech.mp3", "wb") as f:
- f.write(response.content)
- playsound("speech.mp3")
- os.remove("speech.mp3")
- return True
- else:
- print("Request failed with status code:", response.status_code)
- print("Response content:", response.content)
- return False
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/__init__.py
deleted file mode 100644
index 6905fa0da4ea5b5b30797d5dae08dd2a199318ad..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-
-from .core import Opyrator
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/train.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/train.py
deleted file mode 100644
index 44e0929ac67d778b5cc78b669b42fb89e17acf9e..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/train.py
+++ /dev/null
@@ -1,127 +0,0 @@
-from vocoder.wavernn.models.fatchord_version import WaveRNN
-from vocoder.vocoder_dataset import VocoderDataset, collate_vocoder
-from vocoder.distribution import discretized_mix_logistic_loss
-from vocoder.display import stream, simple_table
-from vocoder.wavernn.gen_wavernn import gen_testset
-from torch.utils.data import DataLoader
-from pathlib import Path
-from torch import optim
-import torch.nn.functional as F
-import vocoder.wavernn.hparams as hp
-import numpy as np
-import time
-import torch
-
-
-def train(run_id: str, syn_dir: Path, voc_dir: Path, models_dir: Path, ground_truth: bool,
- save_every: int, backup_every: int, force_restart: bool):
- # Check to make sure the hop length is correctly factorised
- assert np.cumprod(hp.voc_upsample_factors)[-1] == hp.hop_length
-
- # Instantiate the model
- print("Initializing the model...")
- model = WaveRNN(
- rnn_dims=hp.voc_rnn_dims,
- fc_dims=hp.voc_fc_dims,
- bits=hp.bits,
- pad=hp.voc_pad,
- upsample_factors=hp.voc_upsample_factors,
- feat_dims=hp.num_mels,
- compute_dims=hp.voc_compute_dims,
- res_out_dims=hp.voc_res_out_dims,
- res_blocks=hp.voc_res_blocks,
- hop_length=hp.hop_length,
- sample_rate=hp.sample_rate,
- mode=hp.voc_mode
- )
-
- if torch.cuda.is_available():
- model = model.cuda()
- device = torch.device('cuda')
- else:
- device = torch.device('cpu')
-
- # Initialize the optimizer
- optimizer = optim.Adam(model.parameters())
- for p in optimizer.param_groups:
- p["lr"] = hp.voc_lr
- loss_func = F.cross_entropy if model.mode == "RAW" else discretized_mix_logistic_loss
-
- # Load the weights
- model_dir = models_dir.joinpath(run_id)
- model_dir.mkdir(exist_ok=True)
- weights_fpath = model_dir.joinpath(run_id + ".pt")
- if force_restart or not weights_fpath.exists():
- print("\nStarting the training of WaveRNN from scratch\n")
- model.save(weights_fpath, optimizer)
- else:
- print("\nLoading weights at %s" % weights_fpath)
- model.load(weights_fpath, optimizer)
- print("WaveRNN weights loaded from step %d" % model.step)
-
- # Initialize the dataset
- metadata_fpath = syn_dir.joinpath("train.txt") if ground_truth else \
- voc_dir.joinpath("synthesized.txt")
- mel_dir = syn_dir.joinpath("mels") if ground_truth else voc_dir.joinpath("mels_gta")
- wav_dir = syn_dir.joinpath("audio")
- dataset = VocoderDataset(metadata_fpath, mel_dir, wav_dir)
- test_loader = DataLoader(dataset,
- batch_size=1,
- shuffle=True,
- pin_memory=True)
-
- # Begin the training
- simple_table([('Batch size', hp.voc_batch_size),
- ('LR', hp.voc_lr),
- ('Sequence Len', hp.voc_seq_len)])
-
- for epoch in range(1, 350):
- data_loader = DataLoader(dataset,
- collate_fn=collate_vocoder,
- batch_size=hp.voc_batch_size,
- num_workers=2,
- shuffle=True,
- pin_memory=True)
- start = time.time()
- running_loss = 0.
-
- for i, (x, y, m) in enumerate(data_loader, 1):
- if torch.cuda.is_available():
- x, m, y = x.cuda(), m.cuda(), y.cuda()
-
- # Forward pass
- y_hat = model(x, m)
- if model.mode == 'RAW':
- y_hat = y_hat.transpose(1, 2).unsqueeze(-1)
- elif model.mode == 'MOL':
- y = y.float()
- y = y.unsqueeze(-1)
-
- # Backward pass
- loss = loss_func(y_hat, y)
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- running_loss += loss.item()
- speed = i / (time.time() - start)
- avg_loss = running_loss / i
-
- step = model.get_step()
- k = step // 1000
-
- if backup_every != 0 and step % backup_every == 0 :
- model.checkpoint(model_dir, optimizer)
-
- if save_every != 0 and step % save_every == 0 :
- model.save(weights_fpath, optimizer)
-
- msg = f"| Epoch: {epoch} ({i}/{len(data_loader)}) | " \
- f"Loss: {avg_loss:.4f} | {speed:.1f} " \
- f"steps/s | Step: {k}k | "
- stream(msg)
-
-
- gen_testset(model, test_loader, hp.voc_gen_at_checkpoint, hp.voc_gen_batched,
- hp.voc_target, hp.voc_overlap, model_dir)
- print("")
diff --git a/spaces/KoboldAI/Koboldcpp-Tiefighter/Dockerfile b/spaces/KoboldAI/Koboldcpp-Tiefighter/Dockerfile
deleted file mode 100644
index 6095b01d001f40f55c9454bacfa7e91f29c75680..0000000000000000000000000000000000000000
--- a/spaces/KoboldAI/Koboldcpp-Tiefighter/Dockerfile
+++ /dev/null
@@ -1,12 +0,0 @@
-FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
-ARG MODEL
-ARG MODEL_NAME
-ARG ADDITIONAL
-RUN mkdir /opt/koboldcpp
-RUN apt update && apt install git build-essential libopenblas-dev wget python3-pip -y
-RUN git clone https://github.com/lostruins/koboldcpp /opt/koboldcpp
-WORKDIR /opt/koboldcpp
-RUN make LLAMA_OPENBLAS=1 LLAMA_CUBLAS=1 LLAMA_PORTABLE=1
-RUN wget -O model.ggml $MODEL
-CMD /bin/python3 ./koboldcpp.py --model model.ggml $ADDITIONAL --port 7860 --hordeconfig $MODEL_NAME 1 1
-
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/resnext.py b/spaces/KyanChen/RSPrompter/mmdet/models/backbones/resnext.py
deleted file mode 100644
index df3d79e046c3ab9b289bcfeb6f937c87f6c09bfa..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/resnext.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-from mmcv.cnn import build_conv_layer, build_norm_layer
-
-from mmdet.registry import MODELS
-from ..layers import ResLayer
-from .resnet import Bottleneck as _Bottleneck
-from .resnet import ResNet
-
-
-class Bottleneck(_Bottleneck):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- groups=1,
- base_width=4,
- base_channels=64,
- **kwargs):
- """Bottleneck block for ResNeXt.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
- it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottleneck, self).__init__(inplanes, planes, **kwargs)
-
- if groups == 1:
- width = self.planes
- else:
- width = math.floor(self.planes *
- (base_width / base_channels)) * groups
-
- self.norm1_name, norm1 = build_norm_layer(
- self.norm_cfg, width, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(
- self.norm_cfg, width, postfix=2)
- self.norm3_name, norm3 = build_norm_layer(
- self.norm_cfg, self.planes * self.expansion, postfix=3)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- self.inplanes,
- width,
- kernel_size=1,
- stride=self.conv1_stride,
- bias=False)
- self.add_module(self.norm1_name, norm1)
- fallback_on_stride = False
- self.with_modulated_dcn = False
- if self.with_dcn:
- fallback_on_stride = self.dcn.pop('fallback_on_stride', False)
- if not self.with_dcn or fallback_on_stride:
- self.conv2 = build_conv_layer(
- self.conv_cfg,
- width,
- width,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- groups=groups,
- bias=False)
- else:
- assert self.conv_cfg is None, 'conv_cfg must be None for DCN'
- self.conv2 = build_conv_layer(
- self.dcn,
- width,
- width,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- groups=groups,
- bias=False)
-
- self.add_module(self.norm2_name, norm2)
- self.conv3 = build_conv_layer(
- self.conv_cfg,
- width,
- self.planes * self.expansion,
- kernel_size=1,
- bias=False)
- self.add_module(self.norm3_name, norm3)
-
- if self.with_plugins:
- self._del_block_plugins(self.after_conv1_plugin_names +
- self.after_conv2_plugin_names +
- self.after_conv3_plugin_names)
- self.after_conv1_plugin_names = self.make_block_plugins(
- width, self.after_conv1_plugins)
- self.after_conv2_plugin_names = self.make_block_plugins(
- width, self.after_conv2_plugins)
- self.after_conv3_plugin_names = self.make_block_plugins(
- self.planes * self.expansion, self.after_conv3_plugins)
-
- def _del_block_plugins(self, plugin_names):
- """delete plugins for block if exist.
-
- Args:
- plugin_names (list[str]): List of plugins name to delete.
- """
- assert isinstance(plugin_names, list)
- for plugin_name in plugin_names:
- del self._modules[plugin_name]
-
-
-@MODELS.register_module()
-class ResNeXt(ResNet):
- """ResNeXt backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- in_channels (int): Number of input image channels. Default: 3.
- num_stages (int): Resnet stages. Default: 4.
- groups (int): Group of resnext.
- base_width (int): Base width of resnext.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- frozen_stages (int): Stages to be frozen (all param fixed). -1 means
- not freezing any parameters.
- norm_cfg (dict): dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
- """
-
- arch_settings = {
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self, groups=1, base_width=4, **kwargs):
- self.groups = groups
- self.base_width = base_width
- super(ResNeXt, self).__init__(**kwargs)
-
- def make_res_layer(self, **kwargs):
- """Pack all blocks in a stage into a ``ResLayer``"""
- return ResLayer(
- groups=self.groups,
- base_width=self.base_width,
- base_channels=self.base_channels,
- **kwargs)
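
The width computed in the Bottleneck above follows the ResNeXt grouped-convolution rule. A quick numeric check with the common 32x4d setting (assumed here as an example, not taken from a specific config) is shown below.

# Numeric check of the grouped-bottleneck width formula used in Bottleneck above,
# with the common ResNeXt 32x4d setting as an assumed example.
import math

planes, groups, base_width, base_channels = 64, 32, 4, 64
width = math.floor(planes * (base_width / base_channels)) * groups
print(width)  # 128 -> the first-stage 3x3 grouped conv runs at 128 channels
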
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/losses/seesaw_loss.py b/spaces/KyanChen/RSPrompter/mmdet/models/losses/seesaw_loss.py
deleted file mode 100644
index 4dec62b0afdc01e848e0c7f53ba0b6b10b899ea4..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/losses/seesaw_loss.py
+++ /dev/null
@@ -1,278 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, Optional, Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from .accuracy import accuracy
-from .cross_entropy_loss import cross_entropy
-from .utils import weight_reduce_loss
-
-
-def seesaw_ce_loss(cls_score: Tensor,
- labels: Tensor,
- label_weights: Tensor,
- cum_samples: Tensor,
- num_classes: int,
- p: float,
- q: float,
- eps: float,
- reduction: str = 'mean',
- avg_factor: Optional[int] = None) -> Tensor:
- """Calculate the Seesaw CrossEntropy loss.
-
- Args:
- cls_score (Tensor): The prediction with shape (N, C),
- C is the number of classes.
- labels (Tensor): The learning label of the prediction.
- label_weights (Tensor): Sample-wise loss weight.
- cum_samples (Tensor): Cumulative samples for each category.
- num_classes (int): The number of classes.
- p (float): The ``p`` in the mitigation factor.
-        q (float): The ``q`` in the compensation factor.
-        eps (float): The minimal value of the divisor to smooth
-            the computation of the compensation factor.
- reduction (str, optional): The method used to reduce the loss.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
-
- Returns:
- Tensor: The calculated loss
- """
- assert cls_score.size(-1) == num_classes
- assert len(cum_samples) == num_classes
-
- onehot_labels = F.one_hot(labels, num_classes)
- seesaw_weights = cls_score.new_ones(onehot_labels.size())
-
- # mitigation factor
- if p > 0:
- sample_ratio_matrix = cum_samples[None, :].clamp(
- min=1) / cum_samples[:, None].clamp(min=1)
- index = (sample_ratio_matrix < 1.0).float()
- sample_weights = sample_ratio_matrix.pow(p) * index + (1 - index)
- mitigation_factor = sample_weights[labels.long(), :]
- seesaw_weights = seesaw_weights * mitigation_factor
-
- # compensation factor
- if q > 0:
- scores = F.softmax(cls_score.detach(), dim=1)
- self_scores = scores[
- torch.arange(0, len(scores)).to(scores.device).long(),
- labels.long()]
- score_matrix = scores / self_scores[:, None].clamp(min=eps)
- index = (score_matrix > 1.0).float()
- compensation_factor = score_matrix.pow(q) * index + (1 - index)
- seesaw_weights = seesaw_weights * compensation_factor
-
- cls_score = cls_score + (seesaw_weights.log() * (1 - onehot_labels))
-
- loss = F.cross_entropy(cls_score, labels, weight=None, reduction='none')
-
- if label_weights is not None:
- label_weights = label_weights.float()
- loss = weight_reduce_loss(
- loss, weight=label_weights, reduction=reduction, avg_factor=avg_factor)
- return loss
-
-
-@MODELS.register_module()
-class SeesawLoss(nn.Module):
- """
- Seesaw Loss for Long-Tailed Instance Segmentation (CVPR 2021)
- arXiv: https://arxiv.org/abs/2008.10032
-
- Args:
-        use_sigmoid (bool, optional): Whether the prediction uses sigmoid
-            or softmax. Only False is supported.
-        p (float, optional): The ``p`` in the mitigation factor.
-            Defaults to 0.8.
-        q (float, optional): The ``q`` in the compensation factor.
-            Defaults to 2.0.
-        num_classes (int, optional): The number of classes.
-            Defaults to 1203 for the LVIS v1 dataset.
-        eps (float, optional): The minimal value of the divisor to smooth
-            the computation of the compensation factor.
-        reduction (str, optional): The method that reduces the loss to a
-            scalar. Options are "none", "mean" and "sum".
-        loss_weight (float, optional): The weight of the loss. Defaults to 1.0.
-        return_dict (bool, optional): Whether to return the losses as a dict.
-            Defaults to True.
- """
-
- def __init__(self,
- use_sigmoid: bool = False,
- p: float = 0.8,
- q: float = 2.0,
- num_classes: int = 1203,
- eps: float = 1e-2,
- reduction: str = 'mean',
- loss_weight: float = 1.0,
- return_dict: bool = True) -> None:
- super().__init__()
- assert not use_sigmoid
- self.use_sigmoid = False
- self.p = p
- self.q = q
- self.num_classes = num_classes
- self.eps = eps
- self.reduction = reduction
- self.loss_weight = loss_weight
- self.return_dict = return_dict
-
- # 0 for pos, 1 for neg
- self.cls_criterion = seesaw_ce_loss
-
- # cumulative samples for each category
- self.register_buffer(
- 'cum_samples',
- torch.zeros(self.num_classes + 1, dtype=torch.float))
-
- # custom output channels of the classifier
- self.custom_cls_channels = True
- # custom activation of cls_score
- self.custom_activation = True
-        # custom accuracy of the classifier
- self.custom_accuracy = True
-
- def _split_cls_score(self, cls_score: Tensor) -> Tuple[Tensor, Tensor]:
- """split cls_score.
-
- Args:
- cls_score (Tensor): The prediction with shape (N, C + 2).
-
- Returns:
- Tuple[Tensor, Tensor]: The score for classes and objectness,
- respectively
- """
- # split cls_score to cls_score_classes and cls_score_objectness
- assert cls_score.size(-1) == self.num_classes + 2
- cls_score_classes = cls_score[..., :-2]
- cls_score_objectness = cls_score[..., -2:]
- return cls_score_classes, cls_score_objectness
-
- def get_cls_channels(self, num_classes: int) -> int:
- """Get custom classification channels.
-
- Args:
- num_classes (int): The number of classes.
-
- Returns:
- int: The custom classification channels.
- """
- assert num_classes == self.num_classes
- return num_classes + 2
-
- def get_activation(self, cls_score: Tensor) -> Tensor:
- """Get custom activation of cls_score.
-
- Args:
- cls_score (Tensor): The prediction with shape (N, C + 2).
-
- Returns:
- Tensor: The custom activation of cls_score with shape
- (N, C + 1).
- """
- cls_score_classes, cls_score_objectness = self._split_cls_score(
- cls_score)
- score_classes = F.softmax(cls_score_classes, dim=-1)
- score_objectness = F.softmax(cls_score_objectness, dim=-1)
- score_pos = score_objectness[..., [0]]
- score_neg = score_objectness[..., [1]]
- score_classes = score_classes * score_pos
- scores = torch.cat([score_classes, score_neg], dim=-1)
- return scores
-
- def get_accuracy(self, cls_score: Tensor,
- labels: Tensor) -> Dict[str, Tensor]:
- """Get custom accuracy w.r.t. cls_score and labels.
-
- Args:
- cls_score (Tensor): The prediction with shape (N, C + 2).
- labels (Tensor): The learning label of the prediction.
-
- Returns:
- Dict [str, Tensor]: The accuracy for objectness and classes,
- respectively.
- """
- pos_inds = labels < self.num_classes
- obj_labels = (labels == self.num_classes).long()
- cls_score_classes, cls_score_objectness = self._split_cls_score(
- cls_score)
- acc_objectness = accuracy(cls_score_objectness, obj_labels)
- acc_classes = accuracy(cls_score_classes[pos_inds], labels[pos_inds])
- acc = dict()
- acc['acc_objectness'] = acc_objectness
- acc['acc_classes'] = acc_classes
- return acc
-
- def forward(
- self,
- cls_score: Tensor,
- labels: Tensor,
- label_weights: Optional[Tensor] = None,
- avg_factor: Optional[int] = None,
- reduction_override: Optional[str] = None
- ) -> Union[Tensor, Dict[str, Tensor]]:
- """Forward function.
-
- Args:
- cls_score (Tensor): The prediction with shape (N, C + 2).
- labels (Tensor): The learning label of the prediction.
- label_weights (Tensor, optional): Sample-wise loss weight.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
-            reduction_override (str, optional): The reduction method used to
-                override ``self.reduction``. Options are "none", "mean" and "sum".
-
- Returns:
- Tensor | Dict [str, Tensor]:
- if return_dict == False: The calculated loss |
- if return_dict == True: The dict of calculated losses
- for objectness and classes, respectively.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- assert cls_score.size(-1) == self.num_classes + 2
- pos_inds = labels < self.num_classes
- # 0 for pos, 1 for neg
- obj_labels = (labels == self.num_classes).long()
-
- # accumulate the samples for each category
- unique_labels = labels.unique()
- for u_l in unique_labels:
- inds_ = labels == u_l.item()
- self.cum_samples[u_l] += inds_.sum()
-
- if label_weights is not None:
- label_weights = label_weights.float()
- else:
- label_weights = labels.new_ones(labels.size(), dtype=torch.float)
-
- cls_score_classes, cls_score_objectness = self._split_cls_score(
- cls_score)
- # calculate loss_cls_classes (only need pos samples)
- if pos_inds.sum() > 0:
- loss_cls_classes = self.loss_weight * self.cls_criterion(
- cls_score_classes[pos_inds], labels[pos_inds],
- label_weights[pos_inds], self.cum_samples[:self.num_classes],
- self.num_classes, self.p, self.q, self.eps, reduction,
- avg_factor)
- else:
- loss_cls_classes = cls_score_classes[pos_inds].sum()
- # calculate loss_cls_objectness
- loss_cls_objectness = self.loss_weight * cross_entropy(
- cls_score_objectness, obj_labels, label_weights, reduction,
- avg_factor)
-
- if self.return_dict:
- loss_cls = dict()
- loss_cls['loss_cls_objectness'] = loss_cls_objectness
- loss_cls['loss_cls_classes'] = loss_cls_classes
- else:
- loss_cls = loss_cls_classes + loss_cls_objectness
- return loss_cls
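
The mitigation factor in seesaw_ce_loss depends only on the running class counts. The standalone sketch below reproduces that piece of the computation with made-up counts to show how frequent-class samples soften the penalty they put on rarer classes.

# Standalone sketch of the mitigation factor from seesaw_ce_loss above.
# The per-class sample counts are made up for illustration.
import torch

cum_samples = torch.tensor([1000., 100., 10.])  # hypothetical class frequencies
p = 0.8
ratio = cum_samples[None, :].clamp(min=1) / cum_samples[:, None].clamp(min=1)
index = (ratio < 1.0).float()
sample_weights = ratio.pow(p) * index + (1 - index)
print(sample_weights)
# Row i holds the weights a sample labelled i applies to the other classes'
# logits: a frequent-class sample (row 0) down-weights the penalty it puts on
# the rare classes, while a rare-class sample (row 2) leaves them at 1.0.
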
diff --git a/spaces/KyanChen/RSPrompter/mmpl/utils/collect_env.py b/spaces/KyanChen/RSPrompter/mmpl/utils/collect_env.py
deleted file mode 100644
index 94c675c841d74af49964c17ab360a6d3d754b4e2..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpl/utils/collect_env.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import mmcv
-import mmdet
-import mmyolo
-from mmengine.utils import get_git_hash
-from mmengine.utils.dl_utils import collect_env as collect_base_env
-
-
-def collect_env() -> dict:
- """Collect the information of the running environments."""
- env_info = collect_base_env()
- env_info['MMCV'] = mmcv.__version__
- env_info['MMDetection'] = mmdet.__version__
- env_info['MMYOLO'] = mmyolo.__version__ + '+' + get_git_hash()[:7]
- return env_info
-
-
-if __name__ == '__main__':
- for name, val in collect_env().items():
- print(f'{name}: {val}')
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/attentions.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/attentions.py
deleted file mode 100644
index 693966841d9b371ce3b4497d74d040db3e6aaa46..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/attentions.py
+++ /dev/null
@@ -1,414 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from lib.infer.infer_pack import commons
-from lib.infer.infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
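
The proximal bias built in _attention_bias_proximal above is a simple -log1p(|i - j|) penalty that favours attention to nearby positions. A small standalone check is sketched below.

# Small standalone check of the proximal attention bias defined above:
# attention to nearby positions is penalised less than to distant ones.
import torch

length = 4
r = torch.arange(length, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)
bias = -torch.log1p(torch.abs(diff))  # shape [length, length]
print(bias[0])  # approx [0.0, -0.69, -1.10, -1.39]: decays with distance
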
diff --git a/spaces/Lehele/bingai/Dockerfile b/spaces/Lehele/bingai/Dockerfile
deleted file mode 100644
index f544a833408dceebddb5637c78e572ef72f3cdbf..0000000000000000000000000000000000000000
--- a/spaces/Lehele/bingai/Dockerfile
+++ /dev/null
@@ -1,4 +0,0 @@
-FROM zklcdc/go-proxy-bingai
-# ENV USER_MUID=""
-EXPOSE 8080
-CMD ["/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/MGLDZM/chgpt/static/js/tabsHandler.js b/spaces/MGLDZM/chgpt/static/js/tabsHandler.js
deleted file mode 100644
index 32aa5cb888857c597f310af3888c51da6752a414..0000000000000000000000000000000000000000
--- a/spaces/MGLDZM/chgpt/static/js/tabsHandler.js
+++ /dev/null
@@ -1,26 +0,0 @@
-$(document).ready(function() {
- $(document).on("chat:creado",function(event){
- $(".tab-switch").off("change")
- $(".tab-switch").on("change", radioChanged)
- radioChanged()
- });
-
- $("#nuevoChat").on("click", (event) => {
- $(document).trigger("chat:crear");
- })
-
-})
-
-function radioChanged(){
- let tab = $(".tab");
- let tabActive = $(tab[$(".tab-label input:checked").val()]);
- let chat = tabActive.find(".chat")
-
-
- tab.removeClass("active")
- tabActive.addClass("active")
- tabActive.find("textarea").focus()
- if(chat.length>0){
- chat.scrollTop(chat[0].scrollHeight);
- }
-}
\ No newline at end of file
diff --git a/spaces/MKFMIKU/Bi-Noising.Diffusion/header.html b/spaces/MKFMIKU/Bi-Noising.Diffusion/header.html
deleted file mode 100644
index 78097967981e9fd501bb9eebdd112c96fc109912..0000000000000000000000000000000000000000
--- a/spaces/MKFMIKU/Bi-Noising.Diffusion/header.html
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
- 💊 Bi-Noising for Image-to-Image (I2I) Diffusion
-
-
-
-
-            https://arxiv.org/abs/2212.07352. A new plug-and-play prior for diffusion guidance that can fix biased noise during sampling. It is shown to be effective on existing diffusion restoration models, including ILVR, SR3, and Guided-Diffusion.
-
-
-
\ No newline at end of file
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/meta_arch/d2_deformable_detr.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/meta_arch/d2_deformable_detr.py
deleted file mode 100644
index 47ff220fc3946d1bf68fad87076589e46b274ef3..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/meta_arch/d2_deformable_detr.py
+++ /dev/null
@@ -1,308 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-import torch.nn.functional as F
-from torch import nn
-import math
-
-from detectron2.modeling import META_ARCH_REGISTRY, build_backbone
-from detectron2.structures import Boxes, Instances
-from ..utils import load_class_freq, get_fed_loss_inds
-
-from models.backbone import Joiner
-from models.deformable_detr import DeformableDETR, SetCriterion, MLP
-from models.deformable_detr import _get_clones
-from models.matcher import HungarianMatcher
-from models.position_encoding import PositionEmbeddingSine
-from models.deformable_transformer import DeformableTransformer
-from models.segmentation import sigmoid_focal_loss
-from util.box_ops import box_cxcywh_to_xyxy, box_xyxy_to_cxcywh
-from util.misc import NestedTensor, accuracy
-
-
-__all__ = ["DeformableDetr"]
-
-class CustomSetCriterion(SetCriterion):
- def __init__(self, num_classes, matcher, weight_dict, losses, \
- focal_alpha=0.25, use_fed_loss=False):
- super().__init__(num_classes, matcher, weight_dict, losses, focal_alpha)
- self.use_fed_loss = use_fed_loss
- if self.use_fed_loss:
- self.register_buffer(
- 'fed_loss_weight', load_class_freq(freq_weight=0.5))
-
- def loss_labels(self, outputs, targets, indices, num_boxes, log=True):
- """Classification loss (NLL)
- targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
- """
- assert 'pred_logits' in outputs
- src_logits = outputs['pred_logits']
-
- idx = self._get_src_permutation_idx(indices)
- target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
- target_classes = torch.full(src_logits.shape[:2], self.num_classes,
- dtype=torch.int64, device=src_logits.device)
- target_classes[idx] = target_classes_o
-
- target_classes_onehot = torch.zeros(
- [src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1],
- dtype=src_logits.dtype, layout=src_logits.layout,
- device=src_logits.device)
- target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1)
-
- target_classes_onehot = target_classes_onehot[:,:,:-1] # B x N x C
- if self.use_fed_loss:
- inds = get_fed_loss_inds(
- gt_classes=target_classes_o,
- num_sample_cats=50,
- weight=self.fed_loss_weight,
- C=target_classes_onehot.shape[2])
- loss_ce = sigmoid_focal_loss(
- src_logits[:, :, inds],
- target_classes_onehot[:, :, inds],
- num_boxes,
- alpha=self.focal_alpha,
- gamma=2) * src_logits.shape[1]
- else:
- loss_ce = sigmoid_focal_loss(
- src_logits, target_classes_onehot, num_boxes,
- alpha=self.focal_alpha,
- gamma=2) * src_logits.shape[1]
- losses = {'loss_ce': loss_ce}
-
- if log:
- # TODO this should probably be a separate loss, not hacked in this one here
- losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]
- return losses
-
-
-class MaskedBackbone(nn.Module):
- """ This is a thin wrapper around D2's backbone to provide padding masking"""
-
- def __init__(self, cfg):
- super().__init__()
- self.backbone = build_backbone(cfg)
- backbone_shape = self.backbone.output_shape()
- self.feature_strides = [backbone_shape[f].stride for f in backbone_shape.keys()]
- self.strides = [backbone_shape[f].stride for f in backbone_shape.keys()]
- self.num_channels = [backbone_shape[x].channels for x in backbone_shape.keys()]
-
- def forward(self, tensor_list: NestedTensor):
- xs = self.backbone(tensor_list.tensors)
- out = {}
- for name, x in xs.items():
- m = tensor_list.mask
- assert m is not None
- mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
- out[name] = NestedTensor(x, mask)
- return out
-
-@META_ARCH_REGISTRY.register()
-class DeformableDetr(nn.Module):
- """
- Implement Deformable Detr
- """
-
- def __init__(self, cfg):
- super().__init__()
- self.with_image_labels = cfg.WITH_IMAGE_LABELS
- self.weak_weight = cfg.MODEL.DETR.WEAK_WEIGHT
-
- self.device = torch.device(cfg.MODEL.DEVICE)
- self.test_topk = cfg.TEST.DETECTIONS_PER_IMAGE
- self.num_classes = cfg.MODEL.DETR.NUM_CLASSES
- self.mask_on = cfg.MODEL.MASK_ON
- hidden_dim = cfg.MODEL.DETR.HIDDEN_DIM
- num_queries = cfg.MODEL.DETR.NUM_OBJECT_QUERIES
-
- # Transformer parameters:
- nheads = cfg.MODEL.DETR.NHEADS
- dropout = cfg.MODEL.DETR.DROPOUT
- dim_feedforward = cfg.MODEL.DETR.DIM_FEEDFORWARD
- enc_layers = cfg.MODEL.DETR.ENC_LAYERS
- dec_layers = cfg.MODEL.DETR.DEC_LAYERS
- num_feature_levels = cfg.MODEL.DETR.NUM_FEATURE_LEVELS
- two_stage = cfg.MODEL.DETR.TWO_STAGE
- with_box_refine = cfg.MODEL.DETR.WITH_BOX_REFINE
-
- # Loss parameters:
- giou_weight = cfg.MODEL.DETR.GIOU_WEIGHT
- l1_weight = cfg.MODEL.DETR.L1_WEIGHT
- deep_supervision = cfg.MODEL.DETR.DEEP_SUPERVISION
- cls_weight = cfg.MODEL.DETR.CLS_WEIGHT
- focal_alpha = cfg.MODEL.DETR.FOCAL_ALPHA
-
- N_steps = hidden_dim // 2
- d2_backbone = MaskedBackbone(cfg)
- backbone = Joiner(d2_backbone, PositionEmbeddingSine(N_steps, normalize=True))
-
- transformer = DeformableTransformer(
- d_model=hidden_dim,
- nhead=nheads,
- num_encoder_layers=enc_layers,
- num_decoder_layers=dec_layers,
- dim_feedforward=dim_feedforward,
- dropout=dropout,
- activation="relu",
- return_intermediate_dec=True,
- num_feature_levels=num_feature_levels,
- dec_n_points=4,
- enc_n_points=4,
- two_stage=two_stage,
- two_stage_num_proposals=num_queries)
-
- self.detr = DeformableDETR(
- backbone, transformer, num_classes=self.num_classes,
- num_queries=num_queries,
- num_feature_levels=num_feature_levels,
- aux_loss=deep_supervision,
- with_box_refine=with_box_refine,
- two_stage=two_stage,
- )
-
- if self.mask_on:
- assert 0, 'Mask is not supported yet :('
-
- matcher = HungarianMatcher(
- cost_class=cls_weight, cost_bbox=l1_weight, cost_giou=giou_weight)
- weight_dict = {"loss_ce": cls_weight, "loss_bbox": l1_weight}
- weight_dict["loss_giou"] = giou_weight
- if deep_supervision:
- aux_weight_dict = {}
- for i in range(dec_layers - 1):
- aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()})
- weight_dict.update(aux_weight_dict)
- print('weight_dict', weight_dict)
- losses = ["labels", "boxes", "cardinality"]
- if self.mask_on:
- losses += ["masks"]
- self.criterion = CustomSetCriterion(
- self.num_classes, matcher=matcher, weight_dict=weight_dict,
- focal_alpha=focal_alpha,
- losses=losses,
- use_fed_loss=cfg.MODEL.DETR.USE_FED_LOSS
- )
- pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(3, 1, 1)
- pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(self.device).view(3, 1, 1)
- self.normalizer = lambda x: (x - pixel_mean) / pixel_std
-
-
- def forward(self, batched_inputs):
- """
- Args:
- Returns:
- dict[str: Tensor]:
- mapping from a named loss to a tensor storing the loss. Used during training only.
- """
- images = self.preprocess_image(batched_inputs)
- output = self.detr(images)
- if self.training:
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
- targets = self.prepare_targets(gt_instances)
- loss_dict = self.criterion(output, targets)
- weight_dict = self.criterion.weight_dict
- for k in loss_dict.keys():
- if k in weight_dict:
- loss_dict[k] *= weight_dict[k]
- if self.with_image_labels:
- if batched_inputs[0]['ann_type'] in ['image', 'captiontag']:
- loss_dict['loss_image'] = self.weak_weight * self._weak_loss(
- output, batched_inputs)
- else:
- loss_dict['loss_image'] = images[0].new_zeros(
- [1], dtype=torch.float32)[0]
- # import pdb; pdb.set_trace()
- return loss_dict
- else:
- image_sizes = output["pred_boxes"].new_tensor(
- [(t["height"], t["width"]) for t in batched_inputs])
- results = self.post_process(output, image_sizes)
- return results
-
-
- def prepare_targets(self, targets):
- new_targets = []
- for targets_per_image in targets:
- h, w = targets_per_image.image_size
- image_size_xyxy = torch.as_tensor([w, h, w, h], dtype=torch.float, device=self.device)
- gt_classes = targets_per_image.gt_classes
- gt_boxes = targets_per_image.gt_boxes.tensor / image_size_xyxy
- gt_boxes = box_xyxy_to_cxcywh(gt_boxes)
- new_targets.append({"labels": gt_classes, "boxes": gt_boxes})
- if self.mask_on and hasattr(targets_per_image, 'gt_masks'):
- assert 0, 'Mask is not supported yet :('
- gt_masks = targets_per_image.gt_masks
- gt_masks = convert_coco_poly_to_mask(gt_masks.polygons, h, w)
- new_targets[-1].update({'masks': gt_masks})
- return new_targets
-
-
- def post_process(self, outputs, target_sizes):
- """
- """
- out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']
- assert len(out_logits) == len(target_sizes)
- assert target_sizes.shape[1] == 2
-
- prob = out_logits.sigmoid()
- topk_values, topk_indexes = torch.topk(
- prob.view(out_logits.shape[0], -1), self.test_topk, dim=1)
- scores = topk_values
- topk_boxes = topk_indexes // out_logits.shape[2]
- labels = topk_indexes % out_logits.shape[2]
- boxes = box_cxcywh_to_xyxy(out_bbox)
- boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1,1,4))
-
- # and from relative [0, 1] to absolute [0, height] coordinates
- img_h, img_w = target_sizes.unbind(1)
- scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)
- boxes = boxes * scale_fct[:, None, :]
-
- results = []
- for s, l, b, size in zip(scores, labels, boxes, target_sizes):
- r = Instances((size[0], size[1]))
- r.pred_boxes = Boxes(b)
- r.scores = s
- r.pred_classes = l
- results.append({'instances': r})
- return results
-
-
- def preprocess_image(self, batched_inputs):
- """
-        Normalize the input images; padding and batching are handled inside the DETR model.
- """
- images = [self.normalizer(x["image"].to(self.device)) for x in batched_inputs]
- return images
-
-
- def _weak_loss(self, outputs, batched_inputs):
- loss = 0
- for b, x in enumerate(batched_inputs):
- labels = x['pos_category_ids']
- pred_logits = [outputs['pred_logits'][b]]
- pred_boxes = [outputs['pred_boxes'][b]]
- for xx in outputs['aux_outputs']:
- pred_logits.append(xx['pred_logits'][b])
- pred_boxes.append(xx['pred_boxes'][b])
- pred_logits = torch.stack(pred_logits, dim=0) # L x N x C
- pred_boxes = torch.stack(pred_boxes, dim=0) # L x N x 4
- for label in labels:
- loss += self._max_size_loss(
- pred_logits, pred_boxes, label) / len(labels)
- loss = loss / len(batched_inputs)
- return loss
-
-
- def _max_size_loss(self, logits, boxes, label):
- '''
- Inputs:
- logits: L x N x C
- boxes: L x N x 4
- '''
- target = logits.new_zeros((logits.shape[0], logits.shape[2]))
- target[:, label] = 1.
- sizes = boxes[..., 2] * boxes[..., 3] # L x N
- ind = sizes.argmax(dim=1) # L
- loss = F.binary_cross_entropy_with_logits(
- logits[range(len(ind)), ind], target, reduction='sum')
- return loss
\ No newline at end of file
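
post_process above runs the top-k over all (query, class) pairs at once and then splits the flat index back into a query index and a class label. The sketch below shows that decoding step in isolation; the sizes are arbitrary.

# Standalone sketch of the flattened top-k decoding used in post_process above.
# Sizes are arbitrary; only the index arithmetic is the point.
import torch

num_queries, num_classes, k = 5, 3, 4
prob = torch.randn(1, num_queries, num_classes).sigmoid()
topk_values, topk_indexes = torch.topk(prob.view(1, -1), k, dim=1)
query_idx = topk_indexes // num_classes  # which query each detection came from
labels = topk_indexes % num_classes      # which class that query was scored for
print(query_idx, labels)
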
diff --git a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/gmic3d.py b/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/gmic3d.py
deleted file mode 100644
index 6aceca1480c78b79599561f45f02fd6cb4172aae..0000000000000000000000000000000000000000
--- a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/gmic3d.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-import OncoMedley.GMIC.tools as tools
-import OncoMedley.GMIC.modules as m
-
-class GMIC3D(nn.Module):
- def __init__(self, parameters):
- super(GMIC3D, self).__init__()
-
- # save parameters
- self.experiment_parameters = parameters
-
- # construct networks
- # global network
- self.global_network = m.GlobalNetwork(self.experiment_parameters, self)
- self.global_network.add_layers()
-
- # aggregation function
- self.aggregation_function = m.TopTPercentAggregationFunctionFlattened(self.experiment_parameters, self)
-
- # detection module
- self.retrieve_roi_crops = m.RetrieveROIModule3D(self.experiment_parameters, self)
-
- # detection network
- self.local_network = m.LocalNetwork(self.experiment_parameters, self)
- self.local_network.add_layers()
-
- # MIL module
- self.attention_module = m.AttentionModule(self.experiment_parameters, self)
- self.attention_module.add_layers()
-
- def _convert_crop_position(self, crops_x_small, cam_size, x_original):
- """
- Function that converts the crop locations from cam_size to x_original
- :param crops_x_small: N, k*c, 2 numpy matrix
- :param cam_size: (h,w)
- :param x_original: N, C, H, W pytorch variable
- :return: N, k*c, 2 numpy matrix
- """
- # retrieve the dimension of both the original image and the small version
- h, w = cam_size
- _, _, H, W = x_original.size()
-
- # interpolate the 2d index in h_small to index in x_original
- top_k_prop_x = crops_x_small[:, :, 0] / h
- top_k_prop_y = crops_x_small[:, :, 1] / w
- # sanity check
- assert np.max(top_k_prop_x) <= 1.0, "top_k_prop_x >= 1.0"
- assert np.min(top_k_prop_x) >= 0.0, "top_k_prop_x <= 0.0"
- assert np.max(top_k_prop_y) <= 1.0, "top_k_prop_y >= 1.0"
- assert np.min(top_k_prop_y) >= 0.0, "top_k_prop_y <= 0.0"
- # interpolate the crop position from cam_size to x_original
- top_k_interpolate_x = np.expand_dims(np.around(top_k_prop_x * H), -1)
- top_k_interpolate_y = np.expand_dims(np.around(top_k_prop_y * W), -1)
- top_k_interpolate_2d = np.concatenate([top_k_interpolate_x, top_k_interpolate_y], axis=-1)
- return top_k_interpolate_2d
-
- def _retrieve_crop_3d(self, x_original_pytorch, crop_positions, crop_method, max_slice_numbers):
- """
- Function that takes in the original image and cropping position and returns the crops
-
- crop_positions contains all potential crop locations for all slices at each step.
- However, only the maximum crop among all slices is used at each step, indicated by max_slice_numbers.
- Therefore, for each step j, select only the true globally-maximum crop and ignore the rest.
-
- Assumes batch size of 1
-
- :param x_original_pytorch: PyTorch Tensor array (N,C,H,W)
-        :param crop_positions: (num_slices, num_crops, 2) numpy array of crop positions
-        :return: (1, num_crops, 1, crop_h, crop_w) tensor with the retrieved crops
- """
- batch_size = 1
- num_slices, num_crops, _ = crop_positions.shape
- crop_h, crop_w = self.experiment_parameters["crop_shape"]
-
- output = torch.ones(
- (batch_size, num_crops, 1, crop_h, crop_w))
- if self.experiment_parameters["half"]:
- output = output.half()
- if self.experiment_parameters["device_type"] == "gpu":
- output = output.cuda()
- for i in range(batch_size):
- for j in range(num_crops):
- tools.crop_pytorch_3d(x_original_pytorch[max_slice_numbers[j].item(), :, :, :],
- self.experiment_parameters["crop_shape"],
- crop_positions[max_slice_numbers[j].item(), j, :],
- output[i, j, :, :, :],
- method=crop_method)
- return output
-
-
- def forward(self, x_original):
- """
- :param x_original: N,C,D,H,W torch tensor
- """
- device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
- final_preds = torch.Tensor([]).to(device)
- for p in x_original:
- p = torch.unsqueeze(p, 0).to(device)
- N, C, num_slices, image_height, image_width = p.shape
-
- assert N == 1, "3D-GMIC is designed to work with batch size of 1 per GPU"
- assert C == 1, "Input is expected to be 1-channel image"
-
- # reshape the tensor so that the slice dimension is now batch dimension
- p = p.reshape(num_slices, 1, image_height, image_width)
-
- # global network
- h_g, self.saliency_map = self.global_network.forward(p)
-
- num_slices, num_classes, H, W = self.saliency_map.shape
- cam_size = (H, W)
-
- # calculate y_global
- saliency_map_flattened = self.saliency_map.permute(1,0,2,3).reshape(1, 1, -1)
- self.y_global = self.aggregation_function.forward(saliency_map_flattened, num_slices=num_slices)
-
- # region proposal network
- self.intended_max_slice_numbers, self.max_slice_numbers, small_x_locations = self.retrieve_roi_crops.forward(p, cam_size, self.saliency_map)
-
- # convert crop locations that is on cam_size to p
- self.patch_locations = self._convert_crop_position(small_x_locations, cam_size, p)
-
- # patch retriever
- crops_variable = self._retrieve_crop_3d(p, self.patch_locations, self.retrieve_roi_crops.crop_method, self.max_slice_numbers)
-
- # detection network
- batch_size, num_crops, num_slices_per_patch, I, J = crops_variable.size()
- assert batch_size == 1
- self.patches = crops_variable
- crops_variable_reshaped = crops_variable.view(batch_size * num_crops, num_slices_per_patch, I, J).to(device)
- h_crops = self.local_network.forward(crops_variable_reshaped).view(batch_size, num_crops, -1).to(device)
-
- # MIL module
- z, self.patch_attns, self.y_local = self.attention_module.forward(h_crops)
-
- # final output without using fusion branch
- self.final_prediction = 0.5 * self.y_global + 0.5 * self.y_local
-
- final_preds = torch.cat([final_preds, self.final_prediction.to(device)], 0)
-
- return final_preds
\ No newline at end of file
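
The coordinate conversion in _convert_crop_position is a proportional rescaling from the saliency-map grid to the full-resolution image. A toy numeric check with assumed sizes is sketched below.

# Toy check of the coordinate rescaling done in _convert_crop_position above.
# The map and image sizes are assumed for illustration only.
import numpy as np

cam_h, cam_w = 46, 30                # hypothetical saliency-map size
H, W = 2944, 1920                    # hypothetical original image size
crops_small = np.array([[[23, 15], [10, 5]]], dtype=np.float32)  # N=1, k=2 crops
prop = crops_small / np.array([cam_h, cam_w], dtype=np.float32)
crops_full = np.around(prop * np.array([H, W], dtype=np.float32))
print(crops_full)  # centres at (1472, 960) and (640, 320) in image coordinates
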
diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/embedding_manager.py b/spaces/MirageML/sjc/sd1/ldm/modules/embedding_manager.py
deleted file mode 100644
index cbabc4174da38a3cc0f2f5480e0d268172627c3a..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/sd1/ldm/modules/embedding_manager.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import torch
-from torch import nn
-
-from ldm.data.personalized import per_img_token_list
-from transformers import CLIPTokenizer
-from functools import partial
-
-DEFAULT_PLACEHOLDER_TOKEN = ["*"]
-
-PROGRESSIVE_SCALE = 2000
-
-def get_clip_token_for_string(tokenizer, string):
- batch_encoding = tokenizer(string, truncation=True, max_length=77, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"]
- assert torch.count_nonzero(tokens - 49407) == 2, f"String '{string}' maps to more than a single token. Please use another string"
-
- return tokens[0, 1]
-
-def get_bert_token_for_string(tokenizer, string):
- token = tokenizer(string)
- assert torch.count_nonzero(token) == 3, f"String '{string}' maps to more than a single token. Please use another string"
-
- token = token[0, 1]
-
- return token
-
-def get_embedding_for_clip_token(embedder, token):
- return embedder(token.unsqueeze(0))[0, 0]
-
-
-class EmbeddingManager(nn.Module):
- def __init__(
- self,
- embedder,
- placeholder_strings=None,
- initializer_words=None,
- per_image_tokens=False,
- num_vectors_per_token=1,
- progressive_words=False,
- **kwargs
- ):
- super().__init__()
-
- self.string_to_token_dict = {}
-
- self.string_to_param_dict = nn.ParameterDict()
-
- self.initial_embeddings = nn.ParameterDict() # These should not be optimized
-
- self.progressive_words = progressive_words
- self.progressive_counter = 0
-
- self.max_vectors_per_token = num_vectors_per_token
-
- if hasattr(embedder, 'tokenizer'): # using Stable Diffusion's CLIP encoder
- self.is_clip = True
- get_token_for_string = partial(get_clip_token_for_string, embedder.tokenizer)
- get_embedding_for_tkn = partial(get_embedding_for_clip_token, embedder.transformer.text_model.embeddings)
- token_dim = 768
- else: # using LDM's BERT encoder
- self.is_clip = False
- get_token_for_string = partial(get_bert_token_for_string, embedder.tknz_fn)
- get_embedding_for_tkn = embedder.transformer.token_emb
- token_dim = 1280
-
- if per_image_tokens:
- placeholder_strings.extend(per_img_token_list)
-
- for idx, placeholder_string in enumerate(placeholder_strings):
-
- token = get_token_for_string(placeholder_string)
-
- if initializer_words and idx < len(initializer_words):
- init_word_token = get_token_for_string(initializer_words[idx])
-
- with torch.no_grad():
- init_word_embedding = get_embedding_for_tkn(init_word_token.cpu())
-
- token_params = torch.nn.Parameter(init_word_embedding.unsqueeze(0).repeat(num_vectors_per_token, 1), requires_grad=True)
- self.initial_embeddings[placeholder_string] = torch.nn.Parameter(init_word_embedding.unsqueeze(0).repeat(num_vectors_per_token, 1), requires_grad=False)
- else:
- token_params = torch.nn.Parameter(torch.rand(size=(num_vectors_per_token, token_dim), requires_grad=True))
-
- self.string_to_token_dict[placeholder_string] = token
- self.string_to_param_dict[placeholder_string] = token_params
-
- def forward(
- self,
- tokenized_text,
- embedded_text,
- ):
- b, n, device = *tokenized_text.shape, tokenized_text.device
-
- for placeholder_string, placeholder_token in self.string_to_token_dict.items():
-
- placeholder_embedding = self.string_to_param_dict[placeholder_string].to(device)
-
- if self.max_vectors_per_token == 1: # If there's only one vector per token, we can do a simple replacement
- placeholder_idx = torch.where(tokenized_text == placeholder_token.to(device))
- embedded_text[placeholder_idx] = placeholder_embedding
- else: # otherwise, need to insert and keep track of changing indices
- if self.progressive_words:
- self.progressive_counter += 1
- max_step_tokens = 1 + self.progressive_counter // PROGRESSIVE_SCALE
- else:
- max_step_tokens = self.max_vectors_per_token
-
- num_vectors_for_token = min(placeholder_embedding.shape[0], max_step_tokens)
-
- placeholder_rows, placeholder_cols = torch.where(tokenized_text == placeholder_token.to(device))
-
- if placeholder_rows.nelement() == 0:
- continue
-
- sorted_cols, sort_idx = torch.sort(placeholder_cols, descending=True)
- sorted_rows = placeholder_rows[sort_idx]
-
- for idx in range(len(sorted_rows)):
- row = sorted_rows[idx]
- col = sorted_cols[idx]
-
- new_token_row = torch.cat([tokenized_text[row][:col], placeholder_token.repeat(num_vectors_for_token).to(device), tokenized_text[row][col + 1:]], axis=0)[:n]
- new_embed_row = torch.cat([embedded_text[row][:col], placeholder_embedding[:num_vectors_for_token], embedded_text[row][col + 1:]], axis=0)[:n]
-
- embedded_text[row] = new_embed_row
- tokenized_text[row] = new_token_row
-
- return embedded_text
-
- def save(self, ckpt_path):
- torch.save({"string_to_token": self.string_to_token_dict,
- "string_to_param": self.string_to_param_dict}, ckpt_path)
-
- def load(self, ckpt_path):
- ckpt = torch.load(ckpt_path, map_location='cpu')
-
- self.string_to_token_dict = ckpt["string_to_token"]
- self.string_to_param_dict = ckpt["string_to_param"]
-
- def get_embedding_norms_squared(self):
- all_params = torch.cat(list(self.string_to_param_dict.values()), axis=0) # num_placeholders x embedding_dim
- param_norm_squared = (all_params * all_params).sum(axis=-1) # num_placeholders
-
- return param_norm_squared
-
- def embedding_parameters(self):
- return self.string_to_param_dict.parameters()
-
- def embedding_to_coarse_loss(self):
-
- loss = 0.
- num_embeddings = len(self.initial_embeddings)
-
- for key in self.initial_embeddings:
- optimized = self.string_to_param_dict[key]
- coarse = self.initial_embeddings[key].clone().to(optimized.device)
-
- loss = loss + (optimized - coarse) @ (optimized - coarse).T / num_embeddings
-
- return loss
\ No newline at end of file
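
For the single-vector case, EmbeddingManager.forward simply overwrites the embedding rows at every position where the placeholder token id occurs. A minimal sketch with made-up ids and sizes is below.

# Minimal sketch of the single-vector replacement branch of EmbeddingManager.forward.
# Token ids, sizes and tensors are made up for illustration.
import torch

placeholder_token = torch.tensor(265)             # hypothetical id for "*"
placeholder_embedding = torch.randn(768)          # learned vector (hypothetical)
tokenized_text = torch.tensor([[49406, 320, 265, 49407]])  # B x N token ids
embedded_text = torch.randn(1, 4, 768)            # B x N x dim embeddings

idx = torch.where(tokenized_text == placeholder_token)
embedded_text[idx] = placeholder_embedding        # swap in the learned vector
print(torch.allclose(embedded_text[0, 2], placeholder_embedding))  # True
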
diff --git a/spaces/Missinginaction/stablediffusionwithnofilter/app.py b/spaces/Missinginaction/stablediffusionwithnofilter/app.py
deleted file mode 100644
index 4eab1984c438dcee135fc7f5404191798893a5d8..0000000000000000000000000000000000000000
--- a/spaces/Missinginaction/stablediffusionwithnofilter/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-from subprocess import getoutput
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
-elif("T4" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-
-os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py")
-os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-# ---------------------------------------------------------------------------------------------------------------------------------------------------
-
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
- # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
- # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
- # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
- #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt")
- os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml")
-
- os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
-
\ No newline at end of file
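The shared-UI branch above drives its model download entirely from environment variables. A small sketch of that pattern with explicit fallbacks; the variable names match the script, but the default URL and filename are placeholders:

```python
import os

# Placeholders standing in for the Space's secrets / variables.
model_link = os.getenv("MODEL_LINK", "https://example.com/model.ckpt")
model_name = os.getenv("MODEL_NAME", "model.ckpt")
target = f"/home/user/app/stable-diffusion-webui/models/Stable-diffusion/{model_name}"

if "IS_SHARED_UI" in os.environ:
    os.system(f"wget -q {model_link} -O {target}")
```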
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/postprocessors/pan_postprocessor.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/postprocessors/pan_postprocessor.py
deleted file mode 100644
index 63676856bebd78dfc97156739a2745e51cb272da..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/postprocessors/pan_postprocessor.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Sequence
-
-import cv2
-import numpy as np
-import torch
-from mmcv.ops import pixel_group
-from mmengine.structures import InstanceData
-
-from mmocr.registry import MODELS
-from mmocr.structures import TextDetDataSample
-from .base import BaseTextDetPostProcessor
-
-
-@MODELS.register_module()
-class PANPostprocessor(BaseTextDetPostProcessor):
- """Convert scores to quadrangles via post processing in PANet. This is
- partially adapted from https://github.com/WenmuZhou/PAN.pytorch.
-
- Args:
- text_repr_type (str): The boundary encoding type 'poly' or 'quad'.
- Defaults to 'poly'.
- score_threshold (float): The minimal text score.
- Defaults to 0.3.
- rescale_fields (list[str]): The bbox/polygon field names to
- be rescaled. If None, no rescaling will be performed. Defaults to
- ['polygons'].
- min_text_confidence (float): The minimal text confidence.
- Defaults to 0.5.
- min_kernel_confidence (float): The minimal kernel confidence.
- Defaults to 0.5.
- distance_threshold (float): The minimal distance between the point to
- mean of text kernel. Defaults to 3.0.
- min_text_area (int): The minimal text instance region area.
- Defaults to 16.
- downsample_ratio (float): Downsample ratio. Defaults to 0.25.
- """
-
- def __init__(self,
- text_repr_type: str = 'poly',
- score_threshold: float = 0.3,
- rescale_fields: Sequence[str] = ['polygons'],
- min_text_confidence: float = 0.5,
- min_kernel_confidence: float = 0.5,
- distance_threshold: float = 3.0,
- min_text_area: int = 16,
- downsample_ratio: float = 0.25) -> None:
- super().__init__(text_repr_type, rescale_fields)
-
- self.min_text_confidence = min_text_confidence
- self.min_kernel_confidence = min_kernel_confidence
- self.score_threshold = score_threshold
- self.min_text_area = min_text_area
- self.distance_threshold = distance_threshold
- self.downsample_ratio = downsample_ratio
-
- def get_text_instances(self, pred_results: torch.Tensor,
- data_sample: TextDetDataSample,
- **kwargs) -> TextDetDataSample:
- """Get text instance predictions of one image.
-
- Args:
-            pred_results (torch.Tensor): Prediction results of an image which
- is a tensor of shape :math:`(N, H, W)`.
- data_sample (TextDetDataSample): Datasample of an image.
-
- Returns:
- TextDetDataSample: A new DataSample with predictions filled in.
- Polygons and results are saved in
- ``TextDetDataSample.pred_instances.polygons``. The confidence
- scores are saved in ``TextDetDataSample.pred_instances.scores``.
- """
- assert pred_results.dim() == 3
-
- pred_results[:2, :, :] = torch.sigmoid(pred_results[:2, :, :])
- pred_results = pred_results.detach().cpu().numpy()
-
- text_score = pred_results[0].astype(np.float32)
- text = pred_results[0] > self.min_text_confidence
- kernel = (pred_results[1] > self.min_kernel_confidence) * text
- embeddings = pred_results[2:] * text.astype(np.float32)
- embeddings = embeddings.transpose((1, 2, 0)) # (h, w, 4)
-
- region_num, labels = cv2.connectedComponents(
- kernel.astype(np.uint8), connectivity=4)
- contours, _ = cv2.findContours((kernel * 255).astype(np.uint8),
- cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
- kernel_contours = np.zeros(text.shape, dtype='uint8')
- cv2.drawContours(kernel_contours, contours, -1, 255)
- text_points = pixel_group(text_score, text, embeddings, labels,
- kernel_contours, region_num,
- self.distance_threshold)
-
- polygons = []
- scores = []
- for text_point in text_points:
- text_confidence = text_point[0]
- text_point = text_point[2:]
- text_point = np.array(text_point, dtype=int).reshape(-1, 2)
- area = text_point.shape[0]
- if (area < self.min_text_area
- or text_confidence <= self.score_threshold):
- continue
-
- polygon = self._points2boundary(text_point)
- if len(polygon) > 0:
- polygons.append(polygon)
- scores.append(text_confidence)
- pred_instances = InstanceData()
- pred_instances.polygons = polygons
- pred_instances.scores = torch.FloatTensor(scores)
- data_sample.pred_instances = pred_instances
- scale_factor = data_sample.scale_factor
- scale_factor = tuple(factor * self.downsample_ratio
- for factor in scale_factor)
- data_sample.set_metainfo(dict(scale_factor=scale_factor))
- return data_sample
-
- def _points2boundary(self,
- points: np.ndarray,
- min_width: int = 0) -> List[float]:
- """Convert a text mask represented by point coordinates sequence into a
- text boundary.
-
- Args:
- points (ndarray): Mask index of size (n, 2).
- min_width (int): Minimum bounding box width to be converted. Only
- applicable to 'quad' type. Defaults to 0.
-
- Returns:
- list[float]: The text boundary point coordinates (x, y) list.
- Return [] if no text boundary found.
- """
- assert isinstance(points, np.ndarray)
- assert points.shape[1] == 2
- assert self.text_repr_type in ['quad', 'poly']
-
- if self.text_repr_type == 'quad':
- rect = cv2.minAreaRect(points)
- vertices = cv2.boxPoints(rect)
- boundary = []
- if min(rect[1]) >= min_width:
- boundary = [p for p in vertices.flatten().tolist()]
- elif self.text_repr_type == 'poly':
-
- height = np.max(points[:, 1]) + 10
- width = np.max(points[:, 0]) + 10
-
- mask = np.zeros((height, width), np.uint8)
- mask[points[:, 1], points[:, 0]] = 255
-
- contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
- cv2.CHAIN_APPROX_SIMPLE)
- boundary = list(contours[0].flatten().tolist())
-
- if len(boundary) < 8:
- return []
-
- return boundary
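The 'quad' branch of `_points2boundary` above reduces a pixel mask to the four corners of its minimum-area rectangle. A self-contained sketch of that step on synthetic points, independent of the mmocr base classes:

```python
import cv2
import numpy as np

# Synthetic (x, y) coordinates of the foreground pixels of one text instance.
points = np.array([[x, y] for x in range(10, 40) for y in range(5, 15)],
                  dtype=np.float32)

rect = cv2.minAreaRect(points)          # ((cx, cy), (w, h), angle)
vertices = cv2.boxPoints(rect)          # 4 x 2 array of corner coordinates
boundary = vertices.flatten().tolist()  # [x1, y1, ..., x4, y4], as in the 'quad' branch
print(len(boundary), boundary[:2])
```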
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/satrn.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/satrn.py
deleted file mode 100644
index 9182d8bea829b5453dc8228d842b91c6d9915a9e..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/satrn.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmocr.registry import MODELS
-from .encoder_decoder_recognizer import EncoderDecoderRecognizer
-
-
-@MODELS.register_module()
-class SATRN(EncoderDecoderRecognizer):
- """Implementation of `SATRN `_"""
diff --git a/spaces/NAACL2022/GlobEnc/src/model_attentions.py b/spaces/NAACL2022/GlobEnc/src/model_attentions.py
deleted file mode 100644
index ed79a05d37733bf21f84365df4eb80922e474fb9..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/GlobEnc/src/model_attentions.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-from tqdm.auto import tqdm
-import numpy as np
-
-
-def extract_attentions(model, encoder_func, dataset_len, device="cpu", delete_sep=False):
- """
-    Extract raw attention maps and attention norms for every example in a dataset.
-    :param model: model patched to also return attention norms (called with output_norms=True)
-    :param encoder_func: callable mapping an example index to tokenized model inputs
-    :param dataset_len: number of examples to iterate over
-    :param device: device to run the model on
-    :param delete_sep: if True, drop the final (e.g. [SEP]) token from the returned maps
-    :return: (raw_attentions, norms_list), one entry per example
- """
- raw_attentions = []
- norms_list = [[] for i in range(9)]
- # head_attn_n, attn_n, attnres_n, attnresln_n, (+attn_enc),
- # attn_n_ratio, attnres_n_ratio, attnresln_n_ratio, (+attn_enc_ratio)
- model.to(device)
- model.eval()
- for id in tqdm(range(dataset_len)):
- encoded = encoder_func(id).to(device)
- # encoded = tokenizer.encode_plus(data["text"], return_tensors="pt").to(device)
- with torch.no_grad():
- logits, attentions, norms = model(**encoded, output_attentions=True, output_norms=True, return_dict=False)
- # logits: [1, 2],
- # attentions: 12(layer) * [1, 12(heads), 24(sentence_len), 24(sentence_len)],
- # norms: 12(layer) * 7+2(type)
-
- last_token = attentions[0].shape[-1]
- if delete_sep:
- last_token -= 1
-
- num_layers = len(attentions)
- for attention_type in range(9):
- norm = torch.stack([norms[i][attention_type] for i in range(num_layers)]).squeeze().cpu().numpy()
- if 0 < attention_type < 5: # N: 1, N-Res: 2, N-ResLN: 3, N-Enc: 4
- norm = norm[:, :last_token, :last_token]
- elif attention_type >= 5:
- norm = norm[:, :last_token]
- norms_list[attention_type].append(norm)
-
- raw_attention = torch.mean(torch.stack(attentions).squeeze(), axis=1).cpu().numpy() # Mean of heads
- raw_attentions.append(raw_attention[:, :last_token, :last_token]) # (12, sentence_len, sentence_len)
- return raw_attentions, norms_list
-
-
-def build_ratio_residual_attentions(raw_attentions_list, norms_list):
- r_ratio_attentions = {
- "W-FixedRes": [],
- "W-Res": [],
- "N-FixedRes": [],
- # "Uniform-Res": []
- }
- for idx in tqdm(range(len(raw_attentions_list))):
- raw_attention = raw_attentions_list[idx]
- r_half_matrix = np.ones(norms_list[5][idx].shape) * 0.5
-
- r_ratio_attentions["W-FixedRes"].append(__build_ratio_residual_attention(raw_attention, r_half_matrix))
-
- # normalized_attn_n = norms_list[1][idx] / np.max(norms_list[1][idx], axis=(1, 2), keepdims=True)
- normalized_attn_n = norms_list[1][idx] / np.sum(norms_list[1][idx], axis=2, keepdims=True)
- r_ratio_attentions["N-FixedRes"].append(__build_ratio_residual_attention(normalized_attn_n, r_half_matrix))
-
- # norms_list[8]: N-Enc_ratio
- r_ratio_attentions["W-Res"].append(__build_ratio_residual_attention(raw_attention, norms_list[8][idx], wres=True))
-
- # r_ratio_attentions["Uniform-Res"].append(
- # __build_ratio_residual_attention(np.ones_like(raw_attention) / len(raw_attention), norms_list[8][idx]))
-
- return r_ratio_attentions
-
-
-def __build_ratio_residual_attention(raw_attention, ratio_matrix, wres=False):
- """
- :param raw_attention: (layers, sentence_len, sentence_len)
- :param ratio_matrix: (layers, sentence_len)
- :return:
- """
- result_attention = np.zeros(raw_attention.shape)
- for layer in range(raw_attention.shape[0]):
- result_attention[layer] = __add_residual(raw_attention[layer], ratio_matrix[layer], wres)
- return result_attention
-
-
-def __add_residual(att_mat, ratios, wres=False):
- """
- :param att_mat: (sentence_len, sentence_len)
- :param ratios: (sentence_len)
- :return:
- """
- att_mat_cp = np.copy(att_mat)
- for token_idx in range(att_mat_cp.shape[0]):
- r = ratios[token_idx]
- if wres:
- att_mat_cp[token_idx][token_idx] = 0
- att_mat_cp[token_idx] /= np.sum(att_mat_cp[token_idx])
- att_mat_cp[token_idx] *= r
- att_mat_cp[token_idx][token_idx] += (1 - r)
- return att_mat_cp
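`__add_residual` above re-weights a single attention row so that a share `r` of the probability mass follows the attention pattern and the remaining `1 - r` is returned to the token itself, standing in for the residual connection. A small numpy illustration of one row with made-up numbers:

```python
import numpy as np

att_row = np.array([0.2, 0.5, 0.3])   # attention from token 0 to every token
r = 0.6                               # share attributed to attention vs. residual
token_idx = 0

row = att_row / att_row.sum()         # normalize the row (already sums to 1 here)
row = row * r                         # keep only a fraction r of the attention mass
row[token_idx] += 1 - r               # put the residual share back on the token itself
print(row, row.sum())                 # [0.52 0.3  0.18] 1.0
```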
diff --git a/spaces/NATSpeech/DiffSpeech/modules/vocoder/hifigan/hifigan.py b/spaces/NATSpeech/DiffSpeech/modules/vocoder/hifigan/hifigan.py
deleted file mode 100644
index fddd5278760427d5d93b9b38240319ba5bdb0bdf..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/modules/vocoder/hifigan/hifigan.py
+++ /dev/null
@@ -1,338 +0,0 @@
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-import numpy as np
-
-LRELU_SLOPE = 0.1
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Conv1d1x1(Conv1d):
- """1x1 Conv1d with customized initialization."""
-
- def __init__(self, in_channels, out_channels, bias):
- """Initialize 1x1 Conv1d module."""
- super(Conv1d1x1, self).__init__(in_channels, out_channels,
- kernel_size=1, padding=0,
- dilation=1, bias=bias)
-
-
-class HifiGanGenerator(torch.nn.Module):
- def __init__(self, h, c_out=1):
- super(HifiGanGenerator, self).__init__()
- self.h = h
- self.num_kernels = len(h['resblock_kernel_sizes'])
- self.num_upsamples = len(h['upsample_rates'])
-
- self.conv_pre = weight_norm(Conv1d(80, h['upsample_initial_channel'], 7, 1, padding=3))
- resblock = ResBlock1 if h['resblock'] == '1' else ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h['upsample_rates'], h['upsample_kernel_sizes'])):
- c_cur = h['upsample_initial_channel'] // (2 ** (i + 1))
- self.ups.append(weight_norm(
- ConvTranspose1d(c_cur * 2, c_cur, k, u, padding=(k - u) // 2)))
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h['upsample_initial_channel'] // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(h['resblock_kernel_sizes'], h['resblock_dilation_sizes'])):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, c_out, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
-
- def forward(self, x, f0=None):
- x = self.conv_pre(x)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False, use_cond=False, c_in=1):
- super(DiscriminatorP, self).__init__()
- self.use_cond = use_cond
- if use_cond:
- from utils.commons.hparams import hparams
- t = hparams['hop_size']
- self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2)
- c_in = 2
-
- self.period = period
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(c_in, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x, mel):
- fmap = []
- if self.use_cond:
- x_mel = self.cond_net(mel)
- x = torch.cat([x_mel, x], 1)
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_cond=False, c_in=1):
- super(MultiPeriodDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorP(2, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(3, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(5, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(7, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(11, use_cond=use_cond, c_in=c_in),
- ])
-
- def forward(self, y, y_hat, mel=None):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y, mel)
- y_d_g, fmap_g = d(y_hat, mel)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False, use_cond=False, upsample_rates=None, c_in=1):
- super(DiscriminatorS, self).__init__()
- self.use_cond = use_cond
- if use_cond:
- t = np.prod(upsample_rates)
- self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2)
- c_in = 2
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(c_in, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x, mel):
- if self.use_cond:
- x_mel = self.cond_net(mel)
- x = torch.cat([x_mel, x], 1)
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self, use_cond=False, c_in=1):
- super(MultiScaleDiscriminator, self).__init__()
- from utils.commons.hparams import hparams
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True, use_cond=use_cond,
- upsample_rates=[4, 4, hparams['hop_size'] // 16],
- c_in=c_in),
- DiscriminatorS(use_cond=use_cond,
- upsample_rates=[4, 4, hparams['hop_size'] // 32],
- c_in=c_in),
- DiscriminatorS(use_cond=use_cond,
- upsample_rates=[4, 4, hparams['hop_size'] // 64],
- c_in=c_in),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=1),
- AvgPool1d(4, 2, padding=1)
- ])
-
- def forward(self, y, y_hat, mel=None):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i - 1](y)
- y_hat = self.meanpools[i - 1](y_hat)
- y_d_r, fmap_r = d(y, mel)
- y_d_g, fmap_g = d(y_hat, mel)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- r_losses = 0
- g_losses = 0
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg ** 2)
- r_losses += r_loss
- g_losses += g_loss
- r_losses = r_losses / len(disc_real_outputs)
- g_losses = g_losses / len(disc_real_outputs)
- return r_losses, g_losses
-
-
-def cond_discriminator_loss(outputs):
- loss = 0
- for dg in outputs:
- g_loss = torch.mean(dg ** 2)
- loss += g_loss
- loss = loss / len(outputs)
- return loss
-
-
-def generator_loss(disc_outputs):
- loss = 0
- for dg in disc_outputs:
- l = torch.mean((1 - dg) ** 2)
- loss += l
- loss = loss / len(disc_outputs)
- return loss
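A quick smoke test for the generator above using a HiFi-GAN V1-style configuration; the hyper-parameter values are common defaults and are assumptions here, not read from any NATSpeech config file. With 256 samples per mel frame, 100 frames map to 25 600 waveform samples:

```python
import torch

h = {
    'resblock': '1',
    'upsample_rates': [8, 8, 2, 2],                 # product = 256 samples per mel frame
    'upsample_kernel_sizes': [16, 16, 4, 4],
    'upsample_initial_channel': 512,
    'resblock_kernel_sizes': [3, 7, 11],
    'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
}
gen = HifiGanGenerator(h)                            # class defined above
mel = torch.randn(1, 80, 100)                        # (batch, mel bins, frames)
with torch.no_grad():
    wav = gen(mel)
print(wav.shape)                                     # torch.Size([1, 1, 25600])
```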
diff --git a/spaces/NATSpeech/PortaSpeech/modules/vocoder/hifigan/mel_utils.py b/spaces/NATSpeech/PortaSpeech/modules/vocoder/hifigan/mel_utils.py
deleted file mode 100644
index a75fce72db54812320bc60aedfdd378ccecb3374..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/modules/vocoder/hifigan/mel_utils.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import numpy as np
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-from scipy.io.wavfile import read
-
-MAX_WAV_VALUE = 32768.0
-
-
-def load_wav(full_path):
- sampling_rate, data = read(full_path)
- return data, sampling_rate
-
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-
-def dynamic_range_decompression(x, C=1):
- return np.exp(x) / C
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def mel_spectrogram(y, hparams, center=False, complex=False):
- # hop_size: 512 # For 22050Hz, 275 ~= 12.5 ms (0.0125 * sample_rate)
- # win_size: 2048 # For 22050Hz, 1100 ~= 50 ms (If None, win_size: fft_size) (0.05 * sample_rate)
- # fmin: 55 # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To test depending on dataset. Pitch info: male~[65, 260], female~[100, 525])
- # fmax: 10000 # To be increased/reduced depending on data.
- # fft_size: 2048 # Extra window size is filled with 0 paddings to match this parameter
- # n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax,
- n_fft = hparams['fft_size']
- num_mels = hparams['audio_num_mel_bins']
- sampling_rate = hparams['audio_sample_rate']
- hop_size = hparams['hop_size']
- win_size = hparams['win_size']
- fmin = hparams['fmin']
- fmax = hparams['fmax']
- y = y.clamp(min=-1., max=1.)
- global mel_basis, hann_window
-    if str(fmax) + '_' + str(y.device) not in mel_basis:  # build and cache the mel basis / window per (fmax, device)
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[str(fmax) + '_' + str(y.device)] = torch.from_numpy(mel).float().to(y.device)
- hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), [int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)],
- mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
-
- if not complex:
- spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
- spec = torch.matmul(mel_basis[str(fmax) + '_' + str(y.device)], spec)
- spec = spectral_normalize_torch(spec)
- else:
- B, C, T, _ = spec.shape
- spec = spec.transpose(1, 2) # [B, T, n_fft, 2]
- return spec
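A sketch of calling `mel_spectrogram` with a dummy waveform. Note that, as written, the function assumes the old positional `librosa.filters.mel(sr, n_fft, n_mels, fmin, fmax)` signature (librosa < 0.10) and the pre-`return_complex` behaviour of `torch.stft`, so newer library versions need minor adjustments. The hparams values below are common 22.05 kHz settings, not taken from a specific config:

```python
import torch

hparams = {
    'fft_size': 1024,
    'audio_num_mel_bins': 80,
    'audio_sample_rate': 22050,
    'hop_size': 256,
    'win_size': 1024,
    'fmin': 55,
    'fmax': 7600,
}
y = torch.randn(1, 22050) * 0.1        # one second of dummy audio, roughly in [-1, 1]
mel = mel_spectrogram(y, hparams)      # function defined above
print(mel.shape)                        # (1, 80, n_frames), n_frames ≈ 86 here
```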
diff --git a/spaces/NATSpeech/PortaSpeech/tasks/tts/ps_flow.py b/spaces/NATSpeech/PortaSpeech/tasks/tts/ps_flow.py
deleted file mode 100644
index 37a2469ed08d382b58bcb6b8b1750986bb3dd345..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/tasks/tts/ps_flow.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import torch
-from modules.tts.portaspeech.portaspeech_flow import PortaSpeechFlow
-from tasks.tts.fs import FastSpeechTask
-from tasks.tts.ps import PortaSpeechTask
-from utils.audio.pitch.utils import denorm_f0
-from utils.commons.hparams import hparams
-
-
-class PortaSpeechFlowTask(PortaSpeechTask):
- def __init__(self):
- super().__init__()
- self.training_post_glow = False
-
- def build_tts_model(self):
- ph_dict_size = len(self.token_encoder)
- word_dict_size = len(self.word_encoder)
- self.model = PortaSpeechFlow(ph_dict_size, word_dict_size, hparams)
-
- def _training_step(self, sample, batch_idx, opt_idx):
- self.training_post_glow = self.global_step >= hparams['post_glow_training_start'] \
- and hparams['use_post_flow']
- if hparams['two_stage'] and \
- ((opt_idx == 0 and self.training_post_glow) or (opt_idx == 1 and not self.training_post_glow)):
- return None
- loss_output, _ = self.run_model(sample)
- total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['txt_tokens'].size()[0]
- if 'postflow' in loss_output and loss_output['postflow'] is None:
- return None
- return total_loss, loss_output
-
- def run_model(self, sample, infer=False, *args, **kwargs):
- if not infer:
- training_post_glow = self.training_post_glow
- spk_embed = sample.get('spk_embed')
- spk_id = sample.get('spk_ids')
- output = self.model(sample['txt_tokens'],
- sample['word_tokens'],
- ph2word=sample['ph2word'],
- mel2word=sample['mel2word'],
- mel2ph=sample['mel2ph'],
- word_len=sample['word_lengths'].max(),
- tgt_mels=sample['mels'],
- pitch=sample.get('pitch'),
- spk_embed=spk_embed,
- spk_id=spk_id,
- infer=False,
- forward_post_glow=training_post_glow,
- two_stage=hparams['two_stage'],
- global_step=self.global_step)
- losses = {}
- self.add_mel_loss(output['mel_out'], sample['mels'], losses)
- if (training_post_glow or not hparams['two_stage']) and hparams['use_post_flow']:
- losses['postflow'] = output['postflow']
- losses['l1'] = losses['l1'].detach()
- losses['ssim'] = losses['ssim'].detach()
- if not training_post_glow or not hparams['two_stage'] or not self.training:
- losses['kl'] = output['kl']
- if self.global_step < hparams['kl_start_steps']:
- losses['kl'] = losses['kl'].detach()
- else:
- losses['kl'] = torch.clamp(losses['kl'], min=hparams['kl_min'])
- losses['kl'] = losses['kl'] * hparams['lambda_kl']
- if hparams['dur_level'] == 'word':
- self.add_dur_loss(
- output['dur'], sample['mel2word'], sample['word_lengths'], sample['txt_tokens'], losses)
- self.get_attn_stats(output['attn'], sample, losses)
- else:
- super().add_dur_loss(output['dur'], sample['mel2ph'], sample['txt_tokens'], losses)
- return losses, output
- else:
- use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur'])
- forward_post_glow = self.global_step >= hparams['post_glow_training_start'] + 1000 \
- and hparams['use_post_flow']
- spk_embed = sample.get('spk_embed')
- spk_id = sample.get('spk_ids')
- output = self.model(
- sample['txt_tokens'],
- sample['word_tokens'],
- ph2word=sample['ph2word'],
- word_len=sample['word_lengths'].max(),
- pitch=sample.get('pitch'),
- mel2ph=sample['mel2ph'] if use_gt_dur else None,
- mel2word=sample['mel2word'] if hparams['profile_infer'] or hparams['use_gt_dur'] else None,
- infer=True,
- forward_post_glow=forward_post_glow,
- spk_embed=spk_embed,
- spk_id=spk_id,
- two_stage=hparams['two_stage']
- )
- return output
-
- def validation_step(self, sample, batch_idx):
- self.training_post_glow = self.global_step >= hparams['post_glow_training_start'] \
- and hparams['use_post_flow']
- return super().validation_step(sample, batch_idx)
-
- def save_valid_result(self, sample, batch_idx, model_out):
- super(PortaSpeechFlowTask, self).save_valid_result(sample, batch_idx, model_out)
- sr = hparams['audio_sample_rate']
- f0_gt = None
- if sample.get('f0') is not None:
- f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu())
- if self.global_step > 0:
- # save FVAE result
- if hparams['use_post_flow']:
- wav_pred = self.vocoder.spec2wav(model_out['mel_out_fvae'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_fvae_{batch_idx}', wav_pred, self.global_step, sr)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out_fvae'][0],
- f'mel_fvae_{batch_idx}', f0s=f0_gt)
-
- def build_optimizer(self, model):
- if hparams['two_stage'] and hparams['use_post_flow']:
- self.optimizer = torch.optim.AdamW(
- [p for name, p in self.model.named_parameters() if 'post_flow' not in name],
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- self.post_flow_optimizer = torch.optim.AdamW(
- self.model.post_flow.parameters(),
- lr=hparams['post_flow_lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return [self.optimizer, self.post_flow_optimizer]
- else:
- self.optimizer = torch.optim.AdamW(
- self.model.parameters(),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return [self.optimizer]
-
- def build_scheduler(self, optimizer):
- return FastSpeechTask.build_scheduler(self, optimizer[0])
\ No newline at end of file
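The two-stage setup above trains the post-glow with its own optimizer by filtering parameters on their names. A generic sketch of that split; the toy module and learning rates are placeholders:

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 8)
        self.post_flow = nn.Linear(8, 8)   # only trained in the second stage

model = ToyModel()
main_opt = torch.optim.AdamW(
    [p for name, p in model.named_parameters() if 'post_flow' not in name], lr=2e-4)
post_flow_opt = torch.optim.AdamW(model.post_flow.parameters(), lr=1e-3)

# The two parameter groups never overlap:
print(sum(p.numel() for p in main_opt.param_groups[0]['params']),
      sum(p.numel() for p in post_flow_opt.param_groups[0]['params']))
```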
diff --git a/spaces/NN520/AI/src/components/chat.tsx b/spaces/NN520/AI/src/components/chat.tsx
deleted file mode 100644
index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/components/chat.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-'use client'
-
-import { useCallback, useEffect, useMemo, useState } from 'react'
-import { useAtom } from 'jotai'
-import Image from 'next/image'
-import { cn } from '@/lib/utils'
-import { ChatList } from '@/components/chat-list'
-import { ChatPanel } from '@/components/chat-panel'
-import { WelcomeScreen } from '@/components/welcome-screen'
-import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
-import { ToneSelector } from './tone-selector'
-import { ChatHeader } from './chat-header'
-import { ChatSuggestions } from './chat-suggestions'
-import { bingConversationStyleAtom } from '@/state'
-import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
-import StopIcon from '@/assets/images/stop.svg'
-import { useBing } from '@/lib/hooks/use-bing'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { ChatNotification } from './chat-notification'
-import { Settings } from './settings'
-import { ChatHistory } from './chat-history'
-
-export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }
-
-export default function Chat({ className }: ChatProps) {
-
- const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
- const {
- messages,
- sendMessage,
- resetConversation,
- stopGenerating,
- setInput,
- bot,
- input,
- generating,
- isSpeaking,
- uploadImage,
- attachmentList,
- setAttachmentList,
- } = useBing()
-
- useEffect(() => {
- window.scrollTo({
- top: document.body.offsetHeight,
- behavior: 'smooth'
- })
- }, [])
-
- return (
-
"
-
- # Real-time search results preview (based on your title and description)
- query = f"{title} {description}" # Use title and description as the search query
- search_results = list(search(query, num_results=3))
- if search_results:
-        serp_preview += "Real-time Search Preview:"
-        for index, result in enumerate(search_results):
-            serp_preview += f"{result}"
-
- return serp_preview
-
-# Function to generate bar chart
-def generate_bar_chart(labels, values, title, x_label, y_label):
- fig, ax = plt.subplots()
- ax.bar(labels, values)
- ax.set_title(title)
- ax.set_xlabel(x_label)
- ax.set_ylabel(y_label)
- st.pyplot(fig)
-
-
-# Function to generate speedometer chart
-
-def generate_speedometer_chart(value, min_value, max_value, title):
- fig = go.Figure(go.Indicator(
- mode="gauge+number",
- value=value,
- title={'text': title},
- gauge={'axis': {'range': [min_value, max_value]},
- 'bar': {'color': "darkblue"},
- 'bgcolor': "white",
- 'borderwidth': 2,
- 'bordercolor': "gray",
- 'steps': [
- {'range': [min_value, max_value], 'color': 'lightgray'}],
- 'threshold': {'line': {'color': "red", 'width': 4},
- 'thickness': 0.75,
- 'value': value}}))
-
- return fig
-
-def main():
- st.title("SEO Blog Post Optimizer")
- option = st.radio("Select an option", ("Enter blog post content", "Paste web page link"), key="option_selection")
-
- word_count = 0
- readability_score = 0
- keyword_density = 0.0
- seo_score = 0.0
- content = ""
-
- if option == "Enter blog post content":
- content = st.text_area("Enter your blog post content")
- niche = st.text_input("Enter your niche")
- title = st.text_input("Enter the title of your blog post")
- description = st.text_input("Enter the meta description of your blog post")
- target_keywords = st.text_input("Enter the target keywords (separated by commas)").split(",")
- target_keywords = [keyword.strip().lower() for keyword in target_keywords]
- elif option == "Paste web page link":
- target_keywords = st.text_input("Enter the target keywords (separated by commas)").split(",")
- target_keywords = [keyword.strip().lower() for keyword in target_keywords]
- web_link = st.text_input("Paste the web page link")
- if st.button("Scrape Content"):
- content = fetch_webpage_content(web_link)
- if content is None:
- st.error("Failed to scrape the content from the provided link. Please check the link and try again.")
- return
- else:
- return
- else:
- return
-
- suggested_keywords = suggest_keywords(content)
- recommendations = optimize_on_page_elements(content)
-
- if content:
- word_count, readability_score, keyword_density = analyze_content(content, target_keywords)
- seo_score = calculate_seo_score(content, target_keywords)
-
- if option == "Enter blog post content":
- competitor_insights, most_common_keywords = analyze_competitors(niche, target_keywords)
-
- if option == "Enter blog post content":
- serp_preview = generate_serp_preview(title, description, content)
-
- st.sidebar.markdown("
Blog Analysis
", unsafe_allow_html=True)
- st.sidebar.markdown("---")
- st.sidebar.subheader("Keyword Analysis")
- keyword_labels = [keyword for keyword, _ in suggested_keywords]
- keyword_counts = [count for _, count in suggested_keywords]
- keyword_df = pd.DataFrame({"Keywords": keyword_labels, "Counts": keyword_counts})
- st.sidebar.bar_chart(keyword_df.set_index("Keywords"))
- st.sidebar.write(keyword_df)
-
- # Content Analysis
- st.sidebar.markdown("---")
- st.sidebar.subheader("Content Analysis")
- st.sidebar.markdown("#### Word Count")
- if word_count >= 300:
- st.sidebar.markdown(f"Value: {word_count} (Optimal)", unsafe_allow_html=True)
- elif word_count >= 200:
- st.sidebar.markdown(f"Value: {word_count} (Good)", unsafe_allow_html=True)
- else:
- st.sidebar.markdown(f"Value: {word_count} (Low)", unsafe_allow_html=True)
-
- st.sidebar.markdown("#### Readability Score")
- if readability_score >= 70:
- st.sidebar.markdown(f"Value: {readability_score} (Optimal)", unsafe_allow_html=True)
- elif readability_score >= 50:
- st.sidebar.markdown(f"Value: {readability_score} (Good)", unsafe_allow_html=True)
- else:
- st.sidebar.markdown(f"Value: {readability_score} (Low)", unsafe_allow_html=True)
-
- st.sidebar.markdown("#### Keyword Density")
- if 0.03 <= keyword_density <= 0.04:
- st.sidebar.markdown(f"Value: {keyword_density} (Optimal)", unsafe_allow_html=True)
- elif 0.02 <= keyword_density < 0.03:
- st.sidebar.markdown(f"Value: {keyword_density} (Good)", unsafe_allow_html=True)
- else:
- st.sidebar.markdown(f"Value: {keyword_density} (Low)", unsafe_allow_html=True)
-
- st.subheader("SEO Score")
- st.plotly_chart(generate_speedometer_chart(seo_score, 0, 10, "SEO Score"))
-
- st.markdown("---")
-
- # On-Page Optimization
- st.subheader("On-Page Optimization")
- for rec in recommendations:
- st.write(rec)
-
- if option == "Enter blog post content":
- # Recommended Keywords
- st.markdown("### Recommended Keywords")
- st.markdown("Try to add the below keywords to your content to increase the search engine appearance score against given niche. The values beside indicates the frequencies in your competitors blogs")
- for keyword in most_common_keywords[:5]:
- st.markdown(f"- {keyword[0]}")
-
- st.subheader("Competitor Analysis")
- for insight in competitor_insights:
- st.markdown(insight, unsafe_allow_html=True)
-
- st.subheader("SERP Preview")
- st.markdown(serp_preview, unsafe_allow_html=True)
-
- st.write("---")
- st.write("Thank you for using SEO Blog Post Optimizer!")
-
-
-
-
-if __name__ == '__main__':
- main()
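`analyze_content` is referenced above but not included in this hunk; a plausible keyword-density computation of the kind it might perform looks like the following. This is a hypothetical sketch, not the app's actual implementation:

```python
def keyword_density(content: str, target_keywords: list) -> float:
    """Fraction of words in `content` that match any target keyword."""
    words = content.lower().split()
    if not words:
        return 0.0
    hits = sum(words.count(keyword.lower()) for keyword in target_keywords)
    return round(hits / len(words), 4)

print(keyword_density("seo tips for seo friendly blogs", ["seo"]))  # 0.3333
```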
diff --git a/spaces/ShubhamVermaDS/text_to_image/README.md b/spaces/ShubhamVermaDS/text_to_image/README.md
deleted file mode 100644
index 6d5896c88a797af3260dfb3c6dd7e5a309a85c5b..0000000000000000000000000000000000000000
--- a/spaces/ShubhamVermaDS/text_to_image/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Text To Image
-emoji: 🔥
-colorFrom: red
-colorTo: gray
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Shypanties22/FantasyMe/train_dreambooth.py b/spaces/Shypanties22/FantasyMe/train_dreambooth.py
deleted file mode 100644
index 35b568425de57d117a1ad001ad7beba79378e638..0000000000000000000000000000000000000000
--- a/spaces/Shypanties22/FantasyMe/train_dreambooth.py
+++ /dev/null
@@ -1,890 +0,0 @@
-import argparse
-import itertools
-import math
-import os
-from pathlib import Path
-from typing import Optional
-import subprocess
-import sys
-import gc
-import random
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-from torch.utils.data import Dataset
-
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
-from diffusers.optimization import get_scheduler
-from diffusers.utils.import_utils import is_xformers_available
-from huggingface_hub import HfFolder, Repository, whoami
-from PIL import Image
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import CLIPTextModel, CLIPTokenizer
-
-
-logger = get_logger(__name__)
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- #required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--instance_data_dir",
- type=str,
- default=None,
- #required=True,
- help="A folder containing the training data of instance images.",
- )
- parser.add_argument(
- "--class_data_dir",
- type=str,
- default=None,
- #required=False,
- help="A folder containing the training data of class images.",
- )
- parser.add_argument(
- "--instance_prompt",
- type=str,
- default=None,
- help="The prompt with identifier specifying the instance",
- )
- parser.add_argument(
- "--class_prompt",
- type=str,
- default="",
- help="The prompt to specify images in the same class as provided instance images.",
- )
- parser.add_argument(
- "--with_prior_preservation",
- default=False,
- action="store_true",
- help="Flag to add prior preservation loss.",
- )
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
- parser.add_argument(
- "--num_class_images",
- type=int,
- default=100,
- help=(
- "Minimal class images for prior preservation loss. If not have enough images, additional images will be"
- " sampled with class_prompt."
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=512,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
- )
- parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
- parser.add_argument(
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument(
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
- )
- parser.add_argument("--num_train_epochs", type=int, default=1)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-6,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default="no",
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose"
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
- "and an Nvidia Ampere GPU."
- ),
- )
-
- parser.add_argument(
- "--save_n_steps",
- type=int,
- default=1,
- help=("Save the model every n global_steps"),
- )
-
-
- parser.add_argument(
- "--save_starting_step",
- type=int,
- default=1,
- help=("The step from which it starts saving intermediary checkpoints"),
- )
-
- parser.add_argument(
- "--stop_text_encoder_training",
- type=int,
- default=1000000,
- help=("The step at which the text_encoder is no longer trained"),
- )
-
-
- parser.add_argument(
- "--image_captions_filename",
- action="store_true",
- help="Get captions from filename",
- )
-
-
- parser.add_argument(
- "--dump_only_text_encoder",
- action="store_true",
- default=False,
- help="Dump only text encoder",
- )
-
- parser.add_argument(
- "--train_only_unet",
- action="store_true",
- default=False,
- help="Train only the unet",
- )
-
- parser.add_argument(
- "--cache_latents",
- action="store_true",
- default=False,
- help="Train only the unet",
- )
-
- parser.add_argument(
- "--Session_dir",
- type=str,
- default="",
- help="Current session directory",
- )
-
-
-
-
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
-
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- #if args.instance_data_dir is None:
- # raise ValueError("You must specify a train data directory.")
-
- #if args.with_prior_preservation:
- # if args.class_data_dir is None:
- # raise ValueError("You must specify a data directory for class images.")
- # if args.class_prompt is None:
- # raise ValueError("You must specify prompt for class images.")
-
- return args
-
-
-class DreamBoothDataset(Dataset):
- """
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
-    It pre-processes the images and tokenizes the prompts.
- """
-
- def __init__(
- self,
- instance_data_root,
- instance_prompt,
- tokenizer,
- args,
- class_data_root=None,
- class_prompt=None,
- size=512,
- center_crop=False,
- ):
- self.size = size
- self.center_crop = center_crop
- self.tokenizer = tokenizer
- self.image_captions_filename = None
-
- self.instance_data_root = Path(instance_data_root)
- if not self.instance_data_root.exists():
- raise ValueError("Instance images root doesn't exists.")
-
- self.instance_images_path = list(Path(instance_data_root).iterdir())
- self.num_instance_images = len(self.instance_images_path)
- self.instance_prompt = instance_prompt
- self._length = self.num_instance_images
-
- if args.image_captions_filename:
- self.image_captions_filename = True
-
- if class_data_root is not None:
- self.class_data_root = Path(class_data_root)
- self.class_data_root.mkdir(parents=True, exist_ok=True)
- self.class_images_path = list(self.class_data_root.iterdir())
- random.shuffle(self.class_images_path)
- self.num_class_images = len(self.class_images_path)
- self._length = max(self.num_class_images, self.num_instance_images)
- self.class_prompt = class_prompt
- else:
- self.class_data_root = None
-
- self.image_transforms = transforms.Compose(
- [
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, index):
- example = {}
- path = self.instance_images_path[index % self.num_instance_images]
- instance_image = Image.open(path)
- if not instance_image.mode == "RGB":
- instance_image = instance_image.convert("RGB")
-
- instance_prompt = self.instance_prompt
-
- if self.image_captions_filename:
- filename = Path(path).stem
- pt=''.join([i for i in filename if not i.isdigit()])
- pt=pt.replace("_"," ")
- pt=pt.replace("(","")
- pt=pt.replace(")","")
- pt=pt.replace("-","")
- instance_prompt = pt
-            sys.stdout.write("\033[0;32m" + instance_prompt + "\033[0m")
- sys.stdout.flush()
-
-
- example["instance_images"] = self.image_transforms(instance_image)
- example["instance_prompt_ids"] = self.tokenizer(
- instance_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- if self.class_data_root:
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
- if not class_image.mode == "RGB":
- class_image = class_image.convert("RGB")
- example["class_images"] = self.image_transforms(class_image)
- example["class_prompt_ids"] = self.tokenizer(
- self.class_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- return example
-
-
-
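When `--image_captions_filename` is set, `DreamBoothDataset` above derives the caption from the image filename by stripping digits, parentheses and hyphens and turning underscores into spaces. A standalone sketch of that transformation on a made-up filename:

```python
from pathlib import Path

filename = Path("instance_images/photo_of_sks_person (3).jpg").stem
caption = ''.join(ch for ch in filename if not ch.isdigit())
caption = caption.replace("_", " ").replace("(", "").replace(")", "").replace("-", "")
print(repr(caption))   # 'photo of sks person '  (note the leftover trailing space)
```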
-class PromptDataset(Dataset):
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
-
- def __init__(self, prompt, num_samples):
- self.prompt = prompt
- self.num_samples = num_samples
-
- def __len__(self):
- return self.num_samples
-
- def __getitem__(self, index):
- example = {}
- example["prompt"] = self.prompt
- example["index"] = index
- return example
-
-class LatentsDataset(Dataset):
- def __init__(self, latents_cache, text_encoder_cache):
- self.latents_cache = latents_cache
- self.text_encoder_cache = text_encoder_cache
-
- def __len__(self):
- return len(self.latents_cache)
-
- def __getitem__(self, index):
- return self.latents_cache[index], self.text_encoder_cache[index]
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
- """
- Start from a copy of the base ``starting_dict`` and add the key/value pairs from ``updater_dict``,
- overwriting colliding keys with the updater's values.
-
- Note: ``d = {**d1, **d2}`` resolves collisions the same way -- the right-hand dict wins (see the sketch after ``merge_args`` below).
-
- :param starting_dict:
- :param updater_dict:
- :return:
- """
- new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
- new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict
- return new_dict
-
-def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
- """
-
- ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
- :param args1:
- :param args2:
- :return:
- """
- # - the merged args
- # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}.
- merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
- args = argparse.Namespace(**merged_key_values_for_namespace)
- return args
-
-def run_training(args_imported):
- args_default = parse_args()
- args = merge_args(args_default, args_imported)
- print(args)
- logging_dir = Path(args.output_dir, args.logging_dir)
- i=args.save_starting_step
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with="tensorboard",
- logging_dir=logging_dir,
- )
-
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
- # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
- # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
- if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1:
- raise ValueError(
- "Gradient accumulation is not supported when training the text encoder in distributed training. "
- "Please set gradient_accumulation_steps to 1. This feature will be supported in the future."
- )
-
- if args.seed is not None:
- set_seed(args.seed)
-
- if args.with_prior_preservation:
- class_images_dir = Path(args.class_data_dir)
- if not class_images_dir.exists():
- class_images_dir.mkdir(parents=True)
- cur_class_images = len(list(class_images_dir.iterdir()))
-
- if cur_class_images < args.num_class_images:
- torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path, torch_dtype=torch_dtype
- )
- pipeline.set_progress_bar_config(disable=True)
-
- num_new_images = args.num_class_images - cur_class_images
- logger.info(f"Number of class images to sample: {num_new_images}.")
-
- sample_dataset = PromptDataset(args.class_prompt, num_new_images)
- sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
-
- sample_dataloader = accelerator.prepare(sample_dataloader)
- pipeline.to(accelerator.device)
-
- for example in tqdm(
- sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
- ):
- with torch.autocast("cuda"):
- images = pipeline(example["prompt"]).images
-
- for i, image in enumerate(images):
- image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg")
-
- del pipeline
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- repo = Repository(args.output_dir, clone_from=repo_name)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- # Load the tokenizer
- if args.tokenizer_name:
- tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
- elif args.pretrained_model_name_or_path:
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-
- # Load models and create wrapper for stable diffusion
- if args.train_only_unet:
- if os.path.exists(str(args.output_dir+"/text_encoder_trained")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained")
- elif os.path.exists(str(args.output_dir+"/text_encoder")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
- unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
- if is_xformers_available():
- try:
- print("Enabling memory efficient attention with xformers...")
- unet.enable_xformers_memory_efficient_attention()
- except Exception as e:
- logger.warning(
- f"Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: {e}"
- )
-
- vae.requires_grad_(False)
- if not args.train_text_encoder:
- text_encoder.requires_grad_(False)
-
- if args.gradient_checkpointing:
- unet.enable_gradient_checkpointing()
- if args.train_text_encoder:
- text_encoder.gradient_checkpointing_enable()
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
- if args.use_8bit_adam:
- try:
- import bitsandbytes as bnb
- except ImportError:
- raise ImportError(
- "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
- )
-
- optimizer_class = bnb.optim.AdamW8bit
- else:
- optimizer_class = torch.optim.AdamW
-
- params_to_optimize = (
- itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters()
- )
- optimizer = optimizer_class(
- params_to_optimize,
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler")
-
- train_dataset = DreamBoothDataset(
- instance_data_root=args.instance_data_dir,
- instance_prompt=args.instance_prompt,
- class_data_root=args.class_data_dir if args.with_prior_preservation else None,
- class_prompt=args.class_prompt,
- tokenizer=tokenizer,
- size=args.resolution,
- center_crop=args.center_crop,
- args=args,
- )
-
- def collate_fn(examples):
- input_ids = [example["instance_prompt_ids"] for example in examples]
- pixel_values = [example["instance_images"] for example in examples]
-
- # Concat class and instance examples for prior preservation.
- # We do this to avoid doing two forward passes.
- if args.with_prior_preservation:
- input_ids += [example["class_prompt_ids"] for example in examples]
- pixel_values += [example["class_images"] for example in examples]
-
- pixel_values = torch.stack(pixel_values)
- pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
-
- input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids
-
- batch = {
- "input_ids": input_ids,
- "pixel_values": pixel_values,
- }
- return batch
-
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- if args.train_text_encoder:
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler
- )
- else:
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, optimizer, train_dataloader, lr_scheduler
- )
-
- weight_dtype = torch.float32
- if args.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif args.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move text_encoder and vae to GPU.
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- vae.to(accelerator.device, dtype=weight_dtype)
- if not args.train_text_encoder:
- text_encoder.to(accelerator.device, dtype=weight_dtype)
-
-
- if args.cache_latents:
- latents_cache = []
- text_encoder_cache = []
- for batch in tqdm(train_dataloader, desc="Caching latents"):
- with torch.no_grad():
- batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype)
- batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True)
- latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
- if args.train_text_encoder:
- text_encoder_cache.append(batch["input_ids"])
- else:
- text_encoder_cache.append(text_encoder(batch["input_ids"])[0])
- train_dataset = LatentsDataset(latents_cache, text_encoder_cache)
- train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True)
-
- del vae
- #if not args.train_text_encoder:
- # del text_encoder
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initializes automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("dreambooth", config=vars(args))
-
- def bar(prg):
- br='|'+'█' * prg + ' ' * (25-prg)+'|'
- return br
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num batches each epoch = {len(train_dataloader)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- global_step = 0
-
- for epoch in range(args.num_train_epochs):
- unet.train()
- if args.train_text_encoder:
- text_encoder.train()
- for step, batch in enumerate(train_dataloader):
- with accelerator.accumulate(unet):
- # Convert images to latent space
- with torch.no_grad():
- if args.cache_latents:
- latents_dist = batch[0][0]
- else:
- latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist
- latents = latents_dist.sample() * 0.18215
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # Get the text embedding for conditioning
- if(args.cache_latents):
- if args.train_text_encoder:
- encoder_hidden_states = text_encoder(batch[0][1])[0]
- else:
- encoder_hidden_states = batch[0][1]
- else:
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
- # Predict the noise residual
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- if args.with_prior_preservation:
- # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
- target, target_prior = torch.chunk(target, 2, dim=0)
-
- # Compute instance loss
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
-
- # Compute prior loss
- prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
- # Add the prior loss to the instance loss.
- loss = loss + args.prior_loss_weight * prior_loss
- else:
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- accelerator.backward(loss)
- if accelerator.sync_gradients:
- params_to_clip = (
- itertools.chain(unet.parameters(), text_encoder.parameters())
- if args.train_text_encoder
- else unet.parameters()
- )
- accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- fll=round((global_step*100)/args.max_train_steps)
- fll=round(fll/4)
- pr=bar(fll)
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- progress_bar.set_description_str("Progress:"+pr)
- accelerator.log(logs, step=global_step)
-
- if global_step >= args.max_train_steps:
- break
-
- if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30:
- if accelerator.is_main_process:
- print(" [0;32m" +" Freezing the text_encoder ..."+" [0m")
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if os.path.exists(frz_dir):
- subprocess.call('rm -r '+ frz_dir, shell=True)
- os.mkdir(frz_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(frz_dir)
-
- if args.save_n_steps >= 200:
- if global_step < args.max_train_steps and global_step+1==i:
- ckpt_name = "_step_" + str(global_step+1)
- save_dir = Path(args.output_dir+ckpt_name)
- save_dir=str(save_dir)
- save_dir=save_dir.replace(" ", "_")
- if not os.path.exists(save_dir):
- os.mkdir(save_dir)
- inst=save_dir[16:]
- inst=inst.replace(" ", "_")
- print(" [1;32mSAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt")
- # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(save_dir)
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True)
- subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True)
- chkpth=args.Session_dir+"/"+inst+".ckpt"
- subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True)
- subprocess.call('rm -r '+ save_dir, shell=True)
- i=i+args.save_n_steps
-
- accelerator.wait_for_everyone()
-
- # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- if args.dump_only_text_encoder:
- txt_dir=args.output_dir + "/text_encoder_trained"
- if not os.path.exists(txt_dir):
- os.mkdir(txt_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(txt_dir)
-
- elif args.train_only_unet:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(args.output_dir)
- txt_dir=args.output_dir + "/text_encoder_trained"
- subprocess.call('rm -r '+txt_dir, shell=True)
-
- else:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- frz_dir=args.output_dir + "/text_encoder_frozen"
- pipeline.save_pretrained(args.output_dir)
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True)
- subprocess.call('rm -r '+ frz_dir, shell=True)
-
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
-
- accelerator.end_training()
- del pipeline
- torch.cuda.empty_cache()
- gc.collect()
-if __name__ == "__main__":
- pass
- #main()
-
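The prior-preservation branch of the training loop above relies on the collate function stacking instance examples first and class examples second, so `torch.chunk(..., 2, dim=0)` splits the prediction back into the two halves. A minimal, standalone sketch of that loss computation, with random tensors standing in for real model outputs and latents:

```python
import torch
import torch.nn.functional as F

prior_loss_weight = 1.0
# Batch layout produced by the collate function: [instance examples | class examples]
model_pred = torch.randn(4, 4, 64, 64)  # predictions for 2 instance + 2 class latents
target = torch.randn(4, 4, 64, 64)

model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
target, target_prior = torch.chunk(target, 2, dim=0)

# Per-example MSE on the instance half, mean MSE on the prior (class) half.
instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
loss = instance_loss + prior_loss_weight * prior_loss
print(loss.item())
```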
diff --git a/spaces/SoUmNerd/FlowiseAI/README.md b/spaces/SoUmNerd/FlowiseAI/README.md
deleted file mode 100644
index 4ff23ec34f32333283368c0fb27f40fa9ba49343..0000000000000000000000000000000000000000
--- a/spaces/SoUmNerd/FlowiseAI/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: FlowiseAI
-emoji: 🐠
-colorFrom: blue
-colorTo: gray
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/completer.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/completer.py
deleted file mode 100644
index 01af0bc88227da4af1ec1b6593e5abaee050d85f..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/completer.py
+++ /dev/null
@@ -1,3346 +0,0 @@
-"""Completion for IPython.
-
-This module started as fork of the rlcompleter module in the Python standard
-library. The original enhancements made to rlcompleter have been sent
-upstream and were accepted as of Python 2.3,
-
-This module now supports a wide variety of completion mechanisms, both for
-normal classic Python code and for IPython-specific syntax such as magics.
-
-Latex and Unicode completion
-============================
-
-IPython and compatible frontends not only can complete your code, but can help
-you to input a wide range of characters. In particular we allow you to insert
-a unicode character using the tab completion mechanism.
-
-Forward latex/unicode completion
---------------------------------
-
-Forward completion allows you to easily type a unicode character using its latex
-name or its long unicode description. To do so, type a backslash followed by the
-relevant name and press tab:
-
-
-Using latex completion:
-
-.. code::
-
- \\alpha
- α
-
-or using unicode completion:
-
-
-.. code::
-
- \\GREEK SMALL LETTER ALPHA
- α
-
-
-Only valid Python identifiers will complete. Combining characters (like arrows or
-dots) are also available; unlike in latex, they need to be put after their
-counterpart, that is to say, ``F\\\\vec`` is correct, not ``\\\\vecF``.
-
-Some browsers are known to display combining characters incorrectly.
-
-Backward latex completion
--------------------------
-
-It is sometimes challenging to know how to type a character. If you are using
-IPython or any compatible frontend, you can prepend a backslash to the character
-and press :kbd:`Tab` to expand it to its latex form.
-
-.. code::
-
- \\α
- \\alpha
-
-
-Both forward and backward completions can be deactivated by setting the
-:std:configtrait:`Completer.backslash_combining_completions` option to
-``False``.
-
-
-Experimental
-============
-
-Starting with IPython 6.0, this module can make use of the Jedi library to
-generate completions both using static analysis of the code, and dynamically
-inspecting multiple namespaces. Jedi is an autocompletion and static analysis
-library for Python. The APIs attached to this new mechanism are unstable and will
-raise unless used in a :any:`provisionalcompleter` context manager.
-
-You will find that the following are experimental:
-
- - :any:`provisionalcompleter`
- - :any:`IPCompleter.completions`
- - :any:`Completion`
- - :any:`rectify_completions`
-
-.. note::
-
- better name for :any:`rectify_completions` ?
-
-We welcome any feedback on these new API, and we also encourage you to try this
-module in debug mode (start IPython with ``--Completer.debug=True``) in order
-to have extra logging information if :any:`jedi` is crashing, or if current
-IPython completer pending deprecations are returning results not yet handled
-by :any:`jedi`
-
-Using Jedi for tab completion allows snippets like the following to work without
-having to execute any code:
-
- >>> myvar = ['hello', 42]
- ... myvar[1].bi
-
-Tab completion will be able to infer that ``myvar[1]`` is a number without
-executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`
-option.
-
-Be sure to update :any:`jedi` to the latest stable version or to try the
-current development version to get better completions.
-
-Matchers
-========
-
-All completions routines are implemented using unified *Matchers* API.
-The matchers API is provisional and subject to change without notice.
-
-The built-in matchers include:
-
-- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
-- :any:`IPCompleter.magic_matcher`: completions for magics,
-- :any:`IPCompleter.unicode_name_matcher`,
- :any:`IPCompleter.fwd_unicode_matcher`
- and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
-- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
-- :any:`IPCompleter.file_matcher`: paths to files and directories,
-- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
-- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
-- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
-- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
- implementation in :any:`InteractiveShell` which uses IPython hooks system
- (`complete_command`) with string dispatch (including regular expressions).
- Unlike other matchers, ``custom_completer_matcher`` will not suppress
- Jedi results to match behaviour in earlier IPython versions.
-
-Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
-
-Matcher API
------------
-
-Simplifying some details, the ``Matcher`` interface can be described as
-
-.. code-block::
-
- MatcherAPIv1 = Callable[[str], list[str]]
- MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
-
- Matcher = MatcherAPIv1 | MatcherAPIv2
-
-The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
-and remains supported as a simplest way for generating completions. This is also
-currently the only API supported by the IPython hooks system `complete_command`.
-
-To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.
-More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,
-and requires a literal ``2`` for v2 Matchers.
-
-Once the API stabilises future versions may relax the requirement for specifying
-``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
-please do not rely on the presence of ``matcher_api_version`` for any purposes.
-
-Suppression of competing matchers
----------------------------------
-
-By default results from all matchers are combined, in the order determined by
-their priority. Matchers can request to suppress results from subsequent
-matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
-
-When multiple matchers simultaneously request suppression, the results from
-the matcher with the higher priority will be returned.
-
-Sometimes it is desirable to suppress most but not all other matchers;
-this can be achieved by adding a set of identifiers of matchers which
-should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
-
-The suppression behaviour is user-configurable via
-:std:configtrait:`IPCompleter.suppress_competing_matchers`.
-"""
-
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-#
-# Some of this code originated from rlcompleter in the Python standard library
-# Copyright (C) 2001 Python Software Foundation, www.python.org
-
-from __future__ import annotations
-import builtins as builtin_mod
-import enum
-import glob
-import inspect
-import itertools
-import keyword
-import os
-import re
-import string
-import sys
-import tokenize
-import time
-import unicodedata
-import uuid
-import warnings
-from ast import literal_eval
-from collections import defaultdict
-from contextlib import contextmanager
-from dataclasses import dataclass
-from functools import cached_property, partial
-from types import SimpleNamespace
-from typing import (
- Iterable,
- Iterator,
- List,
- Tuple,
- Union,
- Any,
- Sequence,
- Dict,
- Optional,
- TYPE_CHECKING,
- Set,
- Sized,
- TypeVar,
- Literal,
-)
-
-from IPython.core.guarded_eval import guarded_eval, EvaluationContext
-from IPython.core.error import TryNext
-from IPython.core.inputtransformer2 import ESC_MAGIC
-from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
-from IPython.core.oinspect import InspectColors
-from IPython.testing.skipdoctest import skip_doctest
-from IPython.utils import generics
-from IPython.utils.decorators import sphinx_options
-from IPython.utils.dir2 import dir2, get_real_method
-from IPython.utils.docs import GENERATING_DOCUMENTATION
-from IPython.utils.path import ensure_dir_exists
-from IPython.utils.process import arg_split
-from traitlets import (
- Bool,
- Enum,
- Int,
- List as ListTrait,
- Unicode,
- Dict as DictTrait,
- Union as UnionTrait,
- observe,
-)
-from traitlets.config.configurable import Configurable
-
-import __main__
-
-# skip module docstests
-__skip_doctest__ = True
-
-
-try:
- import jedi
- jedi.settings.case_insensitive_completion = False
- import jedi.api.helpers
- import jedi.api.classes
- JEDI_INSTALLED = True
-except ImportError:
- JEDI_INSTALLED = False
-
-
-if TYPE_CHECKING or GENERATING_DOCUMENTATION and sys.version_info >= (3, 11):
- from typing import cast
- from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
-else:
- from typing import Generic
-
- def cast(type_, obj):
- """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
- return obj
-
- # do not require on runtime
- NotRequired = Tuple # requires Python >=3.11
- TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
- Protocol = object # requires Python >=3.8
- TypeAlias = Any # requires Python >=3.10
- TypeGuard = Generic # requires Python >=3.10
-if GENERATING_DOCUMENTATION:
- from typing import TypedDict
-
-# -----------------------------------------------------------------------------
-# Globals
-#-----------------------------------------------------------------------------
-
-# Ranges where we have most of the valid unicode names. We could be finer
-# grained, but is it worth it for performance? While unicode has characters in the
-# range 0..0x110000, we seem to have names for only about 10% of those (131808 as I
-# write this). With the ranges below we cover them all, with a density of ~67%;
-# the biggest next gap we could consider only adds about 1% density, and there are 600
-# gaps that would need hard coding.
-_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
-
-# Public API
-__all__ = ["Completer", "IPCompleter"]
-
-if sys.platform == 'win32':
- PROTECTABLES = ' '
-else:
- PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
-
-# Protect against returning an enormous number of completions which the frontend
-# may have trouble processing.
-MATCHES_LIMIT = 500
-
-# Completion type reported when no type can be inferred.
-_UNKNOWN_TYPE = ""
-
-# sentinel value to signal lack of a match
-not_found = object()
-
-class ProvisionalCompleterWarning(FutureWarning):
- """
- Exception raised by an experimental feature in this module.
-
- Wrap code in :any:`provisionalcompleter` context manager if you
- are certain you want to use an unstable feature.
- """
- pass
-
-warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
-
-
-@skip_doctest
-@contextmanager
-def provisionalcompleter(action='ignore'):
- """
- This context manager has to be used in any place where unstable completer
- behavior and API may be called.
-
- >>> with provisionalcompleter():
- ... completer.do_experimental_things() # works
-
- >>> completer.do_experimental_things() # raises.
-
- .. note::
-
- Unstable
-
- By using this context manager you agree that the API in use may change
- without warning, and that you won't complain if they do so.
-
- You also understand that, if the API is not to your liking, you should report
- a bug to explain your use case upstream.
-
- We'll be happy to get your feedback, feature requests, and improvements on
- any of the unstable APIs!
- """
- with warnings.catch_warnings():
- warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
- yield
-
-
-def has_open_quotes(s):
- """Return whether a string has open quotes.
-
- This simply counts whether the number of quote characters of either type in
- the string is odd.
-
- Returns
- -------
- If there is an open quote, the quote character is returned. Else, return
- False.
- """
- # We check " first, then ', so complex cases with nested quotes will get
- # the " to take precedence.
- if s.count('"') % 2:
- return '"'
- elif s.count("'") % 2:
- return "'"
- else:
- return False
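A quick usage sketch of `has_open_quotes` as defined above (double quotes take precedence when both counts are odd):

```python
print(has_open_quotes('print("hello'))  # '"'  -- one unclosed double quote
print(has_open_quotes("it's fine"))     # "'"  -- odd number of single quotes
print(has_open_quotes("done()"))        # False -- nothing open
```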
-
-
-def protect_filename(s, protectables=PROTECTABLES):
- """Escape a string to protect certain characters."""
- if set(s) & set(protectables):
- if sys.platform == "win32":
- return '"' + s + '"'
- else:
- return "".join(("\\" + c if c in protectables else c) for c in s)
- else:
- return s
-
-
-def expand_user(path:str) -> Tuple[str, bool, str]:
- """Expand ``~``-style usernames in strings.
-
- This is similar to :func:`os.path.expanduser`, but it computes and returns
- extra information that will be useful if the input was being used in
- computing completions, and you wish to return the completions with the
- original '~' instead of its expanded value.
-
- Parameters
- ----------
- path : str
- String to be expanded. If no ~ is present, the output is the same as the
- input.
-
- Returns
- -------
- newpath : str
- Result of ~ expansion in the input path.
- tilde_expand : bool
- Whether any expansion was performed or not.
- tilde_val : str
- The value that ~ was replaced with.
- """
- # Default values
- tilde_expand = False
- tilde_val = ''
- newpath = path
-
- if path.startswith('~'):
- tilde_expand = True
- rest = len(path)-1
- newpath = os.path.expanduser(path)
- if rest:
- tilde_val = newpath[:-rest]
- else:
- tilde_val = newpath
-
- return newpath, tilde_expand, tilde_val
-
-
-def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
- """Does the opposite of expand_user, with its outputs.
- """
- if tilde_expand:
- return path.replace(tilde_val, '~')
- else:
- return path
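A short usage sketch of the two path helpers above: `expand_user` returns enough information for `compress_user` to restore the original `~` spelling after completion (the home directory shown in the comments is only an example):

```python
newpath, tilde_expand, tilde_val = expand_user("~/projects/ipy")
# e.g. newpath == "/home/alice/projects/ipy", tilde_expand is True, tilde_val == "/home/alice"
completed = newpath + "thon"
print(compress_user(completed, tilde_expand, tilde_val))  # ~/projects/ipython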
-
-
-def completions_sorting_key(word):
- """key for sorting completions
-
- This does several things:
-
- - Demote any completions starting with underscores to the end
- - Insert any %magic and %%cellmagic completions in the alphabetical order
- by their name
- """
- prio1, prio2 = 0, 0
-
- if word.startswith('__'):
- prio1 = 2
- elif word.startswith('_'):
- prio1 = 1
-
- if word.endswith('='):
- prio1 = -1
-
- if word.startswith('%%'):
- # If there's another % in there, this is something else, so leave it alone
- if not "%" in word[2:]:
- word = word[2:]
- prio2 = 2
- elif word.startswith('%'):
- if not "%" in word[1:]:
- word = word[1:]
- prio2 = 1
-
- return prio1, word, prio2
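For illustration, sorting a few candidates with the key defined above keeps regular names first, pushes underscore and dunder names to the end, and slots magics alphabetically by their bare name:

```python
words = ["_private", "__dunder__", "zeta", "%alpha_magic", "beta"]
print(sorted(words, key=completions_sorting_key))
# ['%alpha_magic', 'beta', 'zeta', '_private', '__dunder__']
```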
-
-
-class _FakeJediCompletion:
- """
- This is a workaround to communicate to the UI that Jedi has crashed and to
- report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
-
- Added in IPython 6.0 so should likely be removed for 7.0
-
- """
-
- def __init__(self, name):
-
- self.name = name
- self.complete = name
- self.type = 'crashed'
- self.name_with_symbols = name
- self.signature = ""
- self._origin = "fake"
- self.text = "crashed"
-
- def __repr__(self):
- return '<Fake completion object jedi has crashed>'
-
-
-_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
-
-
-class Completion:
- """
- Completion object used and returned by IPython completers.
-
- .. warning::
-
- Unstable
-
- This function is unstable, API may change without warning.
- It will also raise unless use in proper context manager.
-
- This acts as a middle-ground :any:`Completion` object between the
- :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
- object. While Jedi needs a lot of information about the evaluator and how the
- code should be run/inspected, Prompt Toolkit (and other frontends) mostly
- needs user-facing information:
-
- - Which range should be replaced by what.
- - Some metadata (like the completion type), or meta information to be displayed to
- the user.
-
- For debugging purpose we can also store the origin of the completion (``jedi``,
- ``IPython.python_matches``, ``IPython.magics_matches``...).
- """
-
- __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
-
- def __init__(
- self,
- start: int,
- end: int,
- text: str,
- *,
- type: Optional[str] = None,
- _origin="",
- signature="",
- ) -> None:
- warnings.warn(
- "``Completion`` is a provisional API (as of IPython 6.0). "
- "It may change without warnings. "
- "Use in corresponding context manager.",
- category=ProvisionalCompleterWarning,
- stacklevel=2,
- )
-
- self.start = start
- self.end = end
- self.text = text
- self.type = type
- self.signature = signature
- self._origin = _origin
-
- def __repr__(self):
- return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
- (self.start, self.end, self.text, self.type or '?', self.signature or '?')
-
- def __eq__(self, other) -> bool:
- """
- Equality and hash do not hash the type (as some completers may not be
- able to infer the type), but are used to (partially) de-duplicate
- completions.
-
- Completely de-duplicating completions is a bit trickier than just
- comparing, as it depends on the surrounding text, which Completions are not
- aware of.
- """
- return self.start == other.start and \
- self.end == other.end and \
- self.text == other.text
-
- def __hash__(self):
- return hash((self.start, self.end, self.text))
-
-
-class SimpleCompletion:
- """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
-
- .. warning::
-
- Provisional
-
- This class is used to describe the currently supported attributes of
- simple completion items, and any additional implementation details
- should not be relied on. Additional attributes may be included in
- future versions, and the meaning of ``text`` disambiguated from its current
- dual meaning of "text to insert" and "text to use as a label".
- """
-
- __slots__ = ["text", "type"]
-
- def __init__(self, text: str, *, type: Optional[str] = None):
- self.text = text
- self.type = type
-
- def __repr__(self):
- return f""
-
-
-class _MatcherResultBase(TypedDict):
- """Definition of dictionary to be returned by new-style Matcher (API v2)."""
-
- #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
- matched_fragment: NotRequired[str]
-
- #: Whether to suppress results from all other matchers (True), some
- #: matchers (set of identifiers) or none (False); default is False.
- suppress: NotRequired[Union[bool, Set[str]]]
-
- #: Identifiers of matchers which should NOT be suppressed when this matcher
- #: requests to suppress all other matchers; defaults to an empty set.
- do_not_suppress: NotRequired[Set[str]]
-
- #: Are completions already ordered and should be left as-is? default is False.
- ordered: NotRequired[bool]
-
-
-@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
-class SimpleMatcherResult(_MatcherResultBase, TypedDict):
- """Result of new-style completion matcher."""
-
- # note: TypedDict is added again to the inheritance chain
- # in order to get __orig_bases__ for documentation
-
- #: List of candidate completions
- completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
-
-
-class _JediMatcherResult(_MatcherResultBase):
- """Matching result returned by Jedi (will be processed differently)"""
-
- #: list of candidate completions
- completions: Iterator[_JediCompletionLike]
-
-
-AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
-AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
-
-
-@dataclass
-class CompletionContext:
- """Completion context provided as an argument to matchers in the Matcher API v2."""
-
- # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
- # which was not explicitly visible as an argument of the matcher, making any refactor
- # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
- # from the completer, and make substituting them in sub-classes easier.
-
- #: Relevant fragment of code directly preceding the cursor.
- #: The extraction of token is implemented via splitter heuristic
- #: (following readline behaviour for legacy reasons), which is user configurable
- #: (by switching the greedy mode).
- token: str
-
- #: The full available content of the editor or buffer
- full_text: str
-
- #: Cursor position in the line (the same for ``full_text`` and ``text``).
- cursor_position: int
-
- #: Cursor line in ``full_text``.
- cursor_line: int
-
- #: The maximum number of completions that will be used downstream.
- #: Matchers can use this information to abort early.
- #: The built-in Jedi matcher is currently excepted from this limit.
- # If not given, return all possible completions.
- limit: Optional[int]
-
- @cached_property
- def text_until_cursor(self) -> str:
- return self.line_with_cursor[: self.cursor_position]
-
- @cached_property
- def line_with_cursor(self) -> str:
- return self.full_text.split("\n")[self.cursor_line]
-
-
-#: Matcher results for API v2.
-MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
-
-
-class _MatcherAPIv1Base(Protocol):
- def __call__(self, text: str) -> List[str]:
- """Call signature."""
- ...
-
- #: Used to construct the default matcher identifier
- __qualname__: str
-
-
-class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
- #: API version
- matcher_api_version: Optional[Literal[1]]
-
- def __call__(self, text: str) -> List[str]:
- """Call signature."""
- ...
-
-
-#: Protocol describing Matcher API v1.
-MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
-
-
-class MatcherAPIv2(Protocol):
- """Protocol describing Matcher API v2."""
-
- #: API version
- matcher_api_version: Literal[2] = 2
-
- def __call__(self, context: CompletionContext) -> MatcherResult:
- """Call signature."""
- ...
-
- #: Used to construct the default matcher identifier
- __qualname__: str
-
-
-Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
-
-
-def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
- api_version = _get_matcher_api_version(matcher)
- return api_version == 1
-
-
-def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
- api_version = _get_matcher_api_version(matcher)
- return api_version == 2
-
-
-def _is_sizable(value: Any) -> TypeGuard[Sized]:
- """Determines whether objects is sizable"""
- return hasattr(value, "__len__")
-
-
-def _is_iterator(value: Any) -> TypeGuard[Iterator]:
- """Determines whether objects is sizable"""
- return hasattr(value, "__next__")
-
-
-def has_any_completions(result: MatcherResult) -> bool:
- """Check if any result includes any completions."""
- completions = result["completions"]
- if _is_sizable(completions):
- return len(completions) != 0
- if _is_iterator(completions):
- try:
- old_iterator = completions
- first = next(old_iterator)
- result["completions"] = cast(
- Iterator[SimpleCompletion],
- itertools.chain([first], old_iterator),
- )
- return True
- except StopIteration:
- return False
- raise ValueError(
- "Completions returned by matcher need to be an Iterator or a Sizable"
- )
-
-
-def completion_matcher(
- *,
- priority: Optional[float] = None,
- identifier: Optional[str] = None,
- api_version: int = 1,
-):
- """Adds attributes describing the matcher.
-
- Parameters
- ----------
- priority : Optional[float]
- The priority of the matcher, determines the order of execution of matchers.
- Higher priority means that the matcher will be executed first. Defaults to 0.
- identifier : Optional[str]
- identifier of the matcher allowing users to modify the behaviour via traitlets,
- and also used for debugging (will be passed as ``origin`` with the completions).
-
- Defaults to the matcher function's ``__qualname__`` (for example,
- ``IPCompleter.file_matcher`` for the built-in matcher defined
- as a ``file_matcher`` method of the ``IPCompleter`` class).
- api_version: Optional[int]
- version of the Matcher API used by this matcher.
- Currently supported values are 1 and 2.
- Defaults to 1.
- """
-
- def wrapper(func: Matcher):
- func.matcher_priority = priority or 0 # type: ignore
- func.matcher_identifier = identifier or func.__qualname__ # type: ignore
- func.matcher_api_version = api_version # type: ignore
- if TYPE_CHECKING:
- if api_version == 1:
- func = cast(MatcherAPIv1, func)
- elif api_version == 2:
- func = cast(MatcherAPIv2, func)
- return func
-
- return wrapper
-
-
-def _get_matcher_priority(matcher: Matcher):
- return getattr(matcher, "matcher_priority", 0)
-
-
-def _get_matcher_id(matcher: Matcher):
- return getattr(matcher, "matcher_identifier", matcher.__qualname__)
-
-
-def _get_matcher_api_version(matcher):
- return getattr(matcher, "matcher_api_version", 1)
-
-
-context_matcher = partial(completion_matcher, api_version=2)
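A hedged sketch of a custom API v2 matcher wired up with the `context_matcher` decorator and the `SimpleCompletion`/`CompletionContext` classes defined above. The matcher name and the completed vocabulary are hypothetical; it simply completes a fixed set of project-specific keywords and returns them as a `SimpleMatcherResult`-shaped dictionary:

```python
@context_matcher()
def project_keyword_matcher(context: CompletionContext) -> SimpleMatcherResult:
    """Complete a hard-coded vocabulary against the token before the cursor."""
    vocabulary = ["deploy_target", "deploy_region", "debug_level"]  # hypothetical names
    matches = [
        SimpleCompletion(word, type="keyword")
        for word in vocabulary
        if word.startswith(context.token)
    ]
    # Returning suppress=False keeps results from the other matchers as well.
    return {"completions": matches, "suppress": False}

# Registration from a running IPython session would look roughly like:
#   get_ipython().Completer.custom_matchers.append(project_keyword_matcher)
```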
-
-
-_IC = Iterable[Completion]
-
-
-def _deduplicate_completions(text: str, completions: _IC)-> _IC:
- """
- Deduplicate a set of completions.
-
- .. warning::
-
- Unstable
-
- This function is unstable, API may change without warning.
-
- Parameters
- ----------
- text : str
- text that should be completed.
- completions : Iterator[Completion]
- iterator over the completions to deduplicate
-
- Yields
- ------
- `Completions` objects
- Completions coming from multiple sources may be different but end up having
- the same effect when applied to ``text``. If this is the case, this will
- consider the completions as equal and only emit the first one encountered.
- This is not folded into `completions()` yet, for debugging purposes and to detect when
- the IPython completer returns things that Jedi does not, but it should be
- at some point.
- """
- completions = list(completions)
- if not completions:
- return
-
- new_start = min(c.start for c in completions)
- new_end = max(c.end for c in completions)
-
- seen = set()
- for c in completions:
- new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
- if new_text not in seen:
- yield c
- seen.add(new_text)
-
-
-def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
- """
- Rectify a set of completions to all have the same ``start`` and ``end``
-
- .. warning::
-
- Unstable
-
- This function is unstable, API may change without warning.
- It will also raise unless use in proper context manager.
-
- Parameters
- ----------
- text : str
- text that should be completed.
- completions : Iterator[Completion]
- iterator over the completions to rectify
- _debug : bool
- Log failed completion
-
- Notes
- -----
- :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
- the Jupyter Protocol requires them to behave like so. This will readjust
- the completion to have the same ``start`` and ``end`` by padding both
- extremities with surrounding text.
-
- During stabilisation this should support a ``_debug`` option to log which
- completions are returned by the IPython completer and not found in Jedi, in
- order to make upstream bug reports.
- """
- warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
- "It may change without warnings. "
- "Use in corresponding context manager.",
- category=ProvisionalCompleterWarning, stacklevel=2)
-
- completions = list(completions)
- if not completions:
- return
- starts = (c.start for c in completions)
- ends = (c.end for c in completions)
-
- new_start = min(starts)
- new_end = max(ends)
-
- seen_jedi = set()
- seen_python_matches = set()
- for c in completions:
- new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
- if c._origin == 'jedi':
- seen_jedi.add(new_text)
- elif c._origin == 'IPCompleter.python_matches':
- seen_python_matches.add(new_text)
- yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
- diff = seen_python_matches.difference(seen_jedi)
- if diff and _debug:
- print('IPython.python matches have extras:', diff)
-
-
-if sys.platform == 'win32':
- DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
-else:
- DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
-
-GREEDY_DELIMS = ' =\r\n'
-
-
-class CompletionSplitter(object):
- """An object to split an input line in a manner similar to readline.
-
- By having our own implementation, we can expose readline-like completion in
- a uniform manner to all frontends. This object only needs to be given the
- line of text to be split and the cursor position on said line, and it
- returns the 'word' to be completed on at the cursor after splitting the
- entire line.
-
- What characters are used as splitting delimiters can be controlled by
- setting the ``delims`` attribute (this is a property that internally
- automatically builds the necessary regular expression)"""
-
- # Private interface
-
- # A string of delimiter characters. The default value makes sense for
- # IPython's most typical usage patterns.
- _delims = DELIMS
-
- # The expression (a normal string) to be compiled into a regular expression
- # for actual splitting. We store it as an attribute mostly for ease of
- # debugging, since this type of code can be so tricky to debug.
- _delim_expr = None
-
- # The regular expression that does the actual splitting
- _delim_re = None
-
- def __init__(self, delims=None):
- delims = CompletionSplitter._delims if delims is None else delims
- self.delims = delims
-
- @property
- def delims(self):
- """Return the string of delimiter characters."""
- return self._delims
-
- @delims.setter
- def delims(self, delims):
- """Set the delimiters for line splitting."""
- expr = '[' + ''.join('\\'+ c for c in delims) + ']'
- self._delim_re = re.compile(expr)
- self._delims = delims
- self._delim_expr = expr
-
- def split_line(self, line, cursor_pos=None):
- """Split a line of text with a cursor at the given position.
- """
- l = line if cursor_pos is None else line[:cursor_pos]
- return self._delim_re.split(l)[-1]
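For example, with the default delimiters the splitter defined above extracts only the fragment after the last delimiter, which is what gets handed on to the matchers (note that `.` is not a delimiter, so attribute chains stay intact):

```python
splitter = CompletionSplitter()
print(splitter.split_line("result = np.random.ra"))        # 'np.random.ra'
print(splitter.split_line("mydict[os.pa", cursor_pos=12))  # 'os.pa'
```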
-
-
-
-class Completer(Configurable):
-
- greedy = Bool(
- False,
- help="""Activate greedy completion.
-
- .. deprecated:: 8.8
- Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
-
- When enabled in IPython 8.8 or newer, changes configuration as follows:
-
- - ``Completer.evaluation = 'unsafe'``
- - ``Completer.auto_close_dict_keys = True``
- """,
- ).tag(config=True)
-
- evaluation = Enum(
- ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
- default_value="limited",
- help="""Policy for code evaluation under completion.
-
- Successive options allow to enable more eager evaluation for better
- completion suggestions, including for nested dictionaries, nested lists,
- or even results of function calls.
- Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
- code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
-
- Allowed values are:
-
- - ``forbidden``: no evaluation of code is permitted,
- - ``minimal``: evaluation of literals and access to built-in namespace;
- no item/attribute evaluation, no access to locals/globals,
- no evaluation of any operations or comparisons.
- - ``limited``: access to all namespaces, evaluation of hard-coded methods
- (for example: :any:`dict.keys`, :any:`object.__getattr__`,
- :any:`object.__getitem__`) on allow-listed objects (for example:
- :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
- - ``unsafe``: evaluation of all methods and function calls but not of
- syntax with side-effects like `del x`,
- - ``dangerous``: completely arbitrary evaluation.
- """,
- ).tag(config=True)
-
- use_jedi = Bool(default_value=JEDI_INSTALLED,
- help="Experimental: Use Jedi to generate autocompletions. "
- "Default to True if jedi is installed.").tag(config=True)
-
- jedi_compute_type_timeout = Int(default_value=400,
- help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
- Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
- performance by preventing jedi from building its cache.
- """).tag(config=True)
-
- debug = Bool(default_value=False,
- help='Enable debug for the Completer. Mostly print extra '
- 'information for experimental jedi integration.')\
- .tag(config=True)
-
- backslash_combining_completions = Bool(True,
- help="Enable unicode completions, e.g. \\alpha . "
- "Includes completion of latex commands, unicode names, and expanding "
- "unicode characters back to latex commands.").tag(config=True)
-
- auto_close_dict_keys = Bool(
- False,
- help="""
- Enable auto-closing dictionary keys.
-
- When enabled string keys will be suffixed with a final quote
- (matching the opening quote), tuple keys will also receive a
- separating comma if needed, and keys which are final will
- receive a closing bracket (``]``).
- """,
- ).tag(config=True)
-
- def __init__(self, namespace=None, global_namespace=None, **kwargs):
- """Create a new completer for the command line.
-
- Completer(namespace=ns, global_namespace=ns2) -> completer instance.
-
- If unspecified, the default namespace where completions are performed
- is __main__ (technically, __main__.__dict__). Namespaces should be
- given as dictionaries.
-
- An optional second namespace can be given. This allows the completer
- to handle cases where both the local and global scopes need to be
- distinguished.
- """
-
- # Don't bind to namespace quite yet, but flag whether the user wants a
- # specific namespace or to use __main__.__dict__. This will allow us
- # to bind to __main__.__dict__ at completion time, not now.
- if namespace is None:
- self.use_main_ns = True
- else:
- self.use_main_ns = False
- self.namespace = namespace
-
- # The global namespace, if given, can be bound directly
- if global_namespace is None:
- self.global_namespace = {}
- else:
- self.global_namespace = global_namespace
-
- self.custom_matchers = []
-
- super(Completer, self).__init__(**kwargs)
-
- def complete(self, text, state):
- """Return the next possible completion for 'text'.
-
- This is called successively with state == 0, 1, 2, ... until it
- returns None. The completion should begin with 'text'.
-
- """
- if self.use_main_ns:
- self.namespace = __main__.__dict__
-
- if state == 0:
- if "." in text:
- self.matches = self.attr_matches(text)
- else:
- self.matches = self.global_matches(text)
- try:
- return self.matches[state]
- except IndexError:
- return None
-
- def global_matches(self, text):
- """Compute matches when text is a simple name.
-
- Return a list of all keywords, built-in functions and names currently
- defined in self.namespace or self.global_namespace that match.
-
- """
- matches = []
- match_append = matches.append
- n = len(text)
- for lst in [
- keyword.kwlist,
- builtin_mod.__dict__.keys(),
- list(self.namespace.keys()),
- list(self.global_namespace.keys()),
- ]:
- for word in lst:
- if word[:n] == text and word != "__builtins__":
- match_append(word)
-
- snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
- for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
- shortened = {
- "_".join([sub[0] for sub in word.split("_")]): word
- for word in lst
- if snake_case_re.match(word)
- }
- for word in shortened.keys():
- if word[:n] == text and word != "__builtins__":
- match_append(shortened[word])
- return matches
-
- def attr_matches(self, text):
- """Compute matches when text contains a dot.
-
- Assuming the text is of the form NAME.NAME....[NAME], and is
- evaluatable in self.namespace or self.global_namespace, it will be
- evaluated and its attributes (as revealed by dir()) are used as
- possible completions. (For class instances, class members are
- also considered.)
-
- WARNING: this can still invoke arbitrary C code, if an object
- with a __getattr__ hook is evaluated.
-
- """
- m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
- if not m2:
- return []
- expr, attr = m2.group(1, 2)
-
- obj = self._evaluate_expr(expr)
-
- if obj is not_found:
- return []
-
- if self.limit_to__all__ and hasattr(obj, '__all__'):
- words = get__all__entries(obj)
- else:
- words = dir2(obj)
-
- try:
- words = generics.complete_object(obj, words)
- except TryNext:
- pass
- except AssertionError:
- raise
- except Exception:
- # Silence errors from completion function
- pass
- # Build match list to return
- n = len(attr)
-
- # Note: ideally we would just return words here and the prefix
- # reconciliator would know that we intend to append to rather than
- # replace the input text; this requires refactoring to return range
- # which ought to be replaced (as does jedi).
- tokens = _parse_tokens(expr)
- rev_tokens = reversed(tokens)
- skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
- name_turn = True
-
- parts = []
- for token in rev_tokens:
- if token.type in skip_over:
- continue
- if token.type == tokenize.NAME and name_turn:
- parts.append(token.string)
- name_turn = False
- elif token.type == tokenize.OP and token.string == "." and not name_turn:
- parts.append(token.string)
- name_turn = True
- else:
- # short-circuit if not empty nor name token
- break
-
- prefix_after_space = "".join(reversed(parts))
-
- return ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr]
-
- def _evaluate_expr(self, expr):
- obj = not_found
- done = False
- while not done and expr:
- try:
- obj = guarded_eval(
- expr,
- EvaluationContext(
- globals=self.global_namespace,
- locals=self.namespace,
- evaluation=self.evaluation,
- ),
- )
- done = True
- except Exception as e:
- if self.debug:
- print("Evaluation exception", e)
- # trim the expression to remove any invalid prefix
- # e.g. user starts `(d[`, so we get `expr = '(d'`,
- # where parenthesis is not closed.
- # TODO: make this faster by reusing parts of the computation?
- expr = expr[1:]
- return obj
-
-def get__all__entries(obj):
- """returns the strings in the __all__ attribute"""
- try:
- words = getattr(obj, '__all__')
- except:
- return []
-
- return [w for w in words if isinstance(w, str)]
-
-
-class _DictKeyState(enum.Flag):
- """Represent state of the key match in context of other possible matches.
-
- - given `d1 = {'a': 1}` completion on `d1['` will yield `{'a': END_OF_ITEM}` as there is no tuple.
- - given `d2 = {('a', 'b'): 1}`: `d2['a', '` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
- - given `d3 = {('a', 'b'): 1}`: `d3['` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
- - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
- """
-
- BASELINE = 0
- END_OF_ITEM = enum.auto()
- END_OF_TUPLE = enum.auto()
- IN_TUPLE = enum.auto()
-
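-# Illustrative sketch (not part of the original module): the flags combine
-# with ``|`` when the same key fragment plays several roles at once.
-#
-#   >>> state = _DictKeyState.END_OF_ITEM | _DictKeyState.IN_TUPLE
-#   >>> _DictKeyState.IN_TUPLE in state
-#   True
-#   >>> _DictKeyState.END_OF_TUPLE in state
-#   False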
-
-def _parse_tokens(c):
- """Parse tokens even if there is an error."""
- tokens = []
- token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
- while True:
- try:
- tokens.append(next(token_generator))
- except tokenize.TokenError:
- return tokens
- except StopIteration:
- return tokens
-
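-# Illustrative sketch (not part of the original module): for an unterminated
-# expression such as "d[foo" the NAME and OP tokens collected before the
-# tokenizer gives up are returned, instead of ``tokenize.TokenError``
-# propagating to the caller.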
-
-def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
- """Match any valid Python numeric literal in a prefix of dictionary keys.
-
- References:
- - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
- - https://docs.python.org/3/library/tokenize.html
- """
- if prefix[-1].isspace():
- # if user typed a space we do not have anything to complete
- # even if there was a valid number token before
- return None
- tokens = _parse_tokens(prefix)
- rev_tokens = reversed(tokens)
- skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
- number = None
- for token in rev_tokens:
- if token.type in skip_over:
- continue
- if number is None:
- if token.type == tokenize.NUMBER:
- number = token.string
- continue
- else:
- # we did not match a number
- return None
- if token.type == tokenize.OP:
- if token.string == ",":
- break
- if token.string in {"+", "-"}:
- number = token.string + number
- else:
- return None
- return number
-
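-# Illustrative sketch (not part of the original module):
-#
-#   >>> _match_number_in_dict_key_prefix("-12")
-#   '-12'
-#   >>> _match_number_in_dict_key_prefix("111, -5")   # only the last key counts
-#   '-5'
-#   >>> _match_number_in_dict_key_prefix("12 ") is None   # trailing space
-#   True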
-
-_INT_FORMATS = {
- "0b": bin,
- "0o": oct,
- "0x": hex,
-}
-
-
-def match_dict_keys(
- keys: List[Union[str, bytes, Tuple[Union[str, bytes], ...]]],
- prefix: str,
- delims: str,
- extra_prefix: Optional[Tuple[Union[str, bytes], ...]] = None,
-) -> Tuple[str, int, Dict[str, _DictKeyState]]:
- """Used by dict_key_matches, matching the prefix to a list of keys
-
- Parameters
- ----------
- keys
- list of keys in dictionary currently being completed.
- prefix
- Part of the text already typed by the user. E.g. `mydict[b'fo`
- delims
- String of delimiters to consider when finding the current key.
- extra_prefix : optional
- Part of the text already typed in multi-key index cases. E.g. for
- `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
-
- Returns
- -------
- A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
- ``quote`` being the quote that needs to be used to close the current string,
- ``token_start`` the position where the replacement should start occurring, and
- ``matched`` a dictionary mapping each replacement/completion string to a
- ``_DictKeyState`` value describing its match state.
- """
- prefix_tuple = extra_prefix if extra_prefix else ()
-
- prefix_tuple_size = sum(
- [
- # for pandas, do not count slices as taking space
- not isinstance(k, slice)
- for k in prefix_tuple
- ]
- )
- text_serializable_types = (str, bytes, int, float, slice)
-
- def filter_prefix_tuple(key):
- # Reject too short keys
- if len(key) <= prefix_tuple_size:
- return False
- # Reject keys which cannot be serialised to text
- for k in key:
- if not isinstance(k, text_serializable_types):
- return False
- # Reject keys that do not match the prefix
- for k, pt in zip(key, prefix_tuple):
- if k != pt and not isinstance(pt, slice):
- return False
- # All checks passed!
- return True
-
- filtered_key_is_final: Dict[
- Union[str, bytes, int, float], _DictKeyState
- ] = defaultdict(lambda: _DictKeyState.BASELINE)
-
- for k in keys:
- # If at least one of the matches is not final, mark as undetermined.
- # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
- # `111` appears final on first match but is not final on the second.
-
- if isinstance(k, tuple):
- if filter_prefix_tuple(k):
- key_fragment = k[prefix_tuple_size]
- filtered_key_is_final[key_fragment] |= (
- _DictKeyState.END_OF_TUPLE
- if len(k) == prefix_tuple_size + 1
- else _DictKeyState.IN_TUPLE
- )
- elif prefix_tuple_size > 0:
- # we are completing a tuple but this key is not a tuple,
- # so we should ignore it
- pass
- else:
- if isinstance(k, text_serializable_types):
- filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
-
- filtered_keys = filtered_key_is_final.keys()
-
- if not prefix:
- return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
-
- quote_match = re.search("(?:\"|')", prefix)
- is_user_prefix_numeric = False
-
- if quote_match:
- quote = quote_match.group()
- valid_prefix = prefix + quote
- try:
- prefix_str = literal_eval(valid_prefix)
- except Exception:
- return "", 0, {}
- else:
- # If it does not look like a string, let's assume
- # we are dealing with a number or variable.
- number_match = _match_number_in_dict_key_prefix(prefix)
-
- # We do not want the key matcher to suggest variable names so we yield:
- if number_match is None:
- # The alternative would be to assume that the user forgot the quote
- # and, if the substring matches, suggest adding it at the start.
- return "", 0, {}
-
- prefix_str = number_match
- is_user_prefix_numeric = True
- quote = ""
-
- pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
- token_match = re.search(pattern, prefix, re.UNICODE)
- assert token_match is not None # silence mypy
- token_start = token_match.start()
- token_prefix = token_match.group()
-
- matched: Dict[str, _DictKeyState] = {}
-
- str_key: Union[str, bytes]
-
- for key in filtered_keys:
- if isinstance(key, (int, float)):
- # User typed a number but this key is not a number.
- if not is_user_prefix_numeric:
- continue
- str_key = str(key)
- if isinstance(key, int):
- int_base = prefix_str[:2].lower()
- # if user typed integer using binary/oct/hex notation:
- if int_base in _INT_FORMATS:
- int_format = _INT_FORMATS[int_base]
- str_key = int_format(key)
- else:
- # User typed a string but this key is a number.
- if is_user_prefix_numeric:
- continue
- str_key = key
- try:
- if not str_key.startswith(prefix_str):
- continue
- except (AttributeError, TypeError, UnicodeError) as e:
- # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
- continue
-
- # reformat remainder of key to begin with prefix
- rem = str_key[len(prefix_str) :]
- # force repr wrapped in '
- rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
- rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
- if quote == '"':
- # The entered prefix is quoted with ",
- # but the match is quoted with '.
- # A contained " hence needs escaping for comparison:
- rem_repr = rem_repr.replace('"', '\\"')
-
- # then reinsert prefix from start of token
- match = "%s%s" % (token_prefix, rem_repr)
-
- matched[match] = filtered_key_is_final[key]
- return quote, token_start, matched
-
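-# Illustrative sketch (not part of the original module), assuming the quote
-# characters are listed in ``delims`` as they are in IPython's default DELIMS:
-#
-#   >>> quote, start, matched = match_dict_keys(["foo", "bar"], "'f", delims="'\"")
-#   >>> quote, start, list(matched)
-#   ("'", 1, ['foo'])
-#   >>> matched["foo"] == _DictKeyState.END_OF_ITEM
-#   True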
-
-def cursor_to_position(text:str, line:int, column:int)->int:
- """
- Convert the (line,column) position of the cursor in text to an offset in a
- string.
-
- Parameters
- ----------
- text : str
- The text in which to calculate the cursor offset
- line : int
- Line of the cursor; 0-indexed
- column : int
- Column of the cursor 0-indexed
-
- Returns
- -------
- Position of the cursor in ``text``, 0-indexed.
-
- See Also
- --------
- position_to_cursor : reciprocal of this function
-
- """
- lines = text.split('\n')
- assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
-
- return sum(len(l) + 1 for l in lines[:line]) + column
-
-def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
- """
- Convert the position of the cursor in text (0 indexed) to a line
- number(0-indexed) and a column number (0-indexed) pair
-
- Position should be a valid position in ``text``.
-
- Parameters
- ----------
- text : str
- The text in which to calculate the cursor offset
- offset : int
- Position of the cursor in ``text``, 0-indexed.
-
- Returns
- -------
- (line, column) : (int, int)
- Line of the cursor; 0-indexed, column of the cursor 0-indexed
-
- See Also
- --------
- cursor_to_position : reciprocal of this function
-
- """
-
- assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
-
- before = text[:offset]
- blines = before.split('\n') # ! str.splitlines would trim a trailing \n
- line = before.count('\n')
- col = len(blines[-1])
- return line, col
-
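-# Illustrative sketch (not part of the original module): the two helpers above
-# are inverses of each other for any valid offset.
-#
-#   >>> cursor_to_position("ab\ncd\nef", 1, 1)
-#   4
-#   >>> position_to_cursor("ab\ncd\nef", 4)
-#   (1, 1)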
-
-def _safe_isinstance(obj, module, class_name, *attrs):
- """Checks if obj is an instance of module.class_name if loaded
- """
- if module in sys.modules:
- m = sys.modules[module]
- for attr in [class_name, *attrs]:
- m = getattr(m, attr)
- return isinstance(obj, m)
-
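-# Illustrative sketch (not part of the original module): the check never
-# imports anything, it only inspects modules that are already loaded.
-#
-#   >>> _safe_isinstance({}, "builtins", "dict")
-#   True
-#   >>> _safe_isinstance({}, "not_imported_module", "Thing")   # falls through, returns None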
-
-@context_matcher()
-def back_unicode_name_matcher(context: CompletionContext):
- """Match Unicode characters back to Unicode name
-
- Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
- """
- fragment, matches = back_unicode_name_matches(context.text_until_cursor)
- return _convert_matcher_v1_result_to_v2(
- matches, type="unicode", fragment=fragment, suppress_if_matches=True
- )
-
-
-def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
- """Match Unicode characters back to Unicode name
-
- This does ``☃`` -> ``\\snowman``
-
- Note that snowman is not a valid Python 3 combining character but will be expanded.
- It will not, however, be recombined back into the snowman character by the completion machinery.
-
- Nor will this back-complete standard escape sequences like \\n, \\b ...
-
- .. deprecated:: 8.6
- You can use :meth:`back_unicode_name_matcher` instead.
-
- Returns
- -------
-
- A tuple with two elements:
-
- - the Unicode character that was matched (preceded by a backslash), or an
- empty string,
- - a sequence (of length 1) containing the name of the matched Unicode
- character, preceded by a backslash, or an empty tuple if no match.
- """
- if len(text)<2:
- return '', ()
- maybe_slash = text[-2]
- if maybe_slash != '\\':
- return '', ()
-
- char = text[-1]
- # no expand on quote for completion in strings.
- # nor backcomplete standard ascii keys
- if char in string.ascii_letters or char in ('"',"'"):
- return '', ()
- try :
- unic = unicodedata.name(char)
- return '\\'+char,('\\'+unic,)
- except KeyError:
- pass
- return '', ()
-
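-# Illustrative sketch (not part of the original module):
-#
-#   >>> back_unicode_name_matches("print('\\☃")
-#   ('\\☃', ('\\SNOWMAN',))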
-
-@context_matcher()
-def back_latex_name_matcher(context: CompletionContext):
- """Match latex characters back to unicode name
-
- Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
- """
- fragment, matches = back_latex_name_matches(context.text_until_cursor)
- return _convert_matcher_v1_result_to_v2(
- matches, type="latex", fragment=fragment, suppress_if_matches=True
- )
-
-
-def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
- """Match latex characters back to unicode name
-
- This does ``\\ℵ`` -> ``\\aleph``
-
- .. deprecated:: 8.6
- You can use :meth:`back_latex_name_matcher` instead.
- """
- if len(text)<2:
- return '', ()
- maybe_slash = text[-2]
- if maybe_slash != '\\':
- return '', ()
-
-
- char = text[-1]
- # no expand on quote for completion in strings.
- # nor backcomplete standard ascii keys
- if char in string.ascii_letters or char in ('"',"'"):
- return '', ()
- try :
- latex = reverse_latex_symbol[char]
- # '\\' replace the \ as well
- return '\\'+char,[latex]
- except KeyError:
- pass
- return '', ()
-
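-# Illustrative sketch (not part of the original module), assuming "α" is
-# present in ``reverse_latex_symbol``:
-#
-#   >>> back_latex_name_matches("x = \\α")
-#   ('\\α', ['\\alpha'])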
-
-def _formatparamchildren(parameter) -> str:
- """
- Get parameter name and value from Jedi Private API
-
- Jedi does not expose a simple way to get `param=value` from its API.
-
- Parameters
- ----------
- parameter
- Jedi's function `Param`
-
- Returns
- -------
- A string like 'a', 'b=1', '*args', '**kwargs'
-
- """
- description = parameter.description
- if not description.startswith('param '):
- raise ValueError('Jedi function parameter description has changed format. '
- 'Expected "param ...", found %r.' % description)
- return description[6:]
-
-def _make_signature(completion)-> str:
- """
- Make the signature from a jedi completion
-
- Parameters
- ----------
- completion : jedi.Completion
- the Jedi completion object (not necessarily of a function type)
-
- Returns
- -------
- a string consisting of the function signature, with the parenthesis but
- without the function name. example:
- `(a, *args, b=1, **kwargs)`
-
- """
-
- # it looks like this might work on jedi 0.17
- if hasattr(completion, 'get_signatures'):
- signatures = completion.get_signatures()
- if not signatures:
- return '(?)'
-
- c0 = completion.get_signatures()[0]
- return '('+c0.to_string().split('(', maxsplit=1)[1]
-
- return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
- for p in signature.defined_names()) if f])
-
-
-_CompleteResult = Dict[str, MatcherResult]
-
-
-DICT_MATCHER_REGEX = re.compile(
- r"""(?x)
-( # match dict-referring - or any get item object - expression
- .+
-)
-\[ # open bracket
-\s* # and optional whitespace
-# Capture any number of serializable objects (e.g. "a", "b", 'c')
-# and slices
-((?:(?:
- (?: # closed string
- [uUbB]? # string prefix (r not handled)
- (?:
- '(?:[^']|(?<!\\)\\')*'
- |
- "(?:[^"]|(?<!\\)\\")*"
- )
- )
- |
- # capture integers and slices
- (?:[-+eE0-9.]+)
- |
- # integer in bin/hex/oct notation
- (?:0[bBxXoO]_?(?:\w|\d)+)
- )
- \s*,\s*
-)*)
-((?:
- (?: # unclosed string
- [uUbB]? # string prefix (r not handled)
- (?:
- '(?:[^']|(?<!\\)\\')*
- |
- "(?:[^"]|(?<!\\)\\")*
- )
- )
- |
- # unfinished integer
- (?:[-+eE0-9.]+)
- |
- # integer in bin/hex/oct notation
- (?:0[bBxXoO]_?(?:\w|\d)+)
-)*)
-$
-"""
-)
-
-
-def _convert_matcher_v1_result_to_v2(
- matches: Sequence[str],
- type: str,
- fragment: Optional[str] = None,
- suppress_if_matches: bool = False,
-) -> SimpleMatcherResult:
- """Utility to help with transition"""
- result = {
- "completions": [SimpleCompletion(text=match, type=type) for match in matches],
- "suppress": (True if matches else False) if suppress_if_matches else False,
- }
- if fragment is not None:
- result["matched_fragment"] = fragment
- return cast(SimpleMatcherResult, result)
-
-
-class IPCompleter(Completer):
- """Extension of the completer class with IPython-specific features"""
-
- @observe('greedy')
- def _greedy_changed(self, change):
- """update the splitter and readline delims when greedy is changed"""
- if change["new"]:
- self.evaluation = "unsafe"
- self.auto_close_dict_keys = True
- self.splitter.delims = GREEDY_DELIMS
- else:
- self.evaluation = "limited"
- self.auto_close_dict_keys = False
- self.splitter.delims = DELIMS
-
- dict_keys_only = Bool(
- False,
- help="""
- Whether to show dict key matches only.
-
- (disables all matchers except for `IPCompleter.dict_key_matcher`).
- """,
- )
-
- suppress_competing_matchers = UnionTrait(
- [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
- default_value=None,
- help="""
- Whether to suppress completions from other *Matchers*.
-
- When set to ``None`` (default) the matchers will attempt to auto-detect
- whether suppression of other matchers is desirable. For example, at
- the beginning of a line followed by `%` we expect a magic completion
- to be the only applicable option, and after ``my_dict['`` we usually
- expect a completion with an existing dictionary key.
-
- If you want to disable this heuristic and see completions from all matchers,
- set ``IPCompleter.suppress_competing_matchers = False``.
- To disable the heuristic for specific matchers provide a dictionary mapping:
- ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
-
- Set ``IPCompleter.suppress_competing_matchers = True`` to limit
- completions to the set of matchers with the highest priority;
- this is equivalent to ``IPCompleter.merge_completions`` and
- can be beneficial for performance, but will sometimes omit relevant
- candidates from matchers further down the priority list.
- """,
- ).tag(config=True)
-
- merge_completions = Bool(
- True,
- help="""Whether to merge completion results into a single list
-
- If False, only the completion results from the first non-empty
- completer will be returned.
-
- As of version 8.6.0, setting the value to ``False`` is an alias for:
- ``IPCompleter.suppress_competing_matchers = True.``.
- """,
- ).tag(config=True)
-
- disable_matchers = ListTrait(
- Unicode(),
- help="""List of matchers to disable.
-
- The list should contain matcher identifiers (see :any:`completion_matcher`).
- """,
- ).tag(config=True)
-
- omit__names = Enum(
- (0, 1, 2),
- default_value=2,
- help="""Instruct the completer to omit private method names
-
- Specifically, when completing on ``object.``.
-
- When 2 [default]: all names that start with '_' will be excluded.
-
- When 1: all 'magic' names (``__foo__``) will be excluded.
-
- When 0: nothing will be excluded.
- """
- ).tag(config=True)
- limit_to__all__ = Bool(False,
- help="""
- DEPRECATED as of version 5.0.
-
- Instruct the completer to use __all__ for the completion
-
- Specifically, when completing on ``object.``.
-
- When True: only those names in obj.__all__ will be included.
-
- When False [default]: the __all__ attribute is ignored
- """,
- ).tag(config=True)
-
- profile_completions = Bool(
- default_value=False,
- help="If True, emit profiling data for completion subsystem using cProfile."
- ).tag(config=True)
-
- profiler_output_dir = Unicode(
- default_value=".completion_profiles",
- help="Template for path at which to output profile data for completions."
- ).tag(config=True)
-
- @observe('limit_to__all__')
- def _limit_to_all_changed(self, change):
- warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
- 'value has been deprecated since IPython 5.0, will be made to have '
- 'no effect and then removed in a future version of IPython.',
- UserWarning)
-
- def __init__(
- self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
- ):
- """IPCompleter() -> completer
-
- Return a completer object.
-
- Parameters
- ----------
- shell
- a pointer to the ipython shell itself. This is needed
- because this completer knows about magic functions, and those can
- only be accessed via the ipython instance.
- namespace : dict, optional
- an optional dict where completions are performed.
- global_namespace : dict, optional
- secondary optional dict for completions, to
- handle cases (such as IPython embedded inside functions) where
- both Python scopes are visible.
- config : Config
- traitlet's config object
- **kwargs
- passed to super class unmodified.
- """
-
- self.magic_escape = ESC_MAGIC
- self.splitter = CompletionSplitter()
-
- # _greedy_changed() depends on splitter and readline being defined:
- super().__init__(
- namespace=namespace,
- global_namespace=global_namespace,
- config=config,
- **kwargs,
- )
-
- # List where completion matches will be stored
- self.matches = []
- self.shell = shell
- # Regexp to split filenames with spaces in them
- self.space_name_re = re.compile(r'([^\\] )')
- # Hold a local ref. to glob.glob for speed
- self.glob = glob.glob
-
- # Determine if we are running on 'dumb' terminals, like (X)Emacs
- # buffers, to avoid completion problems.
- term = os.environ.get('TERM','xterm')
- self.dumb_terminal = term in ['dumb','emacs']
-
- # Special handling of backslashes needed in win32 platforms
- if sys.platform == "win32":
- self.clean_glob = self._clean_glob_win32
- else:
- self.clean_glob = self._clean_glob
-
- #regexp to parse docstring for function signature
- self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
- self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
- #use this if positional argument name is also needed
- #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
-
- self.magic_arg_matchers = [
- self.magic_config_matcher,
- self.magic_color_matcher,
- ]
-
- # This is set externally by InteractiveShell
- self.custom_completers = None
-
- # This is a list of names of unicode characters that can be completed
- # into their corresponding unicode value. The list is large, so we
- # lazily initialize it on first use. Consuming code should access this
- # attribute through the `@unicode_names` property.
- self._unicode_names = None
-
- self._backslash_combining_matchers = [
- self.latex_name_matcher,
- self.unicode_name_matcher,
- back_latex_name_matcher,
- back_unicode_name_matcher,
- self.fwd_unicode_matcher,
- ]
-
- if not self.backslash_combining_completions:
- for matcher in self._backslash_combining_matchers:
- self.disable_matchers.append(_get_matcher_id(matcher))
-
- if not self.merge_completions:
- self.suppress_competing_matchers = True
-
- @property
- def matchers(self) -> List[Matcher]:
- """All active matcher routines for completion"""
- if self.dict_keys_only:
- return [self.dict_key_matcher]
-
- if self.use_jedi:
- return [
- *self.custom_matchers,
- *self._backslash_combining_matchers,
- *self.magic_arg_matchers,
- self.custom_completer_matcher,
- self.magic_matcher,
- self._jedi_matcher,
- self.dict_key_matcher,
- self.file_matcher,
- ]
- else:
- return [
- *self.custom_matchers,
- *self._backslash_combining_matchers,
- *self.magic_arg_matchers,
- self.custom_completer_matcher,
- self.dict_key_matcher,
- # TODO: convert python_matches to v2 API
- self.magic_matcher,
- self.python_matches,
- self.file_matcher,
- self.python_func_kw_matcher,
- ]
-
- def all_completions(self, text:str) -> List[str]:
- """
- Wrapper around the completion methods for the benefit of emacs.
- """
- prefix = text.rpartition('.')[0]
- with provisionalcompleter():
- return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
- for c in self.completions(text, len(text))]
-
- return self.complete(text)[1]
-
- def _clean_glob(self, text:str):
- return self.glob("%s*" % text)
-
- def _clean_glob_win32(self, text:str):
- return [f.replace("\\","/")
- for f in self.glob("%s*" % text)]
-
- @context_matcher()
- def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
- """Same as :any:`file_matches`, but adopted to new Matcher API."""
- matches = self.file_matches(context.token)
- # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
- # starts with `/home/`, `C:\`, etc)
- return _convert_matcher_v1_result_to_v2(matches, type="path")
-
- def file_matches(self, text: str) -> List[str]:
- """Match filenames, expanding ~USER type strings.
-
- Most of the seemingly convoluted logic in this completer is an
- attempt to handle filenames with spaces in them. And yet it's not
- quite perfect, because Python's readline doesn't expose all of the
- GNU readline details needed for this to be done correctly.
-
- For a filename with a space in it, the printed completions will be
- only the parts after what's already been typed (instead of the
- full completions, as is normally done). I don't think with the
- current (as of Python 2.3) Python readline it's possible to do
- better.
-
- .. deprecated:: 8.6
- You can use :meth:`file_matcher` instead.
- """
-
- # chars that require escaping with backslash - i.e. chars
- # that readline treats incorrectly as delimiters, but we
- # don't want to treat as delimiters in filename matching
- # when escaped with backslash
- if text.startswith('!'):
- text = text[1:]
- text_prefix = u'!'
- else:
- text_prefix = u''
-
- text_until_cursor = self.text_until_cursor
- # track strings with open quotes
- open_quotes = has_open_quotes(text_until_cursor)
-
- if '(' in text_until_cursor or '[' in text_until_cursor:
- lsplit = text
- else:
- try:
- # arg_split ~ shlex.split, but with unicode bugs fixed by us
- lsplit = arg_split(text_until_cursor)[-1]
- except ValueError:
- # typically an unmatched ", or backslash without escaped char.
- if open_quotes:
- lsplit = text_until_cursor.split(open_quotes)[-1]
- else:
- return []
- except IndexError:
- # tab pressed on empty line
- lsplit = ""
-
- if not open_quotes and lsplit != protect_filename(lsplit):
- # if protectables are found, do matching on the whole escaped name
- has_protectables = True
- text0,text = text,lsplit
- else:
- has_protectables = False
- text = os.path.expanduser(text)
-
- if text == "":
- return [text_prefix + protect_filename(f) for f in self.glob("*")]
-
- # Compute the matches from the filesystem
- if sys.platform == 'win32':
- m0 = self.clean_glob(text)
- else:
- m0 = self.clean_glob(text.replace('\\', ''))
-
- if has_protectables:
- # If we had protectables, we need to revert our changes to the
- # beginning of filename so that we don't double-write the part
- # of the filename we have so far
- len_lsplit = len(lsplit)
- matches = [text_prefix + text0 +
- protect_filename(f[len_lsplit:]) for f in m0]
- else:
- if open_quotes:
- # if we have a string with an open quote, we don't need to
- # protect the names beyond the quote (and we _shouldn't_, as
- # it would cause bugs when the filesystem call is made).
- matches = m0 if sys.platform == "win32" else\
- [protect_filename(f, open_quotes) for f in m0]
- else:
- matches = [text_prefix +
- protect_filename(f) for f in m0]
-
- # Mark directories in input list by appending '/' to their names.
- return [x+'/' if os.path.isdir(x) else x for x in matches]
-
- @context_matcher()
- def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
- """Match magics."""
- text = context.token
- matches = self.magic_matches(text)
- result = _convert_matcher_v1_result_to_v2(matches, type="magic")
- is_magic_prefix = len(text) > 0 and text[0] == "%"
- result["suppress"] = is_magic_prefix and bool(result["completions"])
- return result
-
- def magic_matches(self, text: str):
- """Match magics.
-
- .. deprecated:: 8.6
- You can use :meth:`magic_matcher` instead.
- """
- # Get all shell magics now rather than statically, so magics loaded at
- # runtime show up too.
- lsm = self.shell.magics_manager.lsmagic()
- line_magics = lsm['line']
- cell_magics = lsm['cell']
- pre = self.magic_escape
- pre2 = pre+pre
-
- explicit_magic = text.startswith(pre)
-
- # Completion logic:
- # - user gives %%: only do cell magics
- # - user gives %: do both line and cell magics
- # - no prefix: do both
- # In other words, line magics are skipped if the user gives %% explicitly
- #
- # We also exclude magics that match any currently visible names:
- # https://github.com/ipython/ipython/issues/4877, unless the user has
- # typed a %:
- # https://github.com/ipython/ipython/issues/10754
- bare_text = text.lstrip(pre)
- global_matches = self.global_matches(bare_text)
- if not explicit_magic:
- def matches(magic):
- """
- Filter magics, in particular remove magics that match
- a name present in global namespace.
- """
- return ( magic.startswith(bare_text) and
- magic not in global_matches )
- else:
- def matches(magic):
- return magic.startswith(bare_text)
-
- comp = [ pre2+m for m in cell_magics if matches(m)]
- if not text.startswith(pre2):
- comp += [ pre+m for m in line_magics if matches(m)]
-
- return comp
-
- @context_matcher()
- def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
- """Match class names and attributes for %config magic."""
- # NOTE: uses `line_buffer` equivalent for compatibility
- matches = self.magic_config_matches(context.line_with_cursor)
- return _convert_matcher_v1_result_to_v2(matches, type="param")
-
- def magic_config_matches(self, text: str) -> List[str]:
- """Match class names and attributes for %config magic.
-
- .. deprecated:: 8.6
- You can use :meth:`magic_config_matcher` instead.
- """
- texts = text.strip().split()
-
- if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
- # get all configuration classes
- classes = sorted(set([ c for c in self.shell.configurables
- if c.__class__.class_traits(config=True)
- ]), key=lambda x: x.__class__.__name__)
- classnames = [ c.__class__.__name__ for c in classes ]
-
- # return all classnames if config or %config is given
- if len(texts) == 1:
- return classnames
-
- # match classname
- classname_texts = texts[1].split('.')
- classname = classname_texts[0]
- classname_matches = [ c for c in classnames
- if c.startswith(classname) ]
-
- # return matched classes or the matched class with attributes
- if texts[1].find('.') < 0:
- return classname_matches
- elif len(classname_matches) == 1 and \
- classname_matches[0] == classname:
- cls = classes[classnames.index(classname)].__class__
- help = cls.class_get_help()
- # strip leading '--' from cl-args:
- help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
- return [ attr.split('=')[0]
- for attr in help.strip().splitlines()
- if attr.startswith(texts[1]) ]
- return []
-
- @context_matcher()
- def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
- """Match color schemes for %colors magic."""
- # NOTE: uses `line_buffer` equivalent for compatibility
- matches = self.magic_color_matches(context.line_with_cursor)
- return _convert_matcher_v1_result_to_v2(matches, type="param")
-
- def magic_color_matches(self, text: str) -> List[str]:
- """Match color schemes for %colors magic.
-
- .. deprecated:: 8.6
- You can use :meth:`magic_color_matcher` instead.
- """
- texts = text.split()
- if text.endswith(' '):
- # .split() strips off the trailing whitespace. Add '' back
- # so that: '%colors ' -> ['%colors', '']
- texts.append('')
-
- if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
- prefix = texts[1]
- return [ color for color in InspectColors.keys()
- if color.startswith(prefix) ]
- return []
-
- @context_matcher(identifier="IPCompleter.jedi_matcher")
- def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
- matches = self._jedi_matches(
- cursor_column=context.cursor_position,
- cursor_line=context.cursor_line,
- text=context.full_text,
- )
- return {
- "completions": matches,
- # static analysis should not suppress other matchers
- "suppress": False,
- }
-
- def _jedi_matches(
- self, cursor_column: int, cursor_line: int, text: str
- ) -> Iterator[_JediCompletionLike]:
- """
- Return a list of :any:`jedi.api.Completion`s object from a ``text`` and
- cursor position.
-
- Parameters
- ----------
- cursor_column : int
- column position of the cursor in ``text``, 0-indexed.
- cursor_line : int
- line position of the cursor in ``text``, 0-indexed
- text : str
- text to complete
-
- Notes
- -----
- If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
- object containing a string with the Jedi debug information attached.
-
- .. deprecated:: 8.6
- You can use :meth:`_jedi_matcher` instead.
- """
- namespaces = [self.namespace]
- if self.global_namespace is not None:
- namespaces.append(self.global_namespace)
-
- completion_filter = lambda x:x
- offset = cursor_to_position(text, cursor_line, cursor_column)
- # filter output if we are completing for object members
- if offset:
- pre = text[offset-1]
- if pre == '.':
- if self.omit__names == 2:
- completion_filter = lambda c:not c.name.startswith('_')
- elif self.omit__names == 1:
- completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
- elif self.omit__names == 0:
- completion_filter = lambda x:x
- else:
- raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
-
- interpreter = jedi.Interpreter(text[:offset], namespaces)
- try_jedi = True
-
- try:
- # find the first token in the current tree -- if it is a ' or " then we are in a string
- completing_string = False
- try:
- first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
- except StopIteration:
- pass
- else:
- # note the value may be ', ", or it may also be ''' or """, or
- # in some cases, """what/you/typed..., but all of these are
- # strings.
- completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
-
- # if we are in a string jedi is likely not the right candidate for
- # now. Skip it.
- try_jedi = not completing_string
- except Exception as e:
- # many things can go wrong; we are using a private API, just don't crash.
- if self.debug:
- print("Error detecting if completing a non-finished string :", e, '|')
-
- if not try_jedi:
- return iter([])
- try:
- return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
- except Exception as e:
- if self.debug:
- return iter(
- [
- _FakeJediCompletion(
- 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""'
- % (e)
- )
- ]
- )
- else:
- return iter([])
-
- @completion_matcher(api_version=1)
- def python_matches(self, text: str) -> Iterable[str]:
- """Match attributes or global python names"""
- if "." in text:
- try:
- matches = self.attr_matches(text)
- if text.endswith('.') and self.omit__names:
- if self.omit__names == 1:
- # true if txt is _not_ a __ name, false otherwise:
- no__name = (lambda txt:
- re.match(r'.*\.__.*?__',txt) is None)
- else:
- # true if txt is _not_ a _ name, false otherwise:
- no__name = (lambda txt:
- re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
- matches = filter(no__name, matches)
- except NameError:
- # catches .
- matches = []
- else:
- matches = self.global_matches(text)
- return matches
-
- def _default_arguments_from_docstring(self, doc):
- """Parse the first line of docstring for call signature.
-
- Docstring should be of the form 'min(iterable[, key=func])\n'.
- It can also parse cython docstring of the form
- 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
- """
- if doc is None:
- return []
-
- # care only about the first line
- line = doc.lstrip().splitlines()[0]
-
- #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
- #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
- sig = self.docstring_sig_re.search(line)
- if sig is None:
- return []
- # iterable[, key=func]' -> ['iterable[' ,' key=func]']
- sig = sig.groups()[0].split(',')
- ret = []
- for s in sig:
- #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
- ret += self.docstring_kwd_re.findall(s)
- return ret
-
- def _default_arguments(self, obj):
- """Return the list of default arguments of obj if it is callable,
- or empty list otherwise."""
- call_obj = obj
- ret = []
- if inspect.isbuiltin(obj):
- pass
- elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
- if inspect.isclass(obj):
- #for cython embedsignature=True the constructor docstring
- #belongs to the object itself not __init__
- ret += self._default_arguments_from_docstring(
- getattr(obj, '__doc__', ''))
- # for classes, check for __init__,__new__
- call_obj = (getattr(obj, '__init__', None) or
- getattr(obj, '__new__', None))
- # for all others, check if they are __call__able
- elif hasattr(obj, '__call__'):
- call_obj = obj.__call__
- ret += self._default_arguments_from_docstring(
- getattr(call_obj, '__doc__', ''))
-
- _keeps = (inspect.Parameter.KEYWORD_ONLY,
- inspect.Parameter.POSITIONAL_OR_KEYWORD)
-
- try:
- sig = inspect.signature(obj)
- ret.extend(k for k, v in sig.parameters.items() if
- v.kind in _keeps)
- except ValueError:
- pass
-
- return list(set(ret))
-
- @context_matcher()
- def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
- """Match named parameters (kwargs) of the last open function."""
- matches = self.python_func_kw_matches(context.token)
- return _convert_matcher_v1_result_to_v2(matches, type="param")
-
- def python_func_kw_matches(self, text):
- """Match named parameters (kwargs) of the last open function.
-
- .. deprecated:: 8.6
- You can use :meth:`python_func_kw_matcher` instead.
- """
-
- if "." in text: # a parameter cannot be dotted
- return []
- try: regexp = self.__funcParamsRegex
- except AttributeError:
- regexp = self.__funcParamsRegex = re.compile(r'''
- '.*?(?<!\\)' | # single quoted strings or
- ".*?(?<!\\)" | # double quoted strings or
- \w+ | # identifier
- \S # other characters
- ''', re.VERBOSE)
-
- # 1. find the nearest identifier that comes before an unclosed
- # parenthesis before the cursor
- # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
- tokens = regexp.findall(self.text_until_cursor)
- iterTokens = reversed(tokens); openPar = 0
-
- for token in iterTokens:
- if token == ')':
- openPar -= 1
- elif token == '(':
- openPar += 1
- if openPar > 0:
- # found the last unclosed parenthesis
- break
- else:
- return []
- # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
- ids = []
- isId = re.compile(r'\w+$').match
-
- while True:
- try:
- ids.append(next(iterTokens))
- if not isId(ids[-1]):
- ids.pop(); break
- if not next(iterTokens) == '.':
- break
- except StopIteration:
- break
-
- # Find all named arguments already assigned to, as to avoid suggesting
- # them again
- usedNamedArgs = set()
- par_level = -1
- for token, next_token in zip(tokens, tokens[1:]):
- if token == '(':
- par_level += 1
- elif token == ')':
- par_level -= 1
-
- if par_level != 0:
- continue
-
- if next_token != '=':
- continue
-
- usedNamedArgs.add(token)
-
- argMatches = []
- try:
- callableObj = '.'.join(ids[::-1])
- namedArgs = self._default_arguments(eval(callableObj,
- self.namespace))
-
- # Remove used named arguments from the list, no need to show twice
- for namedArg in set(namedArgs) - usedNamedArgs:
- if namedArg.startswith(text):
- argMatches.append("%s=" %namedArg)
- except:
- pass
-
- return argMatches
-
- @staticmethod
- def _get_keys(obj: Any) -> List[Any]:
- # Objects can define their own completions by defining an
- # _ipython_key_completions_() method.
- method = get_real_method(obj, '_ipython_key_completions_')
- if method is not None:
- return method()
-
- # Special case some common in-memory dict-like types
- if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
- try:
- return list(obj.keys())
- except Exception:
- return []
- elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
- try:
- return list(obj.obj.keys())
- except Exception:
- return []
- elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
- _safe_isinstance(obj, 'numpy', 'void'):
- return obj.dtype.names or []
- return []
-
- @context_matcher()
- def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
- """Match string keys in a dictionary, after e.g. ``foo[``."""
- matches = self.dict_key_matches(context.token)
- return _convert_matcher_v1_result_to_v2(
- matches, type="dict key", suppress_if_matches=True
- )
-
- def dict_key_matches(self, text: str) -> List[str]:
- """Match string keys in a dictionary, after e.g. ``foo[``.
-
- .. deprecated:: 8.6
- You can use :meth:`dict_key_matcher` instead.
- """
-
- # Short-circuit on closed dictionary (regular expression would
- # not match anyway, but would take quite a while).
- if self.text_until_cursor.strip().endswith("]"):
- return []
-
- match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
-
- if match is None:
- return []
-
- expr, prior_tuple_keys, key_prefix = match.groups()
-
- obj = self._evaluate_expr(expr)
-
- if obj is not_found:
- return []
-
- keys = self._get_keys(obj)
- if not keys:
- return keys
-
- tuple_prefix = guarded_eval(
- prior_tuple_keys,
- EvaluationContext(
- globals=self.global_namespace,
- locals=self.namespace,
- evaluation=self.evaluation,
- in_subscript=True,
- ),
- )
-
- closing_quote, token_offset, matches = match_dict_keys(
- keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
- )
- if not matches:
- return []
-
- # get the cursor position of
- # - the text being completed
- # - the start of the key text
- # - the start of the completion
- text_start = len(self.text_until_cursor) - len(text)
- if key_prefix:
- key_start = match.start(3)
- completion_start = key_start + token_offset
- else:
- key_start = completion_start = match.end()
-
- # grab the leading prefix, to make sure all completions start with `text`
- if text_start > key_start:
- leading = ''
- else:
- leading = text[text_start:completion_start]
-
- # append closing quote and bracket as appropriate
- # this is *not* appropriate if the opening quote or bracket is outside
- # the text given to this method, e.g. `d["""a\nt
- can_close_quote = False
- can_close_bracket = False
-
- continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
-
- if continuation.startswith(closing_quote):
- # do not close if already closed, e.g. `d['a'`
- continuation = continuation[len(closing_quote) :]
- else:
- can_close_quote = True
-
- continuation = continuation.strip()
-
- # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
- # handling it is out of scope, so let's avoid appending suffixes.
- has_known_tuple_handling = isinstance(obj, dict)
-
- can_close_bracket = (
- not continuation.startswith("]") and self.auto_close_dict_keys
- )
- can_close_tuple_item = (
- not continuation.startswith(",")
- and has_known_tuple_handling
- and self.auto_close_dict_keys
- )
- can_close_quote = can_close_quote and self.auto_close_dict_keys
-
- # fast path if a closing quote should be appended but no suffix is allowed
- if not can_close_quote and not can_close_bracket and closing_quote:
- return [leading + k for k in matches]
-
- results = []
-
- end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
-
- for k, state_flag in matches.items():
- result = leading + k
- if can_close_quote and closing_quote:
- result += closing_quote
-
- if state_flag == end_of_tuple_or_item:
- # We do not know which suffix to add,
- # e.g. both tuple item and string
- # match this item.
- pass
-
- if state_flag in end_of_tuple_or_item and can_close_bracket:
- result += "]"
- if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
- result += ", "
- results.append(result)
- return results
-
- @context_matcher()
- def unicode_name_matcher(self, context: CompletionContext):
- """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
- fragment, matches = self.unicode_name_matches(context.text_until_cursor)
- return _convert_matcher_v1_result_to_v2(
- matches, type="unicode", fragment=fragment, suppress_if_matches=True
- )
-
- @staticmethod
- def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
- """Match Latex-like syntax for unicode characters based
- on the name of the character.
-
- This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
-
- Works only on valid Python 3 identifiers, or on combining characters that
- will combine to form a valid identifier.
- """
- slashpos = text.rfind('\\')
- if slashpos > -1:
- s = text[slashpos+1:]
- try :
- unic = unicodedata.lookup(s)
- # allow combining chars
- if ('a'+unic).isidentifier():
- return '\\'+s,[unic]
- except KeyError:
- pass
- return '', []
-
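- # Illustrative sketch (not part of the original module):
- #
- #   >>> IPCompleter.unicode_name_matches("\\GREEK SMALL LETTER ETA")
- #   ('\\GREEK SMALL LETTER ETA', ['η'])
-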
- @context_matcher()
- def latex_name_matcher(self, context: CompletionContext):
- """Match Latex syntax for unicode characters.
-
- This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
- """
- fragment, matches = self.latex_matches(context.text_until_cursor)
- return _convert_matcher_v1_result_to_v2(
- matches, type="latex", fragment=fragment, suppress_if_matches=True
- )
-
- def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
- """Match Latex syntax for unicode characters.
-
- This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
-
- .. deprecated:: 8.6
- You can use :meth:`latex_name_matcher` instead.
- """
- slashpos = text.rfind('\\')
- if slashpos > -1:
- s = text[slashpos:]
- if s in latex_symbols:
- # Try to complete a full latex symbol to unicode
- # \\alpha -> α
- return s, [latex_symbols[s]]
- else:
- # If a user has partially typed a latex symbol, give them
- # a full list of options \al -> [\aleph, \alpha]
- matches = [k for k in latex_symbols if k.startswith(s)]
- if matches:
- return s, matches
- return '', ()
-
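- # Illustrative sketch (not part of the original module), assuming the usual
- # ``latex_symbols`` table; ``completer`` is a hypothetical IPCompleter instance:
- #
- #   >>> completer.latex_matches("\\alpha")
- #   ('\\alpha', ['α'])
- #   >>> "\\alpha" in completer.latex_matches("\\alp")[1]   # partial input lists candidates
- #   True
-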
- @context_matcher()
- def custom_completer_matcher(self, context):
- """Dispatch custom completer.
-
- If a match is found, suppresses all other matchers except for Jedi.
- """
- matches = self.dispatch_custom_completer(context.token) or []
- result = _convert_matcher_v1_result_to_v2(
- matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
- )
- result["ordered"] = True
- result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
- return result
-
- def dispatch_custom_completer(self, text):
- """
- .. deprecated:: 8.6
- You can use :meth:`custom_completer_matcher` instead.
- """
- if not self.custom_completers:
- return
-
- line = self.line_buffer
- if not line.strip():
- return None
-
- # Create a little structure to pass all the relevant information about
- # the current completion to any custom completer.
- event = SimpleNamespace()
- event.line = line
- event.symbol = text
- cmd = line.split(None,1)[0]
- event.command = cmd
- event.text_until_cursor = self.text_until_cursor
-
- # for foo etc, try also to find completer for %foo
- if not cmd.startswith(self.magic_escape):
- try_magic = self.custom_completers.s_matches(
- self.magic_escape + cmd)
- else:
- try_magic = []
-
- for c in itertools.chain(self.custom_completers.s_matches(cmd),
- try_magic,
- self.custom_completers.flat_matches(self.text_until_cursor)):
- try:
- res = c(event)
- if res:
- # first, try case sensitive match
- withcase = [r for r in res if r.startswith(text)]
- if withcase:
- return withcase
- # if none, then case insensitive ones are ok too
- text_low = text.lower()
- return [r for r in res if r.lower().startswith(text_low)]
- except TryNext:
- pass
- except KeyboardInterrupt:
- """
- If a custom completer takes too long,
- let keyboard interrupt abort and return nothing.
- """
- break
-
- return None
-
- def completions(self, text: str, offset: int)->Iterator[Completion]:
- """
- Returns an iterator over the possible completions
-
- .. warning::
-
- Unstable
-
- This function is unstable, its API may change without warning.
- It will also raise unless used in a proper context manager.
-
- Parameters
- ----------
- text : str
- Full text of the current input, multi line string.
- offset : int
- Integer representing the position of the cursor in ``text``. Offset
- is 0-based indexed.
-
- Yields
- ------
- Completion
-
- Notes
- -----
- The cursor on a text can either be seen as being "in between"
- characters or "On" a character depending on the interface visible to
- the user. For consistency the cursor being on "in between" characters X
- and Y is equivalent to the cursor being "on" character Y, that is to say
- the character the cursor is on is considered as being after the cursor.
-
- Combining characters may span more than one position in the
- text.
-
- .. note::
-
- If ``IPCompleter.debug`` is :any:`True` will yield a ``--jedi/ipython--``
- fake Completion token to distinguish completion returned by Jedi
- and usual IPython completion.
-
- .. note::
-
- Completions are not completely deduplicated yet. If identical
- completions are coming from different sources this function does not
- ensure that each completion object will only be present once.
- """
- warnings.warn("_complete is a provisional API (as of IPython 6.0). "
- "It may change without warnings. "
- "Use in corresponding context manager.",
- category=ProvisionalCompleterWarning, stacklevel=2)
-
- seen = set()
- profiler:Optional[cProfile.Profile]
- try:
- if self.profile_completions:
- import cProfile
- profiler = cProfile.Profile()
- profiler.enable()
- else:
- profiler = None
-
- for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
- if c and (c in seen):
- continue
- yield c
- seen.add(c)
- except KeyboardInterrupt:
- """if completions take too long and users send keyboard interrupt,
- do not crash and return ASAP. """
- pass
- finally:
- if profiler is not None:
- profiler.disable()
- ensure_dir_exists(self.profiler_output_dir)
- output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
- print("Writing profiler output to", output_path)
- profiler.dump_stats(output_path)
-
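- # Illustrative usage sketch (not part of the original module); ``ip`` is a
- # hypothetical InteractiveShell instance:
- #
- #   with provisionalcompleter():
- #       for completion in ip.Completer.completions("ran", 3):
- #           print(completion.text, completion.type)
-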
- def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
- """
- Core completion module. Same signature as :any:`completions`, with the
- extra `timeout` parameter (in seconds).
-
- Computing jedi's completion ``.type`` can be quite expensive (it is a
- lazy property) and can require some warm-up, more warm up than just
- computing the ``name`` of a completion. The warm-up can be :
-
- - Long warm-up the first time a module is encountered after
- install/update: actually build parse/inference tree.
-
- - first time the module is encountered in a session: load tree from
- disk.
-
- We don't want to block completions for tens of seconds so we give the
- completer a "budget" of ``_timeout`` seconds per invocation to compute
- completion types; the completions whose type has not yet been computed
- will be marked as "unknown" and will have a chance to be computed on the
- next round as things get cached.
-
- Keep in mind that Jedi is not the only thing processing the completion,
- so keep the timeout short-ish: if we take more than 0.3 seconds we still
- have lots of processing to do.
-
- """
- deadline = time.monotonic() + _timeout
-
- before = full_text[:offset]
- cursor_line, cursor_column = position_to_cursor(full_text, offset)
-
- jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
-
- def is_non_jedi_result(
- result: MatcherResult, identifier: str
- ) -> TypeGuard[SimpleMatcherResult]:
- return identifier != jedi_matcher_id
-
- results = self._complete(
- full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
- )
-
- non_jedi_results: Dict[str, SimpleMatcherResult] = {
- identifier: result
- for identifier, result in results.items()
- if is_non_jedi_result(result, identifier)
- }
-
- jedi_matches = (
- cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
- if jedi_matcher_id in results
- else ()
- )
-
- iter_jm = iter(jedi_matches)
- if _timeout:
- for jm in iter_jm:
- try:
- type_ = jm.type
- except Exception:
- if self.debug:
- print("Error in Jedi getting type of ", jm)
- type_ = None
- delta = len(jm.name_with_symbols) - len(jm.complete)
- if type_ == 'function':
- signature = _make_signature(jm)
- else:
- signature = ''
- yield Completion(start=offset - delta,
- end=offset,
- text=jm.name_with_symbols,
- type=type_,
- signature=signature,
- _origin='jedi')
-
- if time.monotonic() > deadline:
- break
-
- for jm in iter_jm:
- delta = len(jm.name_with_symbols) - len(jm.complete)
- yield Completion(
- start=offset - delta,
- end=offset,
- text=jm.name_with_symbols,
- type=_UNKNOWN_TYPE, # don't compute type for speed
- _origin="jedi",
- signature="",
- )
-
- # TODO:
- # Suppress this, right now just for debug.
- if jedi_matches and non_jedi_results and self.debug:
- some_start_offset = before.rfind(
- next(iter(non_jedi_results.values()))["matched_fragment"]
- )
- yield Completion(
- start=some_start_offset,
- end=offset,
- text="--jedi/ipython--",
- _origin="debug",
- type="none",
- signature="",
- )
-
- ordered: List[Completion] = []
- sortable: List[Completion] = []
-
- for origin, result in non_jedi_results.items():
- matched_text = result["matched_fragment"]
- start_offset = before.rfind(matched_text)
- is_ordered = result.get("ordered", False)
- container = ordered if is_ordered else sortable
-
- # I'm unsure if this is always true, so let's assert and see if it
- # crashes
- assert before.endswith(matched_text)
-
- for simple_completion in result["completions"]:
- completion = Completion(
- start=start_offset,
- end=offset,
- text=simple_completion.text,
- _origin=origin,
- signature="",
- type=simple_completion.type or _UNKNOWN_TYPE,
- )
- container.append(completion)
-
- yield from list(self._deduplicate(ordered + self._sort(sortable)))[
- :MATCHES_LIMIT
- ]
-
- def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
- """Find completions for the given text and line context.
-
- Note that both the text and the line_buffer are optional, but at least
- one of them must be given.
-
- Parameters
- ----------
- text : string, optional
- Text to perform the completion on. If not given, the line buffer
- is split using the instance's CompletionSplitter object.
- line_buffer : string, optional
- If not given, the completer attempts to obtain the current line
- buffer via readline. This keyword allows clients which are
- requesting for text completions in non-readline contexts to inform
- the completer of the entire text.
- cursor_pos : int, optional
- Index of the cursor in the full line buffer. Should be provided by
- remote frontends where kernel has no access to frontend state.
-
- Returns
- -------
- Tuple of two items:
- text : str
- Text that was actually used in the completion.
- matches : list
- A list of completion matches.
-
- Notes
- -----
- This API is likely to be deprecated and replaced by
- :any:`IPCompleter.completions` in the future.
-
- """
- warnings.warn('`Completer.complete` is pending deprecation since '
- 'IPython 6.0 and will be replaced by `Completer.completions`.',
- PendingDeprecationWarning)
- # potential todo, FOLD the 3rd throw away argument of _complete
- # into the first 2 one.
- # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
- # TODO: should we deprecate now, or does it stay?
-
- results = self._complete(
- line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
- )
-
- jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
-
- return self._arrange_and_extract(
- results,
- # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
- skip_matchers={jedi_matcher_id},
- # this API does not support different start/end positions (fragments of token).
- abort_if_offset_changes=True,
- )
-
- def _arrange_and_extract(
- self,
- results: Dict[str, MatcherResult],
- skip_matchers: Set[str],
- abort_if_offset_changes: bool,
- ):
- sortable: List[AnyMatcherCompletion] = []
- ordered: List[AnyMatcherCompletion] = []
- most_recent_fragment = None
- for identifier, result in results.items():
- if identifier in skip_matchers:
- continue
- if not result["completions"]:
- continue
- if not most_recent_fragment:
- most_recent_fragment = result["matched_fragment"]
- if (
- abort_if_offset_changes
- and result["matched_fragment"] != most_recent_fragment
- ):
- break
- if result.get("ordered", False):
- ordered.extend(result["completions"])
- else:
- sortable.extend(result["completions"])
-
- if not most_recent_fragment:
- most_recent_fragment = "" # to satisfy typechecker (and just in case)
-
- return most_recent_fragment, [
- m.text for m in self._deduplicate(ordered + self._sort(sortable))
- ]
-
- def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
- full_text=None) -> _CompleteResult:
- """
- Like complete but can also return raw jedi completions as well as the
- origin of the completion text. This could (and should) be made much
- cleaner but that will be simpler once we drop the old (and stateful)
- :any:`complete` API.
-
- With the current provisional API, cursor_pos acts (depending on the
- caller) both as the offset in the ``text`` or ``line_buffer``, and as the
- ``column`` when passing multiline strings; this could/should be renamed
- but would add extra noise.
-
- Parameters
- ----------
- cursor_line
- Index of the line the cursor is on. 0 indexed.
- cursor_pos
- Position of the cursor in the current line/line_buffer/text. 0
- indexed.
- line_buffer : optional, str
- The current line the cursor is in; this is kept mostly for legacy
- reasons, as readline could only give us the single current line.
- Prefer `full_text`.
- text : str
- The current "token" the cursor is in, mostly also for historical
- reasons, as the completer would trigger only after the current line
- was parsed.
- full_text : str
- Full text of the current cell.
-
- Returns
- -------
- An ordered dictionary where keys are identifiers of completion
- matchers and values are ``MatcherResult``s.
- """
-
- # if the cursor position isn't given, the only sane assumption we can
- # make is that it's at the end of the line (the common case)
- if cursor_pos is None:
- cursor_pos = len(line_buffer) if text is None else len(text)
-
- if self.use_main_ns:
- self.namespace = __main__.__dict__
-
- # if text is either None or an empty string, rely on the line buffer
- if (not line_buffer) and full_text:
- line_buffer = full_text.split('\n')[cursor_line]
- if not text: # issue #11508: check line_buffer before calling split_line
- text = (
- self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
- )
-
- # If no line buffer is given, assume the input text is all there was
- if line_buffer is None:
- line_buffer = text
-
- # deprecated - do not use `line_buffer` in new code.
- self.line_buffer = line_buffer
- self.text_until_cursor = self.line_buffer[:cursor_pos]
-
- if not full_text:
- full_text = line_buffer
-
- context = CompletionContext(
- full_text=full_text,
- cursor_position=cursor_pos,
- cursor_line=cursor_line,
- token=text,
- limit=MATCHES_LIMIT,
- )
-
- # Start with a clean slate of completions
- results: Dict[str, MatcherResult] = {}
-
- jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
-
- suppressed_matchers: Set[str] = set()
-
- matchers = {
- _get_matcher_id(matcher): matcher
- for matcher in sorted(
- self.matchers, key=_get_matcher_priority, reverse=True
- )
- }
-
- for matcher_id, matcher in matchers.items():
- matcher_id = _get_matcher_id(matcher)
-
- if matcher_id in self.disable_matchers:
- continue
-
- if matcher_id in results:
- warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
-
- if matcher_id in suppressed_matchers:
- continue
-
- result: MatcherResult
- try:
- if _is_matcher_v1(matcher):
- result = _convert_matcher_v1_result_to_v2(
- matcher(text), type=_UNKNOWN_TYPE
- )
- elif _is_matcher_v2(matcher):
- result = matcher(context)
- else:
- api_version = _get_matcher_api_version(matcher)
- raise ValueError(f"Unsupported API version {api_version}")
- except:
- # Show the ugly traceback if the matcher causes an
- # exception, but do NOT crash the kernel!
- sys.excepthook(*sys.exc_info())
- continue
-
- # set a default value for the matched fragment if the matcher did not provide one.
- result["matched_fragment"] = result.get("matched_fragment", context.token)
-
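- # A matcher may recommend suppressing competing matchers; honour that
- # recommendation unless the `suppress_competing_matchers` configuration says
- # otherwise, and drop results already collected from the suppressed matchers.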
- if not suppressed_matchers:
- suppression_recommended: Union[bool, Set[str]] = result.get(
- "suppress", False
- )
-
- suppression_config = (
- self.suppress_competing_matchers.get(matcher_id, None)
- if isinstance(self.suppress_competing_matchers, dict)
- else self.suppress_competing_matchers
- )
- should_suppress = (
- (suppression_config is True)
- or (suppression_recommended and (suppression_config is not False))
- ) and has_any_completions(result)
-
- if should_suppress:
- suppression_exceptions: Set[str] = result.get(
- "do_not_suppress", set()
- )
- if isinstance(suppression_recommended, Iterable):
- to_suppress = set(suppression_recommended)
- else:
- to_suppress = set(matchers)
- suppressed_matchers = to_suppress - suppression_exceptions
-
- new_results = {}
- for previous_matcher_id, previous_result in results.items():
- if previous_matcher_id not in suppressed_matchers:
- new_results[previous_matcher_id] = previous_result
- results = new_results
-
- results[matcher_id] = result
-
- _, matches = self._arrange_and_extract(
- results,
- # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission?
- # If it was an omission, we can remove the filtering step; otherwise remove this comment.
- skip_matchers={jedi_matcher_id},
- abort_if_offset_changes=False,
- )
-
- # populate legacy stateful API
- self.matches = matches
-
- return results
-
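- # A minimal usage sketch (assuming an interactive IPython session where
- # ``ip = get_ipython()`` is available): the legacy ``complete`` entry point
- # wraps ``_complete`` and returns the matched fragment plus a flat list of
- # completion strings, while the provisional ``completions`` API yields rich
- # ``Completion`` objects.
- #
- #     from IPython.core.completer import provisionalcompleter
- #     ip = get_ipython()
- #     fragment, matches = ip.Completer.complete(line_buffer="import o")
- #     with provisionalcompleter():
- #         rich = list(ip.Completer.completions("import o", 8))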
- @staticmethod
- def _deduplicate(
- matches: Sequence[AnyCompletion],
- ) -> Iterable[AnyCompletion]:
- filtered_matches: Dict[str, AnyCompletion] = {}
- for match in matches:
- text = match.text
- if (
- text not in filtered_matches
- or filtered_matches[text].type == _UNKNOWN_TYPE
- ):
- filtered_matches[text] = match
-
- return filtered_matches.values()
-
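- # Rough deduplication sketch (hypothetical completions): of two completions
- # sharing the same text, a typed one replaces an earlier one whose type is
- # still ``_UNKNOWN_TYPE``, while an already-typed completion is kept as-is.
- #
- #     _deduplicate([SimpleCompletion("len", type=_UNKNOWN_TYPE),
- #                   SimpleCompletion("len", type="function")])
- #     # -> only the "function"-typed completion survives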
- @staticmethod
- def _sort(matches: Sequence[AnyCompletion]):
- return sorted(matches, key=lambda x: completions_sorting_key(x.text))
-
- @context_matcher()
- def fwd_unicode_matcher(self, context: CompletionContext):
- """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
- # TODO: use `context.limit` to terminate early once we matched the maximum
- # number that will be used downstream; can be added as an optional to
- # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
- fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
- return _convert_matcher_v1_result_to_v2(
- matches, type="unicode", fragment=fragment, suppress_if_matches=True
- )
-
- def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
- """
- Forward match a string starting with a backslash with a list of
- potential Unicode completions.
-
- Will compute the list of Unicode character names on first call and cache it.
-
- .. deprecated:: 8.6
- You can use :meth:`fwd_unicode_matcher` instead.
-
- Returns
- -------
- A tuple with:
- - the matched text (empty if there are no matches)
- - a list of potential completions (an empty tuple if there are none)
- """
- # TODO: self.unicode_names is a list of ~100k elements that we traverse on
- # each call. We could do a faster match using a trie.
-
- # Using pygtrie, the following seems to work:
-
- # s = PrefixSet()
-
- # for c in range(0,0x10FFFF + 1):
- # try:
- # s.add(unicodedata.name(chr(c)))
- # except ValueError:
- # pass
- # [''.join(k) for k in s.iter(prefix)]
-
- # But this would need to be benchmarked, and it adds an extra dependency.
-
- slashpos = text.rfind('\\')
- # if the text contains a backslash
- if slashpos > -1:
- # PERF: It's important that we don't access self._unicode_names
- # until we're inside this if-block. _unicode_names is lazily
- # initialized, and it takes a user-noticeable amount of time to
- # initialize it, so we don't want to initialize it unless we're
- # actually going to use it.
- s = text[slashpos + 1 :]
- sup = s.upper()
- candidates = [x for x in self.unicode_names if x.startswith(sup)]
- if candidates:
- return s, candidates
- candidates = [x for x in self.unicode_names if sup in x]
- if candidates:
- return s, candidates
- splitsup = sup.split(" ")
- candidates = [
- x for x in self.unicode_names if all(u in x for u in splitsup)
- ]
- if candidates:
- return s, candidates
-
- return "", ()
-
- # if the text does not contain a backslash
- else:
- return '', ()
-
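- # Rough behaviour sketch (hypothetical session; actual candidates depend on
- # the Unicode database of the running Python):
- #
- #     completer.fwd_unicode_match("\\GREEK SMALL LETTER ALP")
- #     # -> ('GREEK SMALL LETTER ALP', ['GREEK SMALL LETTER ALPHA', ...])
- #     completer.fwd_unicode_match("no backslash here")
- #     # -> ('', ())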
- @property
- def unicode_names(self) -> List[str]:
- """List of names of unicode code points that can be completed.
-
- The list is lazily initialized on first access.
- """
- if self._unicode_names is None:
- self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
-
- return self._unicode_names
-
-def _unicode_name_compute(ranges: List[Tuple[int, int]]) -> List[str]:
- names = []
- for start, stop in ranges:
- for c in range(start, stop):
- try:
- names.append(unicodedata.name(chr(c)))
- except ValueError:
- pass
- return names
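-# A tiny illustration (hypothetical range, not the real _UNICODE_RANGES): with
-# ranges=[(0x41, 0x5B)], _unicode_name_compute would return the names
-# 'LATIN CAPITAL LETTER A' through 'LATIN CAPITAL LETTER Z'.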
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/display.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/display.py
deleted file mode 100644
index 6c0eff6884f7666548f2c701c3a901b59f4c5abc..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/display.py
+++ /dev/null
@@ -1,93 +0,0 @@
-"""Simple magics for display formats"""
-#-----------------------------------------------------------------------------
-# Copyright (c) 2012 The IPython Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-#-----------------------------------------------------------------------------
-# Imports
-#-----------------------------------------------------------------------------
-
-# Our own packages
-from IPython.display import display, Javascript, Latex, SVG, HTML, Markdown
-from IPython.core.magic import (
- Magics, magics_class, cell_magic
-)
-from IPython.core import magic_arguments
-
-#-----------------------------------------------------------------------------
-# Magic implementation classes
-#-----------------------------------------------------------------------------
-
-
-@magics_class
-class DisplayMagics(Magics):
- """Magics for displaying various output types with literals
-
- Defines javascript/latex/svg/html cell magics for writing
- blocks in those languages, to be rendered in the frontend.
- """
-
- @cell_magic
- def js(self, line, cell):
- """Run the cell block of Javascript code
-
- Alias of `%%javascript`
-
- Starting with IPython 8.0, %%javascript is pending deprecation, to be replaced
- by a more flexible system.
-
- Please see https://github.com/ipython/ipython/issues/13376
- """
- self.javascript(line, cell)
-
- @cell_magic
- def javascript(self, line, cell):
- """Run the cell block of Javascript code
-
- Starting with IPython 8.0, %%javascript is pending deprecation, to be replaced
- by a more flexible system.
-
- Please see https://github.com/ipython/ipython/issues/13376
- """
- display(Javascript(cell))
-
-
- @cell_magic
- def latex(self, line, cell):
- """Render the cell as a block of LaTeX
-
- The subset of LaTeX which is supported depends on the implementation in
- the client. In the Jupyter Notebook, this magic only renders the subset
- of LaTeX defined by MathJax
- [here](https://docs.mathjax.org/en/v2.5-latest/tex.html)."""
- display(Latex(cell))
-
- @cell_magic
- def svg(self, line, cell):
- """Render the cell as an SVG literal"""
- display(SVG(cell))
-
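- # A short usage sketch (hypothetical notebook cells; each magic wraps the
- # cell body in the corresponding IPython.display object):
- #
- #     %%latex
- #     \begin{align} e^{i\pi} + 1 = 0 \end{align}
- #
- #     %%svg
- #     <svg width="40" height="40"><circle cx="20" cy="20" r="15"/></svg>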
- @magic_arguments.magic_arguments()
- @magic_arguments.argument(
- '--isolated', action='store_true', default=False,
- help="""Annotate the cell as 'isolated'.
-Isolated cells are rendered inside their own