diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Damodarastakam-In-Malayalam-Pdf-31.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Damodarastakam-In-Malayalam-Pdf-31.md deleted file mode 100644 index 8c5c0ba8860266f7e2883d49137bd94d2b299908..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Damodarastakam-In-Malayalam-Pdf-31.md +++ /dev/null @@ -1,70 +0,0 @@ -## Damodarastakam In Malayalam Pdf 31 - - - - - - ![Damodarastakam In Malayalam Pdf 31](https://mymshoes.com.tr/modules//smartblog/images/20-single-default.jpg) - - - - - -**LINK 🆓 [https://eromdesre.blogspot.com/?d=2txKRe](https://eromdesre.blogspot.com/?d=2txKRe)** - - - - - - - - - - - - ``` - -# Damodarastakam In Malayalam Pdf 31: A Devotional Song for Kartik Maas - - - -Damodarastakam is a Sanskrit hymn composed by Satyavrata Muni, a great devotee of Lord Krishna. It describes the pastime of Krishna being bound by a rope (damodara) by His mother Yashoda as a punishment for stealing butter. This pastime is celebrated during the month of Kartik (October-November), also known as Damodara Maas, when devotees offer lamps to Krishna and sing Damodarastakam every day. - - - -Damodarastakam consists of eight verses, each ending with the refrain "namo namah tulasi krishna-preyasi", which means "I offer my respectful obeisances to Tulasi Devi, who is very dear to Lord Krishna". The hymn expresses the mood of surrender, humility, and love for Krishna, and also reveals the glories of Tulasi Devi, the sacred plant that is worshiped by Vaishnavas. - - - -Damodarastakam has been translated into many languages, including Malayalam, the official language of Kerala state in India. A PDF file of Damodarastakam in Malayalam with transliteration and meaning can be downloaded from [this link](https://sway.office.com/wk1QiLzrQegRtCRb). The PDF file contains 31 pages, with one verse per page. 
The file also has an introduction and a conclusion that explain the significance and benefits of Damodarastakam. - - - -Damodarastakam in Malayalam can also be listened to on YouTube, where several videos have been uploaded by devotees. One such video is [this one](https://www.youtube.com/watch?v=oVon63hGYtQ), which features the voice of Nimais Media, a channel dedicated to spreading Krishna consciousness. The video has over 42,000 views and 55 comments as of April 2023. - - - -Damodarastakam in Malayalam is a beautiful way to connect with Krishna and His devotees during Kartik Maas. By singing or hearing this hymn, one can attain the mercy of Krishna and Tulasi Devi, and become free from all material bondage. - - ``` ``` - -If you are wondering why Damodarastakam is so important and beneficial, here are some reasons. First of all, Damodarastakam is a hymn that was composed by Satyavrata Muni, a great sage and devotee of Lord Krishna. He sang this hymn in a conversation with Narada Muni and Saunaka Rishi, who were also great authorities on spiritual knowledge. Therefore, Damodarastakam has the potency to bestow the highest realization of Krishna's sweetness and mercy. - - - -Secondly, Damodarastakam is recommended to be recited during the month of Kartik, which is also known as Damodara Maas. This month is very dear to Lord Krishna, as it commemorates His pastime of being bound by His mother's love. During this month, devotees offer lamps to Krishna and sing Damodarastakam every day. By doing so, they please Krishna and attract His special blessings. It is said that any devotional service performed in this month is multiplied a thousand times. - - - -Thirdly, Damodarastakam reveals the essence of pure devotion to Krishna. It shows how Krishna is conquered by the love of His devotees, especially His mother Yashoda. It also shows how Krishna reciprocates with His devotees by manifesting His most charming and playful form as a child. 
It also shows how Krishna grants His devotees the highest benediction of prema-bhakti, or pure love for Him. - - - -Therefore, Damodarastakam is a treasure for all devotees of Krishna. By reading, hearing, or singing this hymn, one can experience the bliss of Krishna's pastimes and develop a deep attachment to Him. - - ``` 1b8d091108 - - - - - diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Agbot Silkroad.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Agbot Silkroad.md deleted file mode 100644 index 9ce16ebbb8b28f33995d7baade4da58d991ed766..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Agbot Silkroad.md +++ /dev/null @@ -1,25 +0,0 @@ - -

Agbot: A Popular Bot for Silkroad Online

-

Silkroad Online is a massively multiplayer online role-playing game that takes place in a historical fantasy world inspired by the ancient Silk Road trade route. Players can choose from three races: Chinese, European, and Arabian, and explore a vast open world full of quests, dungeons, and PvP battles.

-

However, some players may find the game too grindy or repetitive, and may want to use a bot to automate some of the tasks. A bot is a software program that can perform certain actions in the game without human input, such as fighting monsters, collecting loot, selling items, or using skills.

-

agbot silkroad


Download File --->>> https://byltly.com/2uKwDt



-

One of the most popular bots for Silkroad Online is Agbot, which works for version 1.227 of the game. Agbot is a free bot that can be downloaded from various websites, such as GameFront or Elitepvpers. Agbot has many features and options that allow players to customize their botting experience, such as:

- -

To use Agbot, players need to follow some steps to set it up correctly. First, they need to use the media patcher in the Silkroad Online folder to patch the port and redirect the DNS using the hosts file. Then, they need to delete the data folder in Agbot and rename the datar folder to data. Next, they need to open Agbot.exe and configure the nuconnector.ini file with the IP address 31.193.168.140 or 31.193.168.141. Finally, they need to open nuconnector1.3 in the Agbot folder and then open Silkroad Online, which will connect to nuconnector and load the modified nuconnector.ini file.
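For illustration, the DNS redirect and connector settings described above might look roughly like this. The login-server hostname and the ini section and key names are assumptions made up for this sketch (they vary by game version and Agbot build); only the IP address comes from the steps above.

```ini
; hosts file (C:\Windows\System32\drivers\etc\hosts); the hostname below is a
; placeholder, since the real login-server hostname depends on the game version:
; 127.0.0.1    gateway.silkroadonline.net

; nuconnector.ini; the section and key names here are assumptions, and only
; the IP address comes from the instructions above:
[Connector]
ServerIP=31.193.168.140
```

After both edits, launching the game makes it resolve the login server to the local connector, which in turn forwards traffic to the IP configured in nuconnector.ini.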

-

Once in the game, players can use Agbot to start botting by selecting their character name, choosing a hunting script, setting up their skills and options, and clicking on start. Agbot will then take over and perform the actions specified by the player.

-

Agbot is a useful tool for players who want to level up faster, earn more gold, or enjoy other aspects of Silkroad Online without spending too much time on grinding. However, players should also be aware of the risks involved in using a bot, such as getting banned by the game developers or losing their account information to hackers. Therefore, players should always use Agbot at their own discretion and responsibility.

- -

Using bots in online games can have both benefits and drawbacks for players and publishers. On one hand, bots can provide a convenient and fun way to practice skills, learn strategies, or enjoy the game without the pressure of competing with other human players. Bots can also help fill up servers, create diversity, and enhance the social aspects of online gaming by simulating different personalities and behaviors.

-

On the other hand, bots can also ruin the online gaming experience for many players and publishers by creating unfair advantages, disrupting the game balance, and violating the game rules. Bots can be used to cheat, spam, farm, or grief other players, which can lower their satisfaction and engagement with the game. Bots can also harm the game economy by inflating or deflating the value of in-game items and currencies, which can affect the revenue and reputation of the game publishers.

-

Therefore, it is important for game developers and designers to consider the impact of bots on their online games and to implement appropriate measures to prevent or mitigate their negative effects. Some possible solutions include detecting and banning bots, designing bot-proof game mechanics, educating and rewarding players for fair play, and creating official or authorized bots that can enhance the game experience without harming it.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bhaag Johnny Movie Download In Hindi 720p Download BEST.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bhaag Johnny Movie Download In Hindi 720p Download BEST.md deleted file mode 100644 index dc917d6ddad84e3b3e682cfd08c3ea839d83d0c3..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bhaag Johnny Movie Download In Hindi 720p Download BEST.md +++ /dev/null @@ -1,16 +0,0 @@ -
-

Bhaag Johnny Movie Download in Hindi 720p: How to Watch Online for Free

-

Bhaag Johnny is a 2015 Bollywood thriller movie starring Kunal Khemu, Zoa Morani and Mandana Karimi. The movie revolves around Johnny, a casanova who is blackmailed by his boss to kill a girl or lose his job. However, he gets a chance to live two lives: one where he commits the crime and another where he refuses and goes on the run. The movie explores the consequences of his choices and how they affect his love life and destiny.

-

Bhaag Johnny movie download in hindi 720p download


Download ✦✦✦ https://byltly.com/2uKxtj



-

If you are looking for Bhaag Johnny movie download in Hindi 720p, you might be disappointed to know that the movie is not available on any legal streaming platforms. The movie was released on Disney+ Hotstar, but it has been removed from the service due to some issues. Therefore, you cannot watch Bhaag Johnny online for free legally.

-

However, there are some illegal websites that claim to offer Bhaag Johnny movie download in Hindi 720p. These websites are not authorized by the makers or distributors of the movie and may contain viruses or malware that can harm your device. Moreover, downloading or streaming movies from such websites is a violation of the Indian Copyright Act and can land you in legal trouble.

-

Therefore, we advise you to stay away from such websites and watch Bhaag Johnny movie online only on Disney+ Hotstar when it becomes available again. You can subscribe to Disney+ Hotstar for a nominal fee and enjoy unlimited access to a vast library of movies, shows, sports and more. You can also watch Bhaag Johnny movie online on your smartphone, laptop, tablet or smart TV with a high-speed internet connection.

-

Bhaag Johnny is a thrilling and entertaining movie that will keep you hooked till the end. The movie has some amazing action sequences, suspenseful twists and turns, and a romantic angle that will make you root for Johnny. The movie also has some catchy songs composed by Mithoon, Yo Yo Honey Singh and Devi Sri Prasad. The movie is directed by Shivam Nair and produced by Bhushan Kumar, Krishan Kumar and Vikram Bhatt.

-

So, what are you waiting for? Watch Bhaag Johnny movie online on Disney+ Hotstar as soon as it becomes available and enjoy this thrilling ride with Johnny.

- -

Bhaag Johnny movie has an interesting plot that explores the concept of parallel lives and alternate realities. The movie is inspired by the German film Run Lola Run (1998), which also had a similar theme of a woman running to save her boyfriend's life in three different scenarios. Bhaag Johnny movie adds a twist to this idea by introducing a genie who gives Johnny the choice to live two lives simultaneously and see the outcomes of his actions.

-

-

Bhaag Johnny movie has a talented cast and crew who have worked hard to make this movie a success. The movie features Kunal Khemu as Johnny, who is known for his comic timing and versatile acting skills. He has previously worked in movies like Golmaal 3, Go Goa Gone and Lootcase. Zoa Morani plays Tanya, Johnny's love interest, who is an aspiring singer. She made her debut with Always Kabhi Kabhi (2011) and also appeared in Zindagi Na Milegi Dobara (2011). Mandana Karimi plays Rachel, the girl who Johnny is supposed to kill. She is an Iranian model and actress who was seen in Roy (2015) and Kyaa Kool Hain Hum 3 (2016). She also participated in Bigg Boss 9.

-

Bhaag Johnny movie is directed by Shivam Nair, who has helmed movies like Maharathi (2008), Ahista Ahista (2006) and Naam Shabana (2017). He has also directed several TV shows like Sea Hawks, CID and Special Squad. The movie is written by Vikram Bhatt, who is a renowned filmmaker and producer of movies like Raaz, 1920, Haunted and more. He also plays the role of the genie in the movie. The movie is produced by Bhushan Kumar, Krishan Kumar and Vikram Bhatt under the banners of T-Series and BVG Films.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fifa 08 Xbox.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fifa 08 Xbox.md deleted file mode 100644 index ce4b1480e2f1c8fcae15f8b33297f613947d395d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fifa 08 Xbox.md +++ /dev/null @@ -1,28 +0,0 @@ - -

FIFA 08 Xbox: A Review of the Classic Soccer Game

-

If you are a fan of soccer games, you may have played or heard of FIFA 08, a football simulation video game developed by EA Canada and published by Electronic Arts under the EA Sports label. FIFA 08 was released on all popular gaming formats in September 2007 in Europe, Australia and Asia, and in October 2007 in North America. The PlayStation 3 and Xbox 360 versions of the game feature an improved game engine with superior graphics and different commentators and are dubbed "next-generation" by EA. In this article, we will focus on the Xbox 360 version of FIFA 08 and review its features, gameplay, and pros and cons.

-

Features of FIFA 08 Xbox

-

FIFA 08 Xbox has many features that make it a realistic and enjoyable soccer game. Some of the main features are:

-

fifa 08 xbox


DOWNLOADhttps://byltly.com/2uKwtZ



- -

Gameplay of FIFA 08 Xbox

-

FIFA 08 Xbox has a smooth and realistic gameplay that simulates the unpredictability and excitement of soccer. The game has various modes that cater to different preferences and skill levels. Some of the main modes are:

- -

Pros and Cons of FIFA 08 Xbox

-

FIFA 08 Xbox is a great soccer game that offers many features and modes for different tastes and preferences. However, it also has some drawbacks that may affect your enjoyment of the game. Here are some of the pros and cons of FIFA 08 Xbox:

- -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Age Of Empires 3 NO CD Crack [2021].dmg Hack Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/Age Of Empires 3 NO CD Crack [2021].dmg Hack Torrent.md deleted file mode 100644 index 769bb96cf9edda4019beb8de4a0ec616908f1d7a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Age Of Empires 3 NO CD Crack [2021].dmg Hack Torrent.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

The Age of Empires 3 crack free cheat code is super easy to use. In fact, we have built this site to be as easy to use as possible. Our intuitive interface and easy-to-follow prompts should make this the first choice for anyone looking to make the jump from the demo to the full version of the game. You can play the game immediately after downloading.

-

This is the best program to get a free Age of Empires 3 CD key. With this tool, you can download your Age of Empires 3 CD key for free. We are sure that you will enjoy playing Age of Empires 3 for free. It's time to let our free games download engine work for you.

-

Age of Empires 3 NO CD Crack.dmg hack torrent


DOWNLOAD » https://imgfil.com/2uxXUN



-

Anyone can play Age of Empires 3 for free. If you have the CD key, all you have to do is run the program, choose your CD key, and click the Generate Code button. Once you have the CD key, you can download the game from the website and play it for free.

-

The Age of Empires 3 key is an online crack program that allows you to download the game free of cost. With the help of this program, you can get the product key of Age of Empires 3 for free.

-

This is a powerful tool that will allow you to download a free Age of Empires 3 CD key without having to go through all the hassle of searching for the code online. This tool is completely safe and will not mess up your system. It is compatible with both 32-bit and 64-bit versions of Windows XP and Windows Vista.

-

Age of Empires III: The WarChiefs is the first official expansion pack for the real-time strategy game Age of Empires III. It was announced by Ensemble Studios and Microsoft Game Studios on March 7, 2006.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Cars Fast as Lightning APK for Android and Race with Lightning McQueen.md b/spaces/1phancelerku/anime-remove-background/Download Cars Fast as Lightning APK for Android and Race with Lightning McQueen.md deleted file mode 100644 index 22e3b3a9ede7cfb2b1e329735f1ac879640829b4..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Cars Fast as Lightning APK for Android and Race with Lightning McQueen.md +++ /dev/null @@ -1,135 +0,0 @@ -
-

How to Download Cars Fast as Lightning on Android

-

Do you love the Cars movie franchise and want to experience the thrill of racing with your favorite characters? If so, you might want to check out Cars Fast as Lightning, a fun and exciting racing game that lets you play as 20 different characters from the movies, customize your cars and tracks, build your own Radiator Springs, and enjoy animated cutscenes and voice acting. In this article, we will show you how to download and install this game on your android device, as well as some tips and tricks for playing it.

-

What is Cars Fast as Lightning?

-

Cars Fast as Lightning is a racing game based on the Cars movie franchise, developed by Gameloft. The game features a variety of characters from the movies, such as Lightning McQueen, Mater, Francesco Bernoulli, Sally, Doc Hudson, and more. You can choose your favorite character and race against other characters in different locations, such as Radiator Springs, Tokyo, Porto Corsa, London, Paris, and more. You can also customize your cars and tracks with different paint jobs, stickers, accessories, jumps, loops, and props.

-

how to download cars fast as lightning on android


Download >> https://jinyurl.com/2uNPoV



-

Features of the game

-

The game has many features that make it fun and engaging for players of all ages. Some of these features are:

-

Play as 20 different characters

-

You can play as 20 different characters from the Cars movies, each with their own personality, voice, and style. You can also unlock new characters by winning races and cups. Some of the characters you can play as are:

- -

Customize your cars and tracks

-

You can customize your cars and tracks with different paint jobs, stickers, accessories, jumps, loops, and props. You can make your cars look cool and unique with different colors, patterns, decals, spoilers, wheels, exhausts, lights, horns, and more. You can also make your tracks more fun and challenging with different ramps, bridges, tunnels, cacti, dinosaurs, rockets, fireworks, and more.

-

Build your own Radiator Springs

You can build your own Radiator Springs by placing different buildings, landmarks, and decorations. You can create your own version of the town from the movies, or make your own original design. You can also visit other players' towns and see how they have built their own Radiator Springs.

-

Enjoy animated cutscenes and voice acting

-

The game has animated cutscenes and voice acting that make it feel like you are watching a Cars movie. You can see your favorite characters interact with each other, crack jokes, and show their emotions. You can also hear their original voices from the movies, such as Owen Wilson as Lightning McQueen, Larry the Cable Guy as Mater, John Turturro as Francesco Bernoulli, and more.

-

How to download and install the game on your android device

-

If you are interested in playing Cars Fast as Lightning on your android device, you will need to download and install the game from the Google Play Store. Here are the requirements and steps for doing so.

-

Requirements for the game

-

Before you download and install the game, you will need to make sure that your device meets the following requirements:

-

Android version 4.0 or higher

-

The game requires Android version 4.0 or higher to run smoothly. You can check your device's Android version by going to Settings > About phone > Software information.

-

How to install cars fast as lightning on android
-How to get cars fast as lightning for android
-How to play cars fast as lightning on android
-How to download cars fast as lightning apk for android
-How to download cars fast as lightning game on android
-How to download and install cars fast as lightning on android
-How to download cars fast as lightning mod apk for android
-How to download cars fast as lightning hack for android
-How to download cars fast as lightning latest version for android
-How to download cars fast as lightning offline for android
-How to download cars fast as lightning from play store on android
-How to download cars fast as lightning without internet on android
-How to download cars fast as lightning unlimited money for android
-How to download cars fast as lightning cheats for android
-How to download cars fast as lightning free for android
-How to download cars fast as lightning full version for android
-How to download cars fast as lightning in pc for android
-How to download cars fast as lightning in bluestacks for android
-How to download cars fast as lightning in laptop for android
-How to download cars fast as lightning in windows 10 for android
-How to download cars fast as lightning disney pixar game for android
-How to download cars fast as lightning gameloft game for android
-How to download cars fast as lightning racing game for android
-How to download cars fast as lightning city building game for android
-How to download cars fast as lightning 3d game for android
-How to download cars fast as lightning with nitro boost for android
-How to download cars fast as lightning with stunts for android
-How to download cars fast as lightning with Owen Wilson voice for android
-How to download cars fast as lightning with Lightning McQueen character for android
-How to download cars fast as lightning with Mater character for android
-How to download cars fast as lightning with Francesco character for android
-How to download cars fast as lightning with Radiator Springs theme for android
-How to download cars fast as lightning with Rocky Loops theme for android
-How to download cars fast as lightning with Roller Coasters theme for android
-How to download cars fast as lightning with Luigi's Casa Della Tires theme for android
-How to download cars fast as lightning with Fillmore's Taste-In theme for android
-How to download cars fast as lightning with 20 Cars characters for android
-How to download cars fast as lightning with 30 town buildings for android
-How to download cars fast as lightning with animated cutscenes for android
-How to download cars fast as lightning with voice acting for android
-How to download cars fast as lightning with easy controls for android
-How to download cars fast as lightning with high-quality graphics for android
-How to download cars fast as lightning with fun animations for android
-How to download cars fast as lightning with kids-friendly gameplay for android
-How to download cars fast as lightning with fans-favorite gameplay for android
-How to download cars fast as lightning with free arcade gameplay for android
-How to download cars fast as lightning with customizable gameplay for android
-How to download cars fast as lightning with virtual currency gameplay for android
-How to download cars fast as lightning with in-app purchases gameplay for android

-

At least 1 GB of free storage space

-

The game takes up about 1 GB of storage space on your device. You will need to have at least that much free space available to download and install the game. You can check your device's storage space by going to Settings > Storage.

-

A stable internet connection

-

The game requires a stable internet connection to download and play. You will need to have a Wi-Fi or mobile data connection that is fast and reliable. You can check your device's internet connection by going to Settings > Network & internet.

-

Steps to download and install the game

-

Once you have made sure that your device meets the requirements, you can follow these steps to download and install the game:

-

Go to the Google Play Store app on your device

-

The Google Play Store app is where you can find and download apps and games for your android device. You can access it by tapping on its icon on your home screen or app drawer.

-

Search for Cars Fast as Lightning or use this link

-

You can search for Cars Fast as Lightning by typing its name in the search bar at the top of the app. Alternatively, you can use this link to go directly to the game's page on the Google Play Store.

-

Tap on the Install button and wait for the download to finish


-

Once you have found the game on the Google Play Store, you can tap on the Install button to start the download process. You will see a progress bar that shows how much of the game has been downloaded. You will need to wait for the download to finish before you can install and play the game.

-

Tap on the Open button and enjoy the game

-

After the download is complete, you will see an Open button that lets you launch the game. You can tap on it to start the game and enjoy racing with your favorite Cars characters. You can also find the game's icon on your home screen or app drawer and tap on it to open the game anytime.

-

Tips and tricks for playing the game

-

Now that you have downloaded and installed Cars Fast as Lightning on your android device, you might want to know some tips and tricks for playing the game. Here are some of them:

-

How to win races and earn coins

-

Races are the main mode of gameplay in Cars Fast as Lightning. You can race against other characters in different locations and try to beat them to the finish line. You can also earn coins by winning races, which you can use to customize your cars and tracks. Here are some tips for winning races and earning coins:

-

Tap on the screen to accelerate and release to drift

-

The game has a simple control scheme that lets you control your car's speed and direction. You can tap on the screen to accelerate and release to drift. Drifting helps you turn corners faster and avoid obstacles. You can also use drifting to perform stunts and tricks, which we will discuss later.

-


Collect lightning bolts and use them to boost your speed -

As you race, you will see lightning bolts on the track. These are power-ups that can help you boost your speed and gain an advantage over your opponents. You can collect them by driving over them or by performing stunts and tricks. You can use them by tapping on the lightning icon on the bottom right corner of the screen. You can also save them for later by tapping on the pause icon next to the lightning icon.

-

Perform stunts and tricks to fill up your turbo meter

-

Another way to boost your speed is by filling up your turbo meter. You can do this by performing stunts and tricks on the track, such as jumping, flipping, spinning, and drifting. You will see a blue bar on the top left corner of the screen that shows how much turbo you have. When it is full, you can tap on it to activate turbo mode, which makes your car go faster and glow with sparks. You can also use turbo mode to smash through obstacles and opponents.

-

Upgrade your cars and tracks to improve your performance

-

You can use the coins you earn from winning races to upgrade your cars and tracks. You can improve your car's speed, acceleration, handling, and nitro by buying new parts and accessories. You can also improve your track's difficulty, length, and fun factor by buying new props and decorations. Upgrading your cars and tracks can help you win more races and earn more coins.

-

How to unlock new characters and locations

-

The game has a lot of characters and locations to unlock and explore. You can unlock new characters by winning races and cups, and new locations by completing missions and challenges. Here are some tips for unlocking new characters and locations:

-


Complete missions and challenges to earn stars -

Missions and challenges are tasks that you can complete to earn stars. Stars are used to unlock new cups and tournaments, which in turn unlock new characters and tracks. You can see your missions and challenges by tapping on the map icon on the bottom left corner of the screen. You can also see how many stars you have and how many you need to unlock the next cup or tournament by tapping on the trophy icon on the top right corner of the screen.

-

Use stars to unlock new cups and tournaments

-

Cups and tournaments are series of races that you can compete in to win prizes and unlock new characters and tracks. You can access them by tapping on the cup icon on the bottom right corner of the screen. You will see different cups and tournaments with different themes, such as Radiator Springs Cup, Tokyo Cup, World Grand Prix, and more. You will need a certain number of stars to unlock each cup or tournament. You will also need to have a specific character to enter each cup or tournament.

-

Win races and cups to unlock new characters and tracks

-

Winning races and cups is the main way to unlock new characters and tracks. You will see a lock icon on the characters and tracks that you have not unlocked yet. You will need to win a specific race or cup to unlock them. For example, you will need to win the Radiator Springs Cup to unlock Mater, or the Tokyo Cup to unlock Shu Todoroki. You will also see a star icon on the characters and tracks that you have unlocked but not played yet. You can tap on them to play as them or race on them.

-

Visit other players' towns and race against them

-

You can also unlock new characters and tracks by visiting other players' towns and racing against them. You can access this feature by tapping on the social icon on the top left corner of the screen. You will see a list of your friends who play the game, as well as random players from around the world. You can tap on their names to visit their towns and see how they have built their own Radiator Springs. You can also tap on the race icon next to their names to challenge them to a race. You can win coins, stars, and sometimes new characters and tracks by racing against other players.

-

Conclusion and FAQs

-

Cars Fast as Lightning is a fun and exciting racing game that lets you play as 20 different characters from the Cars movie franchise, customize your cars and tracks, build your own Radiator Springs, and enjoy animated cutscenes and voice acting. You can download and install this game on your android device by following the steps we have shown you in this article. You can also improve your skills and experience by following the tips and tricks we have shared with you. We hope you enjoy playing this game as much as we do.

-

If you have any questions or doubts about this game, you might find the answers in these FAQs:

-
- - - - - - - - - -
Q: How do I save my progress in the game?
A: The game automatically saves your progress every time you finish a race or make a change in your town. You can also manually save your progress by tapping on the settings icon on the top right corner of the screen and then tapping on the save icon.
Q: How do I restore my progress if I lose it or change my device?
A: You can restore your progress by connecting your game to your Facebook account. You can do this by tapping on the settings icon on the top right corner of the screen and then tapping on the connect icon. This will allow you to sync your progress across different devices and recover it if you lose it.
Q: How do I get more coins without spending real money?
A: You can get more coins by winning races, completing missions and challenges, visiting other players' towns, and watching ads. You can also get free coins by tapping on the gift icon on the top right corner of the screen and claiming your daily reward.
Q: How do I change the language of the game?
A: You can change the language of the game by tapping on the settings icon on the top right corner of the screen and then tapping on the language icon. You will see a list of available languages that you can choose from, such as English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Arabic, Chinese, Japanese, Korean, and more.
Q: How do I contact the support team if I have a problem with the game?
A: You can contact the support team by tapping on the settings icon on the top right corner of the screen and then tapping on the help icon. You will see a list of frequently asked questions and answers that might solve your problem. If you still need help, you can tap on the contact us icon and fill out a form with your name, email, device model, game version, and description of your problem. You can also attach a screenshot or a video of your problem if you have one. The support team will get back to you as soon as possible.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Euro Truck Simulator 2 and Customize Your Truck with Tons of Tuning Options.md b/spaces/1phancelerku/anime-remove-background/Download Euro Truck Simulator 2 and Customize Your Truck with Tons of Tuning Options.md deleted file mode 100644 index ec0ce464aa902178f24f5073108fdd99efd3b327..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Euro Truck Simulator 2 and Customize Your Truck with Tons of Tuning Options.md +++ /dev/null @@ -1,133 +0,0 @@ -
-

How to Download Euro Truck Simulator 2

-

Have you ever dreamed of becoming a truck driver and traveling across Europe? If so, you might want to check out Euro Truck Simulator 2, a popular simulation game that lets you do just that from the comfort of your home. Featuring licensed trucks with countless customization options and advanced driving physics, the game delivers an unparalleled driving experience that has made it the most popular truck driving simulator on the market. In this article, we will show you how to download Euro Truck Simulator 2 and enjoy its amazing features.

-

Why You Should Play Euro Truck Simulator 2

-

Euro Truck Simulator 2 is not only about driving - it's also about exploring, managing, and customizing. Here are some of the reasons why you should play this game:

-

download euro truck simulator 2


Downloadhttps://jinyurl.com/2uNJNs



- -

The Benefits of Driving Across Europe

-

One of the best things about Euro Truck Simulator 2 is that it allows you to explore different countries and cultures in Europe. You can learn about their history, geography, architecture, cuisine, and more. You can also admire landmark attractions such as the Eiffel Tower, the Colosseum, and Big Ben, and experience different driving rules and regulations, such as speed limits, traffic signs, tolls, and fines. Driving across Europe is a great way to broaden your horizons and have fun at the same time.

-

The Challenges of Running Your Own Trucking Business

-

Another aspect of Euro Truck Simulator 2 is that it lets you run your own trucking business. You can start from scratch and work your way up to become a successful entrepreneur. You can buy new trucks, upgrade them, hire drivers, assign them routes, and monitor their performance. You can also manage your finances, loans, expenses, and income. You can compete with other companies and try to win the best contracts and build the best reputation. Running your own trucking business is a rewarding and challenging experience that will test your skills and strategy.

-

The Customization Options for Your Trucks

-

One of the most enjoyable features of Euro Truck Simulator 2 is that it allows you to customize your trucks in a variety of ways. You can choose from over 40 licensed truck models from famous brands such as Volvo, Scania, MAN, DAF, Renault, and more. You can also tune your trucks to improve their performance, such as engine power, fuel efficiency, braking, suspension, and more. You can also paint your trucks with different colors and patterns, and add accessories such as lights, horns, exhausts, bumpers, and more. You can even create your own custom decals and logos to make your trucks stand out. Customizing your trucks is a fun and creative way to express yourself and show off your style.

-

Where to Download Euro Truck Simulator 2

-

Now that you know why you should play Euro Truck Simulator 2, you might be wondering where to download it. There are several platforms and sources where you can get the game, each with its own advantages and disadvantages. Here are some of the most popular ones:

-

Steam

-

Steam is one of the most popular and reliable platforms where you can download Euro Truck Simulator 2. Steam is a digital distribution service that offers a large library of games, including Euro Truck Simulator 2 and its expansions. Steam also provides automatic updates, cloud saving, achievements, multiplayer support, community features, and more. Steam is easy to use and secure, and you can access your games from any device with your Steam account.

-

How to Install Euro Truck Simulator 2 from Steam

-

To install Euro Truck Simulator 2 from Steam, you need to follow these steps:

-


-
    -
  1. Create a Steam account if you don't have one already.
  2. -
  3. Download and install the Steam client from the official website.
  4. -
  5. Launch the Steam client and log in with your account.
  6. -
  7. Go to the Store tab and search for Euro Truck Simulator 2.
  8. -
  9. Select the game and click on Add to Cart.
  10. -
  11. Proceed to checkout and choose your payment method.
  12. -
  13. After the payment is confirmed, the game will be added to your Library.
  14. -
  15. Go to your Library and select Euro Truck Simulator 2.
  16. -
  17. Click on Install and choose the destination folder for the game.
  18. -
  19. Wait for the download and installation to finish.
  20. -
  21. Click on Play and enjoy the game!
  22. -
-

Official Website

-

Another option where you can download Euro Truck Simulator 2 is the official website of the game. The official website offers direct downloads of the game and its expansions, as well as news, updates, support, merchandise, and more. The official website also has a blog where you can read about the development of the game and its future plans. The official website is a great source of information and resources for Euro Truck Simulator 2 fans.

-

How to Install Euro Truck Simulator 2 from the Official Website

-

To install Euro Truck Simulator 2 from the official website, you need to follow these steps:

-
    -
  1. Go to the official website of Euro Truck Simulator 2.
  2. -
  3. Click on Buy Now and choose your edition of the game.
  4. -
  5. Select your payment method and complete the purchase.
  6. -
  7. You will receive an email with a link to download the game installer.
  8. -
  9. Download the game installer and run it on your computer.
  10. -
  11. Follow the instructions on the screen to install the game.
  12. -
  13. Activate the game with the product key that you received in your email.
  14. -
  15. Launch the game and enjoy!
  16. -
-

Other , or just have fun with them. You can also join online forums and communities where you can discuss the game, share your experiences, ask for help, give feedback, and more. You can find multiplayer servers and online forums on websites such as TruckersMP, ETS2MP, SCS Software Forum, and more. Connecting with other players is a great way to make new friends and enjoy the game together.

-

Conclusion

-

Euro Truck Simulator 2 is a game that offers you a unique and immersive experience of driving a truck across Europe. You can explore different countries and cultures, run your own business, customize your trucks, and connect with other players. You can download the game from several sources, such as Steam or the official website. However, you should be careful when choosing your source, as some third-party sources may be unsafe or illegal. You can also use mods and community content to enhance your game in many ways; you can find them on websites such as Steam Workshop, ETS2 Mods, ETS2 World, and more. Be sure to back up your game files before installing mods, as they may not be compatible with your game version or with other mods. Euro Truck Simulator 2 will keep you entertained for hours on end. If you are looking for a realistic and fun simulation game, you should definitely give it a try.

-

FAQs

-

Here are some of the frequently asked questions about Euro Truck Simulator 2:

-
    -
  1. Q: How much does Euro Truck Simulator 2 cost?
    -A: The base game costs $19.99 on Steam and the official website. However, you can also buy bundles that include the game and its expansions for a discounted price. You can also wait for sales and promotions that offer the game for a lower price.
  2. -
  3. Q: What are the system requirements for Euro Truck Simulator 2?
    -A: The minimum system requirements for Euro Truck Simulator 2 are: -The recommended system requirements for Euro Truck Simulator 2 are:
  4. -
  5. Q: How many trucks are there in Euro Truck Simulator 2?
    -A: There are over 40 licensed truck models from famous brands such as Volvo, Scania, MAN, DAF, Renault, and more. You can also download mods that add more trucks to the game.
  6. -
  7. Q: How many countries are there in Euro Truck Simulator 2?
    -A: The base game includes 13 countries in Europe: Austria, Belgium, Czech Republic, France, Germany, Italy, Luxembourg, Netherlands, Poland, Slovakia, Switzerland, Hungary, and United Kingdom. You can also buy expansions that add more countries to the game, such as Scandinavia, Going East!, Vive la France!, Italia, Beyond the Baltic Sea, Road to the Black Sea, and Iberia.
  8. -
  9. Q: How do I update Euro Truck Simulator 2?
    -A: If you have downloaded the game from Steam or the official website, you will receive automatic updates whenever there is a new version of the game available. However, if you have downloaded the game from other sources , you may need to manually check for updates on the source's website or download the latest version of the game. However, you should be careful when updating the game from other sources, as they may not be compatible with your game version or other mods.
  10. -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Nebulous io Mod Apk Now and Unlock Unlimited Plasma and All Features.md b/spaces/1phancelerku/anime-remove-background/Download Nebulous io Mod Apk Now and Unlock Unlimited Plasma and All Features.md deleted file mode 100644 index 0db94efcd787e0f47babcb94c1c472690658d9d6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Nebulous io Mod Apk Now and Unlock Unlimited Plasma and All Features.md +++ /dev/null @@ -1,82 +0,0 @@ -
-

Download Nebulous IO Mod APK Unlimited Plasma - A Fun and Addictive Mobile Game

-

If you are looking for a fun and addictive mobile game that will keep you entertained for hours, then you should try Nebulous IO. Nebulous IO is a multiplayer online game where you control a blob and try to grow bigger by eating other blobs. Sounds simple, right? Well, not so fast. There are also other players who want to eat you, as well as viruses, black holes, and other obstacles that can make your life difficult. In this article, we will tell you everything you need to know about Nebulous IO, and how you can download Nebulous IO Mod APK Unlimited Plasma to enjoy the game with more features and advantages.

-

What is Nebulous IO?

-

Nebulous IO is a mobile game inspired by the popular web game Agar.io. It was developed by Simplicial Software and released in 2015, and it has over 10 million downloads on the Google Play Store with a rating of 4.4 out of 5 stars. The game is compatible with devices running Android 4.1 and up.

-

download nebulous io mod apk unlimited plasma


Download ⚹⚹⚹ https://jinyurl.com/2uNUin



-

Features of Nebulous IO

-

Nebulous IO has many features that make it an enjoyable and challenging game. Some of these features are:

- -

How to play Nebulous IO

-

The gameplay of Nebulous IO is simple but addictive. You start as a small blob and you have to move around the map and eat smaller blobs to grow bigger. You can also split your blob into smaller pieces to move faster or to escape from bigger blobs. However, be careful not to get eaten by bigger blobs or get trapped by viruses or black holes. The goal is to become the biggest blob on the server and dominate the leaderboard.

-

Why download Nebulous IO Mod APK Unlimited Plasma?

-

Nebulous IO is a free game that you can download from Google Play Store or App Store. However, if you want to enjoy the game with more features and advantages, you should download Nebulous IO Mod APK Unlimited Plasma. This is a modified version of the game that gives you unlimited plasma, which is the in-game currency that you can use to buy skins, items, clan tokens, and more. With unlimited plasma, you can unlock all the skins and items that you want and customize your blob however you like. You can also create your own clan and invite your friends to join you.

-

Benefits of Nebulous IO Mod APK Unlimited Plasma

-

Some of the benefits of downloading Nebulous IO Mod APK Unlimited Plasma are:

- -

How to download and install Nebulous IO Mod APK Unlimited Plasma

-

If you want to download and install Nebulous IO Mod APK Unlimited Plasma on your Android device, you need to follow these steps:

-
    -
  1. Click on this link to download the mod
  2. Allow your device to install apps from unknown sources by going to Settings > Security > Unknown Sources and enabling it
  3. -
  4. Locate the downloaded file in your file manager and tap on it to install it
  5. -
  6. Launch the game and enjoy unlimited plasma and other features
  7. -
-

Conclusion

-

Nebulous IO is a fun and addictive mobile game that you can play online or offline with millions of players around the world. You can customize your blob with hundreds of skins and items, and compete in various game modes and maps. If you want to have more fun and advantages, you should download Nebulous IO Mod APK Unlimited Plasma, which gives you unlimited plasma and access to all the features of the game. Download it now and have a blast!

-

FAQs

-

Here are some frequently asked questions about Nebulous IO Mod APK Unlimited Plasma:

-

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/EA SPORTS FIFA 23 Companion Build Manage and Compete in FUT.md b/spaces/1phancelerku/anime-remove-background/EA SPORTS FIFA 23 Companion Build Manage and Compete in FUT.md deleted file mode 100644 index 8289c57996059b60a16d1300d811fb16fecd9262..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/EA SPORTS FIFA 23 Companion Build Manage and Compete in FUT.md +++ /dev/null @@ -1,88 +0,0 @@ -
-

EA SPORTS™ FIFA 23 Companion Download: How to Manage Your FUT Club on the Go

-

If you are a fan of FIFA Ultimate Team (FUT), you might want to download the official EA SPORTS™ FIFA 23 Companion App on your mobile device. This app allows you to access and manage your FUT Club from anywhere, anytime, without having to log into your console or PC. In this article, we will explain what the FIFA 23 Companion App is, why you should use it, and how to download and use it.

-

What is the FIFA 23 Companion App?

-

The FIFA 23 Companion App is a mobile extension of FIFA Ultimate Team, the most popular mode in FIFA 23. It lets you build your dream squad from thousands of players past and present, customize your FUT Stadium, participate in FUT Events, trade on the Transfer Market, complete Squad Building Challenges, claim rewards, and more.

-

ea sportstm fifa 23 companion download


Download »»» https://jinyurl.com/2uNLDl



-

A mobile extension of FIFA Ultimate Team

-

The FIFA 23 Companion App gives you access to all the features and functions of FIFA Ultimate Team on your mobile device. You can create and edit your squads, buy and sell players, open packs, check your progress, and much more. You can also sync your app with your console or PC version of FIFA 23, so you can switch between devices seamlessly.

-

Compatible with Android and iOS devices

-

The FIFA 23 Companion App is available for both Android and iOS mobile devices and tablets. You can download it for free from Google Play or the App Store. The app requires an Internet connection (network fees may apply) and a compatible device. You can check the minimum requirements on the app's page before downloading it.

-

Requires FIFA 23 and an EA account to use

-

To use the FIFA 23 Companion App, you need to have FIFA 23 for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X|S, PC, or Stadia (sold separately) and an EA account. You also need to create a FUT Club and a FUT Security Question and Answer on your console or PC. Then, you can log in to your EA account from the app and connect to your FUT Club.

-

Why should you use the FIFA 23 Companion App?

-

The FIFA 23 Companion App offers many benefits for FUT enthusiasts. Here are some of the main reasons why you should use it:

-

FUT Stadium Customisation

-

The FIFA 23 Companion App allows you to customize every aspect of your FUT Stadium on the go. You can change your walkout music, goal celebrations, pyrotechnics, Tifos, banners, pitch patterns, seat colors, net shapes, and more. You can also flaunt your achievements and show off your style to your opponents.

-

FUT Events

-

The FIFA 23 Companion App lets you compete or collaborate in all new FUT Events to unlock rewards for your Club and the wider FUT Community. You can choose a side in Team Events and compete against other players in various challenges. Or you can join forces with other players in Community Events and track the collective progress towards a common goal.

-

-

Transfer Market

-

The FIFA 23 Companion App enables you to buy and sell players with the global FUT Community in the Transfer Market. You can search for players by name, rating, position, league, club, nationality, chemistry style, or price range. You can also bid on auctions, list your own players, and monitor your transactions. The Transfer Market is the best way to improve your squad and make some coins.

-

Squad Building Challenges

-

The FIFA 23 Companion App allows you to complete Squad Building Challenges (SBCs) on your mobile device. SBCs are puzzles that require you to build a squad that meets certain criteria, such as chemistry, rating, or nationality. You can exchange your squad for rewards, such as packs, coins, or special players. You can also browse and track the progress of all the available SBCs, including the ones that are exclusive to the app.

-

Rewards and Objectives

-

The FIFA 23 Companion App lets you claim your rewards and track your objectives on the go. You can collect your rewards from Division Rivals, FUT Champions, FUT Events, SBCs, and more. You can also view your active and completed objectives, such as Season Objectives, Milestones, Foundations, and Daily Objectives. You can earn XP, coins, packs, players, and other items by completing objectives.

-

How to download and use the FIFA 23 Companion App?

-

Downloading and using the FIFA 23 Companion App is easy and convenient. Here are the steps you need to follow:

-

Download from Google Play or App Store

-

The first step is to download the FIFA 23 Companion App from Google Play or the App Store on your mobile device. The app is free to download and use, but it takes up some storage space and consumes mobile data. You can check the app's page for the minimum requirements and ratings before downloading it.

-

Log in with your EA account

-

The next step is to log in with your EA account on the app. If you don't have an EA account, you can create one for free on the app or on the EA website. You need to use the same EA account that you use for FIFA 23 on your console or PC. You also need to accept the User Agreement and Privacy Policy of EA.

-

Connect to your FUT Club

-

The final step is to connect to your FUT Club on the app. You need to have FIFA 23 for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X|S, PC, or Stadia (sold separately) and a FUT Club created on your console or PC. You also need to set up a FUT Security Question and Answer on your console or PC. Then, you can select your platform and FUT Club on the app and start managing it.

-

Enjoy the features and benefits

-

Once you have connected to your FUT Club on the app, you can enjoy all the features and benefits that we have mentioned above. You can build your squad, customize your stadium, participate in events, trade on the market, complete challenges, claim rewards, and more. You can also sync your app with your console or PC version of FIFA 23, so you can switch between devices without losing any progress.

-

Conclusion

-

The FIFA 23 Companion App is a must-have for any FUT fan who wants to manage their FUT Club on the go. It offers many features and benefits that enhance your FUT experience and help you achieve your goals. You can download it for free from Google Play or the App Store and connect it to your EA account and FUT Club. Then, you can enjoy all the aspects of FIFA Ultimate Team on your mobile device.

-

FAQs

-

Here are some of the frequently asked questions about the FIFA 23 Companion App:

Q: Is the FIFA 23 Companion App safe to use?
A: Yes, the FIFA 23 Companion App is safe to use as long as you download it from official sources (Google Play or App Store) and log in with a secure EA account. The app uses encryption and authentication methods to protect your data and transactions.
Q: Can I use the FIFA 23 Companion App without FIFA 23?
A: No, you cannot use the FIFA 23 Companion App without FIFA 23. You need to have FIFA 23 for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X|S, PC, or Stadia (sold separately) and a FUT Club created on your console or PC to use the app. You also need to log in with the same EA account that you use for FIFA 23.
Q: Can I play matches on the FIFA 23 Companion App?
A: No, you cannot play matches on the FIFA 23 Companion App. The app is designed to help you manage your FUT Club, not to play the game itself. You can only play matches on your console or PC version of FIFA 23.
Q: How can I contact EA support if I have any issues with the FIFA 23 Companion App?
A: If you have any issues with the FIFA 23 Companion App, you can contact EA support through the app itself or through the EA website. You can also check the EA Help Center for FAQs, guides, and troubleshooting tips.
Q: How can I update the FIFA 23 Companion App?
A: The FIFA 23 Companion App will automatically update itself when there is a new version available. You can also check for updates manually on Google Play or the App Store. You should always keep your app updated to enjoy the latest features and improvements.

-
-
\ No newline at end of file diff --git a/spaces/52Hz/HWMNet_lowlight_enhancement/WT/transform.py b/spaces/52Hz/HWMNet_lowlight_enhancement/WT/transform.py deleted file mode 100644 index 562a077468ce8effdadfafef6bc8b6b7e0682cc3..0000000000000000000000000000000000000000 --- a/spaces/52Hz/HWMNet_lowlight_enhancement/WT/transform.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -import torch.nn as nn - -def dwt_init(x): - x01 = x[:, :, 0::2, :] / 2 - x02 = x[:, :, 1::2, :] / 2 - x1 = x01[:, :, :, 0::2] - x2 = x02[:, :, :, 0::2] - x3 = x01[:, :, :, 1::2] - x4 = x02[:, :, :, 1::2] - x_LL = x1 + x2 + x3 + x4 - x_HL = -x1 - x2 + x3 + x4 - x_LH = -x1 + x2 - x3 + x4 - x_HH = x1 - x2 - x3 + x4 - # print(x_HH[:, 0, :, :]) - return torch.cat((x_LL, x_HL, x_LH, x_HH), 1) - -def iwt_init(x): - r = 2 - in_batch, in_channel, in_height, in_width = x.size() - out_batch, out_channel, out_height, out_width = in_batch, int(in_channel / (r ** 2)), r * in_height, r * in_width - x1 = x[:, 0:out_channel, :, :] / 2 - x2 = x[:, out_channel:out_channel * 2, :, :] / 2 - x3 = x[:, out_channel * 2:out_channel * 3, :, :] / 2 - x4 = x[:, out_channel * 3:out_channel * 4, :, :] / 2 - h = torch.zeros([out_batch, out_channel, out_height, out_width])#.cuda() # - - h[:, :, 0::2, 0::2] = x1 - x2 - x3 + x4 - h[:, :, 1::2, 0::2] = x1 - x2 + x3 - x4 - h[:, :, 0::2, 1::2] = x1 + x2 - x3 - x4 - h[:, :, 1::2, 1::2] = x1 + x2 + x3 + x4 - - return h - - -class DWT(nn.Module): - def __init__(self): - super(DWT, self).__init__() - self.requires_grad = True - - def forward(self, x): - return dwt_init(x) - - -class IWT(nn.Module): - def __init__(self): - super(IWT, self).__init__() - self.requires_grad = True - - def forward(self, x): - return iwt_init(x) - - diff --git a/spaces/7hao/bingo/src/components/chat.tsx b/spaces/7hao/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/chat.tsx 
+++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
- -
- - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
- -
- ) : null} - - ) : null} -
- - -
- ) -} diff --git a/spaces/7hao/bingo/src/components/ui/dialog.tsx b/spaces/7hao/bingo/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
- {children} -
-
-) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/ADUPA/README/README.md b/spaces/ADUPA/README/README.md deleted file mode 100644 index 3301bc002a50f9d608062691d05261559224de69..0000000000000000000000000000000000000000 --- a/spaces/ADUPA/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🔥 -colorFrom: green -colorTo: green -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/zh_aishell_no_tone_sing.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/zh_aishell_no_tone_sing.py deleted file mode 100644 index fcb28831d80d7cc11b17baaf3814b0a1c2f827b8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/zh_aishell_no_tone_sing.py +++ /dev/null @@ -1,126 +0,0 @@ -import re -import jieba -from pypinyin import pinyin, Style -from text_to_speech.utils.text.text_norm import NSWNormalizer -from text_to_speech.data_gen.tts.txt_processors.base_text_processor import 
BaseTxtProcessor, register_txt_processors -from text_to_speech.utils.text.text_encoder import PUNCS, is_sil_phoneme - -ALL_SHENMU = ['zh', 'ch', 'sh', 'b', 'p', 'm', 'f', 'd', 't', 'n', 'l', 'g', 'k', 'h', 'j', - 'q', 'x', 'r', 'z', 'c', 's', 'y', 'w'] - - -@register_txt_processors('zh') -class TxtProcessor(BaseTxtProcessor): - table = {ord(f): ord(t) for f, t in zip( - u':,。!?【】()%#@&1234567890', - u':,.!?[]()%#@&1234567890')} - - @staticmethod - def sp_phonemes(): - return ['|', '#'] - - @staticmethod - def preprocess_text(text): - text = text.translate(TxtProcessor.table) - text = NSWNormalizer(text).normalize(remove_punc=False).lower() - text = re.sub("[\'\"()]+", "", text) - text = re.sub("[-]+", " ", text) - text = re.sub(f"[^ A-Za-z\u4e00-\u9fff{PUNCS}]", "", text) - text = re.sub(f"([{PUNCS}])+", r"\1", text) # !! -> ! - text = re.sub(f"([{PUNCS}])", r" \1 ", text) - text = re.sub(rf"\s+", r"", text) - text = re.sub(rf"[A-Za-z]+", r"$", text) - return text - - @classmethod - def pinyin_with_en(cls, txt, style): - x = pinyin(txt, style) - x = [t[0] for t in x] - x_ = [] - for t in x: - if '$' not in t: - x_.append(t) - else: - x_ += list(t) - x_ = [t if t != '$' else 'ENG' for t in x_] - return x_ - - @classmethod - def process(cls, txt, pre_align_args): - txt = cls.preprocess_text(txt) - txt = txt.replace("嗯", "蒽") # pypin会把嗯的声母韵母识别为'',导致ph2word出现错位。 - # https://blog.csdn.net/zhoulei124/article/details/89055403 - - pre_align_args['use_tone'] = False - - shengmu = cls.pinyin_with_en(txt, style=Style.INITIALS) - yunmu = cls.pinyin_with_en(txt, style= - Style.FINALS_TONE3 if pre_align_args['use_tone'] else Style.FINALS) - assert len(shengmu) == len(yunmu) - for i in range(len(shengmu)): - if shengmu[i] == '' and yunmu[i] == '': - print(f"发现了一个声母韵母都是空的文字:{txt[i]}") - ph_list = [] - for a, b in zip(shengmu, yunmu): - - if b == 'ueng': # 发现sing数据集里没有后鼻音 - b = 'uen' - - if a == b: - ph_list += [a] - else: - ph_list += [a + "%" + b] - seg_list = 
'#'.join(jieba.cut(txt)) - assert len(ph_list) == len([s for s in seg_list if s != '#']), (ph_list, seg_list) - - # 加入词边界'#' - ph_list_ = [] - seg_idx = 0 - for p in ph_list: - if seg_list[seg_idx] == '#': - ph_list_.append('#') - seg_idx += 1 - elif len(ph_list_) > 0: - ph_list_.append("|") - seg_idx += 1 - finished = False - if not finished: - ph_list_ += [x for x in p.split("%") if x != ''] - - ph_list = ph_list_ - - # 去除静音符号周围的词边界标记 [..., '#', ',', '#', ...] - sil_phonemes = list(PUNCS) + TxtProcessor.sp_phonemes() - ph_list_ = [] - for i in range(0, len(ph_list), 1): - if ph_list[i] != '#' or (ph_list[i - 1] not in sil_phonemes and ph_list[i + 1] not in sil_phonemes): - ph_list_.append(ph_list[i]) - ph_list = ph_list_ - - txt_struct = [[w, []] for w in txt] - i = 0 - for ph in ph_list: - if ph == '|' or ph == '#': - i += 1 - continue - # elif ph in [',', '.']: - elif ph in [',', '.', '?', '!', ':']: - i += 1 - txt_struct[i][1].append(ph) - i += 1 - continue - txt_struct[i][1].append(ph) - # return ph_list, txt - txt_struct.insert(0, ['_NONE', ['_NONE']]) - txt_struct.append(['breathe', ['breathe']]) - - # txt_struct.insert(0, ['', ['']]) - # txt_struct.append(['', ['']]) - return txt_struct, txt - - -if __name__ == '__main__': - # t = 'simon演唱过后,simon还进行了simon精彩的文艺演出simon.' - t = '你当我傻啊?脑子那么大怎么塞进去???' 
- phs, txt = TxtProcessor.process(t, {'use_tone': True}) - print(phs, txt) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/tf_layers.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/tf_layers.py deleted file mode 100644 index c0f46bd755c161cda2ac904fe37f3f3c6357a88d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/tf_layers.py +++ /dev/null @@ -1,129 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 MINH ANH (@dathudeptrai) -# MIT License (https://opensource.org/licenses/MIT) - -"""Tensorflow Layer modules complatible with pytorch.""" - -import tensorflow as tf - - -class TFReflectionPad1d(tf.keras.layers.Layer): - """Tensorflow ReflectionPad1d module.""" - - def __init__(self, padding_size): - """Initialize TFReflectionPad1d module. - - Args: - padding_size (int): Padding size. - - """ - super(TFReflectionPad1d, self).__init__() - self.padding_size = padding_size - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Padded tensor (B, T + 2 * padding_size, 1, C). - - """ - return tf.pad(x, [[0, 0], [self.padding_size, self.padding_size], [0, 0], [0, 0]], "REFLECT") - - -class TFConvTranspose1d(tf.keras.layers.Layer): - """Tensorflow ConvTranspose1d module.""" - - def __init__(self, channels, kernel_size, stride, padding): - """Initialize TFConvTranspose1d( module. - - Args: - channels (int): Number of channels. - kernel_size (int): kernel size. - strides (int): Stride width. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFConvTranspose1d, self).__init__() - self.conv1d_transpose = tf.keras.layers.Conv2DTranspose( - filters=channels, - kernel_size=(kernel_size, 1), - strides=(stride, 1), - padding=padding, - ) - - @tf.function - def call(self, x): - """Calculate forward propagation. 
- - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensors: Output tensor (B, T', 1, C'). - - """ - x = self.conv1d_transpose(x) - return x - - -class TFResidualStack(tf.keras.layers.Layer): - """Tensorflow ResidualStack module.""" - - def __init__(self, - kernel_size, - channels, - dilation, - bias, - nonlinear_activation, - nonlinear_activation_params, - padding, - ): - """Initialize TFResidualStack module. - - Args: - kernel_size (int): Kernel size. - channles (int): Number of channels. - dilation (int): Dilation ine. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFResidualStack, self).__init__() - self.block = [ - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - TFReflectionPad1d(dilation), - tf.keras.layers.Conv2D( - filters=channels, - kernel_size=(kernel_size, 1), - dilation_rate=(dilation, 1), - use_bias=bias, - padding="valid", - ), - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - ] - self.shortcut = tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Output tensor (B, T, 1, C). 
- - """ - _x = tf.identity(x) - for i, layer in enumerate(self.block): - _x = layer(_x) - shortcut = self.shortcut(x) - return shortcut + _x diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/data/extract_mel_spectrogram.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/data/extract_mel_spectrogram.py deleted file mode 100644 index 42cade483a576f7166011a25d7e4d4bb0ae0f55c..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/data/extract_mel_spectrogram.py +++ /dev/null @@ -1,151 +0,0 @@ -import argparse -import os -import os.path as P -from copy import deepcopy -from functools import partial -from glob import glob -from multiprocessing import Pool -from pathlib import Path - -import librosa -import numpy as np -import torchvision - - -class MelSpectrogram(object): - def __init__(self, sr, nfft, fmin, fmax, nmels, hoplen, spec_power, inverse=False): - self.sr = sr - self.nfft = nfft - self.fmin = fmin - self.fmax = fmax - self.nmels = nmels - self.hoplen = hoplen - self.spec_power = spec_power - self.inverse = inverse - - self.mel_basis = librosa.filters.mel(sr=sr, n_fft=nfft, fmin=fmin, fmax=fmax, n_mels=nmels) - - def __call__(self, x): - if self.inverse: - spec = librosa.feature.inverse.mel_to_stft( - x, sr=self.sr, n_fft=self.nfft, fmin=self.fmin, fmax=self.fmax, power=self.spec_power - ) - wav = librosa.griffinlim(spec, hop_length=self.hoplen) - return wav - else: - spec = np.abs(librosa.stft(x, n_fft=self.nfft, hop_length=self.hoplen)) ** self.spec_power - mel_spec = np.dot(self.mel_basis, spec) - return mel_spec - -class LowerThresh(object): - def __init__(self, min_val, inverse=False): - self.min_val = min_val - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x - else: - return np.maximum(self.min_val, x) - -class Add(object): - def __init__(self, val, inverse=False): - self.inverse = inverse - self.val = val - - def __call__(self, x): - if self.inverse: - return x - self.val - else: - 
return x + self.val - -class Subtract(Add): - def __init__(self, val, inverse=False): - self.inverse = inverse - self.val = val - - def __call__(self, x): - if self.inverse: - return x + self.val - else: - return x - self.val - -class Multiply(object): - def __init__(self, val, inverse=False) -> None: - self.val = val - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x / self.val - else: - return x * self.val - -class Divide(Multiply): - def __init__(self, val, inverse=False): - self.inverse = inverse - self.val = val - - def __call__(self, x): - if self.inverse: - return x * self.val - else: - return x / self.val - -class Log10(object): - def __init__(self, inverse=False): - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return 10 ** x - else: - return np.log10(x) - -class Clip(object): - def __init__(self, min_val, max_val, inverse=False): - self.min_val = min_val - self.max_val = max_val - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x - else: - return np.clip(x, self.min_val, self.max_val) - -class TrimSpec(object): - def __init__(self, max_len, inverse=False): - self.max_len = max_len - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x - else: - return x[:, :self.max_len] - -class MaxNorm(object): - def __init__(self, inverse=False): - self.inverse = inverse - self.eps = 1e-10 - - def __call__(self, x): - if self.inverse: - return x - else: - return x / (x.max() + self.eps) - - -TRANSFORMS_16000 = torchvision.transforms.Compose([ - MelSpectrogram(sr=16000, nfft=1024, fmin=125, fmax=7600, nmels=80, hoplen=1024//4, spec_power=1), - LowerThresh(1e-5), - Log10(), - Multiply(20), - Subtract(20), - Add(100), - Divide(100), - Clip(0, 1.0) - # TrimSpec(860) -]) - diff --git a/spaces/AILab-CVC/SEED-LLaMA/scripts/start_frontend_8b.sh b/spaces/AILab-CVC/SEED-LLaMA/scripts/start_frontend_8b.sh deleted file mode 100644 index 
3ce38fc0515bdb61391254aab2a6b5ec73479c51..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/scripts/start_frontend_8b.sh +++ /dev/null @@ -1 +0,0 @@ -python3 gradio_demo/seed_llama_gradio.py --server_port 80 --request_address http://127.0.0.1:7890/generate --model_type seed-llama-8b \ No newline at end of file diff --git a/spaces/AIWaves/SOP_Generation-single/gradio_backend.py b/spaces/AIWaves/SOP_Generation-single/gradio_backend.py deleted file mode 100644 index 8f278debf0df6b7e71b20421c6aeef32ecb5edf0..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/gradio_backend.py +++ /dev/null @@ -1,123 +0,0 @@ -import json -import os -import argparse -import sys -sys.path.append("Gradio_Config") -from SOP import SOP -from Agent import Agent -from Environment import Environment -from Memory import Memory -from gradio_base import Client, convert2list4agentname - -# add =================== -def process(action): - response = action.response - send_name = action.name - send_role = action.role - if not action.is_user: - print(f"{send_name}({send_role}):{response}") - memory = Memory(send_role, send_name, response) - return memory - -def gradio_process(action,current_state): - response = action.response - all = "" - for i,res in enumerate(response): - all+=res - state = 10 - if action.is_user: - state = 30 - elif action.state_begin: - state = 12 - action.state_begin = False - elif i>0: - state = 11 - send_name = f"{action.name}({action.role})" - Client.send_server(str([state, send_name, res, current_state.name])) - if state == 30: - # print("client: waiting for server") - data: list = next(Client.receive_server) - content = "" - for item in data: - if item.startswith(""): - content = item.split("")[1] - break - # print(f"client: received `{content}` from server.") - action.response = content - break - else: - action.response = all - -def prepare(agents, sop, environment): - client = Client() - Client.send_server = 
client.send_message - - client.send_message( - { - "agents_name": convert2list4agentname(sop)[0], - "api_key": os.environ["API_KEY"] - } - ) - print(f"client: {list(agents.keys())}") - client.listening_for_start_() - client.mode = Client.mode = client.cache["mode"] - os.environ["API_KEY"] = client.cache["api_key"] - uploaded_sop = Client.cache['uploaded_sop'] - agents,sop,environment = init(uploaded_sop) - run(agents,sop,environment) - -def block_when_next(current_agent, current_state): - if Client.LAST_USER: - assert not current_agent.is_user - Client.LAST_USER = False - return - if current_agent.is_user: - # if next turn is user, we don't handle it here - Client.LAST_USER = True - return - if Client.FIRST_RUN: - Client.FIRST_RUN = False - else: - # block current process - if Client.mode == Client.SINGLE_MODE: - Client.send_server(str([98, f"{current_agent.name}({current_agent.state_roles[current_state.name]})", " ", current_state.name])) - data: list = next(Client.receive_server) - -# ======================= - -def init(config): - if not os.path.exists("logs"): - os.mkdir("logs") - sop = SOP.from_config(config) - agents,roles_to_names,names_to_roles = Agent.from_config(config) - environment = Environment.from_config(config) - environment.agents = agents - environment.roles_to_names,environment.names_to_roles = roles_to_names,names_to_roles - sop.roles_to_names,sop.names_to_roles = roles_to_names,names_to_roles - for name,agent in agents.items(): - agent.environment = environment - return agents,sop,environment - -def run(agents,sop,environment): - while True: - current_state,current_agent= sop.next(environment,agents) - if sop.finished: - print("finished!") - Client.send_server(str([99, " ", " ", "done"])) - os.environ.clear() - break - block_when_next(current_agent, current_state) - action = current_agent.step(current_state) #component_dict = current_state[self.role[current_node.name]] current_agent.compile(component_dict) - gradio_process(action,current_state) 
- memory = process(action) - environment.update_memory(memory,current_state) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='A demo of chatbot') - parser.add_argument('--agent', type=str, help='path to SOP json',default="config.json") - args = parser.parse_args() - - agents,sop,environment = init(args.agent) - prepare(agents, sop, environment) - # run(agents,sop,environment) diff --git a/spaces/AIZeroToHero/02-Transformers-Sentence2Paragraph/README.md b/spaces/AIZeroToHero/02-Transformers-Sentence2Paragraph/README.md deleted file mode 100644 index 3e0168a9785c658bd05d494b537f58c060fc6dd3..0000000000000000000000000000000000000000 --- a/spaces/AIZeroToHero/02-Transformers-Sentence2Paragraph/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 02 Transformers Sentence2Paragraph -emoji: 🐨 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.1.5 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ANILYADAV/mygenaichatbot/README.md b/spaces/ANILYADAV/mygenaichatbot/README.md deleted file mode 100644 index f2beebedb64623ff4f84babae2e0a11bfbe5c920..0000000000000000000000000000000000000000 --- a/spaces/ANILYADAV/mygenaichatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mygenaichatbot -emoji: 📚 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/After-the-Dark/paragraph-similarity/app.py 
b/spaces/After-the-Dark/paragraph-similarity/app.py deleted file mode 100644 index eb32f094cf24fafa5e74351b94ef1e3a737854a9..0000000000000000000000000000000000000000 --- a/spaces/After-the-Dark/paragraph-similarity/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import gradio as gr -from sentence_transformers import SentenceTransformer, util -model_sentence = SentenceTransformer('all-MiniLM-L6-v2') - -para_1 =""" -Natural language processing (NLP) is a field of computer science that studies how computers can understand and process human language. NLP is a subfield of artificial intelligence (AI) that deals with the interaction between computers and human (natural) languages. - -NLP has a wide range of applications, including: - -Machine translation: translating text from one language to another -Text summarization: extracting the main points of a text -Question answering: answering questions posed in natural language -Text classification: classifying text into categories, such as spam or ham -Sentiment analysis: determining the sentiment of a text, such as positive, negative, or neutral -Natural language generation: generating text that is similar to human-written text -NLP is a challenging field, as human language is complex and nuanced. However, NLP has made significant progress in recent years, and it is now a powerful tool that can be used to solve a wide range of problems. - - -""" -para_2 =""" -Generative adversarial networks (GANs) are a type of machine learning model that can be used to generate realistic and creative content. GANs were first introduced in 2014 by Ian Goodfellow, and they have since been used to generate a wide range of content, including images, text, and music. - -GANs work by pitting two neural networks against each other in a game-like setting. One network, the generator, is responsible for creating new content. The other network, the discriminator, is responsible for determining whether the content created by the generator is real or fake. 
- -The generator is trained to create content that is as realistic as possible, while the discriminator is trained to distinguish between real and fake content. As the two networks compete against each other, they both become better at their respective tasks. - -GANs have been used to generate a wide range of content, including: - -Images: GANs have been used to generate realistic images of people, animals, and objects. -Text: GANs have been used to generate realistic text, such as news articles, blog posts, and even poetry. -Music: GANs have been used to generate realistic music, such as songs, symphonies, and even jazz improvisations. -GANs are a powerful tool that can be used to generate realistic and creative content. As GANs continue to develop, they are likely to be used to create even more amazing and impressive content in the future. - - -""" -def paragraph_similar(text1, text2): - sentences = [] - sentences.append(text1) - sentences.append(text2) - paraphrases = util.paraphrase_mining(model_sentence, sentences, corpus_chunk_size=len(sentences)) - return {"Similarity": [round(paraphrases[0][0], 2)]} - - -with gr.Blocks(title="Paragraph",css="footer {visibility: hidden}") as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown("## Paragraph Compare") - with gr.Row(): - with gr.Column(): - inputs_1 = gr.TextArea(label="Paragraph 1",value=para_1,interactive=True) - inputs_2 = gr.TextArea(label="Paragraph 2",value=para_2,interactive=True) - with gr.Column(): - btn = gr.Button(value="RUN") - output = gr.Label(label="output") - btn.click(fn=paragraph_similar,inputs=[inputs_1,inputs_2],outputs=[output]) -demo.launch() \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/LayoutBackgrounds.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/LayoutBackgrounds.js deleted file mode 100644 index 
3cfa39b558496f3cc5759b679bf0513c9494eb73..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/LayoutBackgrounds.js +++ /dev/null @@ -1,41 +0,0 @@ -import ResizeGameObject from '../../../plugins/utils/size/ResizeGameObject.js'; -import PreLayoutChild from './utils/PreLayoutChild.js'; -import LayoutChild from './utils/LayoutChild.js'; - -const ALIGN_CENTER = Phaser.Display.Align.CENTER; - -var LayoutBackgrounds = function () { - if (this.backgroundChildren === undefined) { - return; - } - var backgrounds = this.backgroundChildren; - - var startX = this.left, - startY = this.top; - var parentWidth = this.width, - parentHeight = this.height; - var child, childConfig, padding, - x, y, width, height; - for (var i = 0, cnt = backgrounds.length; i < cnt; i++) { - child = backgrounds[i]; - childConfig = child.rexSizer; - if (childConfig.hidden) { - continue; - } - - padding = childConfig.padding; - - PreLayoutChild.call(this, child); - - x = startX + padding.left; - y = startY + padding.top; - width = parentWidth - padding.left - padding.right; - height = parentHeight - padding.top - padding.bottom; - - ResizeGameObject(child, width, height); - - LayoutChild.call(this, child, x, y, width, height, ALIGN_CENTER); - } -} - -export default LayoutBackgrounds; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/input/IsLocalPointInKnob.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/input/IsLocalPointInKnob.js deleted file mode 100644 index 8ae388e261c7914ab6f379bdff60ebfb839cb68c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/input/IsLocalPointInKnob.js +++ /dev/null @@ -1,8 +0,0 @@ -var GetDistance = Phaser.Math.Distance.Between; - -var IsLocalPointInKnob = function (knob, localX, localY) { - var centerX = knob.width / 2; - return 
GetDistance(centerX, centerX, localX, localY) <= centerX; -} - -export default IsLocalPointInKnob; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/NameValueLabel.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/NameValueLabel.d.ts deleted file mode 100644 index c662d95c3bda525c3668c98b04cf7f463e3fe17e..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/NameValueLabel.d.ts +++ /dev/null @@ -1,94 +0,0 @@ -import LineProgressCanvas from '../lineprogresscanvas/LineProgressCanvas'; - -// import * as Phaser from 'phaser'; -import Sizer from '../sizer/Sizer'; - -export default NameValueLabel; - -declare namespace NameValueLabel { - - interface IConfig extends Sizer.IConfig { - space?: { - left?: number, right?: number, top?: number, bottom?: number, - - icon?: number, iconTop?: number, iconBottom?: number, iconLeft?: number, iconRight?: number, - - name?: number, - value?: number, - - bar?: number, barBottom?: number, barLeft?: number, barRight?: number, - - action?: number, actionTop?: number, actionBottom?: number, actionLeft?: number, actionRight?: number, - }, - - background?: Phaser.GameObjects.GameObject, - - icon?: Phaser.GameObjects.GameObject, - iconMask?: boolean, - - nameText?: Phaser.GameObjects.GameObject, - valueText?: Phaser.GameObjects.GameObject, - bar?: Phaser.GameObjects.GameObject | LineProgressCanvas.IConfig, - - action?: Phaser.GameObjects.GameObject, - actionMask?: boolean, - - valueTextFormatCallback?: ( - value: number, - min: number, - max: number - ) => string, - - align?: { - text?: 'left' | 'right' | 'center' | number, - title?: 'left' | 'right' | 'center' | number, - }, - - proportion?: { - title?: number, - separator?: number, - text?: number, - } - } -} - -declare class NameValueLabel extends Sizer { - constructor( - scene: Phaser.Scene, - 
config?: NameValueLabel.IConfig - ); - - nameText: string; - setNameText(value?:string):this; - - valueText: string; - setValueText(value?:string):this; - - barValue: number; - setBarValue( - value: number, - min?: number, - max?: number - ): this; - easeBarValueTo( - value: number, - min?: number, - max?: number - ): this; - - setTexture( - key: string | Phaser.Textures.Texture, - frame?: string | number - ): this; - readonly texture: Phaser.Textures.Texture | Phaser.Textures.CanvasTexture; - readonly frame: Phaser.Textures.Frame; - - setValue( - value: number, - min: number, - max: number - ): this; - value: number; - minValue: number; - maxValue: number; -} \ No newline at end of file diff --git a/spaces/Ajay07pandey/Netfilx_Movie_Recommendation_System/app.py b/spaces/Ajay07pandey/Netfilx_Movie_Recommendation_System/app.py deleted file mode 100644 index 9840f1a38dabf4be94d2fbf89033e52c0b681df3..0000000000000000000000000000000000000000 --- a/spaces/Ajay07pandey/Netfilx_Movie_Recommendation_System/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import streamlit as st -import pickle -import pandas as pd - - -st.image("Netflix.png") - -movies_list = pickle.load(open("content_dict.pkl",'br')) -movies = pd.DataFrame(movies_list) - -similarity= pickle.load(open('cosine_similarity.pkl','rb')) - -def recommend(title, cosine_sim=similarity, data=movies): - recommended_content=[] - # Get the index of the input title in the programme_list - programme_list = data['title'].to_list() - index = programme_list.index(title) - - - # Create a list of tuples containing the similarity score and index - # between the input title and all other programs in the dataset - sim_scores = list(enumerate(cosine_sim[index])) - - - # Sort the list of tuples by similarity score in descending order - sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)[1:11] - - - # Get the recommended movie titles and their similarity scores - recommend_index = [i[0] for i in sim_scores] - rec_movie = 
data['title'].iloc[recommend_index] - rec_score = [round(i[1], 4) for i in sim_scores] - - - # Create a pandas DataFrame to display the recommendations - rec_table = pd.DataFrame(list(zip(rec_movie, rec_score)), columns=['Recommendation', 'Similarity_score(0-1)']) - # recommended_content.append(rec_table['Recommendation'].values) - - - return rec_table['Recommendation'].values - - -# Displaying title -st.title(" Movie Recommender System ") - - -movie_list = movies['title'].values -selected_movie = st.selectbox( - "Type or select a movie from the dropdown", - movie_list -) - -# Setting a button -if st.button('Show Recommendation'): - recommended_movie_names = recommend(selected_movie) - st.balloons() - for j in recommended_movie_names: - st.write(j) \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/string-distance.pl b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/string-distance.pl deleted file mode 100644 index 9870fdf9c0ddaf4928da5fe4c11632facefbaa38..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/string-distance.pl +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/perl -w - -# Author: Ulf Hermjakob -# Release date: October 13, 2019 - -# Usage: string-distance.pl {-lc1 } {-lc2 } < STDIN > STDOUT -# Example: string-distance.pl -lc1 rus -lc2 ukr < STDIN > STDOUT -# Example: string-distance.pl < ../test/string-similarity-test-input.txt -# Input format: two strings per line (tab-separated, in Latin script) -# Strings in non-Latin scripts should first be romanized. (Recommended script: uroman.pl) -# Output format: repetition of the two input strings, plus the string distance between them (tab-separated). -# Additional output meta info lines at the top are marked with an initial #. -# -# The script uses data from a string-distance-cost-rules file that lists costs, -# where the default cost is "1" with lower costs for differences in vowels, -# duplicate consonants, "f" vs. "ph" etc. 
-# Language cost rules can be language-specific and context-sensitive. - -$|=1; - -use FindBin; -use Cwd "abs_path"; -use File::Basename qw(dirname); -use File::Spec; - -my $bin_dir = abs_path(dirname($0)); -my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir()); -my $data_dir = File::Spec->catfile($root_dir, "data"); -my $lib_dir = File::Spec->catfile($root_dir, "lib"); - -use lib "$FindBin::Bin/../lib"; -use List::Util qw(min max); -use NLP::utilities; -use NLP::stringDistance; -$util = NLP::utilities; -$sd = NLP::stringDistance; -$verbose = 0; -$separator = "\t"; - -$cost_rule_filename = File::Spec->catfile($data_dir, "string-distance-cost-rules.txt"); - -$lang_code1 = "eng"; -$lang_code2 = "eng"; -%ht = (); - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-+lc1$/) { - $lang_code_candidate = shift @ARGV; - $lang_code1 = $lang_code_candidate if $lang_code_candidate =~ /^[a-z]{3,3}$/; - } elsif ($arg =~ /^-+lc2$/) { - $lang_code_candidate = shift @ARGV; - $lang_code2 = $lang_code_candidate if $lang_code_candidate =~ /^[a-z]{3,3}$/; - } elsif ($arg =~ /^-+(v|verbose)$/) { - $verbose = shift @ARGV; - } else { - print STDERR "Ignoring unrecognized arg $arg\n"; - } -} - -$sd->load_string_distance_data($cost_rule_filename, *ht, $verbose); -print STDERR "Loaded resources.\n" if $verbose; - -my $chart_id = 0; -my $line_number = 0; -print "# Lang-code-1: $lang_code1 Lang-code-2: $lang_code2\n"; -while (<>) { - $line_number++; - if ($verbose) { - if ($line_number =~ /000$/) { - if ($line_number =~ /0000$/) { - print STDERR $line_number; - } else { - print STDERR "."; - } - } - } - my $line = $_; - $line =~ s/^\xEF\xBB\xBF//; - next if $line =~ /^\s*(\#.*)?$/; - my $s1; - my $s2; - if (($s1, $s2) = ($line =~ /^("(?:\\"|[^"])*"|\S+)$separator("(?:\\"|[^"])*"|\S+)\s*$/)) { - $s1 = $util->dequote_string($s1); - $s2 = $util->dequote_string($s2); - } elsif ($line =~ /^\s*(#.*)$/) { - } else { - print STDERR "Could not process line $line_number: $line" if 
$verbose; - print "\n"; - next; - } - - $cost = $sd->quick_romanized_string_distance_by_chart($s1, $s2, *ht, "", $lang_code1, $lang_code2); - print "$s1\t$s2\t$cost\n"; -} -print STDERR "\n" if $verbose; - -exit 0; - diff --git a/spaces/AlawnCN/webui-docker/README.md b/spaces/AlawnCN/webui-docker/README.md deleted file mode 100644 index c0814227cf1eed6f0ce20a7312bb2ae91d8b3b6b..0000000000000000000000000000000000000000 --- a/spaces/AlawnCN/webui-docker/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Stable Diffusion Web UI Docker -emoji: 🐳 -colorFrom: blue -colorTo: blue -sdk: docker -sdk_version: 3.9 -app_file: oh-no.py -pinned: false ---- - -## Stable Diffusion Web UI -https://github.com/AUTOMATIC1111/stable-diffusion-webui - -## Documentation -https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/facerender/modules/util.py b/spaces/Alpaca233/SadTalker/src/facerender/modules/util.py deleted file mode 100644 index b916deefbb8b957ad6ab3cd7403c28513e5ae18e..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/facerender/modules/util.py +++ /dev/null @@ -1,564 +0,0 @@ -from torch import nn - -import torch.nn.functional as F -import torch - -from src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d -from src.facerender.sync_batchnorm import SynchronizedBatchNorm3d as BatchNorm3d - -import torch.nn.utils.spectral_norm as spectral_norm - - -def kp2gaussian(kp, spatial_size, kp_variance): - """ - Transform a keypoint into gaussian like representation - """ - mean = kp['value'] - - coordinate_grid = make_coordinate_grid(spatial_size, mean.type()) - number_of_leading_dimensions = len(mean.shape) - 1 - shape = (1,) * number_of_leading_dimensions + coordinate_grid.shape - coordinate_grid = coordinate_grid.view(*shape) - repeats = 
mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 1) - coordinate_grid = coordinate_grid.repeat(*repeats) - - # Preprocess kp shape - shape = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 3) - mean = mean.view(*shape) - - mean_sub = (coordinate_grid - mean) - - out = torch.exp(-0.5 * (mean_sub ** 2).sum(-1) / kp_variance) - - return out - -def make_coordinate_grid_2d(spatial_size, type): - """ - Create a meshgrid [-1,1] x [-1,1] of given spatial_size. - """ - h, w = spatial_size - x = torch.arange(w).type(type) - y = torch.arange(h).type(type) - - x = (2 * (x / (w - 1)) - 1) - y = (2 * (y / (h - 1)) - 1) - - yy = y.view(-1, 1).repeat(1, w) - xx = x.view(1, -1).repeat(h, 1) - - meshed = torch.cat([xx.unsqueeze_(2), yy.unsqueeze_(2)], 2) - - return meshed - - -def make_coordinate_grid(spatial_size, type): - d, h, w = spatial_size - x = torch.arange(w).type(type) - y = torch.arange(h).type(type) - z = torch.arange(d).type(type) - - x = (2 * (x / (w - 1)) - 1) - y = (2 * (y / (h - 1)) - 1) - z = (2 * (z / (d - 1)) - 1) - - yy = y.view(1, -1, 1).repeat(d, 1, w) - xx = x.view(1, 1, -1).repeat(d, h, 1) - zz = z.view(-1, 1, 1).repeat(1, h, w) - - meshed = torch.cat([xx.unsqueeze_(3), yy.unsqueeze_(3), zz.unsqueeze_(3)], 3) - - return meshed - - -class ResBottleneck(nn.Module): - def __init__(self, in_features, stride): - super(ResBottleneck, self).__init__() - self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features//4, kernel_size=1) - self.conv2 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features//4, kernel_size=3, padding=1, stride=stride) - self.conv3 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features, kernel_size=1) - self.norm1 = BatchNorm2d(in_features//4, affine=True) - self.norm2 = BatchNorm2d(in_features//4, affine=True) - self.norm3 = BatchNorm2d(in_features, affine=True) - - self.stride = stride - if self.stride != 1: - self.skip = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=1, 
stride=stride) - self.norm4 = BatchNorm2d(in_features, affine=True) - - def forward(self, x): - out = self.conv1(x) - out = self.norm1(out) - out = F.relu(out) - out = self.conv2(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv3(out) - out = self.norm3(out) - if self.stride != 1: - x = self.skip(x) - x = self.norm4(x) - out += x - out = F.relu(out) - return out - - -class ResBlock2d(nn.Module): - """ - Res block, preserve spatial resolution. - """ - - def __init__(self, in_features, kernel_size, padding): - super(ResBlock2d, self).__init__() - self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.conv2 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.norm1 = BatchNorm2d(in_features, affine=True) - self.norm2 = BatchNorm2d(in_features, affine=True) - - def forward(self, x): - out = self.norm1(x) - out = F.relu(out) - out = self.conv1(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv2(out) - out += x - return out - - -class ResBlock3d(nn.Module): - """ - Res block, preserve spatial resolution. - """ - - def __init__(self, in_features, kernel_size, padding): - super(ResBlock3d, self).__init__() - self.conv1 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.conv2 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.norm1 = BatchNorm3d(in_features, affine=True) - self.norm2 = BatchNorm3d(in_features, affine=True) - - def forward(self, x): - out = self.norm1(x) - out = F.relu(out) - out = self.conv1(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv2(out) - out += x - return out - - -class UpBlock2d(nn.Module): - """ - Upsampling block for use in decoder. 
- """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(UpBlock2d, self).__init__() - - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm2d(out_features, affine=True) - - def forward(self, x): - out = F.interpolate(x, scale_factor=2) - out = self.conv(out) - out = self.norm(out) - out = F.relu(out) - return out - -class UpBlock3d(nn.Module): - """ - Upsampling block for use in decoder. - """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(UpBlock3d, self).__init__() - - self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm3d(out_features, affine=True) - - def forward(self, x): - # out = F.interpolate(x, scale_factor=(1, 2, 2), mode='trilinear') - out = F.interpolate(x, scale_factor=(1, 2, 2)) - out = self.conv(out) - out = self.norm(out) - out = F.relu(out) - return out - - -class DownBlock2d(nn.Module): - """ - Downsampling block for use in encoder. - """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(DownBlock2d, self).__init__() - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm2d(out_features, affine=True) - self.pool = nn.AvgPool2d(kernel_size=(2, 2)) - - def forward(self, x): - out = self.conv(x) - out = self.norm(out) - out = F.relu(out) - out = self.pool(out) - return out - - -class DownBlock3d(nn.Module): - """ - Downsampling block for use in encoder. 
- """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(DownBlock3d, self).__init__() - ''' - self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups, stride=(1, 2, 2)) - ''' - self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm3d(out_features, affine=True) - self.pool = nn.AvgPool3d(kernel_size=(1, 2, 2)) - - def forward(self, x): - out = self.conv(x) - out = self.norm(out) - out = F.relu(out) - out = self.pool(out) - return out - - -class SameBlock2d(nn.Module): - """ - Simple block, preserve spatial resolution. - """ - - def __init__(self, in_features, out_features, groups=1, kernel_size=3, padding=1, lrelu=False): - super(SameBlock2d, self).__init__() - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, - kernel_size=kernel_size, padding=padding, groups=groups) - self.norm = BatchNorm2d(out_features, affine=True) - if lrelu: - self.ac = nn.LeakyReLU() - else: - self.ac = nn.ReLU() - - def forward(self, x): - out = self.conv(x) - out = self.norm(out) - out = self.ac(out) - return out - - -class Encoder(nn.Module): - """ - Hourglass Encoder - """ - - def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256): - super(Encoder, self).__init__() - - down_blocks = [] - for i in range(num_blocks): - down_blocks.append(DownBlock3d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)), - min(max_features, block_expansion * (2 ** (i + 1))), - kernel_size=3, padding=1)) - self.down_blocks = nn.ModuleList(down_blocks) - - def forward(self, x): - outs = [x] - for down_block in self.down_blocks: - outs.append(down_block(outs[-1])) - return outs - - -class Decoder(nn.Module): - """ - Hourglass Decoder - """ - - def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256): 
- super(Decoder, self).__init__() - - up_blocks = [] - - for i in range(num_blocks)[::-1]: - in_filters = (1 if i == num_blocks - 1 else 2) * min(max_features, block_expansion * (2 ** (i + 1))) - out_filters = min(max_features, block_expansion * (2 ** i)) - up_blocks.append(UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1)) - - self.up_blocks = nn.ModuleList(up_blocks) - # self.out_filters = block_expansion - self.out_filters = block_expansion + in_features - - self.conv = nn.Conv3d(in_channels=self.out_filters, out_channels=self.out_filters, kernel_size=3, padding=1) - self.norm = BatchNorm3d(self.out_filters, affine=True) - - def forward(self, x): - out = x.pop() - # for up_block in self.up_blocks[:-1]: - for up_block in self.up_blocks: - out = up_block(out) - skip = x.pop() - out = torch.cat([out, skip], dim=1) - # out = self.up_blocks[-1](out) - out = self.conv(out) - out = self.norm(out) - out = F.relu(out) - return out - - -class Hourglass(nn.Module): - """ - Hourglass architecture. - """ - - def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256): - super(Hourglass, self).__init__() - self.encoder = Encoder(block_expansion, in_features, num_blocks, max_features) - self.decoder = Decoder(block_expansion, in_features, num_blocks, max_features) - self.out_filters = self.decoder.out_filters - - def forward(self, x): - return self.decoder(self.encoder(x)) - - -class KPHourglass(nn.Module): - """ - Hourglass architecture. 
- """ - - def __init__(self, block_expansion, in_features, reshape_features, reshape_depth, num_blocks=3, max_features=256): - super(KPHourglass, self).__init__() - - self.down_blocks = nn.Sequential() - for i in range(num_blocks): - self.down_blocks.add_module('down'+ str(i), DownBlock2d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)), - min(max_features, block_expansion * (2 ** (i + 1))), - kernel_size=3, padding=1)) - - in_filters = min(max_features, block_expansion * (2 ** num_blocks)) - self.conv = nn.Conv2d(in_channels=in_filters, out_channels=reshape_features, kernel_size=1) - - self.up_blocks = nn.Sequential() - for i in range(num_blocks): - in_filters = min(max_features, block_expansion * (2 ** (num_blocks - i))) - out_filters = min(max_features, block_expansion * (2 ** (num_blocks - i - 1))) - self.up_blocks.add_module('up'+ str(i), UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1)) - - self.reshape_depth = reshape_depth - self.out_filters = out_filters - - def forward(self, x): - out = self.down_blocks(x) - out = self.conv(out) - bs, c, h, w = out.shape - out = out.view(bs, c//self.reshape_depth, self.reshape_depth, h, w) - out = self.up_blocks(out) - - return out - - - -class AntiAliasInterpolation2d(nn.Module): - """ - Band-limited downsampling, for better preservation of the input signal. - """ - def __init__(self, channels, scale): - super(AntiAliasInterpolation2d, self).__init__() - sigma = (1 / scale - 1) / 2 - kernel_size = 2 * round(sigma * 4) + 1 - self.ka = kernel_size // 2 - self.kb = self.ka - 1 if kernel_size % 2 == 0 else self.ka - - kernel_size = [kernel_size, kernel_size] - sigma = [sigma, sigma] - # The gaussian kernel is the product of the - # gaussian function of each dimension. 
- kernel = 1 - meshgrids = torch.meshgrid( - [ - torch.arange(size, dtype=torch.float32) - for size in kernel_size - ] - ) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= torch.exp(-(mgrid - mean) ** 2 / (2 * std ** 2)) - - # Make sure sum of values in gaussian kernel equals 1. - kernel = kernel / torch.sum(kernel) - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) - - self.register_buffer('weight', kernel) - self.groups = channels - self.scale = scale - inv_scale = 1 / scale - self.int_inv_scale = int(inv_scale) - - def forward(self, input): - if self.scale == 1.0: - return input - - out = F.pad(input, (self.ka, self.kb, self.ka, self.kb)) - out = F.conv2d(out, weight=self.weight, groups=self.groups) - out = out[:, :, ::self.int_inv_scale, ::self.int_inv_scale] - - return out - - -class SPADE(nn.Module): - def __init__(self, norm_nc, label_nc): - super().__init__() - - self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False) - nhidden = 128 - - self.mlp_shared = nn.Sequential( - nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1), - nn.ReLU()) - self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1) - self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1) - - def forward(self, x, segmap): - normalized = self.param_free_norm(x) - segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest') - actv = self.mlp_shared(segmap) - gamma = self.mlp_gamma(actv) - beta = self.mlp_beta(actv) - out = normalized * (1 + gamma) + beta - return out - - -class SPADEResnetBlock(nn.Module): - def __init__(self, fin, fout, norm_G, label_nc, use_se=False, dilation=1): - super().__init__() - # Attributes - self.learned_shortcut = (fin != fout) - fmiddle = min(fin, fout) - self.use_se = use_se - # create conv layers - self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=dilation, 
dilation=dilation) - self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=dilation, dilation=dilation) - if self.learned_shortcut: - self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False) - # apply spectral norm if specified - if 'spectral' in norm_G: - self.conv_0 = spectral_norm(self.conv_0) - self.conv_1 = spectral_norm(self.conv_1) - if self.learned_shortcut: - self.conv_s = spectral_norm(self.conv_s) - # define normalization layers - self.norm_0 = SPADE(fin, label_nc) - self.norm_1 = SPADE(fmiddle, label_nc) - if self.learned_shortcut: - self.norm_s = SPADE(fin, label_nc) - - def forward(self, x, seg1): - x_s = self.shortcut(x, seg1) - dx = self.conv_0(self.actvn(self.norm_0(x, seg1))) - dx = self.conv_1(self.actvn(self.norm_1(dx, seg1))) - out = x_s + dx - return out - - def shortcut(self, x, seg1): - if self.learned_shortcut: - x_s = self.conv_s(self.norm_s(x, seg1)) - else: - x_s = x - return x_s - - def actvn(self, x): - return F.leaky_relu(x, 2e-1) - -class audio2image(nn.Module): - def __init__(self, generator, kp_extractor, he_estimator_video, he_estimator_audio, train_params): - super().__init__() - # Attributes - self.generator = generator - self.kp_extractor = kp_extractor - self.he_estimator_video = he_estimator_video - self.he_estimator_audio = he_estimator_audio - self.train_params = train_params - - def headpose_pred_to_degree(self, pred): - device = pred.device - idx_tensor = [idx for idx in range(66)] - idx_tensor = torch.FloatTensor(idx_tensor).to(device) - pred = F.softmax(pred) - degree = torch.sum(pred*idx_tensor, 1) * 3 - 99 - - return degree - - def get_rotation_matrix(self, yaw, pitch, roll): - yaw = yaw / 180 * 3.14 - pitch = pitch / 180 * 3.14 - roll = roll / 180 * 3.14 - - roll = roll.unsqueeze(1) - pitch = pitch.unsqueeze(1) - yaw = yaw.unsqueeze(1) - - roll_mat = torch.cat([torch.ones_like(roll), torch.zeros_like(roll), torch.zeros_like(roll), - torch.zeros_like(roll), torch.cos(roll), -torch.sin(roll), - 
torch.zeros_like(roll), torch.sin(roll), torch.cos(roll)], dim=1) - roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3) - - pitch_mat = torch.cat([torch.cos(pitch), torch.zeros_like(pitch), torch.sin(pitch), - torch.zeros_like(pitch), torch.ones_like(pitch), torch.zeros_like(pitch), - -torch.sin(pitch), torch.zeros_like(pitch), torch.cos(pitch)], dim=1) - pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3) - - yaw_mat = torch.cat([torch.cos(yaw), -torch.sin(yaw), torch.zeros_like(yaw), - torch.sin(yaw), torch.cos(yaw), torch.zeros_like(yaw), - torch.zeros_like(yaw), torch.zeros_like(yaw), torch.ones_like(yaw)], dim=1) - yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3) - - rot_mat = torch.einsum('bij,bjk,bkm->bim', roll_mat, pitch_mat, yaw_mat) - - return rot_mat - - def keypoint_transformation(self, kp_canonical, he): - kp = kp_canonical['value'] # (bs, k, 3) - yaw, pitch, roll = he['yaw'], he['pitch'], he['roll'] - t, exp = he['t'], he['exp'] - - yaw = self.headpose_pred_to_degree(yaw) - pitch = self.headpose_pred_to_degree(pitch) - roll = self.headpose_pred_to_degree(roll) - - rot_mat = self.get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3) - - # keypoint rotation - kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp) - - - - # keypoint translation - t = t.unsqueeze_(1).repeat(1, kp.shape[1], 1) - kp_t = kp_rotated + t - - # add expression deviation - exp = exp.view(exp.shape[0], -1, 3) - kp_transformed = kp_t + exp - - return {'value': kp_transformed} - - def forward(self, source_image, target_audio): - pose_source = self.he_estimator_video(source_image) - pose_generated = self.he_estimator_audio(target_audio) - kp_canonical = self.kp_extractor(source_image) - kp_source = self.keypoint_transformation(kp_canonical, pose_source) - kp_transformed_generated = self.keypoint_transformation(kp_canonical, pose_generated) - generated = self.generator(source_image, kp_source=kp_source, kp_driving=kp_transformed_generated) - return generated \ No newline at end of file 
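The `get_rotation_matrix` helper in the `audio2image` class above builds per-axis rotation matrices from predicted Euler angles and composes them as roll · pitch · yaw via `einsum('bij,bjk,bkm->bim', ...)`. A minimal dependency-free sketch of the same construction for a single pose follows; the function name and the test angles are illustrative, not from the source, and `math.radians` is used instead of the source's `/ 180 * 3.14` approximation:

```python
import math

def rotation_matrix(yaw_deg, pitch_deg, roll_deg):
    """Compose roll @ pitch @ yaw, mirroring get_rotation_matrix above."""
    yaw, pitch, roll = (math.radians(a) for a in (yaw_deg, pitch_deg, roll_deg))
    c, s = math.cos, math.sin
    # Row layouts match the torch.cat calls in the source:
    # roll rotates about x, pitch about y, yaw about z.
    roll_mat = [[1, 0, 0],
                [0, c(roll), -s(roll)],
                [0, s(roll), c(roll)]]
    pitch_mat = [[c(pitch), 0, s(pitch)],
                 [0, 1, 0],
                 [-s(pitch), 0, c(pitch)]]
    yaw_mat = [[c(yaw), -s(yaw), 0],
               [s(yaw), c(yaw), 0],
               [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    # Same composition order as the einsum over roll_mat, pitch_mat, yaw_mat.
    return matmul(matmul(roll_mat, pitch_mat), yaw_mat)

R = rotation_matrix(30.0, 10.0, -5.0)
# Sanity check: a rotation matrix is orthonormal, so R @ R^T == I.
for i in range(3):
    for j in range(3):
        rij = sum(R[i][k] * R[j][k] for k in range(3))
        assert abs(rij - (1.0 if i == j else 0.0)) < 1e-9
```

Using exact radian conversion is why the orthonormality check holds to tight tolerance; the `3.14` approximation in the source introduces a small, usually harmless, angular bias.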
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/controlnet_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/controlnet_flax.py deleted file mode 100644 index a826df48e41a632454c513877ec55be7f86089f9..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/controlnet_flax.py +++ /dev/null @@ -1,394 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import Optional, Tuple, Union - -import flax -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict - -from ..configuration_utils import ConfigMixin, flax_register_to_config -from ..utils import BaseOutput -from .embeddings_flax import FlaxTimestepEmbedding, FlaxTimesteps -from .modeling_flax_utils import FlaxModelMixin -from .unet_2d_blocks_flax import ( - FlaxCrossAttnDownBlock2D, - FlaxDownBlock2D, - FlaxUNetMidBlock2DCrossAttn, -) - - -@flax.struct.dataclass -class FlaxControlNetOutput(BaseOutput): - """ - The output of [`FlaxControlNetModel`]. 
- - Args: - down_block_res_samples (`jnp.ndarray`): - mid_block_res_sample (`jnp.ndarray`): - """ - - down_block_res_samples: jnp.ndarray - mid_block_res_sample: jnp.ndarray - - -class FlaxControlNetConditioningEmbedding(nn.Module): - conditioning_embedding_channels: int - block_out_channels: Tuple[int] = (16, 32, 96, 256) - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv_in = nn.Conv( - self.block_out_channels[0], - kernel_size=(3, 3), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - blocks = [] - for i in range(len(self.block_out_channels) - 1): - channel_in = self.block_out_channels[i] - channel_out = self.block_out_channels[i + 1] - conv1 = nn.Conv( - channel_in, - kernel_size=(3, 3), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - blocks.append(conv1) - conv2 = nn.Conv( - channel_out, - kernel_size=(3, 3), - strides=(2, 2), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - blocks.append(conv2) - self.blocks = blocks - - self.conv_out = nn.Conv( - self.conditioning_embedding_channels, - kernel_size=(3, 3), - padding=((1, 1), (1, 1)), - kernel_init=nn.initializers.zeros_init(), - bias_init=nn.initializers.zeros_init(), - dtype=self.dtype, - ) - - def __call__(self, conditioning): - embedding = self.conv_in(conditioning) - embedding = nn.silu(embedding) - - for block in self.blocks: - embedding = block(embedding) - embedding = nn.silu(embedding) - - embedding = self.conv_out(embedding) - - return embedding - - -@flax_register_to_config -class FlaxControlNetModel(nn.Module, FlaxModelMixin, ConfigMixin): - r""" - A ControlNet model. - - This model inherits from [`FlaxModelMixin`]. Check the superclass documentation for it’s generic methods - implemented for all models (such as downloading or saving). - - This model is also a Flax Linen [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. 
Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its - general usage and behavior. - - Inherent JAX features such as the following are supported: - - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - sample_size (`int`, *optional*): - The size of the input sample. - in_channels (`int`, *optional*, defaults to 4): - The number of channels in the input sample. - down_block_types (`Tuple[str]`, *optional*, defaults to `("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")`): - The tuple of downsample blocks to use. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): - The number of layers per block. - attention_head_dim (`int` or `Tuple[int]`, *optional*, defaults to 8): - The dimension of the attention heads. - num_attention_heads (`int` or `Tuple[int]`, *optional*): - The number of attention heads. - cross_attention_dim (`int`, *optional*, defaults to 768): - The dimension of the cross attention features. - dropout (`float`, *optional*, defaults to 0): - Dropout probability for down, up and bottleneck blocks. - flip_sin_to_cos (`bool`, *optional*, defaults to `True`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - controlnet_conditioning_channel_order (`str`, *optional*, defaults to `rgb`): - The channel order of conditional image. Will convert to `rgb` if it's `bgr`. 
- conditioning_embedding_out_channels (`tuple`, *optional*, defaults to `(16, 32, 96, 256)`): - The tuple of output channel for each block in the `conditioning_embedding` layer. - """ - sample_size: int = 32 - in_channels: int = 4 - down_block_types: Tuple[str] = ( - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "DownBlock2D", - ) - only_cross_attention: Union[bool, Tuple[bool]] = False - block_out_channels: Tuple[int] = (320, 640, 1280, 1280) - layers_per_block: int = 2 - attention_head_dim: Union[int, Tuple[int]] = 8 - num_attention_heads: Optional[Union[int, Tuple[int]]] = None - cross_attention_dim: int = 1280 - dropout: float = 0.0 - use_linear_projection: bool = False - dtype: jnp.dtype = jnp.float32 - flip_sin_to_cos: bool = True - freq_shift: int = 0 - controlnet_conditioning_channel_order: str = "rgb" - conditioning_embedding_out_channels: Tuple[int] = (16, 32, 96, 256) - - def init_weights(self, rng: jax.random.KeyArray) -> FrozenDict: - # init input tensors - sample_shape = (1, self.in_channels, self.sample_size, self.sample_size) - sample = jnp.zeros(sample_shape, dtype=jnp.float32) - timesteps = jnp.ones((1,), dtype=jnp.int32) - encoder_hidden_states = jnp.zeros((1, 1, self.cross_attention_dim), dtype=jnp.float32) - controlnet_cond_shape = (1, 3, self.sample_size * 8, self.sample_size * 8) - controlnet_cond = jnp.zeros(controlnet_cond_shape, dtype=jnp.float32) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.init(rngs, sample, timesteps, encoder_hidden_states, controlnet_cond)["params"] - - def setup(self): - block_out_channels = self.block_out_channels - time_embed_dim = block_out_channels[0] * 4 - - # If `num_attention_heads` is not defined (which is the case for most models) - # it will default to `attention_head_dim`. This looks weird upon first reading it and it is. 
- # The reason for this behavior is to correct for incorrectly named variables that were introduced - # when this library was created. The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 - # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking - # which is why we correct for the naming here. - num_attention_heads = self.num_attention_heads or self.attention_head_dim - - # input - self.conv_in = nn.Conv( - block_out_channels[0], - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - # time - self.time_proj = FlaxTimesteps( - block_out_channels[0], flip_sin_to_cos=self.flip_sin_to_cos, freq_shift=self.config.freq_shift - ) - self.time_embedding = FlaxTimestepEmbedding(time_embed_dim, dtype=self.dtype) - - self.controlnet_cond_embedding = FlaxControlNetConditioningEmbedding( - conditioning_embedding_channels=block_out_channels[0], - block_out_channels=self.conditioning_embedding_out_channels, - ) - - only_cross_attention = self.only_cross_attention - if isinstance(only_cross_attention, bool): - only_cross_attention = (only_cross_attention,) * len(self.down_block_types) - - if isinstance(num_attention_heads, int): - num_attention_heads = (num_attention_heads,) * len(self.down_block_types) - - # down - down_blocks = [] - controlnet_down_blocks = [] - - output_channel = block_out_channels[0] - - controlnet_block = nn.Conv( - output_channel, - kernel_size=(1, 1), - padding="VALID", - kernel_init=nn.initializers.zeros_init(), - bias_init=nn.initializers.zeros_init(), - dtype=self.dtype, - ) - controlnet_down_blocks.append(controlnet_block) - - for i, down_block_type in enumerate(self.down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - if down_block_type == "CrossAttnDownBlock2D": - down_block = 
FlaxCrossAttnDownBlock2D( - in_channels=input_channel, - out_channels=output_channel, - dropout=self.dropout, - num_layers=self.layers_per_block, - num_attention_heads=num_attention_heads[i], - add_downsample=not is_final_block, - use_linear_projection=self.use_linear_projection, - only_cross_attention=only_cross_attention[i], - dtype=self.dtype, - ) - else: - down_block = FlaxDownBlock2D( - in_channels=input_channel, - out_channels=output_channel, - dropout=self.dropout, - num_layers=self.layers_per_block, - add_downsample=not is_final_block, - dtype=self.dtype, - ) - - down_blocks.append(down_block) - - for _ in range(self.layers_per_block): - controlnet_block = nn.Conv( - output_channel, - kernel_size=(1, 1), - padding="VALID", - kernel_init=nn.initializers.zeros_init(), - bias_init=nn.initializers.zeros_init(), - dtype=self.dtype, - ) - controlnet_down_blocks.append(controlnet_block) - - if not is_final_block: - controlnet_block = nn.Conv( - output_channel, - kernel_size=(1, 1), - padding="VALID", - kernel_init=nn.initializers.zeros_init(), - bias_init=nn.initializers.zeros_init(), - dtype=self.dtype, - ) - controlnet_down_blocks.append(controlnet_block) - - self.down_blocks = down_blocks - self.controlnet_down_blocks = controlnet_down_blocks - - # mid - mid_block_channel = block_out_channels[-1] - self.mid_block = FlaxUNetMidBlock2DCrossAttn( - in_channels=mid_block_channel, - dropout=self.dropout, - num_attention_heads=num_attention_heads[-1], - use_linear_projection=self.use_linear_projection, - dtype=self.dtype, - ) - - self.controlnet_mid_block = nn.Conv( - mid_block_channel, - kernel_size=(1, 1), - padding="VALID", - kernel_init=nn.initializers.zeros_init(), - bias_init=nn.initializers.zeros_init(), - dtype=self.dtype, - ) - - def __call__( - self, - sample, - timesteps, - encoder_hidden_states, - controlnet_cond, - conditioning_scale: float = 1.0, - return_dict: bool = True, - train: bool = False, - ) -> Union[FlaxControlNetOutput, Tuple]: - r""" - Args: 
- sample (`jnp.ndarray`): (batch, channel, height, width) noisy input tensor - timesteps (`jnp.ndarray` or `float` or `int`): timesteps - encoder_hidden_states (`jnp.ndarray`): (batch_size, sequence_length, hidden_size) encoder hidden states - controlnet_cond (`jnp.ndarray`): (batch, channel, height, width) the conditional input tensor - conditioning_scale: (`float`) the scale factor for controlnet outputs - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition_flax.FlaxUNet2DConditionOutput`] instead of a - plain tuple. - train (`bool`, *optional*, defaults to `False`): - Use deterministic functions and disable dropout when not training. - - Returns: - [`~models.unet_2d_condition_flax.FlaxUNet2DConditionOutput`] or `tuple`: - [`~models.unet_2d_condition_flax.FlaxUNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is the sample tensor. - """ - channel_order = self.controlnet_conditioning_channel_order - if channel_order == "bgr": - controlnet_cond = jnp.flip(controlnet_cond, axis=1) - - # 1. time - if not isinstance(timesteps, jnp.ndarray): - timesteps = jnp.array([timesteps], dtype=jnp.int32) - elif isinstance(timesteps, jnp.ndarray) and len(timesteps.shape) == 0: - timesteps = timesteps.astype(dtype=jnp.float32) - timesteps = jnp.expand_dims(timesteps, 0) - - t_emb = self.time_proj(timesteps) - t_emb = self.time_embedding(t_emb) - - # 2. pre-process - sample = jnp.transpose(sample, (0, 2, 3, 1)) - sample = self.conv_in(sample) - - controlnet_cond = jnp.transpose(controlnet_cond, (0, 2, 3, 1)) - controlnet_cond = self.controlnet_cond_embedding(controlnet_cond) - sample += controlnet_cond - - # 3. 
down - down_block_res_samples = (sample,) - for down_block in self.down_blocks: - if isinstance(down_block, FlaxCrossAttnDownBlock2D): - sample, res_samples = down_block(sample, t_emb, encoder_hidden_states, deterministic=not train) - else: - sample, res_samples = down_block(sample, t_emb, deterministic=not train) - down_block_res_samples += res_samples - - # 4. mid - sample = self.mid_block(sample, t_emb, encoder_hidden_states, deterministic=not train) - - # 5. controlnet blocks - controlnet_down_block_res_samples = () - for down_block_res_sample, controlnet_block in zip(down_block_res_samples, self.controlnet_down_blocks): - down_block_res_sample = controlnet_block(down_block_res_sample) - controlnet_down_block_res_samples += (down_block_res_sample,) - - down_block_res_samples = controlnet_down_block_res_samples - - mid_block_res_sample = self.controlnet_mid_block(sample) - - # 6. scaling - down_block_res_samples = [sample * conditioning_scale for sample in down_block_res_samples] - mid_block_res_sample *= conditioning_scale - - if not return_dict: - return (down_block_res_samples, mid_block_res_sample) - - return FlaxControlNetOutput( - down_block_res_samples=down_block_res_samples, mid_block_res_sample=mid_block_res_sample - ) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_inpaint_legacy.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_inpaint_legacy.py deleted file mode 100644 index 235aa32f7338579210520c675b3776b830cbe3da..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_inpaint_legacy.py +++ /dev/null @@ -1,97 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import unittest - -import numpy as np - -from diffusers import OnnxStableDiffusionInpaintPipelineLegacy -from diffusers.utils.testing_utils import ( - is_onnx_available, - load_image, - load_numpy, - nightly, - require_onnxruntime, - require_torch_gpu, -) - - -if is_onnx_available(): - import onnxruntime as ort - - -@nightly -@require_onnxruntime -@require_torch_gpu -class StableDiffusionOnnxInpaintLegacyPipelineIntegrationTests(unittest.TestCase): - @property - def gpu_provider(self): - return ( - "CUDAExecutionProvider", - { - "gpu_mem_limit": "15000000000", # 15GB - "arena_extend_strategy": "kSameAsRequested", - }, - ) - - @property - def gpu_options(self): - options = ort.SessionOptions() - options.enable_mem_pattern = False - return options - - def test_inference(self): - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/overture-creations-5sI6fQgYIuo.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/overture-creations-5sI6fQgYIuo_mask.png" - ) - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/red_cat_sitting_on_a_park_bench_onnx.npy" - ) - - # using the PNDM scheduler by default - pipe = OnnxStableDiffusionInpaintPipelineLegacy.from_pretrained( - "CompVis/stable-diffusion-v1-4", 
- revision="onnx", - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - pipe.set_progress_bar_config(disable=None) - - prompt = "A red cat sitting on a park bench" - - generator = np.random.RandomState(0) - output = pipe( - prompt=prompt, - image=init_image, - mask_image=mask_image, - strength=0.75, - guidance_scale=7.5, - num_inference_steps=15, - generator=generator, - output_type="np", - ) - - image = output.images[0] - - assert image.shape == (512, 512, 3) - assert np.abs(expected_image - image).max() < 1e-2 diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_table.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_table.py deleted file mode 100644 index e9f290988916c29b270523897454b21172d91839..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_table.py +++ /dev/null @@ -1,185 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -import collections -import importlib.util -import os -import re - - -# All paths are set with the intent you should run this script from the root of the repo with the command -# python utils/check_table.py -TRANSFORMERS_PATH = "src/diffusers" -PATH_TO_DOCS = "docs/source/en" -REPO_PATH = "." 
- - -def _find_text_in_file(filename, start_prompt, end_prompt): - """ - Find the text in `filename` between a line beginning with `start_prompt` and before `end_prompt`, removing empty - lines. - """ - with open(filename, "r", encoding="utf-8", newline="\n") as f: - lines = f.readlines() - # Find the start prompt. - start_index = 0 - while not lines[start_index].startswith(start_prompt): - start_index += 1 - start_index += 1 - - end_index = start_index - while not lines[end_index].startswith(end_prompt): - end_index += 1 - end_index -= 1 - - while len(lines[start_index]) <= 1: - start_index += 1 - while len(lines[end_index]) <= 1: - end_index -= 1 - end_index += 1 - return "".join(lines[start_index:end_index]), start_index, end_index, lines - - -# Add here suffixes that are used to identify models, separated by | -ALLOWED_MODEL_SUFFIXES = "Model|Encoder|Decoder|ForConditionalGeneration" -# Regexes that match TF/Flax/PT model names. -_re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") -_re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") -# Will match any TF or Flax model too so need to be in an else branch after the two previous regexes. -_re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") - - -# This is to make sure the diffusers module imported is the one in the repo. -spec = importlib.util.spec_from_file_location( - "diffusers", - os.path.join(TRANSFORMERS_PATH, "__init__.py"), - submodule_search_locations=[TRANSFORMERS_PATH], -) -diffusers_module = spec.loader.load_module() - - -# Thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python -def camel_case_split(identifier): - "Split a camelcased `identifier` into words." 
- matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier) - return [m.group(0) for m in matches] - - -def _center_text(text, width): - text_length = 2 if text == "✅" or text == "❌" else len(text) - left_indent = (width - text_length) // 2 - right_indent = width - text_length - left_indent - return " " * left_indent + text + " " * right_indent - - -def get_model_table_from_auto_modules(): - """Generates an up-to-date model table from the content of the auto modules.""" - # Dictionary model names to config. - config_mapping_names = diffusers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES - model_name_to_config = { - name: config_mapping_names[code] - for code, name in diffusers_module.MODEL_NAMES_MAPPING.items() - if code in config_mapping_names - } - model_name_to_prefix = {name: config.replace("ConfigMixin", "") for name, config in model_name_to_config.items()} - - # Dictionaries flagging if each model prefix has a slow/fast tokenizer, backend in PT/TF/Flax. - slow_tokenizers = collections.defaultdict(bool) - fast_tokenizers = collections.defaultdict(bool) - pt_models = collections.defaultdict(bool) - tf_models = collections.defaultdict(bool) - flax_models = collections.defaultdict(bool) - - # Let's look through all diffusers objects (once). 
- for attr_name in dir(diffusers_module): - lookup_dict = None - if attr_name.endswith("Tokenizer"): - lookup_dict = slow_tokenizers - attr_name = attr_name[:-9] - elif attr_name.endswith("TokenizerFast"): - lookup_dict = fast_tokenizers - attr_name = attr_name[:-13] - elif _re_tf_models.match(attr_name) is not None: - lookup_dict = tf_models - attr_name = _re_tf_models.match(attr_name).groups()[0] - elif _re_flax_models.match(attr_name) is not None: - lookup_dict = flax_models - attr_name = _re_flax_models.match(attr_name).groups()[0] - elif _re_pt_models.match(attr_name) is not None: - lookup_dict = pt_models - attr_name = _re_pt_models.match(attr_name).groups()[0] - - if lookup_dict is not None: - while len(attr_name) > 0: - if attr_name in model_name_to_prefix.values(): - lookup_dict[attr_name] = True - break - # Try again after removing the last word in the name - attr_name = "".join(camel_case_split(attr_name)[:-1]) - - # Let's build that table! - model_names = list(model_name_to_config.keys()) - model_names.sort(key=str.lower) - columns = ["Model", "Tokenizer slow", "Tokenizer fast", "PyTorch support", "TensorFlow support", "Flax Support"] - # We'll need widths to properly display everything in the center (+2 is to leave one extra space on each side). 
- widths = [len(c) + 2 for c in columns] - widths[0] = max([len(name) for name in model_names]) + 2 - - # Build the table per se - table = "|" + "|".join([_center_text(c, w) for c, w in zip(columns, widths)]) + "|\n" - # Use ":-----:" format to center-aligned table cell texts - table += "|" + "|".join([":" + "-" * (w - 2) + ":" for w in widths]) + "|\n" - - check = {True: "✅", False: "❌"} - for name in model_names: - prefix = model_name_to_prefix[name] - line = [ - name, - check[slow_tokenizers[prefix]], - check[fast_tokenizers[prefix]], - check[pt_models[prefix]], - check[tf_models[prefix]], - check[flax_models[prefix]], - ] - table += "|" + "|".join([_center_text(l, w) for l, w in zip(line, widths)]) + "|\n" - return table - - -def check_model_table(overwrite=False): - """Check the model table in the index.rst is consistent with the state of the lib and maybe `overwrite`.""" - current_table, start_index, end_index, lines = _find_text_in_file( - filename=os.path.join(PATH_TO_DOCS, "index.md"), - start_prompt="", - ) - new_table = get_model_table_from_auto_modules() - - if current_table != new_table: - if overwrite: - with open(os.path.join(PATH_TO_DOCS, "index.md"), "w", encoding="utf-8", newline="\n") as f: - f.writelines(lines[:start_index] + [new_table] + lines[end_index:]) - else: - raise ValueError( - "The model table in the `index.md` has not been updated. Run `make fix-copies` to fix this." 
- ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") - args = parser.parse_args() - - check_model_table(args.fix_and_overwrite) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/cascade_rcnn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/cascade_rcnn.py deleted file mode 100644 index d873dceb7e4efdf8d1e7d282badfe9b7118426b9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/cascade_rcnn.py +++ /dev/null @@ -1,46 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class CascadeRCNN(TwoStageDetector): - r"""Implementation of `Cascade R-CNN: Delving into High Quality Object - Detection `_""" - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(CascadeRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def show_result(self, data, result, **kwargs): - """Show prediction results of the detector. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - - Returns: - np.ndarray: The image with bboxes drawn on it. 
- """ - if self.with_mask: - ms_bbox_result, ms_segm_result = result - if isinstance(ms_bbox_result, dict): - result = (ms_bbox_result['ensemble'], - ms_segm_result['ensemble']) - else: - if isinstance(result, dict): - result = result['ensemble'] - return super(CascadeRCNN, self).show_result(data, result, **kwargs) diff --git a/spaces/Anew1007/extras/README.md b/spaces/Anew1007/extras/README.md deleted file mode 100644 index 26145498355bebc5337e5f04940e5073fad22978..0000000000000000000000000000000000000000 --- a/spaces/Anew1007/extras/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: extras -emoji: 🧊 -colorFrom: blue -colorTo: green -sdk: docker -pinned: false -license: mit -duplicated_from: doctord98/extras ---- -Fixed Server.JS Latest 2023/08/16 \ No newline at end of file diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/share.py b/spaces/Anonymous-sub/Rerender/ControlNet/share.py deleted file mode 100644 index 463af08fb936d650b5dd2e66183661181c34a3d6..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/share.py +++ /dev/null @@ -1,8 +0,0 @@ -import config -from cldm.hack import disable_verbosity, enable_sliced_attention - - -disable_verbosity() - -if config.save_memory: - enable_sliced_attention() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/ansitowin32.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/ansitowin32.py deleted file mode 100644 index abf209e60c7c4a9b1ae57452e36b383969848c2e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/ansitowin32.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. 
-import re -import sys -import os - -from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style, BEL -from .winterm import enable_vt_processing, WinTerm, WinColor, WinStyle -from .win32 import windll, winapi_test - - -winterm = None -if windll is not None: - winterm = WinTerm() - - -class StreamWrapper(object): - ''' - Wraps a stream (such as stdout), acting as a transparent proxy for all - attribute access apart from method 'write()', which is delegated to our - Converter instance. - ''' - def __init__(self, wrapped, converter): - # double-underscore everything to prevent clashes with names of - # attributes on the wrapped stream object. - self.__wrapped = wrapped - self.__convertor = converter - - def __getattr__(self, name): - return getattr(self.__wrapped, name) - - def __enter__(self, *args, **kwargs): - # special method lookup bypasses __getattr__/__getattribute__, see - # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit - # thus, contextlib magic methods are not proxied via __getattr__ - return self.__wrapped.__enter__(*args, **kwargs) - - def __exit__(self, *args, **kwargs): - return self.__wrapped.__exit__(*args, **kwargs) - - def __setstate__(self, state): - self.__dict__ = state - - def __getstate__(self): - return self.__dict__ - - def write(self, text): - self.__convertor.write(text) - - def isatty(self): - stream = self.__wrapped - if 'PYCHARM_HOSTED' in os.environ: - if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__): - return True - try: - stream_isatty = stream.isatty - except AttributeError: - return False - else: - return stream_isatty() - - @property - def closed(self): - stream = self.__wrapped - try: - return stream.closed - # AttributeError in the case that the stream doesn't support being closed - # ValueError for the case that the stream has already been detached when atexit runs - except (AttributeError, ValueError): - return True - - -class AnsiToWin32(object): - ''' - Implements a 
'write()' method which, on Windows, will strip ANSI character - sequences from the text, and if outputting to a tty, will convert them into - win32 function calls. - ''' - ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?') # Control Sequence Introducer - ANSI_OSC_RE = re.compile('\001?\033\\]([^\a]*)(\a)\002?') # Operating System Command - - def __init__(self, wrapped, convert=None, strip=None, autoreset=False): - # The wrapped stream (normally sys.stdout or sys.stderr) - self.wrapped = wrapped - - # should we reset colors to defaults after every .write() - self.autoreset = autoreset - - # create the proxy wrapping our output stream - self.stream = StreamWrapper(wrapped, self) - - on_windows = os.name == 'nt' - # We test if the WinAPI works, because even if we are on Windows - # we may be using a terminal that doesn't support the WinAPI - # (e.g. Cygwin Terminal). In this case it's up to the terminal - # to support the ANSI codes. - conversion_supported = on_windows and winapi_test() - try: - fd = wrapped.fileno() - except Exception: - fd = -1 - system_has_native_ansi = not on_windows or enable_vt_processing(fd) - have_tty = not self.stream.closed and self.stream.isatty() - need_conversion = conversion_supported and not system_has_native_ansi - - # should we strip ANSI sequences from our output? - if strip is None: - strip = need_conversion or not have_tty - self.strip = strip - - # should we convert ANSI sequences into win32 calls? - if convert is None: - convert = need_conversion and have_tty - self.convert = convert - - # dict of ansi codes to win32 functions and parameters - self.win32_calls = self.get_win32_calls() - - # are we wrapping stderr? - self.on_stderr = self.wrapped is sys.stderr - - def should_wrap(self): - ''' - True if this class is actually needed. If false, then the output - stream will not be affected, nor will win32 calls be issued, so - wrapping stdout is not actually required. 
This will generally be - False on non-Windows platforms, unless optional functionality like - autoreset has been requested using kwargs to init() - ''' - return self.convert or self.strip or self.autoreset - - def get_win32_calls(self): - if self.convert and winterm: - return { - AnsiStyle.RESET_ALL: (winterm.reset_all, ), - AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT), - AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL), - AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL), - AnsiFore.BLACK: (winterm.fore, WinColor.BLACK), - AnsiFore.RED: (winterm.fore, WinColor.RED), - AnsiFore.GREEN: (winterm.fore, WinColor.GREEN), - AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW), - AnsiFore.BLUE: (winterm.fore, WinColor.BLUE), - AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA), - AnsiFore.CYAN: (winterm.fore, WinColor.CYAN), - AnsiFore.WHITE: (winterm.fore, WinColor.GREY), - AnsiFore.RESET: (winterm.fore, ), - AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True), - AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True), - AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True), - AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True), - AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True), - AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True), - AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True), - AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True), - AnsiBack.BLACK: (winterm.back, WinColor.BLACK), - AnsiBack.RED: (winterm.back, WinColor.RED), - AnsiBack.GREEN: (winterm.back, WinColor.GREEN), - AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW), - AnsiBack.BLUE: (winterm.back, WinColor.BLUE), - AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA), - AnsiBack.CYAN: (winterm.back, WinColor.CYAN), - AnsiBack.WHITE: (winterm.back, WinColor.GREY), - AnsiBack.RESET: (winterm.back, ), - AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True), - AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True), - 
AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True), - AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True), - AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True), - AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True), - AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True), - AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True), - } - return dict() - - def write(self, text): - if self.strip or self.convert: - self.write_and_convert(text) - else: - self.wrapped.write(text) - self.wrapped.flush() - if self.autoreset: - self.reset_all() - - - def reset_all(self): - if self.convert: - self.call_win32('m', (0,)) - elif not self.strip and not self.stream.closed: - self.wrapped.write(Style.RESET_ALL) - - - def write_and_convert(self, text): - ''' - Write the given text to our wrapped stream, stripping any ANSI - sequences from the text, and optionally converting them into win32 - calls. - ''' - cursor = 0 - text = self.convert_osc(text) - for match in self.ANSI_CSI_RE.finditer(text): - start, end = match.span() - self.write_plain_text(text, cursor, start) - self.convert_ansi(*match.groups()) - cursor = end - self.write_plain_text(text, cursor, len(text)) - - - def write_plain_text(self, text, start, end): - if start < end: - self.wrapped.write(text[start:end]) - self.wrapped.flush() - - - def convert_ansi(self, paramstring, command): - if self.convert: - params = self.extract_params(command, paramstring) - self.call_win32(command, params) - - - def extract_params(self, command, paramstring): - if command in 'Hf': - params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';')) - while len(params) < 2: - # defaults: - params = params + (1,) - else: - params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0) - if len(params) == 0: - # defaults: - if command in 'JKm': - params = (0,) - elif command in 'ABCD': - params = (1,) - - return params - - - def call_win32(self, command, params): - if 
command == 'm': - for param in params: - if param in self.win32_calls: - func_args = self.win32_calls[param] - func = func_args[0] - args = func_args[1:] - kwargs = dict(on_stderr=self.on_stderr) - func(*args, **kwargs) - elif command in 'J': - winterm.erase_screen(params[0], on_stderr=self.on_stderr) - elif command in 'K': - winterm.erase_line(params[0], on_stderr=self.on_stderr) - elif command in 'Hf': # cursor position - absolute - winterm.set_cursor_position(params, on_stderr=self.on_stderr) - elif command in 'ABCD': # cursor position - relative - n = params[0] - # A - up, B - down, C - forward, D - back - x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command] - winterm.cursor_adjust(x, y, on_stderr=self.on_stderr) - - - def convert_osc(self, text): - for match in self.ANSI_OSC_RE.finditer(text): - start, end = match.span() - text = text[:start] + text[end:] - paramstring, command = match.groups() - if command == BEL: - if paramstring.count(";") == 1: - params = paramstring.split(";") - # 0 - change title and icon (we will only change title) - # 1 - change icon (we don't support this) - # 2 - change title - if params[0] in '02': - winterm.set_title(params[1]) - return text - - - def flush(self): - self.wrapped.flush() diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/checkpoint/c2_model_loading.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/checkpoint/c2_model_loading.py deleted file mode 100644 index 8c8d181bd7200bd3fd38446e743f8f16780d6e76..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/checkpoint/c2_model_loading.py +++ /dev/null @@ -1,407 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import copy -import logging -import re -from typing import Dict, List -import torch -from tabulate import tabulate - - -def convert_basic_c2_names(original_keys): - """ - Apply some basic name conversion to names in C2 weights. - It only deals with typical backbone models. - - Args: - original_keys (list[str]): - Returns: - list[str]: The same number of strings matching those in original_keys. - """ - layer_keys = copy.deepcopy(original_keys) - layer_keys = [ - {"pred_b": "linear_b", "pred_w": "linear_w"}.get(k, k) for k in layer_keys - ] # some hard-coded mappings - - layer_keys = [k.replace("_", ".") for k in layer_keys] - layer_keys = [re.sub("\\.b$", ".bias", k) for k in layer_keys] - layer_keys = [re.sub("\\.w$", ".weight", k) for k in layer_keys] - # Uniform both bn and gn names to "norm" - layer_keys = [re.sub("bn\\.s$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.bias$", "norm.bias", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.rm", "norm.running_mean", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.running.mean$", "norm.running_mean", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.riv$", "norm.running_var", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.running.var$", "norm.running_var", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.gamma$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.beta$", "norm.bias", k) for k in layer_keys] - layer_keys = [re.sub("gn\\.s$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("gn\\.bias$", "norm.bias", k) for k in layer_keys] - - # stem - layer_keys = [re.sub("^res\\.conv1\\.norm\\.", "conv1.norm.", k) for k in layer_keys] - # to avoid mis-matching with "conv1" in other components (e.g. 
detection head) - layer_keys = [re.sub("^conv1\\.", "stem.conv1.", k) for k in layer_keys] - - # layer1-4 is used by torchvision, however we follow the C2 naming strategy (res2-5) - # layer_keys = [re.sub("^res2.", "layer1.", k) for k in layer_keys] - # layer_keys = [re.sub("^res3.", "layer2.", k) for k in layer_keys] - # layer_keys = [re.sub("^res4.", "layer3.", k) for k in layer_keys] - # layer_keys = [re.sub("^res5.", "layer4.", k) for k in layer_keys] - - # blocks - layer_keys = [k.replace(".branch1.", ".shortcut.") for k in layer_keys] - layer_keys = [k.replace(".branch2a.", ".conv1.") for k in layer_keys] - layer_keys = [k.replace(".branch2b.", ".conv2.") for k in layer_keys] - layer_keys = [k.replace(".branch2c.", ".conv3.") for k in layer_keys] - - # DensePose substitutions - layer_keys = [re.sub("^body.conv.fcn", "body_conv_fcn", k) for k in layer_keys] - layer_keys = [k.replace("AnnIndex.lowres", "ann_index_lowres") for k in layer_keys] - layer_keys = [k.replace("Index.UV.lowres", "index_uv_lowres") for k in layer_keys] - layer_keys = [k.replace("U.lowres", "u_lowres") for k in layer_keys] - layer_keys = [k.replace("V.lowres", "v_lowres") for k in layer_keys] - return layer_keys - - -def convert_c2_detectron_names(weights): - """ - Map Caffe2 Detectron weight names to Detectron2 names. 
- - Args: - weights (dict): name -> tensor - - Returns: - dict: detectron2 names -> tensor - dict: detectron2 names -> C2 names - """ - logger = logging.getLogger(__name__) - logger.info("Renaming Caffe2 weights ......") - original_keys = sorted(weights.keys()) - layer_keys = copy.deepcopy(original_keys) - - layer_keys = convert_basic_c2_names(layer_keys) - - # -------------------------------------------------------------------------- - # RPN hidden representation conv - # -------------------------------------------------------------------------- - # FPN case - # In the C2 model, the RPN hidden layer conv is defined for FPN level 2 and then - # shared for all other levels, hence the appearance of "fpn2" - layer_keys = [ - k.replace("conv.rpn.fpn2", "proposal_generator.rpn_head.conv") for k in layer_keys - ] - # Non-FPN case - layer_keys = [k.replace("conv.rpn", "proposal_generator.rpn_head.conv") for k in layer_keys] - - # -------------------------------------------------------------------------- - # RPN box transformation conv - # -------------------------------------------------------------------------- - # FPN case (see note above about "fpn2") - layer_keys = [ - k.replace("rpn.bbox.pred.fpn2", "proposal_generator.rpn_head.anchor_deltas") - for k in layer_keys - ] - layer_keys = [ - k.replace("rpn.cls.logits.fpn2", "proposal_generator.rpn_head.objectness_logits") - for k in layer_keys - ] - # Non-FPN case - layer_keys = [ - k.replace("rpn.bbox.pred", "proposal_generator.rpn_head.anchor_deltas") for k in layer_keys - ] - layer_keys = [ - k.replace("rpn.cls.logits", "proposal_generator.rpn_head.objectness_logits") - for k in layer_keys - ] - - # -------------------------------------------------------------------------- - # Fast R-CNN box head - # -------------------------------------------------------------------------- - layer_keys = [re.sub("^bbox\\.pred", "bbox_pred", k) for k in layer_keys] - layer_keys = [re.sub("^cls\\.score", "cls_score", k) for k in 
layer_keys] - layer_keys = [re.sub("^fc6\\.", "box_head.fc1.", k) for k in layer_keys] - layer_keys = [re.sub("^fc7\\.", "box_head.fc2.", k) for k in layer_keys] - # 4conv1fc head tensor names: head_conv1_w, head_conv1_gn_s - layer_keys = [re.sub("^head\\.conv", "box_head.conv", k) for k in layer_keys] - - # -------------------------------------------------------------------------- - # FPN lateral and output convolutions - # -------------------------------------------------------------------------- - def fpn_map(name): - """ - Look for keys with the following patterns: - 1) Starts with "fpn.inner." - Example: "fpn.inner.res2.2.sum.lateral.weight" - Meaning: These are lateral pathway convolutions - 2) Starts with "fpn.res" - Example: "fpn.res2.2.sum.weight" - Meaning: These are FPN output convolutions - """ - splits = name.split(".") - norm = ".norm" if "norm" in splits else "" - if name.startswith("fpn.inner."): - # splits example: ['fpn', 'inner', 'res2', '2', 'sum', 'lateral', 'weight'] - stage = int(splits[2][len("res") :]) - return "fpn_lateral{}{}.{}".format(stage, norm, splits[-1]) - elif name.startswith("fpn.res"): - # splits example: ['fpn', 'res2', '2', 'sum', 'weight'] - stage = int(splits[1][len("res") :]) - return "fpn_output{}{}.{}".format(stage, norm, splits[-1]) - return name - - layer_keys = [fpn_map(k) for k in layer_keys] - - # -------------------------------------------------------------------------- - # Mask R-CNN mask head - # -------------------------------------------------------------------------- - # roi_heads.StandardROIHeads case - layer_keys = [k.replace(".[mask].fcn", "mask_head.mask_fcn") for k in layer_keys] - layer_keys = [re.sub("^\\.mask\\.fcn", "mask_head.mask_fcn", k) for k in layer_keys] - layer_keys = [k.replace("mask.fcn.logits", "mask_head.predictor") for k in layer_keys] - # roi_heads.Res5ROIHeads case - layer_keys = [k.replace("conv5.mask", "mask_head.deconv") for k in layer_keys] - - # 
-------------------------------------------------------------------------- - # Keypoint R-CNN head - # -------------------------------------------------------------------------- - # interestingly, the keypoint head convs have blob names that are simply "conv_fcnX" - layer_keys = [k.replace("conv.fcn", "roi_heads.keypoint_head.conv_fcn") for k in layer_keys] - layer_keys = [ - k.replace("kps.score.lowres", "roi_heads.keypoint_head.score_lowres") for k in layer_keys - ] - layer_keys = [k.replace("kps.score.", "roi_heads.keypoint_head.score.") for k in layer_keys] - - # -------------------------------------------------------------------------- - # Done with replacements - # -------------------------------------------------------------------------- - assert len(set(layer_keys)) == len(layer_keys) - assert len(original_keys) == len(layer_keys) - - new_weights = {} - new_keys_to_original_keys = {} - for orig, renamed in zip(original_keys, layer_keys): - new_keys_to_original_keys[renamed] = orig - if renamed.startswith("bbox_pred.") or renamed.startswith("mask_head.predictor."): - # remove the meaningless prediction weight for background class - new_start_idx = 4 if renamed.startswith("bbox_pred.") else 1 - new_weights[renamed] = weights[orig][new_start_idx:] - logger.info( - "Remove prediction weight for background class in {}. The shape changes from " - "{} to {}.".format( - renamed, tuple(weights[orig].shape), tuple(new_weights[renamed].shape) - ) - ) - elif renamed.startswith("cls_score."): - # move weights of bg class from original index 0 to last index - logger.info( - "Move classification weights for background class in {} from index 0 to " - "index {}.".format(renamed, weights[orig].shape[0] - 1) - ) - new_weights[renamed] = torch.cat([weights[orig][1:], weights[orig][:1]]) - else: - new_weights[renamed] = weights[orig] - - return new_weights, new_keys_to_original_keys - - -# Note the current matching is not symmetric. 
-# it assumes model_state_dict will have longer names. -def align_and_update_state_dicts(model_state_dict, ckpt_state_dict, c2_conversion=True): - """ - Match names between the two state-dict, and returns a new chkpt_state_dict with names - converted to match model_state_dict with heuristics. The returned dict can be later - loaded with fvcore checkpointer. - If `c2_conversion==True`, `ckpt_state_dict` is assumed to be a Caffe2 - model and will be renamed at first. - - Strategy: suppose that the models that we will create will have prefixes appended - to each of its keys, for example due to an extra level of nesting that the original - pre-trained weights from ImageNet won't contain. For example, model.state_dict() - might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains - res2.conv1.weight. We thus want to match both parameters together. - For that, we look for each model weight, look among all loaded keys if there is one - that is a suffix of the current weight name, and use it if that's the case. - If multiple matches exist, take the one with longest size - of the corresponding name. For example, for the same model as before, the pretrained - weight file can contain both res2.conv1.weight, as well as conv1.weight. In this case, - we want to match backbone[0].body.conv1.weight to conv1.weight, and - backbone[0].body.res2.conv1.weight to res2.conv1.weight. - """ - model_keys = sorted(model_state_dict.keys()) - if c2_conversion: - ckpt_state_dict, original_keys = convert_c2_detectron_names(ckpt_state_dict) - # original_keys: the name in the original dict (before renaming) - else: - original_keys = {x: x for x in ckpt_state_dict.keys()} - ckpt_keys = sorted(ckpt_state_dict.keys()) - - def match(a, b): - # Matched ckpt_key should be a complete (starts with '.') suffix. - # For example, roi_heads.mesh_head.whatever_conv1 does not match conv1, - # but matches whatever_conv1 or mesh_head.whatever_conv1. - return a == b or a.endswith("." 
+ b) - - # get a matrix of string matches, where each (i, j) entry correspond to the size of the - # ckpt_key string, if it matches - match_matrix = [len(j) if match(i, j) else 0 for i in model_keys for j in ckpt_keys] - match_matrix = torch.as_tensor(match_matrix).view(len(model_keys), len(ckpt_keys)) - # use the matched one with longest size in case of multiple matches - max_match_size, idxs = match_matrix.max(1) - # remove indices that correspond to no-match - idxs[max_match_size == 0] = -1 - - logger = logging.getLogger(__name__) - # matched_pairs (matched checkpoint key --> matched model key) - matched_keys = {} - result_state_dict = {} - for idx_model, idx_ckpt in enumerate(idxs.tolist()): - if idx_ckpt == -1: - continue - key_model = model_keys[idx_model] - key_ckpt = ckpt_keys[idx_ckpt] - value_ckpt = ckpt_state_dict[key_ckpt] - shape_in_model = model_state_dict[key_model].shape - - if shape_in_model != value_ckpt.shape: - logger.warning( - "Shape of {} in checkpoint is {}, while shape of {} in model is {}.".format( - key_ckpt, value_ckpt.shape, key_model, shape_in_model - ) - ) - logger.warning( - "{} will not be loaded. Please double check and see if this is desired.".format( - key_ckpt - ) - ) - continue - - assert key_model not in result_state_dict - result_state_dict[key_model] = value_ckpt - if key_ckpt in matched_keys: # already added to matched_keys - logger.error( - "Ambiguity found for {} in checkpoint!" 
- "It matches at least two keys in the model ({} and {}).".format( - key_ckpt, key_model, matched_keys[key_ckpt] - ) - ) - raise ValueError("Cannot match one checkpoint key to multiple keys in the model.") - - matched_keys[key_ckpt] = key_model - - # logging: - matched_model_keys = sorted(matched_keys.values()) - if len(matched_model_keys) == 0: - logger.warning("No weights in checkpoint matched with model.") - return ckpt_state_dict - common_prefix = _longest_common_prefix(matched_model_keys) - rev_matched_keys = {v: k for k, v in matched_keys.items()} - original_keys = {k: original_keys[rev_matched_keys[k]] for k in matched_model_keys} - - model_key_groups = _group_keys_by_module(matched_model_keys, original_keys) - table = [] - memo = set() - for key_model in matched_model_keys: - if key_model in memo: - continue - if key_model in model_key_groups: - group = model_key_groups[key_model] - memo |= set(group) - shapes = [tuple(model_state_dict[k].shape) for k in group] - table.append( - ( - _longest_common_prefix([k[len(common_prefix) :] for k in group]) + "*", - _group_str([original_keys[k] for k in group]), - " ".join([str(x).replace(" ", "") for x in shapes]), - ) - ) - else: - key_checkpoint = original_keys[key_model] - shape = str(tuple(model_state_dict[key_model].shape)) - table.append((key_model[len(common_prefix) :], key_checkpoint, shape)) - table_str = tabulate( - table, tablefmt="pipe", headers=["Names in Model", "Names in Checkpoint", "Shapes"] - ) - logger.info( - "Following weights matched with " - + (f"submodule {common_prefix[:-1]}" if common_prefix else "model") - + ":\n" - + table_str - ) - - unmatched_ckpt_keys = [k for k in ckpt_keys if k not in set(matched_keys.keys())] - for k in unmatched_ckpt_keys: - result_state_dict[k] = ckpt_state_dict[k] - return result_state_dict - - -def _group_keys_by_module(keys: List[str], original_names: Dict[str, str]): - """ - Params in the same submodule are grouped together. 
- - Args: - keys: names of all parameters - original_names: mapping from parameter name to their name in the checkpoint - - Returns: - dict[name -> all other names in the same group] - """ - - def _submodule_name(key): - pos = key.rfind(".") - if pos < 0: - return None - prefix = key[: pos + 1] - return prefix - - all_submodules = [_submodule_name(k) for k in keys] - all_submodules = [x for x in all_submodules if x] - all_submodules = sorted(all_submodules, key=len) - - ret = {} - for prefix in all_submodules: - group = [k for k in keys if k.startswith(prefix)] - if len(group) <= 1: - continue - original_name_lcp = _longest_common_prefix_str([original_names[k] for k in group]) - if len(original_name_lcp) == 0: - # don't group weights if original names don't share prefix - continue - - for k in group: - if k in ret: - continue - ret[k] = group - return ret - - -def _longest_common_prefix(names: List[str]) -> str: - """ - ["abc.zfg", "abc.zef"] -> "abc." - """ - names = [n.split(".") for n in names] - m1, m2 = min(names), max(names) - ret = [a for a, b in zip(m1, m2) if a == b] - ret = ".".join(ret) + "." 
if len(ret) else "" - return ret - - -def _longest_common_prefix_str(names: List[str]) -> str: - m1, m2 = min(names), max(names) - lcp = [a for a, b in zip(m1, m2) if a == b] - lcp = "".join(lcp) - return lcp - - -def _group_str(names: List[str]) -> str: - """ - Turn "common1", "common2", "common3" into "common{1,2,3}" - """ - lcp = _longest_common_prefix_str(names) - rest = [x[len(lcp) :] for x in names] - rest = "{" + ",".join(rest) + "}" - ret = lcp + rest - - # add some simplification for BN specifically - ret = ret.replace("bn_{beta,running_mean,running_var,gamma}", "bn_*") - ret = ret.replace("bn_beta,bn_running_mean,bn_running_var,bn_gamma", "bn_*") - return ret diff --git a/spaces/BAAI/AltDiffusion-m9/style.css b/spaces/BAAI/AltDiffusion-m9/style.css deleted file mode 100644 index d954ce678fed7d0f33bdc6af6764b73e06d6e78a..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion-m9/style.css +++ /dev/null @@ -1,81 +0,0 @@ -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - /* border-color: black; */ - /* background: black; */ - background: rgb(60, 145, 238); -} -/* input[type='range'] { - accent-color: rgb(60, 145, 238); -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} */ -.container { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -/* .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) 
var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} */ -.footer { - margin-bottom: 45px; - margin-top: 20px; - /* text-align: center; */ - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.footer>p>h4 { - font-size: .20rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - font-weight: bold; -} -.dark .footer { - /* border-color: #303030; */ - border-color: rgb(60, 145, 238); -} -.dark .footer>p { - /* background: #0b0f19; */ - background: rgb(60, 145, 238); -} -.prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} \ No newline at end of file diff --git a/spaces/BAAI/vid2vid-zero/vid2vid_zero/models/resnet_2d.py b/spaces/BAAI/vid2vid-zero/vid2vid_zero/models/resnet_2d.py deleted file mode 100644 index c6ed4dc106a18f6b243284f63ca455a8d4524c3d..0000000000000000000000000000000000000000 --- a/spaces/BAAI/vid2vid-zero/vid2vid_zero/models/resnet_2d.py +++ /dev/null @@ -1,209 +0,0 @@ -# Adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from einops import rearrange - - -class InflatedConv3d(nn.Conv2d): - def forward(self, x): - video_length = x.shape[2] - - x = rearrange(x, "b c f h w -> (b f) c h w") - x = super().forward(x) - x = rearrange(x, "(b f) c h w -> b c f h w", f=video_length) - - return x - - -class Upsample2D(nn.Module): - def __init__(self, channels, use_conv=False, use_conv_transpose=False, out_channels=None, name="conv"): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_conv_transpose = 
use_conv_transpose - self.name = name - - conv = None - if use_conv_transpose: - raise NotImplementedError - elif use_conv: - conv = InflatedConv3d(self.channels, self.out_channels, 3, padding=1) - - if name == "conv": - self.conv = conv - else: - self.Conv2d_0 = conv - - def forward(self, hidden_states, output_size=None): - assert hidden_states.shape[1] == self.channels - - if self.use_conv_transpose: - raise NotImplementedError - - # Cast to float32 to as 'upsample_nearest2d_out_frame' op does not support bfloat16 - dtype = hidden_states.dtype - if dtype == torch.bfloat16: - hidden_states = hidden_states.to(torch.float32) - - # upsample_nearest_nhwc fails with large batch sizes. see https://github.com/huggingface/diffusers/issues/984 - if hidden_states.shape[0] >= 64: - hidden_states = hidden_states.contiguous() - - # if `output_size` is passed we force the interpolation output - # size and do not make use of `scale_factor=2` - if output_size is None: - hidden_states = F.interpolate(hidden_states, scale_factor=[1.0, 2.0, 2.0], mode="nearest") - else: - hidden_states = F.interpolate(hidden_states, size=output_size, mode="nearest") - - # If the input is bfloat16, we cast back to bfloat16 - if dtype == torch.bfloat16: - hidden_states = hidden_states.to(dtype) - - if self.use_conv: - if self.name == "conv": - hidden_states = self.conv(hidden_states) - else: - hidden_states = self.Conv2d_0(hidden_states) - - return hidden_states - - -class Downsample2D(nn.Module): - def __init__(self, channels, use_conv=False, out_channels=None, padding=1, name="conv"): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.padding = padding - stride = 2 - self.name = name - - if use_conv: - conv = InflatedConv3d(self.channels, self.out_channels, 3, stride=stride, padding=padding) - else: - raise NotImplementedError - - if name == "conv": - self.Conv2d_0 = conv - self.conv = conv - elif name == "Conv2d_0": - 
self.conv = conv - else: - self.conv = conv - - def forward(self, hidden_states): - assert hidden_states.shape[1] == self.channels - if self.use_conv and self.padding == 0: - raise NotImplementedError - - assert hidden_states.shape[1] == self.channels - hidden_states = self.conv(hidden_states) - - return hidden_states - - -class ResnetBlock2D(nn.Module): - def __init__( - self, - *, - in_channels, - out_channels=None, - conv_shortcut=False, - dropout=0.0, - temb_channels=512, - groups=32, - groups_out=None, - pre_norm=True, - eps=1e-6, - non_linearity="swish", - time_embedding_norm="default", - output_scale_factor=1.0, - use_in_shortcut=None, - ): - super().__init__() - self.pre_norm = pre_norm - self.pre_norm = True - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - self.time_embedding_norm = time_embedding_norm - self.output_scale_factor = output_scale_factor - - if groups_out is None: - groups_out = groups - - self.norm1 = torch.nn.GroupNorm(num_groups=groups, num_channels=in_channels, eps=eps, affine=True) - - self.conv1 = InflatedConv3d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - - if temb_channels is not None: - if self.time_embedding_norm == "default": - time_emb_proj_out_channels = out_channels - elif self.time_embedding_norm == "scale_shift": - time_emb_proj_out_channels = out_channels * 2 - else: - raise ValueError(f"unknown time_embedding_norm : {self.time_embedding_norm} ") - - self.time_emb_proj = torch.nn.Linear(temb_channels, time_emb_proj_out_channels) - else: - self.time_emb_proj = None - - self.norm2 = torch.nn.GroupNorm(num_groups=groups_out, num_channels=out_channels, eps=eps, affine=True) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = InflatedConv3d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - - if non_linearity == "swish": - self.nonlinearity = lambda x: 
F.silu(x) - elif non_linearity == "mish": - self.nonlinearity = Mish() - elif non_linearity == "silu": - self.nonlinearity = nn.SiLU() - - self.use_in_shortcut = self.in_channels != self.out_channels if use_in_shortcut is None else use_in_shortcut - - self.conv_shortcut = None - if self.use_in_shortcut: - self.conv_shortcut = InflatedConv3d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, input_tensor, temb): - hidden_states = input_tensor - - hidden_states = self.norm1(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.conv1(hidden_states) - - if temb is not None: - temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None, None] - - if temb is not None and self.time_embedding_norm == "default": - hidden_states = hidden_states + temb - - hidden_states = self.norm2(hidden_states) - - if temb is not None and self.time_embedding_norm == "scale_shift": - scale, shift = torch.chunk(temb, 2, dim=1) - hidden_states = hidden_states * (1 + scale) + shift - - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = (input_tensor + hidden_states) / self.output_scale_factor - - return output_tensor - - -class Mish(torch.nn.Module): - def forward(self, hidden_states): - return hidden_states * torch.tanh(torch.nn.functional.softplus(hidden_states)) diff --git a/spaces/Badaleeloveashley/badaleeloveashley/README.md b/spaces/Badaleeloveashley/badaleeloveashley/README.md deleted file mode 100644 index 4fd16a8695a9bb6eee4e8b2bf3154ede7d271a2a..0000000000000000000000000000000000000000 --- a/spaces/Badaleeloveashley/badaleeloveashley/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Badaleeloveashley -emoji: 🚀 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: 
false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Banbri/zcvzcv/postcss.config.js b/spaces/Banbri/zcvzcv/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/Banbri/zcvzcv/src/app/interface/grid/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/grid/index.tsx deleted file mode 100644 index 83bdf555fc742405b59e5e15d9052e918c0e9713..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/interface/grid/index.tsx +++ /dev/null @@ -1,26 +0,0 @@ -"use client" - -import { ReactNode } from "react" - -import { cn } from "@/lib/utils" -import { useStore } from "@/app/store" - -export function Grid({ children, className }: { children: ReactNode; className: string }) { - const zoomLevel = useStore(state => state.zoomLevel) - - return ( -
<div className={cn("grid", className)} style={{ width: `${zoomLevel}%` }}> - {children} - </div>
- ) -} - diff --git a/spaces/BartPoint/VoiceChange_Beta/infer_pack/models.py b/spaces/BartPoint/VoiceChange_Beta/infer_pack/models.py deleted file mode 100644 index 5e4b2e72383efaee1fae4f5c42e3db2c627e4190..0000000000000000000000000000000000000000 --- a/spaces/BartPoint/VoiceChange_Beta/infer_pack/models.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, 
x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - 
self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = 
len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 
threshold for U/V classification (default 0) - flag_for_pulse: this SineGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 # the % 1 means the n_har products cannot be optimized in post-processing - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 # applying % 1 here would prevent the later cumsum from being optimized - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) - tmp_over_one %= 1 - tmp_over_one_idx = 
(tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = 
sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def 
forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim 
= spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # ds is the speaker id, shape [bs, 1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g,
reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - 
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # ds is the speaker id, shape [bs, 1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads -
self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p,
logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, 
gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return
y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = 
weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/BartPoint/VoiceChange_Beta/infer_pack/models_onnx.py b/spaces/BartPoint/VoiceChange_Beta/infer_pack/models_onnx.py deleted file mode 100644 index b945eac8e59aac38fbd166da49eda01e2b8f4bd4..0000000000000000000000000000000000000000 --- a/spaces/BartPoint/VoiceChange_Beta/infer_pack/models_onnx.py +++ /dev/null @@ -1,818 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from 
infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, 
hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - 
self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - 
self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-waveform (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_threshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SineGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): -
"""sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 # taking % 1 means the n_har products cannot be optimized in post-processing - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 # applying % 1 here would prevent the following cumsum from being optimized - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) -
sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonics above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threshold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length, 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel,
7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - 
"32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if self.gin_channels == 256: - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, 
hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - 
y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - 
super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/BeeMon/dreambooth-training/README.md b/spaces/BeeMon/dreambooth-training/README.md deleted file mode 100644 index 29f12037c9f880e438297e73306cd2b02f027eb2..0000000000000000000000000000000000000000 --- a/spaces/BeeMon/dreambooth-training/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Dreambooth Training -emoji: ☁️ -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit -duplicated_from: multimodalart/dreambooth-training ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/Benson/text-generation/Examples/Descargar Gratis Brawlhalla Steamunlocked.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Brawlhalla Steamunlocked.md deleted file mode 100644 index db419f6bd01b12daba6fac463fa1907f0a3f5beb..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis Brawlhalla Steamunlocked.md +++ /dev/null @@ -1,64 +0,0 @@ - -

Brawlhalla Steamunlocked Free Download: A Guide for Platform Fighter Fans

-

If you are looking for a fun and exciting fighting game that you can play for free on your PC, you may want to check out Brawlhalla. This game is a platform fighter that supports cross-play across several devices, including PC, PlayStation, Xbox, Nintendo Switch, iOS, and Android. In this article, we will tell you what Brawlhalla is, how to download it for free on Steam, how to unlock more in-game content, and how to improve your skills as a fighter.

-

free download brawlhalla steamunlocked

Download Zip ––– https://bltlly.com/2v6K1g



-

What is Brawlhalla?

-

Brawlhalla is a 2D platform fighting game developed by Blue Mammoth Games and published by Ubisoft. It was released in 2017 after a beta period that began in 2015. The game has been praised for its simple yet deep gameplay mechanics, its colorful and diverse cast of characters, its frequent updates and events, and its lack of pay-to-win advantages.

-

A free-to-play platform fighter with cross-play support

-

One of Brawlhalla's main selling points is that it is free to play. You do not need to pay anything to download and play the game on any platform, and you do not need an online subscription service such as PlayStation Plus or Nintendo Switch Online to play online with other players. The game also supports cross-play on all platforms, which means you can play with or against anyone who has the game on any device.

-

A roster of over 50 Legends and frequent updates

-

Brawlhalla features a roster of more than 50 Legends, each with their own abilities, weapons, stats, and personalities, and the game receives frequent updates and events that keep the cast growing.

-

A variety of game modes and features

-

Brawlhalla also offers a variety of game modes and features to keep you entertained and challenged. You can play online or locally with up to 8 players in modes such as Free-for-All, 1v1, 2v2, Brawlball, Kung Foot, Capture the Flag, Horde, and more. You can customize your matches with different settings, maps, and modifiers, join ranked matches and climb the leaderboards, or take part in tournaments and events for rewards and glory. The game also has a single-player mode where you can fight bots, complete missions, or play the story mode.

-

How to download Brawlhalla for free on Steam?

-

If you want to play Brawlhalla on your PC, you can download it for free on Steam. Here are the steps:

-

Visit the Steam store page and click "Play Game"

-

First, you need a Steam account and the Steam client installed on your PC. If you do not have them yet, you can create an account and download the client from https://store.steampowered.com/. Once you have them, open the Steam client and search for Brawlhalla in the store, or visit the game's store page directly at https://store.steampowered.com/app/291550/Brawlhalla/. On the store page you will see a button that says "Play Game". Click it to start downloading the game.

-

Install the game and launch it from your library

-

After clicking the "Play Game" button, a pop-up window will ask you to confirm the installation. Click "Next" and follow the prompts to choose the installation folder and accept the terms of service. The game will then download and install on your PC; this may take a few minutes depending on your internet speed and disk space. Once the installation is complete, you can launch the game from your library or from the desktop shortcut.

-

Create an account or link an existing one

-

When you launch the game for the first time, you will be asked to create an account or link an existing one. You can sign up or log in with your email address or with social accounts such as Facebook, Twitter, Google, or Apple. Creating an account lets you save your progress, customize your profile, access online features, and sync your data across devices. If you already have an account on another platform, you can link it to your Steam account and keep your progress and purchases.

-

How to unlock more content in Brawlhalla?

-

Brawlhalla is free to play, but it also has plenty of content that you can unlock by playing or by spending real money. Here are some ways to unlock more content in Brawlhalla:

-

Earn gold and mammoth coins by playing matches and completing missions

-

The main currency in Brawlhalla is gold, which you earn by playing matches and completing missions. You can spend gold on Legends, colors, and avatars in the in-game store. You can also get mammoth coins, a premium currency that you buy with real money or obtain from special events, and spend them on skins, taunts, sidekicks, and Battle Passes.

-

Use gold to buy Legends, colors, and avatars

-

One of the most important things to unlock in Brawlhalla is Legends, the characters you can play. There are more than 50 Legends in the game, each with their own abilities, weapons, stats, and personalities. You can buy Legends with gold or mammoth coins in the store; each Legend costs 5400 gold or 100 mammoth coins. You can also try any Legend for free in Training Mode or during the weekly rotations.

-

Use mammoth coins to buy skins, taunts, sidekicks, and Battle Passes

-

If you want to customize your Legends further, you can spend mammoth coins on skins, taunts, sidekicks, and Battle Passes from the store. Skins are cosmetic options that change the appearance of your Legend's outfit, weapons, and effects. Taunts are emotes you can use to express yourself in-game. Sidekicks are pets that accompany you in matches. Each of these can be bought with mammoth coins in the store or unlocked through Battle Passes or special events. Battle Passes are season passes that give you access to exclusive rewards such as skins, taunts, sidekicks, colors, avatars, and more; you can buy them with mammoth coins, and some of their rewards can be earned for free just by playing the game.

-

How to improve your skills in Brawlhalla?

-

Brawlhalla is a game that is easy to learn but hard to master. If you want to improve your skills as a fighter, here are some tips and tricks you can follow:

-

Learn the basics of movement, recovery, dodging, and attacking

- -the heavy attack key or the X button while holding the down arrow key or the B button, which performs a powerful special attack that is unique to each Legend and weapon.

-

Experiment with different Legends and weapons

-

Another thing you need to learn in Brawlhalla is how to use different Legends and weapons. Each Legend has their own abilities, weapons, stats, and personality, so choose one that suits your playstyle, preferences, and goals, and switch between Legends to adapt to different situations, opponents, and modes. Each Legend can use two weapons in battle. You pick up a weapon by pressing the pickup key or the Z button when you see one on the stage, and you can throw your weapon by pressing the same key or button while holding one, which is useful for hitting your opponent from a distance or disarming them. Each weapon has its own moveset, range, speed, and damage, so use different weapons to deal with different scenarios, enemies, and strategies. You can also combine your weapons with your unarmed attacks, your dodge, and your signature attacks to create combos and chains.

-

Practice in Training Mode and watch tutorials

-

The last thing you need to do to improve in Brawlhalla is to practice and learn from others. In Training Mode you can test your skills against a dummy or a bot, and customize settings such as the Legend, weapon, map, damage, speed, hitboxes, and more — useful for learning new moves, practicing combos, testing damage, and experimenting with different options. You can also watch tutorials made by other players or by the developers, which teach tips, tricks, techniques, and strategies for playing Brawlhalla. You can find tutorials on YouTube, Twitch, Reddit, Discord, or the official Brawlhalla website.

-

Conclusion

-

Brawlhalla is a fun and accessible platform fighter that you can play for free on Steam. It has a large and diverse roster of Legends, a variety of game modes and features, and a simple yet deep combat system. You can download it now and join millions of players online or locally. You can unlock more in-game content by playing matches and completing missions, or by spending real money if you wish. You can also improve your skills by learning the basics of movement, recovery, dodging, and attacking, experimenting with different Legends and weapons, and practicing in Training Mode while watching tutorials. Brawlhalla will keep you entertained and challenged for hours. Enjoy the game and become the best fighter in Valhalla!

-

Frequently Asked Questions

-

Is Brawlhalla pay-to-win?

-

No, Brawlhalla is not pay-to-win. All content that affects gameplay, such as Legends and weapons, can be unlocked by playing the game or by spending gold, which is earned in-game. The only content that requires real money is cosmetic — skins, taunts, sidekicks, and Battle Passes — and it gives no gameplay advantage; it is purely for customization and expression.

-

Is Brawlhalla cross-platform?

-

Yes, Brawlhalla is cross-platform. You can play with or against anyone who has the game on any device, including PC, PlayStation, Xbox, Nintendo Switch, iOS, and Android, and you can sync your progress and purchases across devices by linking your account. To enable cross-play, you need an online connection and the cross-play option turned on in the settings.

-

How many players can play Brawlhalla online or locally?

-

Brawlhalla supports up to 8 players online or locally across its various game modes, such as Free-for-All, 1v1, and 2v2.

-

What are the system requirements for Brawlhalla on PC?

-

Brawlhalla does not require many resources to run on PC. Here are the minimum and recommended system requirements:

-

| | Minimum | Recommended |
| --- | --- | --- |
| OS | Windows XP/Vista/7/8/10 | Windows 7/8/10 |
| Processor | 2.4 GHz Dual Core | 2.8 GHz Quad Core |
| Memory | 1 GB RAM | 4 GB RAM |
| Graphics | 512 MB VRAM | 1 GB VRAM |
| DirectX | Version 9.0c | Version 9.0c |
| Network | Broadband internet connection | Broadband internet connection |
| Storage | 350 MB available space | 350 MB available space |

-

Where can I find more information about Brawlhalla?

-

If you want to learn more about Brawlhalla, you can visit the game's official website at https://www.brawlhalla.com/, where you will find news, updates, events, tournaments, guides, videos, and more. You can also follow the game on social platforms such as Facebook, Twitter, Instagram, YouTube, Twitch, Reddit, Discord, and Steam, or contact the developers or community managers with questions, feedback, or suggestions.

-
-
\ No newline at end of file diff --git a/spaces/Betacuckgpt/togethercomputer-GPT-JT-Moderation-6B/README.md b/spaces/Betacuckgpt/togethercomputer-GPT-JT-Moderation-6B/README.md deleted file mode 100644 index 5b94f6ee1c7e09d392ab4a80e47560956a139014..0000000000000000000000000000000000000000 --- a/spaces/Betacuckgpt/togethercomputer-GPT-JT-Moderation-6B/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Togethercomputer GPT JT Moderation 6B -emoji: 🐨 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BetterAPI/BetterChat/src/lib/types/Conversation.ts b/spaces/BetterAPI/BetterChat/src/lib/types/Conversation.ts deleted file mode 100644 index 544da7b9a83aea228fe4046f9b942f860f15f22c..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/lib/types/Conversation.ts +++ /dev/null @@ -1,17 +0,0 @@ -import type { ObjectId } from "mongodb"; -import type { Message } from "./Message"; -import type { Timestamps } from "./Timestamps"; - -export interface Conversation extends Timestamps { - _id: ObjectId; - - // Can be undefined for shared convo then deleted - sessionId: string; - - title: string; - messages: Message[]; - - meta?: { - fromShareId?: string; - }; -} diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/types/UrlDependency.ts b/spaces/BetterAPI/BetterChat_new/src/lib/types/UrlDependency.ts deleted file mode 100644 index a97e60f2876959449df638ee36d84cd59d65bb21..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/lib/types/UrlDependency.ts +++ /dev/null @@ -1,5 +0,0 @@ -/* eslint-disable no-shadow */ -export enum UrlDependency { - ConversationList = "conversation:list", - Settings = "settings:list", -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/table.py 
b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/table.py deleted file mode 100644 index 931296bc094f1702b2a168a8de2d79327592855a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/table.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import logging - -logger = logging.getLogger(__name__) - - -def register_table_methods(base_classes, **kwargs): - base_classes.insert(0, TableResource) - - -# This class can be used to add any additional methods we want -# onto a table resource. Ideally to avoid creating a new -# base class for every method we can just update this -# class instead. Just be sure to move the bulk of the -# actual method implementation to another class. -class TableResource: - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def batch_writer(self, overwrite_by_pkeys=None): - """Create a batch writer object. - - This method creates a context manager for writing - objects to Amazon DynamoDB in batch. - - The batch writer will automatically handle buffering and sending items - in batches. In addition, the batch writer will also automatically - handle any unprocessed items and resend them as needed. All you need - to do is call ``put_item`` for any items you want to add, and - ``delete_item`` for any items you want to delete. 
- - Example usage:: - - with table.batch_writer() as batch: - for _ in range(1000000): - batch.put_item(Item={'HashKey': '...', - 'Otherstuff': '...'}) - # You can also delete_items in a batch. - batch.delete_item(Key={'HashKey': 'SomeHashKey'}) - - :type overwrite_by_pkeys: list(string) - :param overwrite_by_pkeys: De-duplicate request items in buffer - if match new request item on specified primary keys. i.e - ``["partition_key1", "sort_key2", "sort_key3"]`` - - """ - return BatchWriter( - self.name, self.meta.client, overwrite_by_pkeys=overwrite_by_pkeys - ) - - -class BatchWriter: - """Automatically handle batch writes to DynamoDB for a single table.""" - - def __init__( - self, table_name, client, flush_amount=25, overwrite_by_pkeys=None - ): - """ - - :type table_name: str - :param table_name: The name of the table. The class handles - batch writes to a single table. - - :type client: ``botocore.client.Client`` - :param client: A botocore client. Note this client - **must** have the dynamodb customizations applied - to it for transforming AttributeValues into the - wire protocol. What this means in practice is that - you need to use a client that comes from a DynamoDB - resource if you're going to instantiate this class - directly, i.e - ``boto3.resource('dynamodb').Table('foo').meta.client``. - - :type flush_amount: int - :param flush_amount: The number of items to keep in - a local buffer before sending a batch_write_item - request to DynamoDB. - - :type overwrite_by_pkeys: list(string) - :param overwrite_by_pkeys: De-duplicate request items in buffer - if match new request item on specified primary keys. 
i.e - ``["partition_key1", "sort_key2", "sort_key3"]`` - - """ - self._table_name = table_name - self._client = client - self._items_buffer = [] - self._flush_amount = flush_amount - self._overwrite_by_pkeys = overwrite_by_pkeys - - def put_item(self, Item): - self._add_request_and_process({'PutRequest': {'Item': Item}}) - - def delete_item(self, Key): - self._add_request_and_process({'DeleteRequest': {'Key': Key}}) - - def _add_request_and_process(self, request): - if self._overwrite_by_pkeys: - self._remove_dup_pkeys_request_if_any(request) - self._items_buffer.append(request) - self._flush_if_needed() - - def _remove_dup_pkeys_request_if_any(self, request): - pkey_values_new = self._extract_pkey_values(request) - for item in self._items_buffer: - if self._extract_pkey_values(item) == pkey_values_new: - self._items_buffer.remove(item) - logger.debug( - "With overwrite_by_pkeys enabled, skipping " "request:%s", - item, - ) - - def _extract_pkey_values(self, request): - if request.get('PutRequest'): - return [ - request['PutRequest']['Item'][key] - for key in self._overwrite_by_pkeys - ] - elif request.get('DeleteRequest'): - return [ - request['DeleteRequest']['Key'][key] - for key in self._overwrite_by_pkeys - ] - return None - - def _flush_if_needed(self): - if len(self._items_buffer) >= self._flush_amount: - self._flush() - - def _flush(self): - items_to_send = self._items_buffer[: self._flush_amount] - self._items_buffer = self._items_buffer[self._flush_amount :] - response = self._client.batch_write_item( - RequestItems={self._table_name: items_to_send} - ) - unprocessed_items = response['UnprocessedItems'] - if not unprocessed_items: - unprocessed_items = {} - item_list = unprocessed_items.get(self._table_name, []) - # Any unprocessed_items are immediately added to the - # next batch we send. 
- self._items_buffer.extend(item_list) - logger.debug( - "Batch write sent %s, unprocessed: %s", - len(items_to_send), - len(self._items_buffer), - ) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, tb): - # When we exit, we need to keep flushing whatever's left - # until there's nothing left in our items buffer. - while self._items_buffer: - self._flush() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/bbcode.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/bbcode.py deleted file mode 100644 index 2be2b4e31292d8364b404f16ef4c654f9d89681a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/bbcode.py +++ /dev/null @@ -1,108 +0,0 @@ -""" - pygments.formatters.bbcode - ~~~~~~~~~~~~~~~~~~~~~~~~~~ - - BBcode formatter. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.util import get_bool_opt - -__all__ = ['BBCodeFormatter'] - - -class BBCodeFormatter(Formatter): - """ - Format tokens with BBcodes. These formatting codes are used by many - bulletin boards, so you can highlight your sourcecode with pygments before - posting it there. - - This formatter has no support for background colors and borders, as there - are no common BBcode tags for that. - - Some board systems (e.g. phpBB) don't support colors in their [code] tag, - so you can't use the highlighting together with that tag. - Text in a [code] tag usually is shown with a monospace font (which this - formatter can do with the ``monofont`` option) and no spaces (which you - need for indentation) are removed. - - Additional options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). 
- - `codetag` - If set to true, put the output into ``[code]`` tags (default: - ``false``) - - `monofont` - If set to true, add a tag to show the code with a monospace font - (default: ``false``). - """ - name = 'BBCode' - aliases = ['bbcode', 'bb'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self._code = get_bool_opt(options, 'codetag', False) - self._mono = get_bool_opt(options, 'monofont', False) - - self.styles = {} - self._make_styles() - - def _make_styles(self): - for ttype, ndef in self.style: - start = end = '' - if ndef['color']: - start += '[color=#%s]' % ndef['color'] - end = '[/color]' + end - if ndef['bold']: - start += '[b]' - end = '[/b]' + end - if ndef['italic']: - start += '[i]' - end = '[/i]' + end - if ndef['underline']: - start += '[u]' - end = '[/u]' + end - # there are no common BBcodes for background-color and border - - self.styles[ttype] = start, end - - def format_unencoded(self, tokensource, outfile): - if self._code: - outfile.write('[code]') - if self._mono: - outfile.write('[font=monospace]') - - lastval = '' - lasttype = None - - for ttype, value in tokensource: - while ttype not in self.styles: - ttype = ttype.parent - if ttype == lasttype: - lastval += value - else: - if lastval: - start, end = self.styles[lasttype] - outfile.write(''.join((start, lastval, end))) - lastval = value - lasttype = ttype - - if lastval: - start, end = self.styles[lasttype] - outfile.write(''.join((start, lastval, end))) - - if self._mono: - outfile.write('[/font]') - if self._code: - outfile.write('[/code]') - if self._code or self._mono: - outfile.write('\n') diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyproject_hooks/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyproject_hooks/__init__.py deleted file mode 100644 index ddfcf7f72f31658d75c8128de0732fbbf0e12b15..0000000000000000000000000000000000000000 --- 
a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyproject_hooks/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -"""Wrappers to call pyproject.toml-based build backend hooks. -""" - -from ._impl import ( - BackendInvalid, - BackendUnavailable, - BuildBackendHookCaller, - HookMissing, - UnsupportedOperation, - default_subprocess_runner, - quiet_subprocess_runner, -) - -__version__ = '1.0.0' -__all__ = [ - 'BackendUnavailable', - 'BackendInvalid', - 'HookMissing', - 'UnsupportedOperation', - 'default_subprocess_runner', - 'quiet_subprocess_runner', - 'BuildBackendHookCaller', -] diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_structures.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_structures.py deleted file mode 100644 index 90a6465f9682c886363eea5327dac64bf623a6ff..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_structures.py +++ /dev/null @@ -1,61 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- - -class InfinityType: - def __repr__(self) -> str: - return "Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return False - - def __le__(self, other: object) -> bool: - return False - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return True - - def __ge__(self, other: object) -> bool: - return True - - def __neg__(self: object) -> "NegativeInfinityType": - return NegativeInfinity - - -Infinity = InfinityType() - - -class NegativeInfinityType: - def __repr__(self) -> str: - return "-Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return True - - def __le__(self, other: object) -> bool: - return True - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return False - - def __ge__(self, other: object) -> bool: - return False - - def __neg__(self: object) -> InfinityType: - return Infinity - - -NegativeInfinity = NegativeInfinityType() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/delete.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/delete.py deleted file mode 100644 index 74084d312a7d603c3016fb424940e775ee2f9333..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/delete.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. 
See the License for the specific -# language governing permissions and limitations under the License. -from s3transfer.tasks import SubmissionTask, Task - - -class DeleteSubmissionTask(SubmissionTask): - """Task for submitting tasks to execute an object deletion.""" - - def _submit(self, client, request_executor, transfer_future, **kwargs): - """ - :param client: The client associated with the transfer manager - - :type config: s3transfer.manager.TransferConfig - :param config: The transfer config associated with the transfer - manager - - :type osutil: s3transfer.utils.OSUtil - :param osutil: The os utility associated to the transfer manager - - :type request_executor: s3transfer.futures.BoundedExecutor - :param request_executor: The request executor associated with the - transfer manager - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The transfer future associated with the - transfer request that tasks are being submitted for - """ - call_args = transfer_future.meta.call_args - - self._transfer_coordinator.submit( - request_executor, - DeleteObjectTask( - transfer_coordinator=self._transfer_coordinator, - main_kwargs={ - 'client': client, - 'bucket': call_args.bucket, - 'key': call_args.key, - 'extra_args': call_args.extra_args, - }, - is_final=True, - ), - ) - - -class DeleteObjectTask(Task): - def _main(self, client, bucket, key, extra_args): - """ - - :param client: The S3 client to use when calling DeleteObject - - :type bucket: str - :param bucket: The name of the bucket. - - :type key: str - :param key: The name of the object to delete. - - :type extra_args: dict - :param extra_args: Extra arguments to pass to the DeleteObject call. 
- - """ - client.delete_object(Bucket=bucket, Key=key, **extra_args) diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ATS.py b/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ATS.py deleted file mode 100644 index 2a38426aacf582c6ac1aa8c26cb43e5211375996..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ATS.py +++ /dev/null @@ -1,73 +0,0 @@ -from cgi import test -import xgboost as xgb -import pandas as pd -import pickle as pkl -import numpy as np -import os - -model = 'xgboost_ATS_no_odds_57.3%' - -current_directory = os.path.dirname(os.path.abspath(__file__)) -parent_directory = os.path.dirname(current_directory) -data_directory = os.path.join(parent_directory, 'Data') -model_directory = os.path.join(parent_directory, 'Models') -pickle_directory = os.path.join(parent_directory, 'Pickles') - -file_path = os.path.join(model_directory, f'{model}.json') -xgb_ml = xgb.Booster() -xgb_ml.load_model(file_path) - -file_path = os.path.join(pickle_directory, 'test_games_ATS_no_odds.pkl') -with open(file_path,'rb') as f: - test_games = pkl.load(f).tolist() - -file_path = os.path.join(data_directory, 'gbg_and_odds.csv') -gbg_and_odds = pd.read_csv(file_path) -test_data = gbg_and_odds.loc[gbg_and_odds['game_id'].isin(test_games)] -test_data_matrix = xgb.DMatrix(test_data.drop(columns=['game_id','Home-Team-Win','Home-Team-Cover','Over','Season','home_team','away_team','game_date','Key','Home Score','Away Score','Home Odds Close','Away Odds Close','Home Winnings','Away Winnings','Away Odds','Home Odds']).astype(float).values) - -predicted_probas = xgb_ml.predict(test_data_matrix) -predictions = np.argmax(predicted_probas, axis=1) -test_data['predicted_proba'] = [i[1] for i in predicted_probas] -test_data['prediction'] = predictions -test_data['correct'] = test_data['Home-Team-Cover']==test_data['prediction'] -print(test_data['predicted_proba']) -print(test_data['correct'].mean()) - -bets = 
test_data.loc[(test_data['predicted_proba']>0.5) | (test_data['predicted_proba']<0.5)] -bets['winnings'] = [0.91 if c==1 else -1 for c in bets['correct']] - -print('Actual') -print(bets.loc[bets['Home-Team-Cover']==1].shape) -print(bets.loc[bets['Home-Team-Cover']==0].shape) -print(bets.loc[bets['Home-Team-Cover']==2].shape) - -print('Predicted') -print(bets.loc[bets['prediction']==1].shape) -print(bets.loc[bets['prediction']==0].shape) -print(bets.loc[bets['prediction']==2].shape) - - -import matplotlib.pyplot as plt -fig = plt.figure(facecolor='black') -ax = fig.add_subplot(1, 1, 1, facecolor='black') - -# Plot data with line color as RGB(0, 128, 0) -ax.plot(bets['winnings'].cumsum().values*100, linewidth=3, color=(0/255, 128/255, 0/255)) - -# Set title and labels -ax.set_title('MARCI 3.0 - Against the Spread', color='white') -ax.set_xlabel('Games Bet On', color='white') -ax.set_ylabel('Return (%)', color='white') - -# Change tick colors to white -ax.tick_params(axis='x', colors='white') -ax.tick_params(axis='y', colors='white') - -# Change axis edge colors -ax.spines['bottom'].set_color('white') -ax.spines['top'].set_color('white') -ax.spines['left'].set_color('white') -ax.spines['right'].set_color('white') - -plt.savefig(f'{model}_dark.png', facecolor='black') \ No newline at end of file diff --git a/spaces/CForGETaass/vits-uma-genshin-honkai/models.py b/spaces/CForGETaass/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 52e15d1b9775038fd6e82b2efe6f95f51c66802d..0000000000000000000000000000000000000000 --- a/spaces/CForGETaass/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class 
StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in 
self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = 
self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - 
def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), 
- k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, 
padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, 
fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - 
self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - device = next(self.parameters()).device # get the device the model parameters live on - x, m_p, logs_p, x_mask = self.enc_p(x.to(device), x_lengths.to(device)) - if 
self.n_speakers > 0: - g = self.emb_g(sid.to(device)).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/ban/ban.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/ban/ban.py deleted file mode 100644 index 5bb78347020d4321cd0e518cae8ffd8ccb998405..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/ban/ban.py +++ /dev/null @@ -1,138 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Zhenwei Shao https://github.com/ParadoxZW -# Based on the implementation of paper "Bilinear Attention Networks", NeurIPS 2018 (https://github.com/jnhwkim/ban-vqa) -# -------------------------------------------------------- - -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.utils.weight_norm import weight_norm -import torch, math - -# ------------------------------ -# ----- Weight Normal MLP ------ -# ------------------------------ - -class MLP(nn.Module): - """ - Simple class for non-linear fully connected network - """ - - def __init__(self, dims, act='ReLU', dropout_r=0.0): - super(MLP, self).__init__() - - layers = [] - for i in range(len(dims) - 1): - in_dim = dims[i] - out_dim = dims[i + 1] - if dropout_r > 0: - layers.append(nn.Dropout(dropout_r)) - layers.append(weight_norm(nn.Linear(in_dim, out_dim), dim=None)) - if act != '': - layers.append(getattr(nn, act)()) - - self.mlp = nn.Sequential(*layers) - - def forward(self, x): - return self.mlp(x) - -# ------------------------------ -# ------ Bilinear Connect ------ -# ------------------------------ - -class BC(nn.Module): - """ - Simple class for non-linear bilinear connect network - """ - - def 
__init__(self, __C, atten=False): - super(BC, self).__init__() - - self.__C = __C - self.v_net = MLP([__C.IMG_FEAT_SIZE, - __C.BA_HIDDEN_SIZE], dropout_r=__C.DROPOUT_R) - self.q_net = MLP([__C.HIDDEN_SIZE, - __C.BA_HIDDEN_SIZE], dropout_r=__C.DROPOUT_R) - if not atten: - self.p_net = nn.AvgPool1d(__C.K_TIMES, stride=__C.K_TIMES) - else: - self.dropout = nn.Dropout(__C.CLASSIFER_DROPOUT_R) # attention - - self.h_mat = nn.Parameter(torch.Tensor( - 1, __C.GLIMPSE, 1, __C.BA_HIDDEN_SIZE).normal_()) - self.h_bias = nn.Parameter( - torch.Tensor(1, __C.GLIMPSE, 1, 1).normal_()) - - def forward(self, v, q): - # low-rank bilinear pooling using einsum - v_ = self.dropout(self.v_net(v)) - q_ = self.q_net(q) - logits = torch.einsum('xhyk,bvk,bqk->bhvq', - (self.h_mat, v_, q_)) + self.h_bias - return logits # b x h_out x v x q - - def forward_with_weights(self, v, q, w): - v_ = self.v_net(v) # b x v x d - q_ = self.q_net(q) # b x q x d - logits = torch.einsum('bvk,bvq,bqk->bk', (v_, w, q_)) - logits = logits.unsqueeze(1) # b x 1 x d - logits = self.p_net(logits).squeeze(1) * self.__C.K_TIMES # sum-pooling - return logits - -# ------------------------------ -# -------- BiAttention --------- -# ------------------------------ - - -class BiAttention(nn.Module): - def __init__(self, __C): - super(BiAttention, self).__init__() - - self.__C = __C - self.logits = weight_norm(BC(__C, True), name='h_mat', dim=None) - - def forward(self, v, q, v_mask=True, logit=False, mask_with=-float('inf')): - v_num = v.size(1) - q_num = q.size(1) - logits = self.logits(v, q) # b x g x v x q - - if v_mask: - mask = (0 == v.abs().sum(2)).unsqueeze( - 1).unsqueeze(3).expand(logits.size()) - logits.data.masked_fill_(mask.data, mask_with) - - if not logit: - p = nn.functional.softmax( - logits.view(-1, self.__C.GLIMPSE, v_num * q_num), 2) - return p.view(-1, self.__C.GLIMPSE, v_num, q_num), logits - - return logits - -# ------------------------------ -# - Bilinear Attention Network - -# 
------------------------------ - -class BAN(nn.Module): - def __init__(self, __C): - super(BAN, self).__init__() - - self.__C = __C - self.BiAtt = BiAttention(__C) - b_net = [] - q_prj = [] - c_prj = [] - for i in range(__C.GLIMPSE): - b_net.append(BC(__C)) - q_prj.append(MLP([__C.HIDDEN_SIZE, __C.HIDDEN_SIZE], '', __C.DROPOUT_R)) - self.b_net = nn.ModuleList(b_net) - self.q_prj = nn.ModuleList(q_prj) - - def forward(self, q, v): - att, logits = self.BiAtt(v, q) # b x g x v x q - - for g in range(self.__C.GLIMPSE): - bi_emb = self.b_net[g].forward_with_weights( - v, q, att[:, g, :, :]) # b x l x h - q = self.q_prj[g](bi_emb.unsqueeze(1)) + q - - return q diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_local_bindings.py b/spaces/CVPR/LIVE/pybind11/tests/test_local_bindings.py deleted file mode 100644 index 5460727e1d7ad840f5f2817e9ffbb4e10920b583..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_local_bindings.py +++ /dev/null @@ -1,230 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest - -import env # noqa: F401 - -from pybind11_tests import local_bindings as m - - -def test_load_external(): - """Load a `py::module_local` type that's only registered in an external module""" - import pybind11_cross_module_tests as cm - - assert m.load_external1(cm.ExternalType1(11)) == 11 - assert m.load_external2(cm.ExternalType2(22)) == 22 - - with pytest.raises(TypeError) as excinfo: - assert m.load_external2(cm.ExternalType1(21)) == 21 - assert "incompatible function arguments" in str(excinfo.value) - - with pytest.raises(TypeError) as excinfo: - assert m.load_external1(cm.ExternalType2(12)) == 12 - assert "incompatible function arguments" in str(excinfo.value) - - -def test_local_bindings(): - """Tests that duplicate `py::module_local` class bindings work across modules""" - - # Make sure we can load the second module with the conflicting (but local) definition: - import pybind11_cross_module_tests as cm - - i1 = m.LocalType(5) - assert 
i1.get() == 4 - assert i1.get3() == 8 - - i2 = cm.LocalType(10) - assert i2.get() == 11 - assert i2.get2() == 12 - - assert not hasattr(i1, 'get2') - assert not hasattr(i2, 'get3') - - # Loading within the local module - assert m.local_value(i1) == 5 - assert cm.local_value(i2) == 10 - - # Cross-module loading works as well (on failure, the type loader looks for - # external module-local converters): - assert m.local_value(i2) == 10 - assert cm.local_value(i1) == 5 - - -def test_nonlocal_failure(): - """Tests that attempting to register a non-local type in multiple modules fails""" - import pybind11_cross_module_tests as cm - - with pytest.raises(RuntimeError) as excinfo: - cm.register_nonlocal() - assert str(excinfo.value) == 'generic_type: type "NonLocalType" is already registered!' - - -def test_duplicate_local(): - """Tests expected failure when registering a class twice with py::local in the same module""" - with pytest.raises(RuntimeError) as excinfo: - m.register_local_external() - import pybind11_tests - assert str(excinfo.value) == ( - 'generic_type: type "LocalExternal" is already registered!' 
- if hasattr(pybind11_tests, 'class_') else 'test_class not enabled') - - -def test_stl_bind_local(): - import pybind11_cross_module_tests as cm - - v1, v2 = m.LocalVec(), cm.LocalVec() - v1.append(m.LocalType(1)) - v1.append(m.LocalType(2)) - v2.append(cm.LocalType(1)) - v2.append(cm.LocalType(2)) - - # Cross module value loading: - v1.append(cm.LocalType(3)) - v2.append(m.LocalType(3)) - - assert [i.get() for i in v1] == [0, 1, 2] - assert [i.get() for i in v2] == [2, 3, 4] - - v3, v4 = m.NonLocalVec(), cm.NonLocalVec2() - v3.append(m.NonLocalType(1)) - v3.append(m.NonLocalType(2)) - v4.append(m.NonLocal2(3)) - v4.append(m.NonLocal2(4)) - - assert [i.get() for i in v3] == [1, 2] - assert [i.get() for i in v4] == [13, 14] - - d1, d2 = m.LocalMap(), cm.LocalMap() - d1["a"] = v1[0] - d1["b"] = v1[1] - d2["c"] = v2[0] - d2["d"] = v2[1] - assert {i: d1[i].get() for i in d1} == {'a': 0, 'b': 1} - assert {i: d2[i].get() for i in d2} == {'c': 2, 'd': 3} - - -def test_stl_bind_global(): - import pybind11_cross_module_tests as cm - - with pytest.raises(RuntimeError) as excinfo: - cm.register_nonlocal_map() - assert str(excinfo.value) == 'generic_type: type "NonLocalMap" is already registered!' - - with pytest.raises(RuntimeError) as excinfo: - cm.register_nonlocal_vec() - assert str(excinfo.value) == 'generic_type: type "NonLocalVec" is already registered!' - - with pytest.raises(RuntimeError) as excinfo: - cm.register_nonlocal_map2() - assert str(excinfo.value) == 'generic_type: type "NonLocalMap2" is already registered!' - - -def test_mixed_local_global(): - """Local types take precedence over globally registered types: a module with a `module_local` - type can be registered even if the type is already registered globally. 
Within the module, - casting will go to the local type; outside the module casting goes to the global type.""" - import pybind11_cross_module_tests as cm - m.register_mixed_global() - m.register_mixed_local() - - a = [] - a.append(m.MixedGlobalLocal(1)) - a.append(m.MixedLocalGlobal(2)) - a.append(m.get_mixed_gl(3)) - a.append(m.get_mixed_lg(4)) - - assert [x.get() for x in a] == [101, 1002, 103, 1004] - - cm.register_mixed_global_local() - cm.register_mixed_local_global() - a.append(m.MixedGlobalLocal(5)) - a.append(m.MixedLocalGlobal(6)) - a.append(cm.MixedGlobalLocal(7)) - a.append(cm.MixedLocalGlobal(8)) - a.append(m.get_mixed_gl(9)) - a.append(m.get_mixed_lg(10)) - a.append(cm.get_mixed_gl(11)) - a.append(cm.get_mixed_lg(12)) - - assert [x.get() for x in a] == \ - [101, 1002, 103, 1004, 105, 1006, 207, 2008, 109, 1010, 211, 2012] - - -def test_internal_locals_differ(): - """Makes sure the internal local type map differs across the two modules""" - import pybind11_cross_module_tests as cm - assert m.local_cpp_types_addr() != cm.local_cpp_types_addr() - - -@pytest.mark.xfail("env.PYPY") -def test_stl_caster_vs_stl_bind(msg): - """One module uses a generic vector caster from `<pybind11/stl.h>` while the other - exports `std::vector<int>` via `py::bind_vector` and `py::module_local`""" - import pybind11_cross_module_tests as cm - - v1 = cm.VectorInt([1, 2, 3]) - assert m.load_vector_via_caster(v1) == 6 - assert cm.load_vector_via_binding(v1) == 6 - - v2 = [1, 2, 3] - assert m.load_vector_via_caster(v2) == 6 - with pytest.raises(TypeError) as excinfo: - cm.load_vector_via_binding(v2) == 6 - assert msg(excinfo.value) == """ - load_vector_via_binding(): incompatible function arguments. The following argument types are supported: - 1. 
(arg0: pybind11_cross_module_tests.VectorInt) -> int - - Invoked with: [1, 2, 3] - """ # noqa: E501 line too long - - -def test_cross_module_calls(): - import pybind11_cross_module_tests as cm - - v1 = m.LocalVec() - v1.append(m.LocalType(1)) - v2 = cm.LocalVec() - v2.append(cm.LocalType(2)) - - # Returning the self pointer should get picked up as returning an existing - # instance (even when that instance is of a foreign, non-local type). - assert m.return_self(v1) is v1 - assert cm.return_self(v2) is v2 - assert m.return_self(v2) is v2 - assert cm.return_self(v1) is v1 - - assert m.LocalVec is not cm.LocalVec - # Returning a copy, on the other hand, always goes to the local type, - # regardless of where the source type came from. - assert type(m.return_copy(v1)) is m.LocalVec - assert type(m.return_copy(v2)) is m.LocalVec - assert type(cm.return_copy(v1)) is cm.LocalVec - assert type(cm.return_copy(v2)) is cm.LocalVec - - # Test the example given in the documentation (which also tests inheritance casting): - mycat = m.Cat("Fluffy") - mydog = cm.Dog("Rover") - assert mycat.get_name() == "Fluffy" - assert mydog.name() == "Rover" - assert m.Cat.__base__.__name__ == "Pet" - assert cm.Dog.__base__.__name__ == "Pet" - assert m.Cat.__base__ is not cm.Dog.__base__ - assert m.pet_name(mycat) == "Fluffy" - assert m.pet_name(mydog) == "Rover" - assert cm.pet_name(mycat) == "Fluffy" - assert cm.pet_name(mydog) == "Rover" - - assert m.MixGL is not cm.MixGL - a = m.MixGL(1) - b = cm.MixGL(2) - assert m.get_gl_value(a) == 11 - assert m.get_gl_value(b) == 12 - assert cm.get_gl_value(a) == 101 - assert cm.get_gl_value(b) == 102 - - c, d = m.MixGL2(3), cm.MixGL2(4) - with pytest.raises(TypeError) as excinfo: - m.get_gl_value(c) - assert "incompatible function arguments" in str(excinfo.value) - with pytest.raises(TypeError) as excinfo: - m.get_gl_value(d) - assert "incompatible function arguments" in str(excinfo.value) diff --git a/spaces/CarlDennis/HYTTS/modules.py 
b/spaces/CarlDennis/HYTTS/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/HYTTS/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 1." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class 
WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # the last layer produces only skip output, so it needs no residual half - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = 
res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, 
kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - 
self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) 
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/analyze_code.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/analyze_code.py deleted file mode 100644 index e02ea4c5b4ba53530e559d1cab7a07b8e3c7c638..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/analyze_code.py +++ /dev/null @@ -1,25 +0,0 @@ -"""Code evaluation module.""" -from __future__ import annotations - -from autogpt.llm_utils import call_ai_function - - -def analyze_code(code: str) -> list[str]: - """ - A function that takes in a string and returns a response from create chat - completion api call. - - Parameters: - code (str): Code to be evaluated. - Returns: - A result string from create chat completion. A list of suggestions to - improve the code. 
- """ - - function_string = "def analyze_code(code: str) -> List[str]:" - args = [code] - description_string = ( - "Analyzes the given code and returns a list of suggestions" " for improvements." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/CikeyQI/Yunzai/Yunzai/CHANGELOG.md b/spaces/CikeyQI/Yunzai/Yunzai/CHANGELOG.md deleted file mode 100644 index e215f2d78d015ca94b7d81f8d8662b9bcaa6c9d0..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/CHANGELOG.md +++ /dev/null @@ -1,32 +0,0 @@ -# 3.1.1 - -* 支持协议端:米游社大别野Bot -* 初步适配原神4.0版本,增加对应资源及信息展示,感谢**Ca(HCO₃)₂**、**@touchscale**、**@teriri7** -* 升级`#探索`内容,支持更多内容展示 **@bangbanbab** -* 增加 `#全部抽卡记录` **@story-x** - -# 3.1.0 - -* 支持协议端:GSUIDCore、微信 -* 重构CK与UID管理逻辑 - * 支持多UID绑定,可绑定多个UID并进行切换 - * 支持原神与星铁UID共存,可针对查询命令分配对应UID - * 新增`#删除uid1`命令,可对`#uid`列表内的绑定UID进行删除 - * 使用sqlite进行ck与uid存储 -* 底层对星铁查询进行支持 **@cvs** - -# 3.0.2 - -* 支持协议端:ComWeChat、ICQQ、QQ频道、KOOK、Telegram、Discord -* 3.6卡池以及图像武器别名等数据更新 **@cvs** -* 将渲染逻辑独立,支持扩展渲染器 **@ikuaki** - -# 3.0.1 - -* 支持多账号,支持协议端:go-cqhttp - -# 3.0.0 - -* 从 Miao-Yunzai 分支 - -# 3.0.0 \ No newline at end of file diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/js/main.0d945c8f.js b/spaces/DEEMOSTECH/ChatAvatar/static/js/main.0d945c8f.js deleted file mode 100644 index 82b940cb9fe24c9e00826420a8b73f5c2ca88890..0000000000000000000000000000000000000000 --- a/spaces/DEEMOSTECH/ChatAvatar/static/js/main.0d945c8f.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! 
For license information please see main.0d945c8f.js.LICENSE.txt */ -!function(){var e={498:function(e){e.exports=function(){"use strict";var e=function(t,n){return e=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,t){e.__proto__=t}||function(e,t){for(var n in t)Object.prototype.hasOwnProperty.call(t,n)&&(e[n]=t[n])},e(t,n)};function t(t,n){if("function"!==typeof n&&null!==n)throw new TypeError("Class extends value "+String(n)+" is not a constructor or null");function r(){this.constructor=t}e(t,n),t.prototype=null===n?Object.create(n):(r.prototype=n.prototype,new r)}var n=function(){return n=Object.assign||function(e){for(var t,n=1,r=arguments.length;n0&&i[i.length-1])&&(6===A[0]||2===A[0])){a=0;continue}if(3===A[0]&&(!i||A[1]>i[0]&&A[1]=55296&&i<=56319&&n>10),a%1024+56320)),(i+1===n||r.length>16384)&&(A+=String.fromCharCode.apply(String,r),r.length=0)}return A},c="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",d="undefined"===typeof Uint8Array?[]:new Uint8Array(256),h=0;h>4,u[s++]=(15&r)<<4|i>>2,u[s++]=(3&i)<<6|63&A;return l},v=function(e){for(var t=e.length,n=[],r=0;r>w,x=(1<>w)+32,S=65536>>B,E=(1<=0){if(e<55296||e>56319&&e<=65535)return t=((t=this.index[e>>w])<<_)+(e&x),this.data[t];if(e<=65535)return t=((t=this.index[b+(e-55296>>w)])<<_)+(e&x),this.data[t];if(e>B),t=this.index[t],t+=e>>w&E,t=((t=this.index[t])<<_)+(e&x),this.data[t];if(e<=1114111)return this.data[this.highValueIndex]}return this.errorValue},e}(),k="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",Q="undefined"===typeof Uint8Array?[]:new Uint8Array(256),L=0;LD?(i.push(!0),a-=D):i.push(!1),-1!==["normal","auto","loose"].indexOf(t)&&-1!==[8208,8211,12316,12448].indexOf(e))return r.push(A),n.push(Y);if(a===H||a===K){if(0===A)return r.push(A),n.push(ue);var o=n[A-1];return-1===Qe.indexOf(o)?(r.push(r[A-1]),n.push(o)):(r.push(A),n.push(ue))}return 
r.push(A),a===ce?n.push("strict"===t?te:me):a===_e||a===le?n.push(ue):a===be?e>=131072&&e<=196605||e>=196608&&e<=262141?n.push(me):n.push(ue):void n.push(a)})),[r,n,i]},Re=function(e,t,n,r){var i=r[n];if(Array.isArray(e)?-1!==e.indexOf(i):e===i)for(var A=n;A<=r.length;){if((s=r[++A])===t)return!0;if(s!==G)break}if(i===G)for(A=n;A>0;){var a=r[--A];if(Array.isArray(e)?-1!==e.indexOf(a):e===a)for(var o=n;o<=r.length;){var s;if((s=r[++o])===t)return!0;if(s!==G)break}if(a!==G)break}return!1},Pe=function(e,t){for(var n=e;n>=0;){var r=t[n];if(r!==G)return r;n--}return 0},He=function(e,t,n,r,i){if(0===n[r])return Se;var A=r-1;if(Array.isArray(i)&&!0===i[A])return Se;var a=A-1,o=A+1,s=t[A],l=a>=0?t[a]:0,u=t[o];if(s===R&&u===P)return Se;if(-1!==Fe.indexOf(s))return Ce;if(-1!==Fe.indexOf(u))return Se;if(-1!==Te.indexOf(u))return Se;if(Pe(A,t)===V)return Ee;if(Ue.get(e[A])===K)return Se;if((s===de||s===he)&&Ue.get(e[o])===K)return Se;if(s===O||u===O)return Se;if(s===z)return Se;if(-1===[G,j,q].indexOf(s)&&u===z)return Se;if(-1!==[J,Z,$,ie,se].indexOf(u))return Se;if(Pe(A,t)===ne)return Se;if(Re(re,ne,A,t))return Se;if(Re([J,Z],te,A,t))return Se;if(Re(W,W,A,t))return Se;if(s===G)return Ee;if(s===re||u===re)return Se;if(u===Y||s===Y)return Ee;if(-1!==[j,q,te].indexOf(u)||s===X)return Se;if(l===ge&&-1!==De.indexOf(s))return Se;if(s===se&&u===ge)return Se;if(u===ee)return Se;if(-1!==Me.indexOf(u)&&s===Ae||-1!==Me.indexOf(s)&&u===Ae)return Se;if(s===oe&&-1!==[me,de,he].indexOf(u)||-1!==[me,de,he].indexOf(s)&&u===ae)return Se;if(-1!==Me.indexOf(s)&&-1!==ke.indexOf(u)||-1!==ke.indexOf(s)&&-1!==Me.indexOf(u))return Se;if(-1!==[oe,ae].indexOf(s)&&(u===Ae||-1!==[ne,q].indexOf(u)&&t[o+1]===Ae)||-1!==[ne,q].indexOf(s)&&u===Ae||s===Ae&&-1!==[Ae,se,ie].indexOf(u))return Se;if(-1!==[Ae,se,ie,J,Z].indexOf(u))for(var c=A;c>=0;){if((d=t[c])===Ae)return Se;if(-1===[se,ie].indexOf(d))break;c--}if(-1!==[oe,ae].indexOf(u))for(c=-1!==[J,Z].indexOf(s)?a:A;c>=0;){var d;if((d=t[c])===Ae)return 
Se;if(-1===[se,ie].indexOf(d))break;c--}if(ve===s&&-1!==[ve,ye,fe,pe].indexOf(u)||-1!==[ye,fe].indexOf(s)&&-1!==[ye,we].indexOf(u)||-1!==[we,pe].indexOf(s)&&u===we)return Se;if(-1!==Le.indexOf(s)&&-1!==[ee,ae].indexOf(u)||-1!==Le.indexOf(u)&&s===oe)return Se;if(-1!==Me.indexOf(s)&&-1!==Me.indexOf(u))return Se;if(s===ie&&-1!==Me.indexOf(u))return Se;if(-1!==Me.concat(Ae).indexOf(s)&&u===ne&&-1===xe.indexOf(e[o])||-1!==Me.concat(Ae).indexOf(u)&&s===Z)return Se;if(s===Be&&u===Be){for(var h=n[A],f=1;h>0&&t[--h]===Be;)f++;if(f%2!==0)return Se}return s===de&&u===he?Se:Ee},Ne=function(e,t){t||(t={lineBreak:"normal",wordBreak:"normal"});var n=Ie(e,t.lineBreak),r=n[0],i=n[1],A=n[2];"break-all"!==t.wordBreak&&"break-word"!==t.wordBreak||(i=i.map((function(e){return-1!==[Ae,ue,_e].indexOf(e)?me:e})));var a="keep-all"===t.wordBreak?A.map((function(t,n){return t&&e[n]>=19968&&e[n]<=40959})):void 0;return[r,i,a]},Oe=function(){function e(e,t,n,r){this.codePoints=e,this.required=t===Ce,this.start=n,this.end=r}return e.prototype.slice=function(){return u.apply(void 0,this.codePoints.slice(this.start,this.end))},e}(),Ve=function(e,t){var n=l(e),r=Ne(n,t),i=r[0],A=r[1],a=r[2],o=n.length,s=0,u=0;return{next:function(){if(u>=o)return{done:!0,value:null};for(var e=Se;u=Dt&&e<=57},jt=function(e){return e>=55296&&e<=57343},Xt=function(e){return Wt(e)||e>=Ot&&e<=zt||e>=It&&e<=Pt},qt=function(e){return e>=It&&e<=Nt},Yt=function(e){return e>=Ot&&e<=Kt},Jt=function(e){return qt(e)||Yt(e)},Zt=function(e){return e>=wt},$t=function(e){return e===je||e===Ye||e===Je},en=function(e){return Jt(e)||Zt(e)||e===at},tn=function(e){return en(e)||Wt(e)||e===ot},nn=function(e){return e>=Ut&&e<=Mt||e===Ft||e>=Tt&&e<=kt||e===Qt},rn=function(e,t){return e===qe&&t!==je},An=function(e,t,n){return e===ot?en(t)||rn(t,n):!!en(e)||!(e!==qe||!rn(e,t))},an=function(e,t,n){return e===bt||e===ot?!!Wt(t)||t===Et&&Wt(n):Wt(e===Et?t:e)},on=function(e){var t=0,n=1;e[t]!==bt&&e[t]!==ot||(e[t]===ot&&(n=-1),t++);for(var 
r=[];Wt(e[t]);)r.push(e[t++]);var i=r.length?parseInt(u.apply(void 0,r),10):0;e[t]===Et&&t++;for(var A=[];Wt(e[t]);)A.push(e[t++]);var a=A.length,o=a?parseInt(u.apply(void 0,A),10):0;e[t]!==Vt&&e[t]!==Rt||t++;var s=1;e[t]!==bt&&e[t]!==ot||(e[t]===ot&&(s=-1),t++);for(var l=[];Wt(e[t]);)l.push(e[t++]);var c=l.length?parseInt(u.apply(void 0,l),10):0;return n*(i+o*Math.pow(10,-a))*Math.pow(10,s*c)},sn={type:2},ln={type:3},un={type:4},cn={type:13},dn={type:8},hn={type:21},fn={type:9},pn={type:10},gn={type:11},mn={type:12},vn={type:14},yn={type:23},wn={type:1},Bn={type:25},_n={type:24},bn={type:26},xn={type:27},Cn={type:28},Sn={type:29},En={type:31},Un={type:32},Mn=function(){function e(){this._value=[]}return e.prototype.write=function(e){this._value=this._value.concat(l(e))},e.prototype.read=function(){for(var e=[],t=this.consumeToken();t!==Un;)e.push(t),t=this.consumeToken();return e},e.prototype.consumeToken=function(){var e=this.consumeCodePoint();switch(e){case Ze:return this.consumeStringToken(Ze);case et:var t=this.peekCodePoint(0),n=this.peekCodePoint(1),r=this.peekCodePoint(2);if(tn(t)||rn(n,r)){var i=An(t,n,r)?Ge:ze;return{type:5,value:this.consumeName(),flags:i}}break;case tt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),cn;break;case rt:return this.consumeStringToken(rt);case it:return sn;case At:return ln;case _t:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),vn;break;case bt:if(an(e,this.peekCodePoint(0),this.peekCodePoint(1)))return this.reconsumeCodePoint(e),this.consumeNumericToken();break;case xt:return un;case ot:var A=e,a=this.peekCodePoint(0),o=this.peekCodePoint(1);if(an(A,a,o))return this.reconsumeCodePoint(e),this.consumeNumericToken();if(An(A,a,o))return this.reconsumeCodePoint(e),this.consumeIdentLikeToken();if(a===ot&&o===ut)return this.consumeCodePoint(),this.consumeCodePoint(),_n;break;case Et:if(an(e,this.peekCodePoint(0),this.peekCodePoint(1)))return 
this.reconsumeCodePoint(e),this.consumeNumericToken();break;case Xe:if(this.peekCodePoint(0)===_t)for(this.consumeCodePoint();;){var s=this.consumeCodePoint();if(s===_t&&(s=this.consumeCodePoint())===Xe)return this.consumeToken();if(s===Lt)return this.consumeToken()}break;case Ct:return bn;case St:return xn;case lt:if(this.peekCodePoint(0)===st&&this.peekCodePoint(1)===ot&&this.peekCodePoint(2)===ot)return this.consumeCodePoint(),this.consumeCodePoint(),Bn;break;case ct:var l=this.peekCodePoint(0),c=this.peekCodePoint(1),d=this.peekCodePoint(2);if(An(l,c,d))return{type:7,value:this.consumeName()};break;case dt:return Cn;case qe:if(rn(e,this.peekCodePoint(0)))return this.reconsumeCodePoint(e),this.consumeIdentLikeToken();break;case ht:return Sn;case ft:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),dn;break;case pt:return gn;case mt:return mn;case Ht:case Gt:var h=this.peekCodePoint(0),f=this.peekCodePoint(1);return h!==bt||!Xt(f)&&f!==gt||(this.consumeCodePoint(),this.consumeUnicodeRangeToken()),this.reconsumeCodePoint(e),this.consumeIdentLikeToken();case vt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),fn;if(this.peekCodePoint(0)===vt)return this.consumeCodePoint(),hn;break;case yt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),pn;break;case Lt:return Un}return $t(e)?(this.consumeWhiteSpace(),En):Wt(e)?(this.reconsumeCodePoint(e),this.consumeNumericToken()):en(e)?(this.reconsumeCodePoint(e),this.consumeIdentLikeToken()):{type:6,value:u(e)}},e.prototype.consumeCodePoint=function(){var e=this._value.shift();return"undefined"===typeof e?-1:e},e.prototype.reconsumeCodePoint=function(e){this._value.unshift(e)},e.prototype.peekCodePoint=function(e){return e>=this._value.length?-1:this._value[e]},e.prototype.consumeUnicodeRangeToken=function(){for(var e=[],t=this.consumeCodePoint();Xt(t)&&e.length<6;)e.push(t),t=this.consumeCodePoint();for(var 
n=!1;t===gt&&e.length<6;)e.push(t),t=this.consumeCodePoint(),n=!0;if(n)return{type:30,start:parseInt(u.apply(void 0,e.map((function(e){return e===gt?Dt:e}))),16),end:parseInt(u.apply(void 0,e.map((function(e){return e===gt?zt:e}))),16)};var r=parseInt(u.apply(void 0,e),16);if(this.peekCodePoint(0)===ot&&Xt(this.peekCodePoint(1))){this.consumeCodePoint(),t=this.consumeCodePoint();for(var i=[];Xt(t)&&i.length<6;)i.push(t),t=this.consumeCodePoint();return{type:30,start:r,end:parseInt(u.apply(void 0,i),16)}}return{type:30,start:r,end:r}},e.prototype.consumeIdentLikeToken=function(){var e=this.consumeName();return"url"===e.toLowerCase()&&this.peekCodePoint(0)===it?(this.consumeCodePoint(),this.consumeUrlToken()):this.peekCodePoint(0)===it?(this.consumeCodePoint(),{type:19,value:e}):{type:20,value:e}},e.prototype.consumeUrlToken=function(){var e=[];if(this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt)return{type:22,value:""};var t=this.peekCodePoint(0);if(t===rt||t===Ze){var n=this.consumeStringToken(this.consumeCodePoint());return 0===n.type&&(this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt||this.peekCodePoint(0)===At)?(this.consumeCodePoint(),{type:22,value:n.value}):(this.consumeBadUrlRemnants(),yn)}for(;;){var r=this.consumeCodePoint();if(r===Lt||r===At)return{type:22,value:u.apply(void 0,e)};if($t(r))return this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt||this.peekCodePoint(0)===At?(this.consumeCodePoint(),{type:22,value:u.apply(void 0,e)}):(this.consumeBadUrlRemnants(),yn);if(r===Ze||r===rt||r===it||nn(r))return this.consumeBadUrlRemnants(),yn;if(r===qe){if(!rn(r,this.peekCodePoint(0)))return this.consumeBadUrlRemnants(),yn;e.push(this.consumeEscapedCodePoint())}else e.push(r)}},e.prototype.consumeWhiteSpace=function(){for(;$t(this.peekCodePoint(0));)this.consumeCodePoint()},e.prototype.consumeBadUrlRemnants=function(){for(;;){var 
e=this.consumeCodePoint();if(e===At||e===Lt)return;rn(e,this.peekCodePoint(0))&&this.consumeEscapedCodePoint()}},e.prototype.consumeStringSlice=function(e){for(var t=5e4,n="";e>0;){var r=Math.min(t,e);n+=u.apply(void 0,this._value.splice(0,r)),e-=r}return this._value.shift(),n},e.prototype.consumeStringToken=function(e){for(var t="",n=0;;){var r=this._value[n];if(r===Lt||void 0===r||r===e)return{type:0,value:t+=this.consumeStringSlice(n)};if(r===je)return this._value.splice(0,n),wn;if(r===qe){var i=this._value[n+1];i!==Lt&&void 0!==i&&(i===je?(t+=this.consumeStringSlice(n),n=-1,this._value.shift()):rn(r,i)&&(t+=this.consumeStringSlice(n),t+=u(this.consumeEscapedCodePoint()),n=-1))}n++}},e.prototype.consumeNumber=function(){var e=[],t=Ke,n=this.peekCodePoint(0);for(n!==bt&&n!==ot||e.push(this.consumeCodePoint());Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());n=this.peekCodePoint(0);var r=this.peekCodePoint(1);if(n===Et&&Wt(r))for(e.push(this.consumeCodePoint(),this.consumeCodePoint()),t=We;Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());n=this.peekCodePoint(0),r=this.peekCodePoint(1);var i=this.peekCodePoint(2);if((n===Vt||n===Rt)&&((r===bt||r===ot)&&Wt(i)||Wt(r)))for(e.push(this.consumeCodePoint(),this.consumeCodePoint()),t=We;Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());return[on(e),t]},e.prototype.consumeNumericToken=function(){var e=this.consumeNumber(),t=e[0],n=e[1],r=this.peekCodePoint(0),i=this.peekCodePoint(1),A=this.peekCodePoint(2);return An(r,i,A)?{type:15,number:t,flags:n,unit:this.consumeName()}:r===nt?(this.consumeCodePoint(),{type:16,number:t,flags:n}):{type:17,number:t,flags:n}},e.prototype.consumeEscapedCodePoint=function(){var e=this.consumeCodePoint();if(Xt(e)){for(var t=u(e);Xt(this.peekCodePoint(0))&&t.length<6;)t+=u(this.consumeCodePoint());$t(this.peekCodePoint(0))&&this.consumeCodePoint();var n=parseInt(t,16);return 0===n||jt(n)||n>1114111?Bt:n}return e===Lt?Bt:e},e.prototype.consumeName=function(){for(var 
e="";;){var t=this.consumeCodePoint();if(tn(t))e+=u(t);else{if(!rn(t,this.peekCodePoint(0)))return this.reconsumeCodePoint(t),e;e+=u(this.consumeEscapedCodePoint())}}},e}(),Fn=function(){function e(e){this._tokens=e}return e.create=function(t){var n=new Mn;return n.write(t),new e(n.read())},e.parseValue=function(t){return e.create(t).parseComponentValue()},e.parseValues=function(t){return e.create(t).parseComponentValues()},e.prototype.parseComponentValue=function(){for(var e=this.consumeToken();31===e.type;)e=this.consumeToken();if(32===e.type)throw new SyntaxError("Error parsing CSS component value, unexpected EOF");this.reconsumeToken(e);var t=this.consumeComponentValue();do{e=this.consumeToken()}while(31===e.type);if(32===e.type)return t;throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one")},e.prototype.parseComponentValues=function(){for(var e=[];;){var t=this.consumeComponentValue();if(32===t.type)return e;e.push(t),e.push()}},e.prototype.consumeComponentValue=function(){var e=this.consumeToken();switch(e.type){case 11:case 28:case 2:return this.consumeSimpleBlock(e.type);case 19:return this.consumeFunction(e)}return e},e.prototype.consumeSimpleBlock=function(e){for(var t={type:e,values:[]},n=this.consumeToken();;){if(32===n.type||Hn(n,e))return t;this.reconsumeToken(n),t.values.push(this.consumeComponentValue()),n=this.consumeToken()}},e.prototype.consumeFunction=function(e){for(var t={name:e.value,values:[],type:18};;){var n=this.consumeToken();if(32===n.type||3===n.type)return t;this.reconsumeToken(n),t.values.push(this.consumeComponentValue())}},e.prototype.consumeToken=function(){var e=this._tokens.shift();return"undefined"===typeof e?Un:e},e.prototype.reconsumeToken=function(e){this._tokens.unshift(e)},e}(),Tn=function(e){return 15===e.type},kn=function(e){return 17===e.type},Qn=function(e){return 20===e.type},Ln=function(e){return 0===e.type},Dn=function(e,t){return 
Qn(e)&&e.value===t},In=function(e){return 31!==e.type},Rn=function(e){return 31!==e.type&&4!==e.type},Pn=function(e){var t=[],n=[];return e.forEach((function(e){if(4===e.type){if(0===n.length)throw new Error("Error parsing function args, zero tokens for arg");return t.push(n),void(n=[])}31!==e.type&&n.push(e)})),n.length&&t.push(n),t},Hn=function(e,t){return 11===t&&12===e.type||28===t&&29===e.type||2===t&&3===e.type},Nn=function(e){return 17===e.type||15===e.type},On=function(e){return 16===e.type||Nn(e)},Vn=function(e){return e.length>1?[e[0],e[1]]:[e[0]]},zn={type:17,number:0,flags:Ke},Gn={type:16,number:50,flags:Ke},Kn={type:16,number:100,flags:Ke},Wn=function(e,t,n){var r=e[0],i=e[1];return[jn(r,t),jn("undefined"!==typeof i?i:r,n)]},jn=function(e,t){if(16===e.type)return e.number/100*t;if(Tn(e))switch(e.unit){case"rem":case"em":return 16*e.number;default:return e.number}return e.number},Xn="deg",qn="grad",Yn="rad",Jn="turn",Zn={name:"angle",parse:function(e,t){if(15===t.type)switch(t.unit){case Xn:return Math.PI*t.number/180;case qn:return Math.PI/200*t.number;case Yn:return t.number;case Jn:return 2*Math.PI*t.number}throw new Error("Unsupported angle type")}},$n=function(e){return 15===e.type&&(e.unit===Xn||e.unit===qn||e.unit===Yn||e.unit===Jn)},er=function(e){switch(e.filter(Qn).map((function(e){return e.value})).join(" ")){case"to bottom right":case"to right bottom":case"left top":case"top left":return[zn,zn];case"to top":case"bottom":return tr(0);case"to bottom left":case"to left bottom":case"right top":case"top right":return[zn,Kn];case"to right":case"left":return tr(90);case"to top left":case"to left top":case"right bottom":case"bottom right":return[Kn,Kn];case"to bottom":case"top":return tr(180);case"to top right":case"to right top":case"left bottom":case"bottom left":return[Kn,zn];case"to left":case"right":return tr(270)}return 0},tr=function(e){return Math.PI*e/180},nr={name:"color",parse:function(e,t){if(18===t.type){var 
n=ur[t.name];if("undefined"===typeof n)throw new Error('Attempting to parse an unsupported color function "'+t.name+'"');return n(e,t.values)}if(5===t.type){if(3===t.value.length){var r=t.value.substring(0,1),i=t.value.substring(1,2),A=t.value.substring(2,3);return Ar(parseInt(r+r,16),parseInt(i+i,16),parseInt(A+A,16),1)}if(4===t.value.length){r=t.value.substring(0,1),i=t.value.substring(1,2),A=t.value.substring(2,3);var a=t.value.substring(3,4);return Ar(parseInt(r+r,16),parseInt(i+i,16),parseInt(A+A,16),parseInt(a+a,16)/255)}if(6===t.value.length)return r=t.value.substring(0,2),i=t.value.substring(2,4),A=t.value.substring(4,6),Ar(parseInt(r,16),parseInt(i,16),parseInt(A,16),1);if(8===t.value.length)return r=t.value.substring(0,2),i=t.value.substring(2,4),A=t.value.substring(4,6),a=t.value.substring(6,8),Ar(parseInt(r,16),parseInt(i,16),parseInt(A,16),parseInt(a,16)/255)}if(20===t.type){var o=dr[t.value.toUpperCase()];if("undefined"!==typeof o)return o}return dr.TRANSPARENT}},rr=function(e){return 0===(255&e)},ir=function(e){var t=255&e,n=255&e>>8,r=255&e>>16,i=255&e>>24;return t<255?"rgba("+i+","+r+","+n+","+t/255+")":"rgb("+i+","+r+","+n+")"},Ar=function(e,t,n,r){return(e<<24|t<<16|n<<8|Math.round(255*r)<<0)>>>0},ar=function(e,t){if(17===e.type)return e.number;if(16===e.type){var n=3===t?1:255;return 3===t?e.number/100*n:Math.round(e.number/100*n)}return 0},or=function(e,t){var n=t.filter(Rn);if(3===n.length){var r=n.map(ar),i=r[0],A=r[1],a=r[2];return Ar(i,A,a,1)}if(4===n.length){var o=n.map(ar),s=(i=o[0],A=o[1],a=o[2],o[3]);return Ar(i,A,a,s)}return 0};function sr(e,t,n){return n<0&&(n+=1),n>=1&&(n-=1),n<1/6?(t-e)*n*6+e:n<.5?t:n<2/3?6*(t-e)*(2/3-n)+e:e}var lr=function(e,t){var n=t.filter(Rn),r=n[0],i=n[1],A=n[2],a=n[3],o=(17===r.type?tr(r.number):Zn.parse(e,r))/(2*Math.PI),s=On(i)?i.number/100:0,l=On(A)?A.number/100:0,u="undefined"!==typeof a&&On(a)?jn(a,1):1;if(0===s)return Ar(255*l,255*l,255*l,1);var 
c=l<=.5?l*(s+1):l+s-l*s,d=2*l-c,h=sr(d,c,o+1/3),f=sr(d,c,o),p=sr(d,c,o-1/3);return Ar(255*h,255*f,255*p,u)},ur={hsl:lr,hsla:lr,rgb:or,rgba:or},cr=function(e,t){return nr.parse(e,Fn.create(t).parseComponentValue())},dr={ALICEBLUE:4042850303,ANTIQUEWHITE:4209760255,AQUA:16777215,AQUAMARINE:2147472639,AZURE:4043309055,BEIGE:4126530815,BISQUE:4293182719,BLACK:255,BLANCHEDALMOND:4293643775,BLUE:65535,BLUEVIOLET:2318131967,BROWN:2771004159,BURLYWOOD:3736635391,CADETBLUE:1604231423,CHARTREUSE:2147418367,CHOCOLATE:3530104575,CORAL:4286533887,CORNFLOWERBLUE:1687547391,CORNSILK:4294499583,CRIMSON:3692313855,CYAN:16777215,DARKBLUE:35839,DARKCYAN:9145343,DARKGOLDENROD:3095837695,DARKGRAY:2846468607,DARKGREEN:6553855,DARKGREY:2846468607,DARKKHAKI:3182914559,DARKMAGENTA:2332068863,DARKOLIVEGREEN:1433087999,DARKORANGE:4287365375,DARKORCHID:2570243327,DARKRED:2332033279,DARKSALMON:3918953215,DARKSEAGREEN:2411499519,DARKSLATEBLUE:1211993087,DARKSLATEGRAY:793726975,DARKSLATEGREY:793726975,DARKTURQUOISE:13554175,DARKVIOLET:2483082239,DEEPPINK:4279538687,DEEPSKYBLUE:12582911,DIMGRAY:1768516095,DIMGREY:1768516095,DODGERBLUE:512819199,FIREBRICK:2988581631,FLORALWHITE:4294635775,FORESTGREEN:579543807,FUCHSIA:4278255615,GAINSBORO:3705462015,GHOSTWHITE:4177068031,GOLD:4292280575,GOLDENROD:3668254975,GRAY:2155905279,GREEN:8388863,GREENYELLOW:2919182335,GREY:2155905279,HONEYDEW:4043305215,HOTPINK:4285117695,INDIANRED:3445382399,INDIGO:1258324735,IVORY:4294963455,KHAKI:4041641215,LAVENDER:3873897215,LAVENDERBLUSH:4293981695,LAWNGREEN:2096890111,LEMONCHIFFON:4294626815,LIGHTBLUE:2916673279,LIGHTCORAL:4034953471,LIGHTCYAN:3774873599,LIGHTGOLDENRODYELLOW:4210742015,LIGHTGRAY:3553874943,LIGHTGREEN:2431553791,LIGHTGREY:3553874943,LIGHTPINK:4290167295,LIGHTSALMON:4288707327,LIGHTSEAGREEN:548580095,LIGHTSKYBLUE:2278488831,LIGHTSLATEGRAY:2005441023,LIGHTSLATEGREY:2005441023,LIGHTSTEELBLUE:2965692159,LIGHTYELLOW:4294959359,LIME:16711935,LIMEGREEN:852308735,LINEN:4210091775,MAGENTA:4278255615,MAROON:214
7483903,MEDIUMAQUAMARINE:1724754687,MEDIUMBLUE:52735,MEDIUMORCHID:3126187007,MEDIUMPURPLE:2473647103,MEDIUMSEAGREEN:1018393087,MEDIUMSLATEBLUE:2070474495,MEDIUMSPRINGGREEN:16423679,MEDIUMTURQUOISE:1221709055,MEDIUMVIOLETRED:3340076543,MIDNIGHTBLUE:421097727,MINTCREAM:4127193855,MISTYROSE:4293190143,MOCCASIN:4293178879,NAVAJOWHITE:4292783615,NAVY:33023,OLDLACE:4260751103,OLIVE:2155872511,OLIVEDRAB:1804477439,ORANGE:4289003775,ORANGERED:4282712319,ORCHID:3664828159,PALEGOLDENROD:4008225535,PALEGREEN:2566625535,PALETURQUOISE:2951671551,PALEVIOLETRED:3681588223,PAPAYAWHIP:4293907967,PEACHPUFF:4292524543,PERU:3448061951,PINK:4290825215,PLUM:3718307327,POWDERBLUE:2967529215,PURPLE:2147516671,REBECCAPURPLE:1714657791,RED:4278190335,ROSYBROWN:3163525119,ROYALBLUE:1097458175,SADDLEBROWN:2336560127,SALMON:4202722047,SANDYBROWN:4104413439,SEAGREEN:780883967,SEASHELL:4294307583,SIENNA:2689740287,SILVER:3233857791,SKYBLUE:2278484991,SLATEBLUE:1784335871,SLATEGRAY:1887473919,SLATEGREY:1887473919,SNOW:4294638335,SPRINGGREEN:16744447,STEELBLUE:1182971135,TAN:3535047935,TEAL:8421631,THISTLE:3636451583,TOMATO:4284696575,TRANSPARENT:0,TURQUOISE:1088475391,VIOLET:4001558271,WHEAT:4125012991,WHITE:4294967295,WHITESMOKE:4126537215,YELLOW:4294902015,YELLOWGREEN:2597139199},hr={name:"background-clip",initialValue:"border-box",prefix:!1,type:1,parse:function(e,t){return t.map((function(e){if(Qn(e))switch(e.value){case"padding-box":return 1;case"content-box":return 2}return 0}))}},fr={name:"background-color",initialValue:"transparent",prefix:!1,type:3,format:"color"},pr=function(e,t){var n=nr.parse(e,t[0]),r=t[1];return r&&On(r)?{color:n,stop:r}:{color:n,stop:null}},gr=function(e,t){var n=e[0],r=e[e.length-1];null===n.stop&&(n.stop=zn),null===r.stop&&(r.stop=Kn);for(var i=[],A=0,a=0;aA?i.push(s):i.push(A),A=s}else i.push(null)}var 
l=null;for(a=0;ae.optimumDistance)?{optimumCorner:t,optimumDistance:o}:e}),{optimumDistance:i?1/0:-1/0,optimumCorner:null}).optimumCorner},Br=function(e,t,n,r,i){var A=0,a=0;switch(e.size){case 0:0===e.shape?A=a=Math.min(Math.abs(t),Math.abs(t-r),Math.abs(n),Math.abs(n-i)):1===e.shape&&(A=Math.min(Math.abs(t),Math.abs(t-r)),a=Math.min(Math.abs(n),Math.abs(n-i)));break;case 2:if(0===e.shape)A=a=Math.min(yr(t,n),yr(t,n-i),yr(t-r,n),yr(t-r,n-i));else if(1===e.shape){var o=Math.min(Math.abs(n),Math.abs(n-i))/Math.min(Math.abs(t),Math.abs(t-r)),s=wr(r,i,t,n,!0),l=s[0],u=s[1];a=o*(A=yr(l-t,(u-n)/o))}break;case 1:0===e.shape?A=a=Math.max(Math.abs(t),Math.abs(t-r),Math.abs(n),Math.abs(n-i)):1===e.shape&&(A=Math.max(Math.abs(t),Math.abs(t-r)),a=Math.max(Math.abs(n),Math.abs(n-i)));break;case 3:if(0===e.shape)A=a=Math.max(yr(t,n),yr(t,n-i),yr(t-r,n),yr(t-r,n-i));else if(1===e.shape){o=Math.max(Math.abs(n),Math.abs(n-i))/Math.max(Math.abs(t),Math.abs(t-r));var c=wr(r,i,t,n,!1);l=c[0],u=c[1],a=o*(A=yr(l-t,(u-n)/o))}}return Array.isArray(e.size)&&(A=jn(e.size[0],r),a=2===e.size.length?jn(e.size[1],i):A),[A,a]},_r=function(e,t){var n=tr(180),r=[];return Pn(t).forEach((function(t,i){if(0===i){var A=t[0];if(20===A.type&&-1!==["top","left","right","bottom"].indexOf(A.value))return void(n=er(t));if($n(A))return void(n=(Zn.parse(e,A)+tr(270))%tr(360))}var a=pr(e,t);r.push(a)})),{angle:n,stops:r,type:1}},br="closest-side",xr="farthest-side",Cr="closest-corner",Sr="farthest-corner",Er="circle",Ur="ellipse",Mr="cover",Fr="contain",Tr=function(e,t){var n=0,r=3,i=[],A=[];return Pn(t).forEach((function(t,a){var o=!0;if(0===a?o=t.reduce((function(e,t){if(Qn(t))switch(t.value){case"center":return A.push(Gn),!1;case"top":case"left":return A.push(zn),!1;case"right":case"bottom":return A.push(Kn),!1}else if(On(t)||Nn(t))return A.push(t),!1;return e}),o):1===a&&(o=t.reduce((function(e,t){if(Qn(t))switch(t.value){case Er:return n=0,!1;case Ur:return n=1,!1;case Fr:case br:return r=0,!1;case 
xr:return r=1,!1;case Cr:return r=2,!1;case Mr:case Sr:return r=3,!1}else if(Nn(t)||On(t))return Array.isArray(r)||(r=[]),r.push(t),!1;return e}),o)),o){var s=pr(e,t);i.push(s)}})),{size:r,shape:n,stops:i,position:A,type:2}},kr=function(e){return 1===e.type},Qr=function(e){return 2===e.type},Lr={name:"image",parse:function(e,t){if(22===t.type){var n={url:t.value,type:0};return e.cache.addImage(t.value),n}if(18===t.type){var r=Rr[t.name];if("undefined"===typeof r)throw new Error('Attempting to parse an unsupported image function "'+t.name+'"');return r(e,t.values)}throw new Error("Unsupported image type "+t.type)}};function Dr(e){return!(20===e.type&&"none"===e.value)&&(18!==e.type||!!Rr[e.name])}var Ir,Rr={"linear-gradient":function(e,t){var n=tr(180),r=[];return Pn(t).forEach((function(t,i){if(0===i){var A=t[0];if(20===A.type&&"to"===A.value)return void(n=er(t));if($n(A))return void(n=Zn.parse(e,A))}var a=pr(e,t);r.push(a)})),{angle:n,stops:r,type:1}},"-moz-linear-gradient":_r,"-ms-linear-gradient":_r,"-o-linear-gradient":_r,"-webkit-linear-gradient":_r,"radial-gradient":function(e,t){var n=0,r=3,i=[],A=[];return Pn(t).forEach((function(t,a){var o=!0;if(0===a){var s=!1;o=t.reduce((function(e,t){if(s)if(Qn(t))switch(t.value){case"center":return A.push(Gn),e;case"top":case"left":return A.push(zn),e;case"right":case"bottom":return A.push(Kn),e}else(On(t)||Nn(t))&&A.push(t);else if(Qn(t))switch(t.value){case Er:return n=0,!1;case Ur:return n=1,!1;case"at":return s=!0,!1;case br:return r=0,!1;case Mr:case xr:return r=1,!1;case Fr:case Cr:return r=2,!1;case Sr:return r=3,!1}else if(Nn(t)||On(t))return Array.isArray(r)||(r=[]),r.push(t),!1;return e}),o)}if(o){var l=pr(e,t);i.push(l)}})),{size:r,shape:n,stops:i,position:A,type:2}},"-moz-radial-gradient":Tr,"-ms-radial-gradient":Tr,"-o-radial-gradient":Tr,"-webkit-radial-gradient":Tr,"-webkit-gradient":function(e,t){var n=tr(180),r=[],i=1,A=0,a=3,o=[];return Pn(t).forEach((function(t,n){var 
A=t[0];if(0===n){if(Qn(A)&&"linear"===A.value)return void(i=1);if(Qn(A)&&"radial"===A.value)return void(i=2)}if(18===A.type)if("from"===A.name){var a=nr.parse(e,A.values[0]);r.push({stop:zn,color:a})}else if("to"===A.name)a=nr.parse(e,A.values[0]),r.push({stop:Kn,color:a});else if("color-stop"===A.name){var o=A.values.filter(Rn);if(2===o.length){a=nr.parse(e,o[1]);var s=o[0];kn(s)&&r.push({stop:{type:16,number:100*s.number,flags:s.flags},color:a})}}})),1===i?{angle:(n+tr(180))%tr(360),stops:r,type:i}:{size:a,shape:A,stops:r,position:o,type:i}}},Pr={name:"background-image",initialValue:"none",type:1,prefix:!1,parse:function(e,t){if(0===t.length)return[];var n=t[0];return 20===n.type&&"none"===n.value?[]:t.filter((function(e){return Rn(e)&&Dr(e)})).map((function(t){return Lr.parse(e,t)}))}},Hr={name:"background-origin",initialValue:"border-box",prefix:!1,type:1,parse:function(e,t){return t.map((function(e){if(Qn(e))switch(e.value){case"padding-box":return 1;case"content-box":return 2}return 0}))}},Nr={name:"background-position",initialValue:"0% 0%",type:1,prefix:!1,parse:function(e,t){return Pn(t).map((function(e){return e.filter(On)})).map(Vn)}},Or={name:"background-repeat",initialValue:"repeat",prefix:!1,type:1,parse:function(e,t){return Pn(t).map((function(e){return e.filter(Qn).map((function(e){return e.value})).join(" ")})).map(Vr)}},Vr=function(e){switch(e){case"no-repeat":return 1;case"repeat-x":case"repeat no-repeat":return 2;case"repeat-y":case"no-repeat repeat":return 3;default:return 0}};!function(e){e.AUTO="auto",e.CONTAIN="contain",e.COVER="cover"}(Ir||(Ir={}));var zr,Gr={name:"background-size",initialValue:"0",prefix:!1,type:1,parse:function(e,t){return Pn(t).map((function(e){return e.filter(Kr)}))}},Kr=function(e){return 
Qn(e)||On(e)},Wr=function(e){return{name:"border-"+e+"-color",initialValue:"transparent",prefix:!1,type:3,format:"color"}},jr=Wr("top"),Xr=Wr("right"),qr=Wr("bottom"),Yr=Wr("left"),Jr=function(e){return{name:"border-radius-"+e,initialValue:"0 0",prefix:!1,type:1,parse:function(e,t){return Vn(t.filter(On))}}},Zr=Jr("top-left"),$r=Jr("top-right"),ei=Jr("bottom-right"),ti=Jr("bottom-left"),ni=function(e){return{name:"border-"+e+"-style",initialValue:"solid",prefix:!1,type:2,parse:function(e,t){switch(t){case"none":return 0;case"dashed":return 2;case"dotted":return 3;case"double":return 4}return 1}}},ri=ni("top"),ii=ni("right"),Ai=ni("bottom"),ai=ni("left"),oi=function(e){return{name:"border-"+e+"-width",initialValue:"0",type:0,prefix:!1,parse:function(e,t){return Tn(t)?t.number:0}}},si=oi("top"),li=oi("right"),ui=oi("bottom"),ci=oi("left"),di={name:"color",initialValue:"transparent",prefix:!1,type:3,format:"color"},hi={name:"direction",initialValue:"ltr",prefix:!1,type:2,parse:function(e,t){return"rtl"===t?1:0}},fi={name:"display",initialValue:"inline-block",prefix:!1,type:1,parse:function(e,t){return t.filter(Qn).reduce((function(e,t){return e|pi(t.value)}),0)}},pi=function(e){switch(e){case"block":case"-webkit-box":return 2;case"inline":return 4;case"run-in":return 8;case"flow":return 16;case"flow-root":return 32;case"table":return 64;case"flex":case"-webkit-flex":return 128;case"grid":case"-ms-grid":return 256;case"ruby":return 512;case"subgrid":return 1024;case"list-item":return 2048;case"table-row-group":return 4096;case"table-header-group":return 8192;case"table-footer-group":return 16384;case"table-row":return 32768;case"table-cell":return 65536;case"table-column-group":return 131072;case"table-column":return 262144;case"table-caption":return 524288;case"ruby-base":return 1048576;case"ruby-text":return 2097152;case"ruby-base-container":return 4194304;case"ruby-text-container":return 8388608;case"contents":return 16777216;case"inline-block":return 
33554432;case"inline-list-item":return 67108864;case"inline-table":return 134217728;case"inline-flex":return 268435456;case"inline-grid":return 536870912}return 0},gi={name:"float",initialValue:"none",prefix:!1,type:2,parse:function(e,t){switch(t){case"left":return 1;case"right":return 2;case"inline-start":return 3;case"inline-end":return 4}return 0}},mi={name:"letter-spacing",initialValue:"0",prefix:!1,type:0,parse:function(e,t){return 20===t.type&&"normal"===t.value?0:17===t.type||15===t.type?t.number:0}};!function(e){e.NORMAL="normal",e.STRICT="strict"}(zr||(zr={}));var vi,yi={name:"line-break",initialValue:"normal",prefix:!1,type:2,parse:function(e,t){return"strict"===t?zr.STRICT:zr.NORMAL}},wi={name:"line-height",initialValue:"normal",prefix:!1,type:4},Bi=function(e,t){return Qn(e)&&"normal"===e.value?1.2*t:17===e.type?t*e.number:On(e)?jn(e,t):t},_i={name:"list-style-image",initialValue:"none",type:0,prefix:!1,parse:function(e,t){return 20===t.type&&"none"===t.value?null:Lr.parse(e,t)}},bi={name:"list-style-position",initialValue:"outside",prefix:!1,type:2,parse:function(e,t){return"inside"===t?0:1}},xi={name:"list-style-type",initialValue:"none",prefix:!1,type:2,parse:function(e,t){switch(t){case"disc":return 0;case"circle":return 1;case"square":return 2;case"decimal":return 3;case"cjk-decimal":return 4;case"decimal-leading-zero":return 5;case"lower-roman":return 6;case"upper-roman":return 7;case"lower-greek":return 8;case"lower-alpha":return 9;case"upper-alpha":return 10;case"arabic-indic":return 11;case"armenian":return 12;case"bengali":return 13;case"cambodian":return 14;case"cjk-earthly-branch":return 15;case"cjk-heavenly-stem":return 16;case"cjk-ideographic":return 17;case"devanagari":return 18;case"ethiopic-numeric":return 19;case"georgian":return 20;case"gujarati":return 21;case"gurmukhi":case"hebrew":return 22;case"hiragana":return 23;case"hiragana-iroha":return 24;case"japanese-formal":return 25;case"japanese-informal":return 26;case"kannada":return 
27;case"katakana":return 28;case"katakana-iroha":return 29;case"khmer":return 30;case"korean-hangul-formal":return 31;case"korean-hanja-formal":return 32;case"korean-hanja-informal":return 33;case"lao":return 34;case"lower-armenian":return 35;case"malayalam":return 36;case"mongolian":return 37;case"myanmar":return 38;case"oriya":return 39;case"persian":return 40;case"simp-chinese-formal":return 41;case"simp-chinese-informal":return 42;case"tamil":return 43;case"telugu":return 44;case"thai":return 45;case"tibetan":return 46;case"trad-chinese-formal":return 47;case"trad-chinese-informal":return 48;case"upper-armenian":return 49;case"disclosure-open":return 50;case"disclosure-closed":return 51;default:return-1}}},Ci=function(e){return{name:"margin-"+e,initialValue:"0",prefix:!1,type:4}},Si=Ci("top"),Ei=Ci("right"),Ui=Ci("bottom"),Mi=Ci("left"),Fi={name:"overflow",initialValue:"visible",prefix:!1,type:1,parse:function(e,t){return t.filter(Qn).map((function(e){switch(e.value){case"hidden":return 1;case"scroll":return 2;case"clip":return 3;case"auto":return 4;default:return 0}}))}},Ti={name:"overflow-wrap",initialValue:"normal",prefix:!1,type:2,parse:function(e,t){return"break-word"===t?"break-word":"normal"}},ki=function(e){return{name:"padding-"+e,initialValue:"0",prefix:!1,type:3,format:"length-percentage"}},Qi=ki("top"),Li=ki("right"),Di=ki("bottom"),Ii=ki("left"),Ri={name:"text-align",initialValue:"left",prefix:!1,type:2,parse:function(e,t){switch(t){case"right":return 2;case"center":case"justify":return 1;default:return 0}}},Pi={name:"position",initialValue:"static",prefix:!1,type:2,parse:function(e,t){switch(t){case"relative":return 1;case"absolute":return 2;case"fixed":return 3;case"sticky":return 4}return 0}},Hi={name:"text-shadow",initialValue:"none",type:1,prefix:!1,parse:function(e,t){return 1===t.length&&Dn(t[0],"none")?[]:Pn(t).map((function(t){for(var 
n={color:dr.TRANSPARENT,offsetX:zn,offsetY:zn,blur:zn},r=0,i=0;i1?1:0],this.overflowWrap=vA(e,Ti,t.overflowWrap),this.paddingTop=vA(e,Qi,t.paddingTop),this.paddingRight=vA(e,Li,t.paddingRight),this.paddingBottom=vA(e,Di,t.paddingBottom),this.paddingLeft=vA(e,Ii,t.paddingLeft),this.paintOrder=vA(e,dA,t.paintOrder),this.position=vA(e,Pi,t.position),this.textAlign=vA(e,Ri,t.textAlign),this.textDecorationColor=vA(e,Ji,null!==(n=t.textDecorationColor)&&void 0!==n?n:t.color),this.textDecorationLine=vA(e,Zi,null!==(r=t.textDecorationLine)&&void 0!==r?r:t.textDecoration),this.textShadow=vA(e,Hi,t.textShadow),this.textTransform=vA(e,Ni,t.textTransform),this.transform=vA(e,Oi,t.transform),this.transformOrigin=vA(e,Ki,t.transformOrigin),this.visibility=vA(e,Wi,t.visibility),this.webkitTextStrokeColor=vA(e,hA,t.webkitTextStrokeColor),this.webkitTextStrokeWidth=vA(e,fA,t.webkitTextStrokeWidth),this.wordBreak=vA(e,ji,t.wordBreak),this.zIndex=vA(e,Xi,t.zIndex)}return e.prototype.isVisible=function(){return this.display>0&&this.opacity>0&&0===this.visibility},e.prototype.isTransparent=function(){return rr(this.backgroundColor)},e.prototype.isTransformed=function(){return null!==this.transform},e.prototype.isPositioned=function(){return 0!==this.position},e.prototype.isPositionedWithZIndex=function(){return this.isPositioned()&&!this.zIndex.auto},e.prototype.isFloating=function(){return 0!==this.float},e.prototype.isInlineLevel=function(){return iA(this.display,4)||iA(this.display,33554432)||iA(this.display,268435456)||iA(this.display,536870912)||iA(this.display,67108864)||iA(this.display,134217728)},e}(),gA=function(){function e(e,t){this.content=vA(e,AA,t.content),this.quotes=vA(e,lA,t.quotes)}return e}(),mA=function(){function e(e,t){this.counterIncrement=vA(e,aA,t.counterIncrement),this.counterReset=vA(e,oA,t.counterReset)}return e}(),vA=function(e,t,n){var r=new Mn,i=null!==n&&"undefined"!==typeof n?n.toString():t.initialValue;r.write(i);var A=new 
Fn(r.read());switch(t.type){case 2:var a=A.parseComponentValue();return t.parse(e,Qn(a)?a.value:t.initialValue);case 0:return t.parse(e,A.parseComponentValue());case 1:return t.parse(e,A.parseComponentValues());case 4:return A.parseComponentValue();case 3:switch(t.format){case"angle":return Zn.parse(e,A.parseComponentValue());case"color":return nr.parse(e,A.parseComponentValue());case"image":return Lr.parse(e,A.parseComponentValue());case"length":var o=A.parseComponentValue();return Nn(o)?o:zn;case"length-percentage":var s=A.parseComponentValue();return On(s)?s:zn;case"time":return qi.parse(e,A.parseComponentValue())}}},yA="data-html2canvas-debug",wA=function(e){switch(e.getAttribute(yA)){case"all":return 1;case"clone":return 2;case"parse":return 3;case"render":return 4;default:return 0}},BA=function(e,t){var n=wA(e);return 1===n||t===n},_A=function(){function e(e,t){this.context=e,this.textNodes=[],this.elements=[],this.flags=0,BA(t,3),this.styles=new pA(e,window.getComputedStyle(t,null)),lo(t)&&(this.styles.animationDuration.some((function(e){return e>0}))&&(t.style.animationDuration="0s"),null!==this.styles.transform&&(t.style.transform="none")),this.bounds=o(this.context,t),BA(t,4)&&(this.flags|=16)}return 
e}(),bA="AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAg
ACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTc
FOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZU
BlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAg
ACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAA
AAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAU
ABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAc
ABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAA
ABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4
ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA=",xA="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",CA="undefined"===typeof Uint8Array?[]:new Uint8Array(256),SA=0;SA>4,u[s++]=(15&r)<<4|i>>2,u[s++]=(3&i)<<6|63&A;return l},UA=function(e){for(var t=e.length,n=[],r=0;r>FA,LA=(1<>FA)+32,IA=65536>>TA,RA=(1<=0){if(e<55296||e>56319&&e<=65535)return t=((t=this.index[e>>FA])<>FA)])<>TA),t=this.index[t],t+=e>>FA&RA,t=((t=this.index[t])<=55296&&i<=56319&&n>10),a%1024+56320)),(i+1===n||r.length>16384)&&(A+=String.fromCharCode.apply(String,r),r.length=0)}return A},sa=NA(bA),la="\xd7",ua="\xf7",ca=function(e){return sa.get(e)},da=function(e,t,n){var r=n-2,i=t[r],A=t[n-1],a=t[n];if(A===jA&&a===XA)return la;if(A===jA||A===XA||A===qA)return ua;if(a===jA||a===XA||a===qA)return ua;if(A===ZA&&-1!==[ZA,$A,ta,na].indexOf(a))return la;if((A===ta||A===$A)&&(a===$A||a===ea))return la;if((A===na||A===ea)&&a===ea)return la;if(a===ra||a===YA)return la;if(a===JA)return la;if(A===WA)return la;if(A===ra&&a===ia){for(;i===YA;)i=t[--r];if(i===ia)return la}if(A===Aa&&a===Aa){for(var o=0;i===Aa;)o++,i=t[--r];if(o%2===0)return la}return ua},ha=function(e){var t=aa(e),n=t.length,r=0,i=0,A=t.map(ca);return{next:function(){if(r>=n)return{done:!0,value:null};for(var e=la;ra.x||i.y>a.y;return a=i,0===t||o}));return e.body.removeChild(t),o},ma=function(){return"undefined"!==typeof(new Image).crossOrigin},va=function(){return"string"===typeof(new XMLHttpRequest).responseType},ya=function(e){var t=new Image,n=e.createElement("canvas"),r=n.getContext("2d");if(!r)return!1;t.src="data:image/svg+xml,";try{r.drawImage(t,0,0),n.toDataURL()}catch(Rt){return!1}return!0},wa=function(e){return 0===e[0]&&255===e[1]&&0===e[2]&&255===e[3]},Ba=function(e){var t=e.createElement("canvas"),n=100;t.width=n,t.height=n;var r=t.getContext("2d");if(!r)return Promise.reject(!1);r.fillStyle="rgb(0, 
255, 0)",r.fillRect(0,0,n,n);var i=new Image,A=t.toDataURL();i.src=A;var a=_a(n,n,0,0,i);return r.fillStyle="red",r.fillRect(0,0,n,n),ba(a).then((function(t){r.drawImage(t,0,0);var i=r.getImageData(0,0,n,n).data;r.fillStyle="red",r.fillRect(0,0,n,n);var a=e.createElement("div");return a.style.backgroundImage="url("+A+")",a.style.height=n+"px",wa(i)?ba(_a(n,n,0,0,a)):Promise.reject(!1)})).then((function(e){return r.drawImage(e,0,0),wa(r.getImageData(0,0,n,n).data)})).catch((function(){return!1}))},_a=function(e,t,n,r,i){var A="http://www.w3.org/2000/svg",a=document.createElementNS(A,"svg"),o=document.createElementNS(A,"foreignObject");return a.setAttributeNS(null,"width",e.toString()),a.setAttributeNS(null,"height",t.toString()),o.setAttributeNS(null,"width","100%"),o.setAttributeNS(null,"height","100%"),o.setAttributeNS(null,"x",n.toString()),o.setAttributeNS(null,"y",r.toString()),o.setAttributeNS(null,"externalResourcesRequired","true"),a.appendChild(o),o.appendChild(i),a},ba=function(e){return new Promise((function(t,n){var r=new Image;r.onload=function(){return t(r)},r.onerror=n,r.src="data:image/svg+xml;charset=utf-8,"+encodeURIComponent((new XMLSerializer).serializeToString(e))}))},xa={get SUPPORT_RANGE_BOUNDS(){var e=pa(document);return Object.defineProperty(xa,"SUPPORT_RANGE_BOUNDS",{value:e}),e},get SUPPORT_WORD_BREAKING(){var e=xa.SUPPORT_RANGE_BOUNDS&&ga(document);return Object.defineProperty(xa,"SUPPORT_WORD_BREAKING",{value:e}),e},get SUPPORT_SVG_DRAWING(){var e=ya(document);return Object.defineProperty(xa,"SUPPORT_SVG_DRAWING",{value:e}),e},get SUPPORT_FOREIGNOBJECT_DRAWING(){var e="function"===typeof Array.from&&"function"===typeof window.fetch?Ba(document):Promise.resolve(!1);return Object.defineProperty(xa,"SUPPORT_FOREIGNOBJECT_DRAWING",{value:e}),e},get SUPPORT_CORS_IMAGES(){var e=ma();return Object.defineProperty(xa,"SUPPORT_CORS_IMAGES",{value:e}),e},get SUPPORT_RESPONSE_TYPE(){var e=va();return 
Object.defineProperty(xa,"SUPPORT_RESPONSE_TYPE",{value:e}),e},get SUPPORT_CORS_XHR(){var e="withCredentials"in new XMLHttpRequest;return Object.defineProperty(xa,"SUPPORT_CORS_XHR",{value:e}),e},get SUPPORT_NATIVE_TEXT_SEGMENTATION(){var e=!("undefined"===typeof Intl||!Intl.Segmenter);return Object.defineProperty(xa,"SUPPORT_NATIVE_TEXT_SEGMENTATION",{value:e}),e}},Ca=function(){function e(e,t){this.text=e,this.bounds=t}return e}(),Sa=function(e,t,n,r){var i=Ta(t,n),A=[],o=0;return i.forEach((function(t){if(n.textDecorationLine.length||t.trim().length>0)if(xa.SUPPORT_RANGE_BOUNDS){var i=Ua(r,o,t.length).getClientRects();if(i.length>1){var s=Ma(t),l=0;s.forEach((function(t){A.push(new Ca(t,a.fromDOMRectList(e,Ua(r,l+o,t.length).getClientRects()))),l+=t.length}))}else A.push(new Ca(t,a.fromDOMRectList(e,i)))}else{var u=r.splitText(t.length);A.push(new Ca(t,Ea(e,r))),r=u}else xa.SUPPORT_RANGE_BOUNDS||(r=r.splitText(t.length));o+=t.length})),A},Ea=function(e,t){var n=t.ownerDocument;if(n){var r=n.createElement("html2canvaswrapper");r.appendChild(t.cloneNode(!0));var i=t.parentNode;if(i){i.replaceChild(r,t);var A=o(e,r);return r.firstChild&&i.replaceChild(r.firstChild,r),A}}return a.EMPTY},Ua=function(e,t,n){var r=e.ownerDocument;if(!r)throw new Error("Node has no owner document");var i=r.createRange();return i.setStart(e,t),i.setEnd(e,t+n),i},Ma=function(e){if(xa.SUPPORT_NATIVE_TEXT_SEGMENTATION){var t=new Intl.Segmenter(void 0,{granularity:"grapheme"});return Array.from(t.segment(e)).map((function(e){return e.segment}))}return fa(e)},Fa=function(e,t){if(xa.SUPPORT_NATIVE_TEXT_SEGMENTATION){var n=new Intl.Segmenter(void 0,{granularity:"word"});return Array.from(n.segment(e)).map((function(e){return e.segment}))}return Qa(e,t)},Ta=function(e,t){return 0!==t.letterSpacing?Ma(e):Fa(e,t)},ka=[32,160,4961,65792,65793,4153,4241],Qa=function(e,t){for(var 
n,r=Ve(e,{lineBreak:t.lineBreak,wordBreak:"break-word"===t.overflowWrap?"break-word":t.wordBreak}),i=[],A=function(){if(n.value){var e=n.value.slice(),t=l(e),r="";t.forEach((function(e){-1===ka.indexOf(e)?r+=u(e):(r.length&&i.push(r),i.push(u(e)),r="")})),r.length&&i.push(r)}};!(n=r.next()).done;)A();return i},La=function(){function e(e,t,n){this.text=Da(t.data,n.textTransform),this.textBounds=Sa(e,this.text,n,t)}return e}(),Da=function(e,t){switch(t){case 1:return e.toLowerCase();case 3:return e.replace(Ia,Ra);case 2:return e.toUpperCase();default:return e}},Ia=/(^|\s|:|-|\(|\))([a-z])/g,Ra=function(e,t,n){return e.length>0?t+n.toUpperCase():e},Pa=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.src=n.currentSrc||n.src,r.intrinsicWidth=n.naturalWidth,r.intrinsicHeight=n.naturalHeight,r.context.cache.addImage(r.src),r}return t(n,e),n}(_A),Ha=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.canvas=n,r.intrinsicWidth=n.width,r.intrinsicHeight=n.height,r}return t(n,e),n}(_A),Na=function(e){function n(t,n){var r=e.call(this,t,n)||this,i=new XMLSerializer,A=o(t,n);return n.setAttribute("width",A.width+"px"),n.setAttribute("height",A.height+"px"),r.svg="data:image/svg+xml,"+encodeURIComponent(i.serializeToString(n)),r.intrinsicWidth=n.width.baseVal.value,r.intrinsicHeight=n.height.baseVal.value,r.context.cache.addImage(r.svg),r}return t(n,e),n}(_A),Oa=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.value=n.value,r}return t(n,e),n}(_A),Va=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.start=n.start,r.reversed="boolean"===typeof n.reversed&&!0===n.reversed,r}return t(n,e),n}(_A),za=[{type:15,flags:0,unit:"px",number:3}],Ga=[{type:16,flags:0,number:50}],Ka=function(e){return e.width>e.height?new a(e.left+(e.width-e.height)/2,e.top,e.height,e.height):e.width0)r.textNodes.push(new La(t,A,r.styles));else if(so(A))if(So(A)&&A.assignedNodes)A.assignedNodes().forEach((function(n){return e(t,n,r,i)}));else{var 
o=ro(t,A);o.styles.isVisible()&&(Ao(A,o,i)?o.flags|=4:ao(o.styles)&&(o.flags|=2),-1!==to.indexOf(A.tagName)&&(o.flags|=8),r.elements.push(o),A.slot,A.shadowRoot?e(t,A.shadowRoot,o,i):xo(A)||go(A)||Co(A)||e(t,A,o,i))}},ro=function(e,t){return wo(t)?new Pa(e,t):vo(t)?new Ha(e,t):go(t)?new Na(e,t):co(t)?new Oa(e,t):ho(t)?new Va(e,t):fo(t)?new Ja(e,t):Co(t)?new Za(e,t):xo(t)?new $a(e,t):Bo(t)?new eo(e,t):new _A(e,t)},io=function(e,t){var n=ro(e,t);return n.flags|=4,no(e,t,n,n),n},Ao=function(e,t,n){return t.styles.isPositionedWithZIndex()||t.styles.opacity<1||t.styles.isTransformed()||mo(e)&&n.styles.isTransparent()},ao=function(e){return e.isPositioned()||e.isFloating()},oo=function(e){return e.nodeType===Node.TEXT_NODE},so=function(e){return e.nodeType===Node.ELEMENT_NODE},lo=function(e){return so(e)&&"undefined"!==typeof e.style&&!uo(e)},uo=function(e){return"object"===typeof e.className},co=function(e){return"LI"===e.tagName},ho=function(e){return"OL"===e.tagName},fo=function(e){return"INPUT"===e.tagName},po=function(e){return"HTML"===e.tagName},go=function(e){return"svg"===e.tagName},mo=function(e){return"BODY"===e.tagName},vo=function(e){return"CANVAS"===e.tagName},yo=function(e){return"VIDEO"===e.tagName},wo=function(e){return"IMG"===e.tagName},Bo=function(e){return"IFRAME"===e.tagName},_o=function(e){return"STYLE"===e.tagName},bo=function(e){return"SCRIPT"===e.tagName},xo=function(e){return"TEXTAREA"===e.tagName},Co=function(e){return"SELECT"===e.tagName},So=function(e){return"SLOT"===e.tagName},Eo=function(e){return e.tagName.indexOf("-")>0},Uo=function(){function e(){this.counters={}}return e.prototype.getCounterValue=function(e){var t=this.counters[e];return t&&t.length?t[t.length-1]:1},e.prototype.getCounterValues=function(e){var t=this.counters[e];return t||[]},e.prototype.pop=function(e){var t=this;e.forEach((function(e){return t.counters[e].pop()}))},e.prototype.parse=function(e){var 
t=this,n=e.counterIncrement,r=e.counterReset,i=!0;null!==n&&n.forEach((function(e){var n=t.counters[e.counter];n&&0!==e.increment&&(i=!1,n.length||n.push(1),n[Math.max(0,n.length-1)]+=e.increment)}));var A=[];return i&&r.forEach((function(e){var n=t.counters[e.counter];A.push(e.counter),n||(n=t.counters[e.counter]=[]),n.push(e.reset)})),A},e}(),Mo={integers:[1e3,900,500,400,100,90,50,40,10,9,5,4,1],values:["M","CM","D","CD","C","XC","L","XL","X","IX","V","IV","I"]},Fo={integers:[9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,900,800,700,600,500,400,300,200,100,90,80,70,60,50,40,30,20,10,9,8,7,6,5,4,3,2,1],values:["\u0554","\u0553","\u0552","\u0551","\u0550","\u054f","\u054e","\u054d","\u054c","\u054b","\u054a","\u0549","\u0548","\u0547","\u0546","\u0545","\u0544","\u0543","\u0542","\u0541","\u0540","\u053f","\u053e","\u053d","\u053c","\u053b","\u053a","\u0539","\u0538","\u0537","\u0536","\u0535","\u0534","\u0533","\u0532","\u0531"]},To={integers:[1e4,9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,400,300,200,100,90,80,70,60,50,40,30,20,19,18,17,16,15,10,9,8,7,6,5,4,3,2,1],values:["\u05d9\u05f3","\u05d8\u05f3","\u05d7\u05f3","\u05d6\u05f3","\u05d5\u05f3","\u05d4\u05f3","\u05d3\u05f3","\u05d2\u05f3","\u05d1\u05f3","\u05d0\u05f3","\u05ea","\u05e9","\u05e8","\u05e7","\u05e6","\u05e4","\u05e2","\u05e1","\u05e0","\u05de","\u05dc","\u05db","\u05d9\u05d8","\u05d9\u05d7","\u05d9\u05d6","\u05d8\u05d6","\u05d8\u05d5","\u05d9","\u05d8","\u05d7","\u05d6","\u05d5","\u05d4","\u05d3","\u05d2","\u05d1","\u05d0"]},ko={integers:[1e4,9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,900,800,700,600,500,400,300,200,100,90,80,70,60,50,40,30,20,10,9,8,7,6,5,4,3,2,1],values:["\u10f5","\u10f0","\u10ef","\u10f4","\u10ee","\u10ed","\u10ec","\u10eb","\u10ea","\u10e9","\u10e8","\u10e7","\u10e6","\u10e5","\u10e4","\u10f3","\u10e2","\u10e1","\u10e0","\u10df","\u10de","\u10dd","\u10f2","\u10dc","\u10db","\u10da","\u10d9","\u10d8","\u10d7","\u10f1","\u10d6","\u10d5","\u10d4","\u10d3","\u10d2","\u10d1","\u10d0"]},Qo=function(e,t,n,r,i
,A){return en?Wo(e,i,A.length>0):r.integers.reduce((function(t,n,i){for(;e>=n;)e-=n,t+=r.values[i];return t}),"")+A},Lo=function(e,t,n,r){var i="";do{n||e--,i=r(e)+i,e/=t}while(e*t>=t);return i},Do=function(e,t,n,r,i){var A=n-t+1;return(e<0?"-":"")+(Lo(Math.abs(e),A,r,(function(e){return u(Math.floor(e%A)+t)}))+i)},Io=function(e,t,n){void 0===n&&(n=". ");var r=t.length;return Lo(Math.abs(e),r,!1,(function(e){return t[Math.floor(e%r)]}))+n},Ro=1,Po=2,Ho=4,No=8,Oo=function(e,t,n,r,i,A){if(e<-9999||e>9999)return Wo(e,4,i.length>0);var a=Math.abs(e),o=i;if(0===a)return t[0]+o;for(var s=0;a>0&&s<=4;s++){var l=a%10;0===l&&iA(A,Ro)&&""!==o?o=t[l]+o:l>1||1===l&&0===s||1===l&&1===s&&iA(A,Po)||1===l&&1===s&&iA(A,Ho)&&e>100||1===l&&s>1&&iA(A,No)?o=t[l]+(s>0?n[s-1]:"")+o:1===l&&s>0&&(o=n[s-1]+o),a=Math.floor(a/10)}return(e<0?r:"")+o},Vo="\u5341\u767e\u5343\u842c",zo="\u62fe\u4f70\u4edf\u842c",Go="\u30de\u30a4\u30ca\u30b9",Ko="\ub9c8\uc774\ub108\uc2a4",Wo=function(e,t,n){var r=n?". ":"",i=n?"\u3001":"",A=n?", ":"",a=n?" 
":"";switch(t){case 0:return"\u2022"+a;case 1:return"\u25e6"+a;case 2:return"\u25fe"+a;case 5:var o=Do(e,48,57,!0,r);return o.length<4?"0"+o:o;case 4:return Io(e,"\u3007\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",i);case 6:return Qo(e,1,3999,Mo,3,r).toLowerCase();case 7:return Qo(e,1,3999,Mo,3,r);case 8:return Do(e,945,969,!1,r);case 9:return Do(e,97,122,!1,r);case 10:return Do(e,65,90,!1,r);case 11:return Do(e,1632,1641,!0,r);case 12:case 49:return Qo(e,1,9999,Fo,3,r);case 35:return Qo(e,1,9999,Fo,3,r).toLowerCase();case 13:return Do(e,2534,2543,!0,r);case 14:case 30:return Do(e,6112,6121,!0,r);case 15:return Io(e,"\u5b50\u4e11\u5bc5\u536f\u8fb0\u5df3\u5348\u672a\u7533\u9149\u620c\u4ea5",i);case 16:return Io(e,"\u7532\u4e59\u4e19\u4e01\u620a\u5df1\u5e9a\u8f9b\u58ec\u7678",i);case 17:case 48:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",Vo,"\u8ca0",i,Po|Ho|No);case 47:return Oo(e,"\u96f6\u58f9\u8cb3\u53c3\u8086\u4f0d\u9678\u67d2\u634c\u7396",zo,"\u8ca0",i,Ro|Po|Ho|No);case 42:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",Vo,"\u8d1f",i,Po|Ho|No);case 41:return Oo(e,"\u96f6\u58f9\u8d30\u53c1\u8086\u4f0d\u9646\u67d2\u634c\u7396",zo,"\u8d1f",i,Ro|Po|Ho|No);case 26:return Oo(e,"\u3007\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u5341\u767e\u5343\u4e07",Go,i,0);case 25:return Oo(e,"\u96f6\u58f1\u5f10\u53c2\u56db\u4f0d\u516d\u4e03\u516b\u4e5d","\u62fe\u767e\u5343\u4e07",Go,i,Ro|Po|Ho);case 31:return Oo(e,"\uc601\uc77c\uc774\uc0bc\uc0ac\uc624\uc721\uce60\ud314\uad6c","\uc2ed\ubc31\ucc9c\ub9cc",Ko,A,Ro|Po|Ho);case 33:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u5341\u767e\u5343\u842c",Ko,A,0);case 32:return Oo(e,"\u96f6\u58f9\u8cb3\u53c3\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u62fe\u767e\u5343",Ko,A,Ro|Po|Ho);case 18:return Do(e,2406,2415,!0,r);case 20:return Qo(e,1,19999,ko,3,r);case 21:return Do(e,2790,2799,!0,r);case 22:return Do(e,2662,2671,!0,r);case 
22:return Qo(e,1,10999,To,3,r);case 23:return Io(e,"\u3042\u3044\u3046\u3048\u304a\u304b\u304d\u304f\u3051\u3053\u3055\u3057\u3059\u305b\u305d\u305f\u3061\u3064\u3066\u3068\u306a\u306b\u306c\u306d\u306e\u306f\u3072\u3075\u3078\u307b\u307e\u307f\u3080\u3081\u3082\u3084\u3086\u3088\u3089\u308a\u308b\u308c\u308d\u308f\u3090\u3091\u3092\u3093");case 24:return Io(e,"\u3044\u308d\u306f\u306b\u307b\u3078\u3068\u3061\u308a\u306c\u308b\u3092\u308f\u304b\u3088\u305f\u308c\u305d\u3064\u306d\u306a\u3089\u3080\u3046\u3090\u306e\u304a\u304f\u3084\u307e\u3051\u3075\u3053\u3048\u3066\u3042\u3055\u304d\u3086\u3081\u307f\u3057\u3091\u3072\u3082\u305b\u3059");case 27:return Do(e,3302,3311,!0,r);case 28:return Io(e,"\u30a2\u30a4\u30a6\u30a8\u30aa\u30ab\u30ad\u30af\u30b1\u30b3\u30b5\u30b7\u30b9\u30bb\u30bd\u30bf\u30c1\u30c4\u30c6\u30c8\u30ca\u30cb\u30cc\u30cd\u30ce\u30cf\u30d2\u30d5\u30d8\u30db\u30de\u30df\u30e0\u30e1\u30e2\u30e4\u30e6\u30e8\u30e9\u30ea\u30eb\u30ec\u30ed\u30ef\u30f0\u30f1\u30f2\u30f3",i);case 29:return Io(e,"\u30a4\u30ed\u30cf\u30cb\u30db\u30d8\u30c8\u30c1\u30ea\u30cc\u30eb\u30f2\u30ef\u30ab\u30e8\u30bf\u30ec\u30bd\u30c4\u30cd\u30ca\u30e9\u30e0\u30a6\u30f0\u30ce\u30aa\u30af\u30e4\u30de\u30b1\u30d5\u30b3\u30a8\u30c6\u30a2\u30b5\u30ad\u30e6\u30e1\u30df\u30b7\u30f1\u30d2\u30e2\u30bb\u30b9",i);case 34:return Do(e,3792,3801,!0,r);case 37:return Do(e,6160,6169,!0,r);case 38:return Do(e,4160,4169,!0,r);case 39:return Do(e,2918,2927,!0,r);case 40:return Do(e,1776,1785,!0,r);case 43:return Do(e,3046,3055,!0,r);case 44:return Do(e,3174,3183,!0,r);case 45:return Do(e,3664,3673,!0,r);case 46:return Do(e,3872,3881,!0,r);default:return Do(e,48,57,!0,r)}},jo="data-html2canvas-ignore",Xo=function(){function e(e,t,n){if(this.context=e,this.options=n,this.scrolledElements=[],this.referenceElement=t,this.counters=new Uo,this.quoteDepth=0,!t.ownerDocument)throw new Error("Cloned element does not have an owner 
document");this.documentElement=this.cloneNode(t.ownerDocument.documentElement,!1)}return e.prototype.toIFrame=function(e,t){var n=this,A=Yo(e,t);if(!A.contentWindow)return Promise.reject("Unable to find iframe window");var a=e.defaultView.pageXOffset,o=e.defaultView.pageYOffset,s=A.contentWindow,l=s.document,u=$o(A).then((function(){return r(n,void 0,void 0,(function(){var e,n;return i(this,(function(r){switch(r.label){case 0:return this.scrolledElements.forEach(is),s&&(s.scrollTo(t.left,t.top),!/(iPad|iPhone|iPod)/g.test(navigator.userAgent)||s.scrollY===t.top&&s.scrollX===t.left||(this.context.logger.warn("Unable to restore scroll position for cloned document"),this.context.windowBounds=this.context.windowBounds.add(s.scrollX-t.left,s.scrollY-t.top,0,0))),e=this.options.onclone,"undefined"===typeof(n=this.clonedReferenceElement)?[2,Promise.reject("Error finding the "+this.referenceElement.nodeName+" in the cloned document")]:l.fonts&&l.fonts.ready?[4,l.fonts.ready]:[3,2];case 1:r.sent(),r.label=2;case 2:return/(AppleWebKit)/g.test(navigator.userAgent)?[4,Zo(l)]:[3,4];case 3:r.sent(),r.label=4;case 4:return"function"===typeof e?[2,Promise.resolve().then((function(){return e(l,n)})).then((function(){return A}))]:[2,A]}}))}))}));return l.open(),l.write(ns(document.doctype)+""),rs(this.referenceElement.ownerDocument,a,o),l.replaceChild(l.adoptNode(this.documentElement),l.documentElement),l.close(),u},e.prototype.createElementClone=function(e){if(BA(e,2),vo(e))return this.createCanvasClone(e);if(yo(e))return this.createVideoClone(e);if(_o(e))return this.createStyleClone(e);var t=e.cloneNode(!1);return wo(t)&&(wo(e)&&e.currentSrc&&e.currentSrc!==e.src&&(t.src=e.currentSrc,t.srcset=""),"lazy"===t.loading&&(t.loading="eager")),Eo(t)?this.createCustomElementClone(t):t},e.prototype.createCustomElementClone=function(e){var t=document.createElement("html2canvascustomelement");return ts(e.style,t),t},e.prototype.createStyleClone=function(e){try{var 
t=e.sheet;if(t&&t.cssRules){var n=[].slice.call(t.cssRules,0).reduce((function(e,t){return t&&"string"===typeof t.cssText?e+t.cssText:e}),""),r=e.cloneNode(!1);return r.textContent=n,r}}catch(Rt){if(this.context.logger.error("Unable to access cssRules property",Rt),"SecurityError"!==Rt.name)throw Rt}return e.cloneNode(!1)},e.prototype.createCanvasClone=function(e){var t;if(this.options.inlineImages&&e.ownerDocument){var n=e.ownerDocument.createElement("img");try{return n.src=e.toDataURL(),n}catch(Rt){this.context.logger.info("Unable to inline canvas contents, canvas is tainted",e)}}var r=e.cloneNode(!1);try{r.width=e.width,r.height=e.height;var i=e.getContext("2d"),A=r.getContext("2d");if(A)if(!this.options.allowTaint&&i)A.putImageData(i.getImageData(0,0,e.width,e.height),0,0);else{var a=null!==(t=e.getContext("webgl2"))&&void 0!==t?t:e.getContext("webgl");if(a){var o=a.getContextAttributes();!1===(null===o||void 0===o?void 0:o.preserveDrawingBuffer)&&this.context.logger.warn("Unable to clone WebGL context as it has preserveDrawingBuffer=false",e)}A.drawImage(e,0,0)}return r}catch(Rt){this.context.logger.info("Unable to clone canvas as it is tainted",e)}return r},e.prototype.createVideoClone=function(e){var t=e.ownerDocument.createElement("canvas");t.width=e.offsetWidth,t.height=e.offsetHeight;var n=t.getContext("2d");try{return n&&(n.drawImage(e,0,0,t.width,t.height),this.options.allowTaint||n.getImageData(0,0,t.width,t.height)),t}catch(Rt){this.context.logger.info("Unable to clone video as it is tainted",e)}var r=e.ownerDocument.createElement("canvas");return r.width=e.offsetWidth,r.height=e.offsetHeight,r},e.prototype.appendChildNode=function(e,t,n){so(t)&&(bo(t)||t.hasAttribute(jo)||"function"===typeof this.options.ignoreElements&&this.options.ignoreElements(t))||this.options.copyStyles&&so(t)&&_o(t)||e.appendChild(this.cloneNode(t,n))},e.prototype.cloneChildNodes=function(e,t,n){for(var 
r=this,i=e.shadowRoot?e.shadowRoot.firstChild:e.firstChild;i;i=i.nextSibling)if(so(i)&&So(i)&&"function"===typeof i.assignedNodes){var A=i.assignedNodes();A.length&&A.forEach((function(e){return r.appendChildNode(t,e,n)}))}else this.appendChildNode(t,i,n)},e.prototype.cloneNode=function(e,t){if(oo(e))return document.createTextNode(e.data);if(!e.ownerDocument)return e.cloneNode(!1);var n=e.ownerDocument.defaultView;if(n&&so(e)&&(lo(e)||uo(e))){var r=this.createElementClone(e);r.style.transitionProperty="none";var i=n.getComputedStyle(e),A=n.getComputedStyle(e,":before"),a=n.getComputedStyle(e,":after");this.referenceElement===e&&lo(r)&&(this.clonedReferenceElement=r),mo(r)&&us(r);var o=this.counters.parse(new mA(this.context,i)),s=this.resolvePseudoContent(e,r,A,KA.BEFORE);Eo(e)&&(t=!0),yo(e)||this.cloneChildNodes(e,r,t),s&&r.insertBefore(s,r.firstChild);var l=this.resolvePseudoContent(e,r,a,KA.AFTER);return l&&r.appendChild(l),this.counters.pop(o),(i&&(this.options.copyStyles||uo(e))&&!Bo(e)||t)&&ts(i,r),0===e.scrollTop&&0===e.scrollLeft||this.scrolledElements.push([r,e.scrollLeft,e.scrollTop]),(xo(e)||Co(e))&&(xo(r)||Co(r))&&(r.value=e.value),r}return e.cloneNode(!1)},e.prototype.resolvePseudoContent=function(e,t,n,r){var i=this;if(n){var A=n.content,a=t.ownerDocument;if(a&&A&&"none"!==A&&"-moz-alt-content"!==A&&"none"!==n.display){this.counters.parse(new mA(this.context,n));var o=new gA(this.context,n),s=a.createElement("html2canvaspseudoelement");ts(n,s),o.content.forEach((function(t){if(0===t.type)s.appendChild(a.createTextNode(t.value));else if(22===t.type){var n=a.createElement("img");n.src=t.value,n.style.opacity="1",s.appendChild(n)}else if(18===t.type){if("attr"===t.name){var r=t.values.filter(Qn);r.length&&s.appendChild(a.createTextNode(e.getAttribute(r[0].value)||""))}else if("counter"===t.name){var A=t.values.filter(Rn),l=A[0],u=A[1];if(l&&Qn(l)){var 
c=i.counters.getCounterValue(l.value),d=u&&Qn(u)?xi.parse(i.context,u.value):3;s.appendChild(a.createTextNode(Wo(c,d,!1)))}}else if("counters"===t.name){var h=t.values.filter(Rn),f=(l=h[0],h[1]);if(u=h[2],l&&Qn(l)){var p=i.counters.getCounterValues(l.value),g=u&&Qn(u)?xi.parse(i.context,u.value):3,m=f&&0===f.type?f.value:"",v=p.map((function(e){return Wo(e,g,!1)})).join(m);s.appendChild(a.createTextNode(v))}}}else if(20===t.type)switch(t.value){case"open-quote":s.appendChild(a.createTextNode(uA(o.quotes,i.quoteDepth++,!0)));break;case"close-quote":s.appendChild(a.createTextNode(uA(o.quotes,--i.quoteDepth,!1)));break;default:s.appendChild(a.createTextNode(t.value))}})),s.className=os+" "+ss;var l=r===KA.BEFORE?" "+os:" "+ss;return uo(t)?t.className.baseValue+=l:t.className+=l,s}}},e.destroy=function(e){return!!e.parentNode&&(e.parentNode.removeChild(e),!0)},e}();!function(e){e[e.BEFORE=0]="BEFORE",e[e.AFTER=1]="AFTER"}(KA||(KA={}));var qo,Yo=function(e,t){var n=e.createElement("iframe");return n.className="html2canvas-container",n.style.visibility="hidden",n.style.position="fixed",n.style.left="-10000px",n.style.top="0px",n.style.border="0",n.width=t.width.toString(),n.height=t.height.toString(),n.scrolling="no",n.setAttribute(jo,"true"),e.body.appendChild(n),n},Jo=function(e){return new Promise((function(t){e.complete?t():e.src?(e.onload=t,e.onerror=t):t()}))},Zo=function(e){return Promise.all([].slice.call(e.images,0).map(Jo))},$o=function(e){return new Promise((function(t,n){var r=e.contentWindow;if(!r)return n("No window assigned for iframe");var i=r.document;r.onload=e.onload=function(){r.onload=e.onload=null;var n=setInterval((function(){i.body.childNodes.length>0&&"complete"===i.readyState&&(clearInterval(n),t(e))}),50)}}))},es=["all","d","content"],ts=function(e,t){for(var n=e.length-1;n>=0;n--){var r=e.item(n);-1===es.indexOf(r)&&t.style.setProperty(r,e.getPropertyValue(r))}return t},ns=function(e){var t="";return 
e&&(t+=""),t},rs=function(e,t,n){e&&e.defaultView&&(t!==e.defaultView.pageXOffset||n!==e.defaultView.pageYOffset)&&e.defaultView.scrollTo(t,n)},is=function(e){var t=e[0],n=e[1],r=e[2];t.scrollLeft=n,t.scrollTop=r},As=":before",as=":after",os="___html2canvas___pseudoelement_before",ss="___html2canvas___pseudoelement_after",ls='{\n content: "" !important;\n display: none !important;\n}',us=function(e){cs(e,"."+os+As+ls+"\n ."+ss+as+ls)},cs=function(e,t){var n=e.ownerDocument;if(n){var r=n.createElement("style");r.textContent=t,e.appendChild(r)}},ds=function(){function e(){}return e.getOrigin=function(t){var n=e._link;return n?(n.href=t,n.href=n.href,n.protocol+n.hostname+n.port):"about:blank"},e.isSameOrigin=function(t){return e.getOrigin(t)===e._origin},e.setContext=function(t){e._link=t.document.createElement("a"),e._origin=e.getOrigin(t.location.href)},e._origin="about:blank",e}(),hs=function(){function e(e,t){this.context=e,this._options=t,this._cache={}}return e.prototype.addImage=function(e){var t=Promise.resolve();return this.has(e)?t:ws(e)||ms(e)?((this._cache[e]=this.loadImage(e)).catch((function(){})),t):t},e.prototype.match=function(e){return this._cache[e]},e.prototype.loadImage=function(e){return r(this,void 0,void 0,(function(){var t,n,r,A,a=this;return i(this,(function(i){switch(i.label){case 0:return t=ds.isSameOrigin(e),n=!vs(e)&&!0===this._options.useCORS&&xa.SUPPORT_CORS_IMAGES&&!t,r=!vs(e)&&!t&&!ws(e)&&"string"===typeof this._options.proxy&&xa.SUPPORT_CORS_XHR&&!n,t||!1!==this._options.allowTaint||vs(e)||ws(e)||r||n?(A=e,r?[4,this.proxy(A)]:[3,2]):[2];case 1:A=i.sent(),i.label=2;case 2:return this.context.logger.debug("Added image "+e.substring(0,256)),[4,new Promise((function(e,t){var r=new Image;r.onload=function(){return e(r)},r.onerror=t,(ys(A)||n)&&(r.crossOrigin="anonymous"),r.src=A,!0===r.complete&&setTimeout((function(){return e(r)}),500),a._options.imageTimeout>0&&setTimeout((function(){return t("Timed out ("+a._options.imageTimeout+"ms) 
loading image")}),a._options.imageTimeout)}))];case 3:return[2,i.sent()]}}))}))},e.prototype.has=function(e){return"undefined"!==typeof this._cache[e]},e.prototype.keys=function(){return Promise.resolve(Object.keys(this._cache))},e.prototype.proxy=function(e){var t=this,n=this._options.proxy;if(!n)throw new Error("No proxy defined");var r=e.substring(0,256);return new Promise((function(i,A){var a=xa.SUPPORT_RESPONSE_TYPE?"blob":"text",o=new XMLHttpRequest;o.onload=function(){if(200===o.status)if("text"===a)i(o.response);else{var e=new FileReader;e.addEventListener("load",(function(){return i(e.result)}),!1),e.addEventListener("error",(function(e){return A(e)}),!1),e.readAsDataURL(o.response)}else A("Failed to proxy resource "+r+" with status code "+o.status)},o.onerror=A;var s=n.indexOf("?")>-1?"&":"?";if(o.open("GET",""+n+s+"url="+encodeURIComponent(e)+"&responseType="+a),"text"!==a&&o instanceof XMLHttpRequest&&(o.responseType=a),t._options.imageTimeout){var l=t._options.imageTimeout;o.timeout=l,o.ontimeout=function(){return A("Timed out ("+l+"ms) proxying "+r)}}o.send()}))},e}(),fs=/^data:image\/svg\+xml/i,ps=/^data:image\/.*;base64,/i,gs=/^data:image\/.*/i,ms=function(e){return xa.SUPPORT_SVG_DRAWING||!Bs(e)},vs=function(e){return gs.test(e)},ys=function(e){return ps.test(e)},ws=function(e){return"blob"===e.substr(0,4)},Bs=function(e){return"svg"===e.substr(-3).toLowerCase()||fs.test(e)},_s=function(){function e(e,t){this.type=0,this.x=e,this.y=t}return e.prototype.add=function(t,n){return new e(this.x+t,this.y+n)},e}(),bs=function(e,t,n){return new _s(e.x+(t.x-e.x)*n,e.y+(t.y-e.y)*n)},xs=function(){function e(e,t,n,r){this.type=1,this.start=e,this.startControl=t,this.endControl=n,this.end=r}return e.prototype.subdivide=function(t,n){var r=bs(this.start,this.startControl,t),i=bs(this.startControl,this.endControl,t),A=bs(this.endControl,this.end,t),a=bs(r,i,t),o=bs(i,A,t),s=bs(a,o,t);return n?new e(this.start,r,a,s):new 
e(s,o,A,this.end)},e.prototype.add=function(t,n){return new e(this.start.add(t,n),this.startControl.add(t,n),this.endControl.add(t,n),this.end.add(t,n))},e.prototype.reverse=function(){return new e(this.end,this.endControl,this.startControl,this.start)},e}(),Cs=function(e){return 1===e.type},Ss=function(){function e(e){var t=e.styles,n=e.bounds,r=Wn(t.borderTopLeftRadius,n.width,n.height),i=r[0],A=r[1],a=Wn(t.borderTopRightRadius,n.width,n.height),o=a[0],s=a[1],l=Wn(t.borderBottomRightRadius,n.width,n.height),u=l[0],c=l[1],d=Wn(t.borderBottomLeftRadius,n.width,n.height),h=d[0],f=d[1],p=[];p.push((i+o)/n.width),p.push((h+u)/n.width),p.push((A+f)/n.height),p.push((s+c)/n.height);var g=Math.max.apply(Math,p);g>1&&(i/=g,A/=g,o/=g,s/=g,u/=g,c/=g,h/=g,f/=g);var m=n.width-o,v=n.height-c,y=n.width-u,w=n.height-f,B=t.borderTopWidth,_=t.borderRightWidth,b=t.borderBottomWidth,x=t.borderLeftWidth,C=jn(t.paddingTop,e.bounds.width),S=jn(t.paddingRight,e.bounds.width),E=jn(t.paddingBottom,e.bounds.width),U=jn(t.paddingLeft,e.bounds.width);this.topLeftBorderDoubleOuterBox=i>0||A>0?Es(n.left+x/3,n.top+B/3,i-x/3,A-B/3,qo.TOP_LEFT):new _s(n.left+x/3,n.top+B/3),this.topRightBorderDoubleOuterBox=i>0||A>0?Es(n.left+m,n.top+B/3,o-_/3,s-B/3,qo.TOP_RIGHT):new _s(n.left+n.width-_/3,n.top+B/3),this.bottomRightBorderDoubleOuterBox=u>0||c>0?Es(n.left+y,n.top+v,u-_/3,c-b/3,qo.BOTTOM_RIGHT):new _s(n.left+n.width-_/3,n.top+n.height-b/3),this.bottomLeftBorderDoubleOuterBox=h>0||f>0?Es(n.left+x/3,n.top+w,h-x/3,f-b/3,qo.BOTTOM_LEFT):new _s(n.left+x/3,n.top+n.height-b/3),this.topLeftBorderDoubleInnerBox=i>0||A>0?Es(n.left+2*x/3,n.top+2*B/3,i-2*x/3,A-2*B/3,qo.TOP_LEFT):new _s(n.left+2*x/3,n.top+2*B/3),this.topRightBorderDoubleInnerBox=i>0||A>0?Es(n.left+m,n.top+2*B/3,o-2*_/3,s-2*B/3,qo.TOP_RIGHT):new _s(n.left+n.width-2*_/3,n.top+2*B/3),this.bottomRightBorderDoubleInnerBox=u>0||c>0?Es(n.left+y,n.top+v,u-2*_/3,c-2*b/3,qo.BOTTOM_RIGHT):new 
_s(n.left+n.width-2*_/3,n.top+n.height-2*b/3),this.bottomLeftBorderDoubleInnerBox=h>0||f>0?Es(n.left+2*x/3,n.top+w,h-2*x/3,f-2*b/3,qo.BOTTOM_LEFT):new _s(n.left+2*x/3,n.top+n.height-2*b/3),this.topLeftBorderStroke=i>0||A>0?Es(n.left+x/2,n.top+B/2,i-x/2,A-B/2,qo.TOP_LEFT):new _s(n.left+x/2,n.top+B/2),this.topRightBorderStroke=i>0||A>0?Es(n.left+m,n.top+B/2,o-_/2,s-B/2,qo.TOP_RIGHT):new _s(n.left+n.width-_/2,n.top+B/2),this.bottomRightBorderStroke=u>0||c>0?Es(n.left+y,n.top+v,u-_/2,c-b/2,qo.BOTTOM_RIGHT):new _s(n.left+n.width-_/2,n.top+n.height-b/2),this.bottomLeftBorderStroke=h>0||f>0?Es(n.left+x/2,n.top+w,h-x/2,f-b/2,qo.BOTTOM_LEFT):new _s(n.left+x/2,n.top+n.height-b/2),this.topLeftBorderBox=i>0||A>0?Es(n.left,n.top,i,A,qo.TOP_LEFT):new _s(n.left,n.top),this.topRightBorderBox=o>0||s>0?Es(n.left+m,n.top,o,s,qo.TOP_RIGHT):new _s(n.left+n.width,n.top),this.bottomRightBorderBox=u>0||c>0?Es(n.left+y,n.top+v,u,c,qo.BOTTOM_RIGHT):new _s(n.left+n.width,n.top+n.height),this.bottomLeftBorderBox=h>0||f>0?Es(n.left,n.top+w,h,f,qo.BOTTOM_LEFT):new _s(n.left,n.top+n.height),this.topLeftPaddingBox=i>0||A>0?Es(n.left+x,n.top+B,Math.max(0,i-x),Math.max(0,A-B),qo.TOP_LEFT):new _s(n.left+x,n.top+B),this.topRightPaddingBox=o>0||s>0?Es(n.left+Math.min(m,n.width-_),n.top+B,m>n.width+_?0:Math.max(0,o-_),Math.max(0,s-B),qo.TOP_RIGHT):new _s(n.left+n.width-_,n.top+B),this.bottomRightPaddingBox=u>0||c>0?Es(n.left+Math.min(y,n.width-x),n.top+Math.min(v,n.height-b),Math.max(0,u-_),Math.max(0,c-b),qo.BOTTOM_RIGHT):new _s(n.left+n.width-_,n.top+n.height-b),this.bottomLeftPaddingBox=h>0||f>0?Es(n.left+x,n.top+Math.min(w,n.height-b),Math.max(0,h-x),Math.max(0,f-b),qo.BOTTOM_LEFT):new _s(n.left+x,n.top+n.height-b),this.topLeftContentBox=i>0||A>0?Es(n.left+x+U,n.top+B+C,Math.max(0,i-(x+U)),Math.max(0,A-(B+C)),qo.TOP_LEFT):new _s(n.left+x+U,n.top+B+C),this.topRightContentBox=o>0||s>0?Es(n.left+Math.min(m,n.width+x+U),n.top+B+C,m>n.width+x+U?0:o-x+U,s-(B+C),qo.TOP_RIGHT):new 
_s(n.left+n.width-(_+S),n.top+B+C),this.bottomRightContentBox=u>0||c>0?Es(n.left+Math.min(y,n.width-(x+U)),n.top+Math.min(v,n.height+B+C),Math.max(0,u-(_+S)),c-(b+E),qo.BOTTOM_RIGHT):new _s(n.left+n.width-(_+S),n.top+n.height-(b+E)),this.bottomLeftContentBox=h>0||f>0?Es(n.left+x+U,n.top+w,Math.max(0,h-(x+U)),f-(b+E),qo.BOTTOM_LEFT):new _s(n.left+x+U,n.top+n.height-(b+E))}return e}();!function(e){e[e.TOP_LEFT=0]="TOP_LEFT",e[e.TOP_RIGHT=1]="TOP_RIGHT",e[e.BOTTOM_RIGHT=2]="BOTTOM_RIGHT",e[e.BOTTOM_LEFT=3]="BOTTOM_LEFT"}(qo||(qo={}));var Es=function(e,t,n,r,i){var A=(Math.sqrt(2)-1)/3*4,a=n*A,o=r*A,s=e+n,l=t+r;switch(i){case qo.TOP_LEFT:return new xs(new _s(e,l),new _s(e,l-o),new _s(s-a,t),new _s(s,t));case qo.TOP_RIGHT:return new xs(new _s(e,t),new _s(e+a,t),new _s(s,l-o),new _s(s,l));case qo.BOTTOM_RIGHT:return new xs(new _s(s,t),new _s(s,t+o),new _s(e+a,l),new _s(e,l));case qo.BOTTOM_LEFT:default:return new xs(new _s(s,l),new _s(s-a,l),new _s(e,t+o),new _s(e,t))}},Us=function(e){return[e.topLeftBorderBox,e.topRightBorderBox,e.bottomRightBorderBox,e.bottomLeftBorderBox]},Ms=function(e){return[e.topLeftContentBox,e.topRightContentBox,e.bottomRightContentBox,e.bottomLeftContentBox]},Fs=function(e){return[e.topLeftPaddingBox,e.topRightPaddingBox,e.bottomRightPaddingBox,e.bottomLeftPaddingBox]},Ts=function(){function e(e,t,n){this.offsetX=e,this.offsetY=t,this.matrix=n,this.type=0,this.target=6}return e}(),ks=function(){function e(e,t){this.path=e,this.target=t,this.type=1}return e}(),Qs=function(){function e(e){this.opacity=e,this.type=2,this.target=6}return e}(),Ls=function(e){return 0===e.type},Ds=function(e){return 1===e.type},Is=function(e){return 2===e.type},Rs=function(e,t){return e.length===t.length&&e.some((function(e,n){return e===t[n]}))},Ps=function(e,t,n,r,i){return e.map((function(e,A){switch(A){case 0:return e.add(t,n);case 1:return e.add(t+r,n);case 2:return e.add(t+r,n+i);case 3:return e.add(t,n+i)}return e}))},Hs=function(){function 
e(e){this.element=e,this.inlineLevel=[],this.nonInlineLevel=[],this.negativeZIndex=[],this.zeroOrAutoZIndexOrTransformedOrOpacity=[],this.positiveZIndex=[],this.nonPositionedFloats=[],this.nonPositionedInlineLevel=[]}return e}(),Ns=function(){function e(e,t){if(this.container=e,this.parent=t,this.effects=[],this.curves=new Ss(this.container),this.container.styles.opacity<1&&this.effects.push(new Qs(this.container.styles.opacity)),null!==this.container.styles.transform){var n=this.container.bounds.left+this.container.styles.transformOrigin[0].number,r=this.container.bounds.top+this.container.styles.transformOrigin[1].number,i=this.container.styles.transform;this.effects.push(new Ts(n,r,i))}if(0!==this.container.styles.overflowX){var A=Us(this.curves),a=Fs(this.curves);Rs(A,a)?this.effects.push(new ks(A,6)):(this.effects.push(new ks(A,2)),this.effects.push(new ks(a,4)))}}return e.prototype.getEffects=function(e){for(var t=-1===[2,3].indexOf(this.container.styles.position),n=this.parent,r=this.effects.slice(0);n;){var i=n.effects.filter((function(e){return!Ds(e)}));if(t||0!==n.container.styles.position||!n.parent){if(r.unshift.apply(r,i),t=-1===[2,3].indexOf(n.container.styles.position),0!==n.container.styles.overflowX){var A=Us(n.curves),a=Fs(n.curves);Rs(A,a)||r.unshift(new ks(a,6))}}else r.unshift.apply(r,i);n=n.parent}return r.filter((function(t){return iA(t.target,e)}))},e}(),Os=function e(t,n,r,i){t.container.elements.forEach((function(A){var a=iA(A.flags,4),o=iA(A.flags,2),s=new Ns(A,t);iA(A.styles.display,2048)&&i.push(s);var l=iA(A.flags,8)?[]:i;if(a||o){var u=a||A.styles.isPositioned()?r:n,c=new Hs(s);if(A.styles.isPositioned()||A.styles.opacity<1||A.styles.isTransformed()){var d=A.styles.zIndex.order;if(d<0){var h=0;u.negativeZIndex.some((function(e,t){return d>e.element.container.styles.zIndex.order?(h=t,!1):h>0})),u.negativeZIndex.splice(h,0,c)}else if(d>0){var f=0;u.positiveZIndex.some((function(e,t){return 
d>=e.element.container.styles.zIndex.order?(f=t+1,!1):f>0})),u.positiveZIndex.splice(f,0,c)}else u.zeroOrAutoZIndexOrTransformedOrOpacity.push(c)}else A.styles.isFloating()?u.nonPositionedFloats.push(c):u.nonPositionedInlineLevel.push(c);e(s,c,a?c:r,l)}else A.styles.isInlineLevel()?n.inlineLevel.push(s):n.nonInlineLevel.push(s),e(s,n,r,l);iA(A.flags,8)&&Vs(A,l)}))},Vs=function(e,t){for(var n=e instanceof Va?e.start:1,r=e instanceof Va&&e.reversed,i=0;i0&&e.intrinsicHeight>0){var r=Js(e),i=Fs(t);this.path(i),this.ctx.save(),this.ctx.clip(),this.ctx.drawImage(n,0,0,e.intrinsicWidth,e.intrinsicHeight,r.left,r.top,r.width,r.height),this.ctx.restore()}},n.prototype.renderNodeContent=function(e){return r(this,void 0,void 0,(function(){var t,r,A,o,s,l,u,c,d,h,f,p,g,m,v,y,w,B;return i(this,(function(i){switch(i.label){case 0:this.applyEffects(e.getEffects(4)),t=e.container,r=e.curves,A=t.styles,o=0,s=t.textNodes,i.label=1;case 1:return o0&&x>0&&(v=r.ctx.createPattern(p,"repeat"),r.renderRepeat(w,v,S,E))):Qr(n)&&(y=el(e,t,[null,null,null]),w=y[0],B=y[1],_=y[2],b=y[3],x=y[4],C=0===n.position.length?[Gn]:n.position,S=jn(C[0],b),E=jn(C[C.length-1],x),U=Br(n,S,E,b,x),M=U[0],F=U[1],M>0&&F>0&&(T=r.ctx.createRadialGradient(B+S,_+E,0,B+S,_+E,M),gr(n.stops,2*M).forEach((function(e){return T.addColorStop(e.stop,ir(e.color))})),r.path(w),r.ctx.fillStyle=T,M!==F?(k=e.bounds.left+.5*e.bounds.width,Q=e.bounds.top+.5*e.bounds.height,D=1/(L=F/M),r.ctx.save(),r.ctx.translate(k,Q),r.ctx.transform(1,0,0,L,0,0),r.ctx.translate(-k,-Q),r.ctx.fillRect(B,D*(_-Q)+Q,b,x*D),r.ctx.restore()):r.ctx.fill())),i.label=6;case 6:return t--,[2]}}))},r=this,A=0,a=e.styles.backgroundImage.slice(0).reverse(),s.label=1;case 1:return A0?2!==l.style?[3,5]:[4,this.renderDashedDottedBorder(l.color,l.width,a,e.curves,2)]:[3,11]:[3,13];case 4:return i.sent(),[3,11];case 5:return 3!==l.style?[3,7]:[4,this.renderDashedDottedBorder(l.color,l.width,a,e.curves,3)];case 6:return i.sent(),[3,11];case 7:return 
4!==l.style?[3,9]:[4,this.renderDoubleBorder(l.color,l.width,a,e.curves)];case 8:return i.sent(),[3,11];case 9:return[4,this.renderSolidBorder(l.color,a,e.curves)];case 10:i.sent(),i.label=11;case 11:a++,i.label=12;case 12:return o++,[3,3];case 13:return[2]}}))}))},n.prototype.renderDashedDottedBorder=function(e,t,n,A,a){return r(this,void 0,void 0,(function(){var r,o,s,l,u,c,d,h,f,p,g,m,v,y,w,B;return i(this,(function(i){return this.ctx.save(),r=js(A,n),o=Gs(A,n),2===a&&(this.path(o),this.ctx.clip()),Cs(o[0])?(s=o[0].start.x,l=o[0].start.y):(s=o[0].x,l=o[0].y),Cs(o[1])?(u=o[1].end.x,c=o[1].end.y):(u=o[1].x,c=o[1].y),d=0===n||2===n?Math.abs(s-u):Math.abs(l-c),this.ctx.beginPath(),3===a?this.formatPath(r):this.formatPath(o.slice(0,2)),h=t<3?3*t:2*t,f=t<3?2*t:t,3===a&&(h=t,f=t),p=!0,d<=2*h?p=!1:d<=2*h+f?(h*=g=d/(2*h+f),f*=g):(m=Math.floor((d+f)/(h+f)),v=(d-m*h)/(m-1),f=(y=(d-(m+1)*h)/m)<=0||Math.abs(f-v)