diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CivilCAD Free LINK Download Full Version.md b/spaces/1gistliPinn/ChatGPT4/Examples/CivilCAD Free LINK Download Full Version.md
deleted file mode 100644
index 97e8c316502a922edefc02c1339a42f48ebb8406..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/CivilCAD Free LINK Download Full Version.md
+++ /dev/null
@@ -1,6 +0,0 @@
Download Zip ››››› https://imgfil.com/2uxYkO
Crysis 2 is a sci-fi first-person shooter game developed by Crytek and released in 2011. It is the sequel to the critically acclaimed Crysis, which was known for its stunning graphics and demanding system requirements. Crysis 2 is set in a post-apocalyptic New York City, where the player has to fight against alien invaders and human enemies using a nanosuit that grants enhanced abilities.
-Many PC gamers wonder if they can run Crysis 2 on Windows 10 64-bit OS, since the game was originally designed for Windows XP, Vista, and 7. The good news is that Crysis 2 is compatible with Windows 10 64-bit OS, as long as you have the recommended system requirements and install the latest patches and updates. Here are some tips on how to run Crysis 2 on Windows 10 64-bit OS smoothly and enjoyably.
-Download Zip → https://imgfil.com/2uxZ67
Before you install and run Crysis 2 on Windows 10 64-bit OS, you should check if your PC meets the minimum or recommended system requirements for the game. Here are the official system requirements for Crysis 2:
-Minimum Requirements | Recommended Requirements |
---|---|
CPU: Intel Core 2 Duo 2 GHz or AMD Athlon 64 X2 2 GHz | CPU: Intel Core i5-750 or AMD Phenom II X4 3 GHz |
RAM: 2 GB | RAM: 3 GB |
GPU: NVIDIA GeForce 8800 GT or ATI Radeon HD 3850 with 512 MB VRAM | GPU: NVIDIA GeForce GTX 260 or ATI Radeon HD 5850 with 1 GB VRAM |
OS: Windows XP, Vista, or 7 (32-bit) | OS: Windows XP, Vista, or 7 (64-bit) |
HDD: At least 9 GB of free space | HDD: At least 9 GB of free space |
DX: DirectX 9.0c | DX: DirectX 11 |
Sound: DirectX compatible sound card | Sound: DirectX compatible sound card |
Internet: Broadband connection for online multiplayer | Internet: Broadband connection for online multiplayer |
If your PC meets the minimum requirements, you should be able to run Crysis 2 on Windows 10 64-bit OS at low settings and resolution. However, if you want to enjoy the game at higher settings and resolution, you should aim for the recommended requirements or higher. You can use tools like Can You Run It or System Requirements Lab to check your PC's compatibility with Crysis 2.
-Another important step to run Crysis 2 on Windows 10 64-bit OS is to install the latest patches and updates for the game. These patches and updates fix various bugs, improve performance, and add new features to the game. The most important patch for Crysis 2 is Patch 1.9, which prepares the game for DirectX 11 features and high-resolution textures[^1^]. You can download Patch 1.9 from the official website of Crysis or from other sources like Steam or Origin.
-Patch 1.9 also includes two optional downloads: DirectX 11 Ultra Upgrade and High-Resolution Textures[^1^]. These downloads enhance the graphics quality of Crysis 2.
- Download ✒ https://imgfil.com/2uy0TG
Have you ever wondered what your pet is thinking or feeling? Have you ever wished you could communicate with animals in a deeper and more meaningful way? If so, you are not alone. Many people have a natural curiosity and affinity for animals, and want to learn how to connect with them on a spiritual, emotional, or mental level.
-Animal communication, also known as interspecies communication, is the ability to communicate with animals using non-verbal methods such as telepathy, intuition, or body language. It is not a supernatural or paranormal phenomenon, but rather a natural and innate skill that anyone can develop with practice and patience.
-Download ::: https://urlin.us/2uSVI8
In this article, we will explore what animal communication is and why it is important, how to prepare yourself for it, how to practice it in different situations, and how to improve your abilities. We will also answer some frequently asked questions about animal communication at the end.
-Animal communication is the exchange of information and feelings between humans and animals without using words or sounds. It can involve sending and receiving images, emotions, thoughts, sensations, impressions, or intentions through a mental or energetic connection.
-Animal communication is important for several reasons. First of all, it can help us understand animals better and appreciate their intelligence, personality, and emotions. It can also help us improve our relationship with them by resolving conflicts, addressing behavioral issues, or expressing our love and gratitude.
Secondly, animal communication can benefit both humans and animals in terms of health and well-being. It can help us detect and treat physical or emotional problems in animals before they become serious. It can also help us cope with stress, anxiety, grief, or loneliness by providing comfort and support from our animal friends.
-Thirdly, animal communication can foster a deeper connection with nature and all living beings. It can help us respect and protect animals and their habitats by raising our awareness of their needs and rights. It can also help us learn from their wisdom and insights by tapping into their unique perspectives and experiences.
-To communicate with animals effectively, you need to develop some skills and qualities that will enhance your receptivity and accuracy. Some of these are:
-These skills and qualities can be cultivated through various practices such as meditation, mindfulness, yoga, journaling, or self-care. You can also learn from other animal communicators by reading books, taking courses, or joining communities.
-There are many tools and techniques that can help you communicate with animals more easily and effectively. Some of these are:
-These tools and techniques are not necessary for animal communication, but they can be helpful for beginners or as a support for your intuition. You can experiment with different tools and techniques and find what works best for you and the animals you communicate with.
-Connecting with your own pets or domestic animals is a great way to start practicing animal communication. They are usually familiar with you and willing to communicate with you. Here are some steps you can follow to connect with them:
-Connecting with wild animals or animals in nature is a more challenging but rewarding form of animal communication. They are usually less familiar with humans and may have different needs and preferences than domestic animals. Here are some steps you can follow to connect with them:
-Connecting with animals in distress or need is a more sensitive and delicate form of animal communication. They are usually suffering from physical or emotional pain, trauma, fear, or loss. They may also be in danger, captivity, or abuse. Here are some steps you can follow to connect with them:
-To improve your animal communication abilities, you need to practice regularly and learn from your experiences. Here are some tips and resources you can follow to enhance your skills:
-To improve your animal communication abilities, you also need to avoid some common mistakes and pitfalls that can hinder your progress or harm your relationship with animals. Some of these are:
-In conclusion, animal communication is a wonderful way of connecting with animals on a deeper and more meaningful level. It can help us understand them better, improve our relationship with them, benefit our health and well-being, foster a deeper connection with nature, and learn from their wisdom and insights.
-To communicate with animals effectively, we need to prepare ourselves by developing some skills and qualities, using some tools and techniques, and practicing in different situations. We also need to improve our abilities by following some tips and resources, and avoiding some common mistakes and pitfalls.
-If you are interested in learning more about animal communication, here are some frequently asked questions and answers that may help you:
-A: Yes, anyone can communicate with animals, as it is a natural and innate skill that we all have. However, some people may have more natural talent or affinity for it than others, and some people may need more training or practice to develop it.
-A: You can tell if an animal is communicating with you by paying attention to your intuition and the signs that they are sending you. Some signs may include eye contact, body language, facial expressions, sounds, or behaviors. You may also receive messages from them in the form of images, emotions, thoughts, sensations, impressions, or intentions in your mind or heart.
-A: You can verify the accuracy of your communication by asking for feedback from the animal or from other sources. For example, you can ask the animal to confirm or clarify their message by sending you a sign or a signal. You can also ask other people who know the animal well or have access to their information to validate your communication.
-A: You can protect yourself from negative or harmful energies when communicating with animals by setting boundaries, shielding yourself, and cleansing yourself. For example, you can set boundaries by asking for permission before you communicate and respecting the animal's choice if they decline or end the communication. You can shield yourself by imagining a protective bubble or a white light around you and the animal. You can cleanse yourself by taking a shower, using salt water, burning sage, or meditating after the communication.
-A: You can communicate with animals who have passed away by using the same methods and techniques as you would with living animals. However, you may need to adjust your frequency and vibration to match theirs, as they are in a different realm or dimension. You may also need to be more patient and respectful, as they may have different rules or preferences than living animals.
-I hope this article has helped you learn more about animal communication and how to connect with animals. If you have any questions or comments, please feel free to contact me. Thank you for reading and happy communicating!
SimCity BuildIt is a popular mobile game that allows you to create and manage your own city. You can build various types of buildings, such as residential zones, factories, shops, parks, landmarks, and more. You can also provide services to your citizens, such as power, water, sewage, waste management, fire, police, health, education, transportation, entertainment, etc. You can also participate in club wars, contests of mayors, event tracks, design challenges, and other activities.
-Download File ☆☆☆ https://jinyurl.com/2uNLfT
The game is free to play, but it also has some in-game currencies that you can use to speed up your progress or unlock special features. These currencies are simoleons (the basic money), simcash (the premium money), golden keys (used to unlock specializations), platinum keys (used to unlock mayor's pass buildings), neosimoleons (used in omega zones), war simoleons (used in club wars), regional simoleons (used in regions), and design simoleons (used in design challenges).
-However, earning these currencies can be time-consuming and challenging. You may need to complete various tasks, participate in events, trade with other players, or spend real money to get them. This can make the game frustrating or boring for some players who want to enjoy the game without limitations. That's why some players may want to use a hack or mod apk for SimCity BuildIt.
-A hack or mod apk is a modified version of the original game that gives you access to unlimited resources or other advantages. For example, a hack or mod apk for SimCity BuildIt may allow you to get unlimited money, golden keys, platinum keys, neosimoleons, war simoleons, regional simoleons, design simoleons, or other resources. It may also allow you to unlock all the buildings, services, specializations, regions, etc. It may also give you other features such as faster production speed, instant upgrade completion, unlimited storage capacity, etc.
-Using a hack or mod apk for SimCity BuildIt can make the game easier and more fun for you. You can build your dream city without worrying about running out of resources or waiting for long hours. You can also experiment with different designs and layouts without any restrictions. You can also dominate the club wars and contests of mayors with your powerful city.
If you want to get unlimited money, golden keys, and other resources in SimCity BuildIt, you will need to download and install a hack or mod apk for the game. Here are the steps you need to follow:
-Using a hack or mod apk for SimCity BuildIt can be fun and exciting, but it can also be risky and problematic. Here are some tips and tricks for using the hack or mod apk effectively:
-Using a hack or mod apk for SimCity BuildIt can also have some potential risks and drawbacks. Here are some of them:
-If you don't want to use a hack or mod apk for SimCity BuildIt, you can still play the game without them. You can enjoy the game's challenges and rewards by playing it legitimately and fairly. Here are some ways to play SimCity BuildIt without hack:
-You can earn money, golden keys, and other resources in SimCity BuildIt by completing various tasks, participating in events, and trading with other players. Here are some examples:
You can build the ultimate city in SimCity BuildIt by following some proven tips and cheats that will help you optimize your city's performance and appearance. Here are some examples:
-SimCity BuildIt is a fun and addictive game that lets you create and manage your own city. You can choose to play the game with or without a hack or mod apk. A hack or mod apk can give you unlimited resources and other advantages, but it can also have some risks and drawbacks. Playing the game without a hack or mod apk can be challenging and rewarding, but it can also be frustrating and boring. Ultimately, the choice is yours. You can decide what kind of city you want to build and how you want to play the game.
-Here are some frequently asked questions and answers about SimCity BuildIt hack:
-Do you like archery games? Want to try a fun, addictive and action-packed game? Then you will love Bowmasters, an archery game in which you can choose from more than 60 different characters and compete against other players or artificial intelligence. But what if you want to play with all the characters from the beginning? Or do you want unlimited coins to buy upgrades and customize your experience? In that case, you need to download Bowmasters MOD APK, a modified version of the game that offers you all the unlocked characters and other benefits. In this article, we tell you everything you need to know about Bowmasters and how to download and install its MOD APK on your Android device.
-Bowmasters is an archery game developed by Playgendary, a company known for creating casual and fun games for mobile devices. Bowmasters launched in 2016 and has since accumulated over 100 million downloads on the Google Play Store, where it has a rating of 4.5 stars. The game is also available for iOS and has a web version.
-Download File >>>>> https://bltlly.com/2v6M24
The objective of the game is simple: you must aim and shoot your bow or weapon towards your opponent, trying to hit him in the head or body to reduce his life bar. The game has realistic physics and colorful cartoon graphics that make each shot a fun and bloody experience. In addition, the game has some sound effects and voices that give more humor and personality to the game.
-Bowmasters also offers several game modes so you never get bored. You can play against artificial intelligence in duel mode, where you can face different opponents and unlock new characters and weapons. You can also play against other players online in multiplayer mode, where you can prove your skill and earn rewards. You can also try the tournament mode, where you must pass several rounds and reach the final. Or if you prefer something more relaxed, you can play target shooting mode, where you must hit different targets with your bow or gun. And if you want something more fun, you can play rubber duck mode, where you must shoot rubber ducks floating in the water.
-Bowmasters is a very fun and addictive game, but it also has some drawbacks. For example, to unlock all the characters and weapons, you must play for a long time or spend real money on in-app purchases. In addition, the game has many ads that can interrupt your fun and consume your mobile data. So if you want to enjoy Bowmasters to the fullest, we recommend that you download Bowmasters MOD APK, a modified version of the game that offers several benefits.
-One of the most important benefits of Bowmasters MOD APK is that it allows you to play with all the characters from the beginning, without having to unlock them one by one. Thus, you can choose the character that you like best or that best suits your style of play. In addition, you can try all the weapons and special abilities that each character has. This will give you an advantage over your opponents and make the game more varied and fun.
-Finally, Bowmasters MOD APK frees you from the annoying ads and built-in purchases that the original game has. Thus, you can play without interruptions or distractions, and without spending real money on the game. Plus, you can save your mobile data and battery by not having to watch or download ads. This will make your gaming experience more fluid and enjoyable.
-Now that you know what Bowmasters is and why you might want to download its MOD APK, here is how to download and install it on your Android device. It is very easy and will only take a few minutes. Just follow these steps:
-The first thing to do is to download the Bowmasters MOD APK file from a reliable website. There are many websites that offer these types of files, but not all of them are secure or updated. Therefore, we recommend that you use a website like [APKPure] or [APKMirror], where you can find the latest version of Bowmasters MOD APK with all the unlocked characters and other benefits.
-The second thing to do is to enable the option of unknown sources on your Android device. This option allows you to install applications that do not come from the Google Play Store, such as Bowmasters MOD APK. To enable it, you just need to go to your device’s settings, then to security or privacy, and then enable the option of unknown sources or allow installation from unknown sources.
-Bowmasters is a very fun and addictive archery game, offering you more than 60 different characters, each with their own bow or weapon, their own special skill and their own personality. In addition, it has several game modes so you never get bored, such as duel mode, multiplayer mode, tournament mode, target shooting mode and rubber duck mode. However, if you want to play with all the characters from the beginning, have unlimited coins to buy upgrades and customize your experience, and get rid of annoying ads and integrated purchases, we recommend that you download Bowmasters MOD APK, a modified version of the game that gives you all these benefits. Just follow the steps we have explained in this article and you can enjoy Bowmasters with all the unlocked characters on your Android device.
-Here are some of the most frequently asked questions about Bowmasters and its apk mod:
Question | Answer |
---|---|
Is it safe to download Bowmasters MOD APK? | Yes, as long as you download it from a reliable website like APKPure or APKMirror, where you can find the latest version of Bowmasters MOD APK with all the unlocked characters and other benefits. These websites check the files they offer and update them constantly. |
Do I need to root my device to install Bowmasters MOD APK? | No, you don’t need to root your device to install Bowmasters MOD APK. You just need to enable the option of unknown sources on your Android device, as we have explained in this article. |
Can I play online with Bowmasters MOD APK? | |
Can I upgrade Bowmasters MOD APK? | Yes, you can upgrade Bowmasters MOD APK when a new version is available. However, you should keep in mind that when updating the game you may lose some of the benefits offered by the apk mod, such as unlocked characters or unlimited coins. Therefore, we recommend that you wait for a new version of the apk mod before updating the game. |
What other games similar to Bowmasters can I try? | If you like Bowmasters, you might also like other similar archery or casual action games, such as Archero, Kick the Buddy, Mr Bullet, Angry Birds 2 or Fruit Ninja. |
- return; // if there is no code element, do not add the button
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
- return; // if the code element has no child nodes, do not add the button
- }
- var button = document.createElement('button');
- button.textContent = '\uD83D\uDCCE'; // use the 📎 symbol as the 'copy' button text
- button.style.position = 'relative';
- button.style.float = 'right';
- button.style.fontSize = '1em'; // optional: adjust the button size
- button.style.background = 'none'; // optional: remove the background color
- button.style.border = 'none'; // optional: remove the border
- button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
- range.setStartBefore(firstChild); // set the range to start before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
- button.textContent = '\uD83D\uDCCE'; // restore the button to 'copy'
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
- code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
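- // Watch for dynamically added <pre> blocks and give each new one a copy button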
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/hardware disease.md b/spaces/SarthakSidhant/Go-Cattle/diseases/hardware disease.md
deleted file mode 100644
index 0d0d91ac80e4418e0de80d4907aa5a465ac9b395..0000000000000000000000000000000000000000
--- a/spaces/SarthakSidhant/Go-Cattle/diseases/hardware disease.md
+++ /dev/null
@@ -1,35 +0,0 @@
-## Hardware disease
-
-**Information:** Hardware disease, also known as traumatic reticuloperitonitis, is a condition that affects cattle when they ingest sharp objects, such as nails, wire, or pieces of metal. The object can puncture the reticulum, a part of the stomach, and cause infection.
-
-**Symptoms:**
-
-* Depression
-* Weight loss
-* Loss of appetite
-* Fever
-* Coughing
-* Difficulty breathing
-* Bloating
-* Pain in the abdomen
-* Lump in the abdomen
-
-**Remedies:**
-
-* Hardware disease is a medical emergency and requires immediate treatment.
-* Treatment usually involves surgery to remove the object and antibiotics to treat the infection.
-* The cow may also need fluids and electrolytes to prevent dehydration.
-* In severe cases, the cow may need to be hospitalized.
-
-**Causes:**
-
-* Hardware disease is caused when cattle ingest sharp objects, such as nails, wire, or pieces of metal.
-* These objects can puncture the reticulum, a part of the stomach, and cause infection.
-* The infection can then spread to other parts of the body, such as the liver, lungs, and heart.
-
-**Prevention:**
-
-* The best way to prevent hardware disease is to keep cattle's feed and water sources free of sharp objects.
-* Animals should also be monitored for signs of the disease, such as depression, weight loss, and loss of appetite.
-* If an animal is suspected of having hardware disease, it should be taken to a veterinarian immediately for diagnosis and treatment.
-
diff --git a/spaces/Saturdays/Focus_on_driving/app.py b/spaces/Saturdays/Focus_on_driving/app.py
deleted file mode 100644
index c34d0d7a7a14c7b0e5e1a84c440447cf6e17e455..0000000000000000000000000000000000000000
--- a/spaces/Saturdays/Focus_on_driving/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import gradio as gr
-import pandas as pd
-import numpy as np
-
-from keras.models import model_from_json
-from tensorflow.keras.preprocessing import image
-from keras.applications.vgg16 import VGG16, preprocess_input
-import heapq
-
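-# Rebuild the Keras model from its JSON architecture and load the trained weights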
-file = open("focusondriving.json", 'r')
-model_json2 = file.read()
-file.close()
-loaded_model = model_from_json(model_json2)
-loaded_model.load_weights("focusondriving.h5")
-
-class_dict = {
- 'c0': 'Conduciendo de forma segura',
- 'c1': 'Móvil en la mano derecha',
- 'c2': 'Hablando por el teléfono con la mano derecha',
- 'c3': "Móvil en la mano izquierda",
- 'c4': 'Hablando con el teléfono con la mano izquierda',
- 'c5': 'Tocando la radio o el salpicadero',
- 'c6': 'Bebiendo',
- 'c7': 'Buscando en la parte trasera',
- 'c8': 'Manos en la cara o el pelo',
- 'c9': 'Mirando hacia el lado'
-}
-
-def predict_image(pic):
- img = image.load_img(pic, target_size=(224, 224))
- x = image.img_to_array(img)
- x = np.expand_dims(x, axis=0)
- x = preprocess_input(x)
- preds = loaded_model.predict(x)
- preds = list(preds[0])
-
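- # Indices of the two classes with the highest predicted probabilities, in descending order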
- list_desc_order = heapq.nlargest(2, range(len(preds)), key=preds.__getitem__)
- result1 = f'c{list_desc_order[0]}'
- result2 = '-'
- result2_ = 0
- if preds[list_desc_order[1]] > 0.3:
- result2 = f'c{list_desc_order[1]}'
- result2_ = round(preds[list_desc_order[1]], 2)
-
- score = round(preds[list_desc_order[0]], 2)*100
- score = int(score)
- txt2 = f"Resultado: {class_dict.get(result1)} Probabilidad {score}%"
- txt3="pepe"
- return txt2
-
-
-iface = gr.Interface(
- predict_image,
- [
-
- gr.inputs.Image(source="upload",type="filepath", label="Imagen")
- ],
-
- "text",
-
-
-
- interpretation="default",
- title = 'Focus on Driving',
- description = 'El objetivo de este proyecto es ajustar un modelo de Machine Learning capaz de identificar y clasificar las diferentes distracciones a que estamos expuestos siempre que conducimos. https://saturdays.ai/2022/03/16/focus-on-driving-redes-neuronales-aplicadas-a-la-seguridad-vial/',
- examples=[["img_50156.jpg"], ["img_32161.jpg"], ["img_97052.jpg"], ["img_95082.jpg"], ["img_32168.jpg"], ["img_42945.jpg"], ["img_62638.jpg"], ["img_30.jpg"], ["img_13171.jpg"], ["img_90752.jpg"]],
- theme = 'peach'
- )
-
-
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Sohag1/Handwritten-text-Recognition-Using-TrOCR/README.md b/spaces/Sohag1/Handwritten-text-Recognition-Using-TrOCR/README.md
deleted file mode 100644
index 8a10f9934e292435ace293d53588fed008efcda2..0000000000000000000000000000000000000000
--- a/spaces/Sohag1/Handwritten-text-Recognition-Using-TrOCR/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Handwritten Text Recognition Using TrOCR
-emoji: 🦀
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Soumen/transform_image/app.py b/spaces/Soumen/transform_image/app.py
deleted file mode 100644
index 21c25c5ab7f764473cbce0a61cee4c25c6f439d4..0000000000000000000000000000000000000000
--- a/spaces/Soumen/transform_image/app.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from transformers import DetrFeatureExtractor, DetrForObjectDetection
-import requests
-import torch
-
-feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
-model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
-
-
-# Core Pkgs
-import time
-from json import load
-import streamlit as st
-import cv2
-from PIL import Image,ImageEnhance
-import numpy as np
-from io import BytesIO
-from transformers import pipeline
-st.set_page_config(page_title="Do Transform Images", initial_sidebar_state = "auto" )
-st.title("Image Transformation & Detection App")
-st.text("Build with Streamlit and OpenCV")
-
-face_cascade = cv2.CascadeClassifier('frecog/haarcascade_frontalface_default.xml')
-eye_cascade = cv2.CascadeClassifier('frecog/haarcascade_eye.xml')
-smile_cascade = cv2.CascadeClassifier('frecog/haarcascade_smile.xml')
-#@st_cache
-#od():
- #obj_detector = pipeline('object-detection')
- #return obj_detector
-def detect_faces(our_image):
- new_img = np.array(our_image.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- gray = cv2.cvtColor(new_img, cv2.COLOR_BGR2GRAY)
- # Detect faces
- faces = face_cascade.detectMultiScale(gray, 1.1, 4)
- # Draw rectangle around the faces
- for (x, y, w, h) in faces:
- cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
- return img,faces
-def detect_eyes(our_image):
- new_img = np.array(our_image.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- gray = cv2.cvtColor(new_img, cv2.COLOR_BGR2GRAY)
- eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
- for (ex,ey,ew,eh) in eyes:
- cv2.rectangle(img,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
- return img
-
-def detect_smiles(our_image):
- new_img = np.array(our_image.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- gray = cv2.cvtColor(new_img, cv2.COLOR_BGR2GRAY)
- # Detect Smiles
- smiles = smile_cascade.detectMultiScale(gray, 1.1, 4)
- # Draw rectangle around the Smiles
- for (x, y, w, h) in smiles:
- cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
- return img
-
-def cartonize_image(our_image):
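- # Cartoon effect: mask a bilateral-filtered (smoothed) color image with adaptive-threshold edges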
- new_img = np.array(our_image.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- gray = cv2.cvtColor(new_img, cv2.COLOR_BGR2GRAY)
- # Edges
- gray = cv2.medianBlur(gray, 5)
- edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)
- #Color
- color = cv2.bilateralFilter(img, 9, 300, 300)
- #Cartoon
- cartoon = cv2.bitwise_and(color, color, mask=edges)
-
- return cartoon
-
-
-def cannize_image(our_image):
- new_img = np.array(our_image.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- img = cv2.GaussianBlur(img, (11, 11), 0)
- canny = cv2.Canny(img, 100, 150)
- return canny
-def detect_objects(im):
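- # Run DETR object detection on the PIL image and draw boxes for detections scoring above 0.9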
- inputs = feature_extractor(images=im, return_tensors="pt")
- outputs = model(**inputs)
- # convert outputs (bounding boxes and class logits) to COCO API
- target_sizes = torch.tensor([im.size[::-1]])
- results = feature_extractor.post_process(outputs, target_sizes=target_sizes)[0]
- boxes = []
- f=None
- for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
- box = [round(i, 2) for i in box.tolist()]
- # let's only keep detections with score > 0.9
- if score > 0.9:
- st.success(
- f"Detected {model.config.id2label[label.item()]} with confidence "
- f"{round(score.item(), 3)} at location {box}"
- )
- boxes.append(box)
- new_img = np.array(im.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- for (x, y, w, h) in boxes:
- cv2.rectangle(img,(int(x),int(y)),(int(w), int(h)), (0, 0, 255))
- return st.image(img)#st.image(box)
-
-@st.cache
-def load_image(img):
- im = Image.open(img)
- return im
-activities = ["Detection","About"]
-choice = st.sidebar.selectbox("Select Activty",activities)
-def change_photo_state():
- st.session_state["photo"]="done"
-uploaded_photo = st.file_uploader("Upload Image",type=['jpg','png','jpeg'], on_change=change_photo_state)
-camera_photo = st.camera_input("Take a photo", on_change=change_photo_state)
-if "photo" not in st.session_state:
- st.session_state["photo"]="not done"
-if choice == 'Detection':
- st.subheader("Process your images ...")
- if st.session_state["photo"]=="done":
- if uploaded_photo:
- our_image= load_image(uploaded_photo)
- if camera_photo:
- our_image= load_image(camera_photo)
- if uploaded_photo==None and camera_photo==None:
- our_image=load_image("image.jpg")
- enhance_type = st.sidebar.radio("Enhance Type",["Original","Gray-Scale","Contrast","Brightness","Blurring"])
- if enhance_type == 'Gray-Scale':
- new_img = np.array(our_image.convert('RGB'))
- img = cv2.cvtColor(new_img,1)
- gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
- # st.write(new_img)
- st.image(gray)
- elif enhance_type == 'Contrast':
- c_rate = st.sidebar.slider("Contrast",0.5,3.5)
- enhancer = ImageEnhance.Contrast(our_image)
- img_output = enhancer.enhance(c_rate)
- st.image(img_output)
- elif enhance_type == 'Brightness':
- c_rate = st.sidebar.slider("Brightness",0.5,3.5)
- enhancer = ImageEnhance.Brightness(our_image)
- img_output = enhancer.enhance(c_rate)
- st.image(img_output)
- elif enhance_type == 'Blurring':
- new_img = np.array(our_image.convert('RGB'))
- blur_rate = st.sidebar.slider("Blurring",0.5,3.5)
- img = cv2.cvtColor(new_img,1)
- blur_img = cv2.GaussianBlur(img,(11,11),blur_rate)
- st.image(blur_img)
- elif enhance_type == 'Original':
- st.image(our_image,width=300)
-
- else:
- st.image(our_image,width=300)
- # Face Detection
- task = ["Detect_any_objects", "Faces","Smiles","Eyes","Cannize","Cartonize"]
- feature_choice = st.sidebar.selectbox("Find Features",task)
- if st.button("Process"):
- if feature_choice == 'Faces':
- result_img,result_faces = detect_faces(our_image)
- st.image(result_img)
-
- st.success("Found {} faces".format(len(result_faces)))
- elif feature_choice == 'Smiles':
- result_img = detect_smiles(our_image)
- st.image(result_img)
- elif feature_choice == 'Eyes':
- with st.spinner('Wait for it...'):
- time.sleep(5)
- result_img = detect_eyes(our_image)
- st.image(result_img)
-
- elif feature_choice == 'Cartonize':
- result_img = cartonize_image(our_image)
- st.image(result_img)
- elif feature_choice == 'Cannize':
- result_canny = cannize_image(our_image)
- st.image(result_canny)
- elif feature_choice == 'Detect_any_objects':
- detect_objects(our_image)
-
-elif choice == 'About':
- st.subheader("About Face Detection App")
- st.markdown("Built with Streamlit by [Soumen Sarker](https://soumen-sarker-personal-website.streamlitapp.com/)")
- st.markdown("Credit [here](https://huggingface.co/models?pipeline_tag=object-detection)")
- #st.success("Isshor Saves @Soumen Sarker")
\ No newline at end of file
diff --git a/spaces/Spectrez/Chest-Lung-Identification/README.md b/spaces/Spectrez/Chest-Lung-Identification/README.md
deleted file mode 100644
index 30c590402c028ca92339b1b0baa78958b8d4f080..0000000000000000000000000000000000000000
--- a/spaces/Spectrez/Chest-Lung-Identification/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Chest Lung Identification
-emoji: 🫁
-colorFrom: blue
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/prefilter.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/prefilter.py
deleted file mode 100644
index e7e82e337718b577606b57ec9bccd096352e7c30..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/prefilter.py
+++ /dev/null
@@ -1,700 +0,0 @@
-# encoding: utf-8
-"""
-Prefiltering components.
-
-Prefilters transform user input before it is exec'd by Python. These
-transforms are used to implement additional syntax such as !ls and %magic.
-"""
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-from keyword import iskeyword
-import re
-
-from .autocall import IPyAutocall
-from traitlets.config.configurable import Configurable
-from .inputtransformer2 import (
- ESC_MAGIC,
- ESC_QUOTE,
- ESC_QUOTE2,
- ESC_PAREN,
-)
-from .macro import Macro
-from .splitinput import LineInfo
-
-from traitlets import (
- List, Integer, Unicode, Bool, Instance, CRegExp
-)
-
-#-----------------------------------------------------------------------------
-# Global utilities, errors and constants
-#-----------------------------------------------------------------------------
-
-
-class PrefilterError(Exception):
- pass
-
-
-# RegExp to identify potential function names
-re_fun_name = re.compile(r'[^\W\d]([\w.]*) *$')
-
-# RegExp to exclude strings with this start from autocalling. In
-# particular, all binary operators should be excluded, so that if foo is
-# callable, foo OP bar doesn't become foo(OP bar), which is invalid. The
-# characters '!=()' don't need to be checked for, as the checkPythonChars
-# routine explicitly does so, to catch direct calls and rebindings of
-# existing names.
-
-# Warning: the '-' HAS TO BE AT THE END of the first group, otherwise
-# it affects the rest of the group in square brackets.
-re_exclude_auto = re.compile(r'^[,&^\|\*/\+-]'
- r'|^is |^not |^in |^and |^or ')
-
-# try to catch also methods for stuff in lists/tuples/dicts: off
-# (experimental). For this to work, the line_split regexp would need
-# to be modified so it wouldn't break things at '['. That line is
-# nasty enough that I shouldn't change it until I can test it _well_.
-#self.re_fun_name = re.compile (r'[a-zA-Z_]([a-zA-Z0-9_.\[\]]*) ?$')
-
-
-# Handler Check Utilities
-def is_shadowed(identifier, ip):
- """Is the given identifier defined in one of the namespaces which shadow
- the alias and magic namespaces? Note that an identifier is different
- than ifun, because it can not contain a '.' character."""
- # This is much safer than calling ofind, which can change state
- return (identifier in ip.user_ns \
- or identifier in ip.user_global_ns \
- or identifier in ip.ns_table['builtin']\
- or iskeyword(identifier))
-
-
-#-----------------------------------------------------------------------------
-# Main Prefilter manager
-#-----------------------------------------------------------------------------
-
-
-class PrefilterManager(Configurable):
- """Main prefilter component.
-
- The IPython prefilter is run on all user input before it is run. The
- prefilter consumes lines of input and produces transformed lines of
- input.
-
- The implementation consists of two phases:
-
- 1. Transformers
- 2. Checkers and handlers
-
- Over time, we plan on deprecating the checkers and handlers and doing
- everything in the transformers.
-
- The transformers are instances of :class:`PrefilterTransformer` and have
- a single method :meth:`transform` that takes a line and returns a
- transformed line. The transformation can be accomplished using any
- tool, but our current ones use regular expressions for speed.
-
- After all the transformers have been run, the line is fed to the checkers,
- which are instances of :class:`PrefilterChecker`. The line is passed to
- the :meth:`check` method, which either returns `None` or a
- :class:`PrefilterHandler` instance. If `None` is returned, the other
- checkers are tried. If an :class:`PrefilterHandler` instance is returned,
- the line is passed to the :meth:`handle` method of the returned
- handler and no further checkers are tried.
-
- Both transformers and checkers have a `priority` attribute, that determines
- the order in which they are called. Smaller priorities are tried first.
-
- Both transformers and checkers also have `enabled` attribute, which is
- a boolean that determines if the instance is used.
-
- Users or developers can change the priority or enabled attribute of
- transformers or checkers, but they must call the :meth:`sort_checkers`
- or :meth:`sort_transformers` method after changing the priority.
- """
-
- multi_line_specials = Bool(True).tag(config=True)
- shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
-
- def __init__(self, shell=None, **kwargs):
- super(PrefilterManager, self).__init__(shell=shell, **kwargs)
- self.shell = shell
- self._transformers = []
- self.init_handlers()
- self.init_checkers()
-
- #-------------------------------------------------------------------------
- # API for managing transformers
- #-------------------------------------------------------------------------
-
- def sort_transformers(self):
- """Sort the transformers by priority.
-
- This must be called after the priority of a transformer is changed.
- The :meth:`register_transformer` method calls this automatically.
- """
- self._transformers.sort(key=lambda x: x.priority)
-
- @property
- def transformers(self):
- """Return a list of checkers, sorted by priority."""
- return self._transformers
-
- def register_transformer(self, transformer):
- """Register a transformer instance."""
- if transformer not in self._transformers:
- self._transformers.append(transformer)
- self.sort_transformers()
-
- def unregister_transformer(self, transformer):
- """Unregister a transformer instance."""
- if transformer in self._transformers:
- self._transformers.remove(transformer)
-
- #-------------------------------------------------------------------------
- # API for managing checkers
- #-------------------------------------------------------------------------
-
- def init_checkers(self):
- """Create the default checkers."""
- self._checkers = []
- for checker in _default_checkers:
- checker(
- shell=self.shell, prefilter_manager=self, parent=self
- )
-
- def sort_checkers(self):
- """Sort the checkers by priority.
-
- This must be called after the priority of a checker is changed.
- The :meth:`register_checker` method calls this automatically.
- """
- self._checkers.sort(key=lambda x: x.priority)
-
- @property
- def checkers(self):
- """Return a list of checkers, sorted by priority."""
- return self._checkers
-
- def register_checker(self, checker):
- """Register a checker instance."""
- if checker not in self._checkers:
- self._checkers.append(checker)
- self.sort_checkers()
-
- def unregister_checker(self, checker):
- """Unregister a checker instance."""
- if checker in self._checkers:
- self._checkers.remove(checker)
-
- #-------------------------------------------------------------------------
- # API for managing handlers
- #-------------------------------------------------------------------------
-
- def init_handlers(self):
- """Create the default handlers."""
- self._handlers = {}
- self._esc_handlers = {}
- for handler in _default_handlers:
- handler(
- shell=self.shell, prefilter_manager=self, parent=self
- )
-
- @property
- def handlers(self):
- """Return a dict of all the handlers."""
- return self._handlers
-
- def register_handler(self, name, handler, esc_strings):
- """Register a handler instance by name with esc_strings."""
- self._handlers[name] = handler
- for esc_str in esc_strings:
- self._esc_handlers[esc_str] = handler
-
- def unregister_handler(self, name, handler, esc_strings):
- """Unregister a handler instance by name with esc_strings."""
- try:
- del self._handlers[name]
- except KeyError:
- pass
- for esc_str in esc_strings:
- h = self._esc_handlers.get(esc_str)
- if h is handler:
- del self._esc_handlers[esc_str]
-
- def get_handler_by_name(self, name):
- """Get a handler by its name."""
- return self._handlers.get(name)
-
- def get_handler_by_esc(self, esc_str):
- """Get a handler by its escape string."""
- return self._esc_handlers.get(esc_str)
-
- #-------------------------------------------------------------------------
- # Main prefiltering API
- #-------------------------------------------------------------------------
-
- def prefilter_line_info(self, line_info):
- """Prefilter a line that has been converted to a LineInfo object.
-
- This implements the checker/handler part of the prefilter pipe.
- """
- # print "prefilter_line_info: ", line_info
- handler = self.find_handler(line_info)
- return handler.handle(line_info)
-
- def find_handler(self, line_info):
- """Find a handler for the line_info by trying checkers."""
- for checker in self.checkers:
- if checker.enabled:
- handler = checker.check(line_info)
- if handler:
- return handler
- return self.get_handler_by_name('normal')
-
- def transform_line(self, line, continue_prompt):
- """Calls the enabled transformers in order of increasing priority."""
- for transformer in self.transformers:
- if transformer.enabled:
- line = transformer.transform(line, continue_prompt)
- return line
-
- def prefilter_line(self, line, continue_prompt=False):
- """Prefilter a single input line as text.
-
- This method prefilters a single line of text by calling the
- transformers and then the checkers/handlers.
- """
-
- # print "prefilter_line: ", line, continue_prompt
- # All handlers *must* return a value, even if it's blank ('').
-
- # save the line away in case we crash, so the post-mortem handler can
- # record it
- self.shell._last_input_line = line
-
- if not line:
- # Return immediately on purely empty lines, so that if the user
- # previously typed some whitespace that started a continuation
- # prompt, he can break out of that loop with just an empty line.
- # This is how the default python prompt works.
- return ''
-
- # At this point, we invoke our transformers.
- if not continue_prompt or (continue_prompt and self.multi_line_specials):
- line = self.transform_line(line, continue_prompt)
-
- # Now we compute line_info for the checkers and handlers
- line_info = LineInfo(line, continue_prompt)
-
- # the input history needs to track even empty lines
- stripped = line.strip()
-
- normal_handler = self.get_handler_by_name('normal')
- if not stripped:
- return normal_handler.handle(line_info)
-
- # special handlers are only allowed for single line statements
- if continue_prompt and not self.multi_line_specials:
- return normal_handler.handle(line_info)
-
- prefiltered = self.prefilter_line_info(line_info)
- # print "prefiltered line: %r" % prefiltered
- return prefiltered
-
- def prefilter_lines(self, lines, continue_prompt=False):
- """Prefilter multiple input lines of text.
-
- This is the main entry point for prefiltering multiple lines of
- input. This simply calls :meth:`prefilter_line` for each line of
- input.
-
- This covers cases where there are multiple lines in the user entry,
- which is the case when the user goes back to a multiline history
- entry and presses enter.
- """
- llines = lines.rstrip('\n').split('\n')
- # We can get multiple lines in one shot, where multiline input 'blends'
- # into one line, in cases like recalling from the readline history
- # buffer. We need to make sure that in such cases, we correctly
- # communicate downstream which line is first and which are continuation
- # ones.
- if len(llines) > 1:
- out = '\n'.join([self.prefilter_line(line, lnum>0)
- for lnum, line in enumerate(llines) ])
- else:
- out = self.prefilter_line(llines[0], continue_prompt)
-
- return out
-
-#-----------------------------------------------------------------------------
-# Prefilter transformers
-#-----------------------------------------------------------------------------
-
-
-class PrefilterTransformer(Configurable):
- """Transform a line of user input."""
-
- priority = Integer(100).tag(config=True)
- # Transformers don't currently use shell or prefilter_manager, but as we
- # move away from checkers and handlers, they will need them.
- shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
- prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
- enabled = Bool(True).tag(config=True)
-
- def __init__(self, shell=None, prefilter_manager=None, **kwargs):
- super(PrefilterTransformer, self).__init__(
- shell=shell, prefilter_manager=prefilter_manager, **kwargs
- )
- self.prefilter_manager.register_transformer(self)
-
- def transform(self, line, continue_prompt):
- """Transform a line, returning the new one."""
- return None
-
- def __repr__(self):
- return "<%s(priority=%r, enabled=%r)>" % (
- self.__class__.__name__, self.priority, self.enabled)
-
-
-#-----------------------------------------------------------------------------
-# Prefilter checkers
-#-----------------------------------------------------------------------------
-
-
-class PrefilterChecker(Configurable):
- """Inspect an input line and return a handler for that line."""
-
- priority = Integer(100).tag(config=True)
- shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
- prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
- enabled = Bool(True).tag(config=True)
-
- def __init__(self, shell=None, prefilter_manager=None, **kwargs):
- super(PrefilterChecker, self).__init__(
- shell=shell, prefilter_manager=prefilter_manager, **kwargs
- )
- self.prefilter_manager.register_checker(self)
-
- def check(self, line_info):
- """Inspect line_info and return a handler instance or None."""
- return None
-
- def __repr__(self):
- return "<%s(priority=%r, enabled=%r)>" % (
- self.__class__.__name__, self.priority, self.enabled)
-
-
-class EmacsChecker(PrefilterChecker):
-
- priority = Integer(100).tag(config=True)
- enabled = Bool(False).tag(config=True)
-
- def check(self, line_info):
- "Emacs ipython-mode tags certain input lines."
- if line_info.line.endswith('# PYTHON-MODE'):
- return self.prefilter_manager.get_handler_by_name('emacs')
- else:
- return None
-
-
-class MacroChecker(PrefilterChecker):
-
- priority = Integer(250).tag(config=True)
-
- def check(self, line_info):
- obj = self.shell.user_ns.get(line_info.ifun)
- if isinstance(obj, Macro):
- return self.prefilter_manager.get_handler_by_name('macro')
- else:
- return None
-
-
-class IPyAutocallChecker(PrefilterChecker):
-
- priority = Integer(300).tag(config=True)
-
- def check(self, line_info):
- "Instances of IPyAutocall in user_ns get autocalled immediately"
- obj = self.shell.user_ns.get(line_info.ifun, None)
- if isinstance(obj, IPyAutocall):
- obj.set_ip(self.shell)
- return self.prefilter_manager.get_handler_by_name('auto')
- else:
- return None
-
-
-class AssignmentChecker(PrefilterChecker):
-
- priority = Integer(600).tag(config=True)
-
- def check(self, line_info):
- """Check to see if user is assigning to a var for the first time, in
- which case we want to avoid any sort of automagic / autocall games.
-
- This allows users to assign to either alias or magic names true python
- variables (the magic/alias systems always take second seat to true
- python code). E.g. ls='hi', or ls,that=1,2"""
- if line_info.the_rest:
- if line_info.the_rest[0] in '=,':
- return self.prefilter_manager.get_handler_by_name('normal')
- else:
- return None
-
-
-class AutoMagicChecker(PrefilterChecker):
-
- priority = Integer(700).tag(config=True)
-
- def check(self, line_info):
- """If the ifun is magic, and automagic is on, run it. Note: normal,
- non-auto magic would already have been triggered via '%' in
- check_esc_chars. This just checks for automagic. Also, before
- triggering the magic handler, make sure that there is nothing in the
- user namespace which could shadow it."""
- if not self.shell.automagic or not self.shell.find_magic(line_info.ifun):
- return None
-
- # We have a likely magic method. Make sure we should actually call it.
- if line_info.continue_prompt and not self.prefilter_manager.multi_line_specials:
- return None
-
- head = line_info.ifun.split('.',1)[0]
- if is_shadowed(head, self.shell):
- return None
-
- return self.prefilter_manager.get_handler_by_name('magic')
-
-
-class PythonOpsChecker(PrefilterChecker):
-
- priority = Integer(900).tag(config=True)
-
- def check(self, line_info):
- """If the 'rest' of the line begins with a function call or pretty much
- any python operator, we should simply execute the line (regardless of
- whether or not there's a possible autocall expansion). This avoids
- spurious (and very confusing) getattr() accesses."""
- if line_info.the_rest and line_info.the_rest[0] in '!=()<>,+*/%^&|':
- return self.prefilter_manager.get_handler_by_name('normal')
- else:
- return None
-
-
-class AutocallChecker(PrefilterChecker):
-
- priority = Integer(1000).tag(config=True)
-
- function_name_regexp = CRegExp(re_fun_name,
- help="RegExp to identify potential function names."
- ).tag(config=True)
- exclude_regexp = CRegExp(re_exclude_auto,
- help="RegExp to exclude strings with this start from autocalling."
- ).tag(config=True)
-
- def check(self, line_info):
- "Check if the initial word/function is callable and autocall is on."
- if not self.shell.autocall:
- return None
-
- oinfo = line_info.ofind(self.shell) # This can mutate state via getattr
- if not oinfo.found:
- return None
-
- ignored_funs = ['b', 'f', 'r', 'u', 'br', 'rb', 'fr', 'rf']
- ifun = line_info.ifun
- line = line_info.line
- if ifun.lower() in ignored_funs and (line.startswith(ifun + "'") or line.startswith(ifun + '"')):
- return None
-
- if (
- callable(oinfo.obj)
- and (not self.exclude_regexp.match(line_info.the_rest))
- and self.function_name_regexp.match(line_info.ifun)
- ):
- return self.prefilter_manager.get_handler_by_name("auto")
- else:
- return None
-
-
-#-----------------------------------------------------------------------------
-# Prefilter handlers
-#-----------------------------------------------------------------------------
-
-
-class PrefilterHandler(Configurable):
-
- handler_name = Unicode('normal')
- esc_strings = List([])
- shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
- prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
-
- def __init__(self, shell=None, prefilter_manager=None, **kwargs):
- super(PrefilterHandler, self).__init__(
- shell=shell, prefilter_manager=prefilter_manager, **kwargs
- )
- self.prefilter_manager.register_handler(
- self.handler_name,
- self,
- self.esc_strings
- )
-
- def handle(self, line_info):
- # print "normal: ", line_info
- """Handle normal input lines. Use as a template for handlers."""
-
- # With autoindent on, we need some way to exit the input loop, and I
- # don't want to force the user to have to backspace all the way to
- # clear the line. The rule will be in this case, that either two
- # lines of pure whitespace in a row, or a line of pure whitespace but
- # of a size different to the indent level, will exit the input loop.
- line = line_info.line
- continue_prompt = line_info.continue_prompt
-
- if (continue_prompt and
- self.shell.autoindent and
- line.isspace() and
- 0 < abs(len(line) - self.shell.indent_current_nsp) <= 2):
- line = ''
-
- return line
-
- def __str__(self):
- return "<%s(name=%s)>" % (self.__class__.__name__, self.handler_name)
-
-
-class MacroHandler(PrefilterHandler):
- handler_name = Unicode("macro")
-
- def handle(self, line_info):
- obj = self.shell.user_ns.get(line_info.ifun)
- pre_space = line_info.pre_whitespace
- line_sep = "\n" + pre_space
- return pre_space + line_sep.join(obj.value.splitlines())
-
-
-class MagicHandler(PrefilterHandler):
-
- handler_name = Unicode('magic')
- esc_strings = List([ESC_MAGIC])
-
- def handle(self, line_info):
- """Execute magic functions."""
- ifun = line_info.ifun
- the_rest = line_info.the_rest
- #Prepare arguments for get_ipython().run_line_magic(magic_name, magic_args)
- t_arg_s = ifun + " " + the_rest
- t_magic_name, _, t_magic_arg_s = t_arg_s.partition(' ')
- t_magic_name = t_magic_name.lstrip(ESC_MAGIC)
- cmd = '%sget_ipython().run_line_magic(%r, %r)' % (line_info.pre_whitespace, t_magic_name, t_magic_arg_s)
- return cmd
-
-
-class AutoHandler(PrefilterHandler):
-
- handler_name = Unicode('auto')
- esc_strings = List([ESC_PAREN, ESC_QUOTE, ESC_QUOTE2])
-
- def handle(self, line_info):
- """Handle lines which can be auto-executed, quoting if requested."""
- line = line_info.line
- ifun = line_info.ifun
- the_rest = line_info.the_rest
- esc = line_info.esc
- continue_prompt = line_info.continue_prompt
- obj = line_info.ofind(self.shell).obj
-
- # This should only be active for single-line input!
- if continue_prompt:
- return line
-
- force_auto = isinstance(obj, IPyAutocall)
-
- # User objects sometimes raise exceptions on attribute access other
- # than AttributeError (we've seen it in the past), so it's safest to be
- # ultra-conservative here and catch all.
- try:
- auto_rewrite = obj.rewrite
- except Exception:
- auto_rewrite = True
-
- if esc == ESC_QUOTE:
- # Auto-quote splitting on whitespace
- newcmd = '%s("%s")' % (ifun,'", "'.join(the_rest.split()) )
- elif esc == ESC_QUOTE2:
- # Auto-quote whole string
- newcmd = '%s("%s")' % (ifun,the_rest)
- elif esc == ESC_PAREN:
- newcmd = '%s(%s)' % (ifun,",".join(the_rest.split()))
- else:
- # Auto-paren.
- if force_auto:
- # Don't rewrite if it is already a call.
- do_rewrite = not the_rest.startswith('(')
- else:
- if not the_rest:
- # We only apply it to argument-less calls if the autocall
- # parameter is set to 2.
- do_rewrite = (self.shell.autocall >= 2)
- elif the_rest.startswith('[') and hasattr(obj, '__getitem__'):
- # Don't autocall in this case: item access for an object
- # which is BOTH callable and implements __getitem__.
- do_rewrite = False
- else:
- do_rewrite = True
-
- # Figure out the rewritten command
- if do_rewrite:
- if the_rest.endswith(';'):
- newcmd = '%s(%s);' % (ifun.rstrip(),the_rest[:-1])
- else:
- newcmd = '%s(%s)' % (ifun.rstrip(), the_rest)
- else:
- normal_handler = self.prefilter_manager.get_handler_by_name('normal')
- return normal_handler.handle(line_info)
-
- # Display the rewritten call
- if auto_rewrite:
- self.shell.auto_rewrite_input(newcmd)
-
- return newcmd
-
-
-class EmacsHandler(PrefilterHandler):
-
- handler_name = Unicode('emacs')
- esc_strings = List([])
-
- def handle(self, line_info):
- """Handle input lines marked by python-mode."""
-
- # Currently, nothing is done. Later more functionality can be added
- # here if needed.
-
- # The input cache shouldn't be updated
- return line_info.line
-
-
-#-----------------------------------------------------------------------------
-# Defaults
-#-----------------------------------------------------------------------------
-
-
-_default_checkers = [
- EmacsChecker,
- MacroChecker,
- IPyAutocallChecker,
- AssignmentChecker,
- AutoMagicChecker,
- PythonOpsChecker,
- AutocallChecker
-]
-
-_default_handlers = [
- PrefilterHandler,
- MacroHandler,
- MagicHandler,
- AutoHandler,
- EmacsHandler
-]
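Note: the checkers above only select a handler; the handler then rewrites the input line. A minimal standalone sketch of the transformation MagicHandler.handle performs (the logic and the escape constant are copied from the code above; rewrite_magic itself is not an IPython API):

ESC_MAGIC = "%"

def rewrite_magic(ifun, the_rest, pre_whitespace=""):
    # Mirrors MagicHandler.handle: strip the escape and emit a run_line_magic call.
    t_arg_s = ifun + " " + the_rest
    t_magic_name, _, t_magic_arg_s = t_arg_s.partition(" ")
    t_magic_name = t_magic_name.lstrip(ESC_MAGIC)
    return "%sget_ipython().run_line_magic(%r, %r)" % (pre_whitespace, t_magic_name, t_magic_arg_s)

print(rewrite_magic("timeit", "x**2"))
# -> get_ipython().run_line_magic('timeit', 'x**2')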
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_testing.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_testing.py
deleted file mode 100644
index c8191b3866f7104d2d02d32da9826c68ca17ac95..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_testing.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from __future__ import annotations
-
-from typing import Any, Awaitable, Generator
-
-from ._compat import DeprecatedAwaitableList, _warn_deprecation
-from ._eventloop import get_asynclib
-
-
-class TaskInfo:
- """
- Represents an asynchronous task.
-
- :ivar int id: the unique identifier of the task
- :ivar parent_id: the identifier of the parent task, if any
- :vartype parent_id: Optional[int]
- :ivar str name: the description of the task (if any)
- :ivar ~collections.abc.Coroutine coro: the coroutine object of the task
- """
-
- __slots__ = "_name", "id", "parent_id", "name", "coro"
-
- def __init__(
- self,
- id: int,
- parent_id: int | None,
- name: str | None,
- coro: Generator[Any, Any, Any] | Awaitable[Any],
- ):
- func = get_current_task
- self._name = f"{func.__module__}.{func.__qualname__}"
- self.id: int = id
- self.parent_id: int | None = parent_id
- self.name: str | None = name
- self.coro: Generator[Any, Any, Any] | Awaitable[Any] = coro
-
- def __eq__(self, other: object) -> bool:
- if isinstance(other, TaskInfo):
- return self.id == other.id
-
- return NotImplemented
-
- def __hash__(self) -> int:
- return hash(self.id)
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}(id={self.id!r}, name={self.name!r})"
-
- def __await__(self) -> Generator[None, None, TaskInfo]:
- _warn_deprecation(self)
- if False:
- yield
-
- return self
-
- def _unwrap(self) -> TaskInfo:
- return self
-
-
-def get_current_task() -> TaskInfo:
- """
- Return the current task.
-
- :return: a representation of the current task
-
- """
- return get_asynclib().get_current_task()
-
-
-def get_running_tasks() -> DeprecatedAwaitableList[TaskInfo]:
- """
- Return a list of running tasks in the current event loop.
-
- :return: a list of task info objects
-
- """
- tasks = get_asynclib().get_running_tasks()
- return DeprecatedAwaitableList(tasks, func=get_running_tasks)
-
-
-async def wait_all_tasks_blocked() -> None:
- """Wait until all other tasks are waiting for something."""
- await get_asynclib().wait_all_tasks_blocked()
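For context, the three helpers above are re-exported from the top-level anyio package. A small usage sketch (assumes anyio is installed):

import anyio

async def worker():
    await anyio.sleep(1)

async def main():
    print(anyio.get_current_task())           # TaskInfo for the current task
    async with anyio.create_task_group() as tg:
        tg.start_soon(worker)
        await anyio.wait_all_tasks_blocked()  # wait until the worker reaches its sleep
        print(anyio.get_running_tasks())      # includes the worker's TaskInfo
        tg.cancel_scope.cancel()

anyio.run(main)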
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/__init__.py
deleted file mode 100644
index f471804c76d3394bc055e14f13d1f114aaad2528..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import warnings
-with warnings.catch_warnings():
- warnings.filterwarnings("ignore", category=DeprecationWarning)
- try:
- __import__('pkg_resources').declare_namespace(__name__)
- except ImportError:
- import pkgutil
- __path__ = pkgutil.extend_path(__path__, __name__)
diff --git a/spaces/Superlang/ImageProcessor/annotator/openpose/__init__.py b/spaces/Superlang/ImageProcessor/annotator/openpose/__init__.py
deleted file mode 100644
index 088b70ca6673df64e38f5d5908eac98e09d2339b..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/openpose/__init__.py
+++ /dev/null
@@ -1,270 +0,0 @@
-# Openpose
-# Original from CMU https://github.com/CMU-Perceptual-Computing-Lab/openpose
-# 2nd Edited by https://github.com/Hzzone/pytorch-openpose
-# 3rd Edited by ControlNet
-# 4th Edited by ControlNet (added face and correct hands)
-# 5th Edited by ControlNet (Improved JSON serialization/deserialization, and lots of bug fixes)
-# This preprocessor is licensed by CMU for non-commercial use only.
-
-
-import os
-
-from annotator.base_annotator import BaseProcessor
-
-os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
-
-import json
-import torch
-import numpy as np
-from . import util
-from .body import Body, BodyResult, Keypoint
-from .hand import Hand
-from .face import Face
-
-from typing import NamedTuple, Tuple, List, Callable, Union
-
-body_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/body_pose_model.pth"
-hand_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/hand_pose_model.pth"
-face_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/facenet.pth"
-
-HandResult = List[Keypoint]
-FaceResult = List[Keypoint]
-
-
-class PoseResult(NamedTuple):
- body: BodyResult
- left_hand: Union[HandResult, None]
- right_hand: Union[HandResult, None]
- face: Union[FaceResult, None]
-
-
-def draw_poses(poses: List[PoseResult], H, W, draw_body=True, draw_hand=True, draw_face=True):
- """
- Draw the detected poses on an empty canvas.
-
- Args:
- poses (List[PoseResult]): A list of PoseResult objects containing the detected poses.
- H (int): The height of the canvas.
- W (int): The width of the canvas.
- draw_body (bool, optional): Whether to draw body keypoints. Defaults to True.
- draw_hand (bool, optional): Whether to draw hand keypoints. Defaults to True.
- draw_face (bool, optional): Whether to draw face keypoints. Defaults to True.
-
- Returns:
- numpy.ndarray: A 3D numpy array representing the canvas with the drawn poses.
- """
- canvas = np.zeros(shape=(H, W, 3), dtype=np.uint8)
-
- for pose in poses:
- if draw_body:
- canvas = util.draw_bodypose(canvas, pose.body.keypoints)
-
- if draw_hand:
- canvas = util.draw_handpose(canvas, pose.left_hand)
- canvas = util.draw_handpose(canvas, pose.right_hand)
-
- if draw_face:
- canvas = util.draw_facepose(canvas, pose.face)
-
- return canvas
-
-
-def encode_poses_as_json(poses: List[PoseResult], canvas_height: int, canvas_width: int) -> str:
- """ Encode the pose as a JSON string following openpose JSON output format:
- https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/02_output.md
- """
-
- def compress_keypoints(keypoints: Union[List[Keypoint], None]) -> Union[List[float], None]:
- if not keypoints:
- return None
-
- return [
- value
- for keypoint in keypoints
- for value in (
- [float(keypoint.x), float(keypoint.y), 1.0]
- if keypoint is not None
- else [0.0, 0.0, 0.0]
- )
- ]
-
- return json.dumps({
- 'people': [
- {
- 'pose_keypoints_2d': compress_keypoints(pose.body.keypoints),
- "face_keypoints_2d": compress_keypoints(pose.face),
- "hand_left_keypoints_2d": compress_keypoints(pose.left_hand),
- "hand_right_keypoints_2d": compress_keypoints(pose.right_hand),
- }
- for pose in poses
- ],
- 'canvas_height': canvas_height,
- 'canvas_width': canvas_width,
- }, indent=4)
-
-
-class OpenposeDetector(BaseProcessor):
- """
- A class for detecting human poses in images using the Openpose model.
-
- Attributes:
- model_dir (str): Path to the directory where the pose models are stored.
- """
-
- def __init__(self, **kwargs):
- """
-        Initialize the device (defaults to CPU) and set up the model paths.
- """
- super().__init__(**kwargs)
- self.model_dir = os.path.join(self.models_path, "openpose")
- self.body_estimation = None
- self.hand_estimation = None
- self.face_estimation = None
-
- def load_model(self):
- """
- Load the Openpose body, hand, and face models.
- """
- body_modelpath = os.path.join(self.model_dir, "body_pose_model.pth")
- hand_modelpath = os.path.join(self.model_dir, "hand_pose_model.pth")
- face_modelpath = os.path.join(self.model_dir, "facenet.pth")
-
- if not os.path.exists(body_modelpath):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(body_model_path, model_dir=self.model_dir)
-
- if not os.path.exists(hand_modelpath):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(hand_model_path, model_dir=self.model_dir)
-
- if not os.path.exists(face_modelpath):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(face_model_path, model_dir=self.model_dir)
-
- self.body_estimation = Body(body_modelpath)
- self.hand_estimation = Hand(hand_modelpath)
- self.face_estimation = Face(face_modelpath)
-
- def unload_model(self):
- """
- Unload the Openpose models by moving them to the CPU.
- """
- if self.body_estimation is not None:
- self.body_estimation.model.to("cpu")
- self.hand_estimation.model.to("cpu")
- self.face_estimation.model.to("cpu")
-
- def detect_hands(self, body: BodyResult, oriImg) -> Tuple[Union[HandResult, None], Union[HandResult, None]]:
- left_hand = None
- right_hand = None
- H, W, _ = oriImg.shape
- for x, y, w, is_left in util.handDetect(body, oriImg):
- peaks = self.hand_estimation(oriImg[y:y + w, x:x + w, :]).astype(np.float32)
- if peaks.ndim == 2 and peaks.shape[1] == 2:
- peaks[:, 0] = np.where(peaks[:, 0] < 1e-6, -1, peaks[:, 0] + x) / float(W)
- peaks[:, 1] = np.where(peaks[:, 1] < 1e-6, -1, peaks[:, 1] + y) / float(H)
-
- hand_result = [
- Keypoint(x=peak[0], y=peak[1])
- for peak in peaks
- ]
-
- if is_left:
- left_hand = hand_result
- else:
- right_hand = hand_result
-
- return left_hand, right_hand
-
- def detect_face(self, body: BodyResult, oriImg) -> Union[FaceResult, None]:
- face = util.faceDetect(body, oriImg)
- if face is None:
- return None
-
- x, y, w = face
- H, W, _ = oriImg.shape
- heatmaps = self.face_estimation(oriImg[y:y + w, x:x + w, :])
- peaks = self.face_estimation.compute_peaks_from_heatmaps(heatmaps).astype(np.float32)
- if peaks.ndim == 2 and peaks.shape[1] == 2:
- peaks[:, 0] = np.where(peaks[:, 0] < 1e-6, -1, peaks[:, 0] + x) / float(W)
- peaks[:, 1] = np.where(peaks[:, 1] < 1e-6, -1, peaks[:, 1] + y) / float(H)
- return [
- Keypoint(x=peak[0], y=peak[1])
- for peak in peaks
- ]
-
- return None
-
- def detect_poses(self, oriImg, include_hand=False, include_face=False) -> List[PoseResult]:
- """
- Detect poses in the given image.
- Args:
- oriImg (numpy.ndarray): The input image for pose detection.
- include_hand (bool, optional): Whether to include hand detection. Defaults to False.
- include_face (bool, optional): Whether to include face detection. Defaults to False.
-
- Returns:
- List[PoseResult]: A list of PoseResult objects containing the detected poses.
- """
- if self.body_estimation is None:
- self.load_model()
-
- self.body_estimation.model.to(self.device)
- self.hand_estimation.model.to(self.device)
- self.face_estimation.model.to(self.device)
-
- self.body_estimation.cn_device = self.device
- self.hand_estimation.cn_device = self.device
- self.face_estimation.cn_device = self.device
-
- oriImg = oriImg[:, :, ::-1].copy()
- H, W, C = oriImg.shape
- with torch.no_grad():
- candidate, subset = self.body_estimation(oriImg)
- bodies = self.body_estimation.format_body_result(candidate, subset)
-
- results = []
- for body in bodies:
- left_hand, right_hand, face = (None,) * 3
- if include_hand:
- left_hand, right_hand = self.detect_hands(body, oriImg)
- if include_face:
- face = self.detect_face(body, oriImg)
-
- results.append(PoseResult(BodyResult(
- keypoints=[
- Keypoint(
- x=keypoint.x / float(W),
- y=keypoint.y / float(H)
- ) if keypoint is not None else None
- for keypoint in body.keypoints
- ],
- total_score=body.total_score,
- total_parts=body.total_parts
- ), left_hand, right_hand, face))
-
- return results
-
- def __call__(
- self, oriImg, include_body=True, include_hand=False, include_face=False,
- json_pose_callback: Callable[[str], None] = None,
- ):
- """
- Detect and draw poses in the given image.
-
- Args:
- oriImg (numpy.ndarray): The input image for pose detection and drawing.
- include_body (bool, optional): Whether to include body keypoints. Defaults to True.
- include_hand (bool, optional): Whether to include hand keypoints. Defaults to False.
- include_face (bool, optional): Whether to include face keypoints. Defaults to False.
- json_pose_callback (Callable, optional): A callback that accepts the pose JSON string.
-
- Returns:
- numpy.ndarray: The image with detected and drawn poses.
- """
- H, W, _ = oriImg.shape
- poses = self.detect_poses(oriImg, include_hand, include_face)
- if json_pose_callback:
- json_pose_callback(encode_poses_as_json(poses, H, W))
- return draw_poses(poses, H, W, draw_body=include_body, draw_hand=include_hand, draw_face=include_face)
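A hedged usage sketch of the detector above. The constructor keyword arguments (models_path, device) come from BaseProcessor, which is defined elsewhere, so treat them as assumptions rather than the actual API:

import cv2

detector = OpenposeDetector(models_path="./models", device="cpu")  # assumed BaseProcessor kwargs
img = cv2.imread("person.jpg")                                      # any HxWx3 uint8 image
canvas = detector(
    img,
    include_body=True,
    include_hand=True,
    include_face=False,
    json_pose_callback=print,  # or write the JSON string to a file
)
cv2.imwrite("pose_canvas.png", canvas)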
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py
deleted file mode 100644
index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained=None,
- backbone=dict(
- type='UNet',
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False),
- decode_head=dict(
- type='PSPHead',
- in_channels=64,
- in_index=4,
- channels=16,
- pool_scales=(1, 2, 3, 6),
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=128,
- in_index=3,
- channels=64,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='slide', crop_size=256, stride=170))
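The test_cfg above runs sliding-window ('slide') inference with 256-pixel crops and a stride of 170. A rough sketch of how many crops that implies for a given input size (the real window logic lives inside mmseg's EncoderDecoder, so this is only an approximation for intuition):

import math

def count_windows(h_img, w_img, crop=256, stride=170):
    # Crops per axis; the last crop is clamped to the image border.
    h_grids = max(math.ceil((h_img - crop) / stride), 0) + 1
    w_grids = max(math.ceil((w_img - crop) / stride), 0) + 1
    return h_grids * w_grids

print(count_windows(512, 512))  # 9 overlapping 256x256 crops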
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/utils/logger.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/utils/logger.py
deleted file mode 100644
index 4149d9eda3dfef07490352d22ac40c42460315e4..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/utils/logger.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import logging
-
-from annotator.uniformer.mmcv.utils import get_logger
-
-
-def get_root_logger(log_file=None, log_level=logging.INFO):
- """Get the root logger.
-
- The logger will be initialized if it has not been initialized. By default a
- StreamHandler will be added. If `log_file` is specified, a FileHandler will
- also be added. The name of the root logger is the top-level package name,
- e.g., "mmseg".
-
- Args:
- log_file (str | None): The log filename. If specified, a FileHandler
- will be added to the root logger.
- log_level (int): The root logger level. Note that only the process of
- rank 0 is affected, while other processes will set the level to
- "Error" and be silent most of the time.
-
- Returns:
- logging.Logger: The root logger.
- """
-
- logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level)
-
- return logger
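A usage sketch of the helper above (the log file path is illustrative):

import logging

logger = get_root_logger(log_file="work_dirs/run.log", log_level=logging.INFO)
logger.info("training started")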
diff --git a/spaces/TIMBOVILL/RVC-Noobie/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/TIMBOVILL/RVC-Noobie/lib/infer_pack/modules/F0Predictor/F0Predictor.py
deleted file mode 100644
index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000
--- a/spaces/TIMBOVILL/RVC-Noobie/lib/infer_pack/modules/F0Predictor/F0Predictor.py
+++ /dev/null
@@ -1,16 +0,0 @@
-class F0Predictor(object):
- def compute_f0(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length]
- """
- pass
-
- def compute_f0_uv(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length],uv:[signal_length//hop_length]
- """
- pass
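The class above is only an interface; concrete predictors override both methods. A toy illustration of the contract (a constant 100 Hz contour with everything marked voiced, not a real pitch tracker):

import numpy as np

class ConstantF0Predictor(F0Predictor):
    def compute_f0(self, wav, p_len):
        # f0: p_len values, i.e. signal_length // hop_length frames
        return np.full(p_len, 100.0)

    def compute_f0_uv(self, wav, p_len):
        f0 = self.compute_f0(wav, p_len)
        uv = np.ones(p_len)  # voiced/unvoiced mask, all voiced here
        return f0, uv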
diff --git a/spaces/TandCAcceptMe/face-swap-docker/chain_img_processor/image.py b/spaces/TandCAcceptMe/face-swap-docker/chain_img_processor/image.py
deleted file mode 100644
index 868450f8dadf02646707eb86e1ffe8f688ca0eb2..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/chain_img_processor/image.py
+++ /dev/null
@@ -1,176 +0,0 @@
-from jaa import JaaCore
-from roop.utilities import get_device
-
-
-from typing import Any
-
-version = "4.0.0"
-
-class ChainImgProcessor(JaaCore):
-
- def __init__(self):
- JaaCore.__init__(self)
-
- self.processors:dict = {
- }
-
- self.processors_objects:dict[str,list[ChainImgPlugin]] = {}
-
- self.default_chain = ""
- self.init_on_start = ""
-
- self.inited_processors = []
-
- self.is_demo_row_render = False
-
- def process_plugin_manifest(self, modname, manifest):
- # adding processors from plugin manifest
- if "img_processor" in manifest: # process commands
- for cmd in manifest["img_processor"].keys():
- self.processors[cmd] = manifest["img_processor"][cmd]
-
- return manifest
-
- def init_with_plugins(self):
- self.init_plugins(["core"])
- self.display_init_info()
-
- #self.init_translator_engine(self.default_translator)
- init_on_start_arr = self.init_on_start.split(",")
- for proc_id in init_on_start_arr:
- self.init_processor(proc_id)
-
- def run_chain(self, img, params:dict[str,Any] = None, chain:str = None, thread_index:int = 0):
- if chain is None:
- chain = self.default_chain
- if params is None:
- params = {}
- params["_thread_index"] = thread_index
- chain_ar = chain.split(",")
- # init all not inited processors first
- for proc_id in chain_ar:
- if proc_id != "":
- if not proc_id in self.inited_processors:
- self.init_processor(proc_id)
-
-
-
- # run processing
- if self.is_demo_row_render:
- import cv2
- import numpy as np
- height, width, channels = img.shape
- img_blank = np.zeros((height+30, width*(1+len(chain_ar)), 3), dtype=np.uint8)
- img_blank.fill(255)
-
- y = 30
- x = 0
- img_blank[y:y + height, x:x + width] = img
-
- # Set the font scale and thickness
- font_scale = 1
- thickness = 2
-
- # Set the font face to a monospace font
- font_face = cv2.FONT_HERSHEY_SIMPLEX
-
- cv2.putText(img_blank, "original", (x+4, y-7), font_face, font_scale, (0, 0, 0), thickness)
-
-
- i = 0
- for proc_id in chain_ar:
- i += 1
- if proc_id != "":
- #img = self.processors[proc_id][1](self, img, params) # params can be modified inside
- y = 30
- img = self.processors_objects[proc_id][thread_index].process(img,params)
- if self.is_demo_row_render:
- x = width*i
- img_blank[y:y + height, x:x + width] = img
- cv2.putText(img_blank, proc_id, (x + 4, y - 7), font_face, font_scale, (0, 0, 0), thickness)
-
- if self.is_demo_row_render:
- return img_blank, params
-
- return img, params
-
-    # ---------------- processor / thread-chain setup ----------------
- def fill_processors_for_thread_chains(self, threads:int = 1, chain:str = None):
- if chain is None:
- chain = self.default_chain
-
- chain_ar = chain.split(",")
- # init all not initialized processors first
- for processor_id in chain_ar:
- if processor_id != "":
- if self.processors_objects.get(processor_id) is None:
- self.processors_objects[processor_id] = []
- while len(self.processors_objects[processor_id]) < threads:
- self.add_processor_to_list(processor_id)
-
- def add_processor_to_list(self, processor_id: str):
- obj = self.processors[processor_id](self)
- obj.init_plugin()
- if self.processors_objects.get(processor_id) is None:
- self.processors_objects[processor_id] = []
- self.processors_objects[processor_id].append(obj)
- def init_processor(self, processor_id: str):
- if processor_id == "": # blank line case
- return
-
- if processor_id in self.inited_processors:
- return
-
- try:
- if self.verbose:
- self.print_blue("TRY: init processor plugin '{0}'...".format(processor_id))
- self.add_processor_to_list(processor_id)
- self.inited_processors.append(processor_id)
- if self.verbose:
- self.print_blue("SUCCESS: '{0}' initialized!".format(processor_id))
-
- except Exception as e:
- self.print_error("Error init processor plugin {0}...".format(processor_id), e)
-
- # ------------ formatting stuff -------------------
- def display_init_info(self):
- if self.verbose:
- print("ChainImgProcessor v{0}:".format(version))
- self.format_print_key_list("processors:", self.processors.keys())
-
- def format_print_key_list(self, key:str, value:list):
-        print(key, ", ".join(value))
-
- def print_error(self,err_txt,e:Exception = None):
-        print(err_txt)
- # if e != None:
- # cprint(e,"red")
- import traceback
- traceback.print_exc()
-
- def print_red(self,txt):
- print(txt)
-
- def print_blue(self, txt):
- print(txt)
-
-class ChainImgPlugin:
-
- device = 'cpu'
-
- def __init__(self, core: ChainImgProcessor):
- self.core = core
- self.device = get_device()
-
- def init_plugin(self): # here you can init something. Called once
- pass
- def process(self, img, params:dict): # process img. Called multiple
- return img
-
-_img_processor:ChainImgProcessor = None
-def get_single_image_processor() -> ChainImgProcessor:
- global _img_processor
- if _img_processor is None:
- _img_processor = ChainImgProcessor()
- _img_processor.init_with_plugins()
- return _img_processor
\ No newline at end of file
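A usage sketch of the singleton accessor above; "face_swapper" is a hypothetical processor id that would have to be registered through a plugin manifest's "img_processor" section:

import cv2

processor = get_single_image_processor()
img = cv2.imread("input.jpg")
out_img, params = processor.run_chain(img, params={}, chain="face_swapper")
cv2.imwrite("output.jpg", out_img)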
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/deprecation.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/deprecation.py
deleted file mode 100644
index 72bd6f25a554b303d0bf5028145cf3a5c71b3e06..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/deprecation.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""
-A module that implements tooling to enable easy warnings about deprecations.
-"""
-
-import logging
-import warnings
-from typing import Any, Optional, TextIO, Type, Union
-
-from pip._vendor.packaging.version import parse
-
-from pip import __version__ as current_version # NOTE: tests patch this name.
-
-DEPRECATION_MSG_PREFIX = "DEPRECATION: "
-
-
-class PipDeprecationWarning(Warning):
- pass
-
-
-_original_showwarning: Any = None
-
-
-# Warnings <-> Logging Integration
-def _showwarning(
- message: Union[Warning, str],
- category: Type[Warning],
- filename: str,
- lineno: int,
- file: Optional[TextIO] = None,
- line: Optional[str] = None,
-) -> None:
- if file is not None:
- if _original_showwarning is not None:
- _original_showwarning(message, category, filename, lineno, file, line)
- elif issubclass(category, PipDeprecationWarning):
- # We use a specially named logger which will handle all of the
- # deprecation messages for pip.
- logger = logging.getLogger("pip._internal.deprecations")
- logger.warning(message)
- else:
- _original_showwarning(message, category, filename, lineno, file, line)
-
-
-def install_warning_logger() -> None:
- # Enable our Deprecation Warnings
- warnings.simplefilter("default", PipDeprecationWarning, append=True)
-
- global _original_showwarning
-
- if _original_showwarning is None:
- _original_showwarning = warnings.showwarning
- warnings.showwarning = _showwarning
-
-
-def deprecated(
- *,
- reason: str,
- replacement: Optional[str],
- gone_in: Optional[str],
- feature_flag: Optional[str] = None,
- issue: Optional[int] = None,
-) -> None:
- """Helper to deprecate existing functionality.
-
- reason:
- Textual reason shown to the user about why this functionality has
- been deprecated. Should be a complete sentence.
- replacement:
- Textual suggestion shown to the user about what alternative
- functionality they can use.
- gone_in:
-        The version of pip in which this functionality should get removed.
- Raises an error if pip's current version is greater than or equal to
- this.
- feature_flag:
- Command-line flag of the form --use-feature={feature_flag} for testing
- upcoming functionality.
- issue:
- Issue number on the tracker that would serve as a useful place for
- users to find related discussion and provide feedback.
- """
-
- # Determine whether or not the feature is already gone in this version.
- is_gone = gone_in is not None and parse(current_version) >= parse(gone_in)
-
- message_parts = [
- (reason, f"{DEPRECATION_MSG_PREFIX}{{}}"),
- (
- gone_in,
- "pip {} will enforce this behaviour change."
- if not is_gone
- else "Since pip {}, this is no longer supported.",
- ),
- (
- replacement,
- "A possible replacement is {}.",
- ),
- (
- feature_flag,
- "You can use the flag --use-feature={} to test the upcoming behaviour."
- if not is_gone
- else None,
- ),
- (
- issue,
- "Discussion can be found at https://github.com/pypa/pip/issues/{}",
- ),
- ]
-
- message = " ".join(
- format_str.format(value)
- for value, format_str in message_parts
- if format_str is not None and value is not None
- )
-
- # Raise as an error if this behaviour is deprecated.
- if is_gone:
- raise PipDeprecationWarning(message)
-
- warnings.warn(message, category=PipDeprecationWarning, stacklevel=2)
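An illustrative call of the helper above (all values are made up for the example); it emits a PipDeprecationWarning, or raises one outright once the running pip version reaches gone_in:

deprecated(
    reason="--build-option is deprecated.",
    replacement="--config-settings",
    gone_in="25.0",
    issue=12345,
)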
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/macos.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/macos.py
deleted file mode 100644
index a753e2a3aa24383ec6ac8fd125a0120c1d6f9029..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/macos.py
+++ /dev/null
@@ -1,91 +0,0 @@
-"""macOS."""
-from __future__ import annotations
-
-import os.path
-
-from .api import PlatformDirsABC
-
-
-class MacOS(PlatformDirsABC):
- """
-    Platform directories for the macOS operating system. Follows the guidance from
-    the Apple documentation. Makes use of the `appname`, `version` and
-    `ensure_exists` settings.
- """
-
- @property
- def user_data_dir(self) -> str:
- """:return: data directory tied to the user, e.g. ``~/Library/Application Support/$appname/$version``"""
- return self._append_app_name_and_version(os.path.expanduser("~/Library/Application Support")) # noqa: PTH111
-
- @property
- def site_data_dir(self) -> str:
- """:return: data directory shared by users, e.g. ``/Library/Application Support/$appname/$version``"""
- return self._append_app_name_and_version("/Library/Application Support")
-
- @property
- def user_config_dir(self) -> str:
- """:return: config directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def site_config_dir(self) -> str:
- """:return: config directory shared by the users, same as `site_data_dir`"""
- return self.site_data_dir
-
- @property
- def user_cache_dir(self) -> str:
- """:return: cache directory tied to the user, e.g. ``~/Library/Caches/$appname/$version``"""
- return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches")) # noqa: PTH111
-
- @property
- def site_cache_dir(self) -> str:
- """:return: cache directory shared by users, e.g. ``/Library/Caches/$appname/$version``"""
- return self._append_app_name_and_version("/Library/Caches")
-
- @property
- def user_state_dir(self) -> str:
- """:return: state directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def user_log_dir(self) -> str:
- """:return: log directory tied to the user, e.g. ``~/Library/Logs/$appname/$version``"""
- return self._append_app_name_and_version(os.path.expanduser("~/Library/Logs")) # noqa: PTH111
-
- @property
- def user_documents_dir(self) -> str:
- """:return: documents directory tied to the user, e.g. ``~/Documents``"""
- return os.path.expanduser("~/Documents") # noqa: PTH111
-
- @property
- def user_downloads_dir(self) -> str:
- """:return: downloads directory tied to the user, e.g. ``~/Downloads``"""
- return os.path.expanduser("~/Downloads") # noqa: PTH111
-
- @property
- def user_pictures_dir(self) -> str:
- """:return: pictures directory tied to the user, e.g. ``~/Pictures``"""
- return os.path.expanduser("~/Pictures") # noqa: PTH111
-
- @property
- def user_videos_dir(self) -> str:
- """:return: videos directory tied to the user, e.g. ``~/Movies``"""
- return os.path.expanduser("~/Movies") # noqa: PTH111
-
- @property
- def user_music_dir(self) -> str:
- """:return: music directory tied to the user, e.g. ``~/Music``"""
- return os.path.expanduser("~/Music") # noqa: PTH111
-
- @property
- def user_runtime_dir(self) -> str:
- """:return: runtime directory tied to the user, e.g. ``~/Library/Caches/TemporaryItems/$appname/$version``"""
- return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches/TemporaryItems")) # noqa: PTH111
-
-
-__all__ = [
- "MacOS",
-]
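A usage sketch of the class above; appname and version are keyword arguments accepted by the PlatformDirsABC base class:

dirs = MacOS(appname="MyApp", version="1.2")
print(dirs.user_data_dir)   # ~/Library/Application Support/MyApp/1.2
print(dirs.user_cache_dir)  # ~/Library/Caches/MyApp/1.2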
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/lexers/_mapping.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/lexers/_mapping.py
deleted file mode 100644
index de6a0153b777f255a754c1ca9f8e4dc55cd3934b..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/lexers/_mapping.py
+++ /dev/null
@@ -1,559 +0,0 @@
-# Automatically generated by scripts/gen_mapfiles.py.
-# DO NOT EDIT BY HAND; run `tox -e mapfiles` instead.
-
-LEXERS = {
- 'ABAPLexer': ('pip._vendor.pygments.lexers.business', 'ABAP', ('abap',), ('*.abap', '*.ABAP'), ('text/x-abap',)),
- 'AMDGPULexer': ('pip._vendor.pygments.lexers.amdgpu', 'AMDGPU', ('amdgpu',), ('*.isa',), ()),
- 'APLLexer': ('pip._vendor.pygments.lexers.apl', 'APL', ('apl',), ('*.apl', '*.aplf', '*.aplo', '*.apln', '*.aplc', '*.apli', '*.dyalog'), ()),
- 'AbnfLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'ABNF', ('abnf',), ('*.abnf',), ('text/x-abnf',)),
- 'ActionScript3Lexer': ('pip._vendor.pygments.lexers.actionscript', 'ActionScript 3', ('actionscript3', 'as3'), ('*.as',), ('application/x-actionscript3', 'text/x-actionscript3', 'text/actionscript3')),
- 'ActionScriptLexer': ('pip._vendor.pygments.lexers.actionscript', 'ActionScript', ('actionscript', 'as'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')),
- 'AdaLexer': ('pip._vendor.pygments.lexers.ada', 'Ada', ('ada', 'ada95', 'ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)),
- 'AdlLexer': ('pip._vendor.pygments.lexers.archetype', 'ADL', ('adl',), ('*.adl', '*.adls', '*.adlf', '*.adlx'), ()),
- 'AgdaLexer': ('pip._vendor.pygments.lexers.haskell', 'Agda', ('agda',), ('*.agda',), ('text/x-agda',)),
- 'AheuiLexer': ('pip._vendor.pygments.lexers.esoteric', 'Aheui', ('aheui',), ('*.aheui',), ()),
- 'AlloyLexer': ('pip._vendor.pygments.lexers.dsls', 'Alloy', ('alloy',), ('*.als',), ('text/x-alloy',)),
- 'AmbientTalkLexer': ('pip._vendor.pygments.lexers.ambient', 'AmbientTalk', ('ambienttalk', 'ambienttalk/2', 'at'), ('*.at',), ('text/x-ambienttalk',)),
- 'AmplLexer': ('pip._vendor.pygments.lexers.ampl', 'Ampl', ('ampl',), ('*.run',), ()),
- 'Angular2HtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML + Angular2', ('html+ng2',), ('*.ng2',), ()),
- 'Angular2Lexer': ('pip._vendor.pygments.lexers.templates', 'Angular2', ('ng2',), (), ()),
- 'AntlrActionScriptLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-actionscript', 'antlr-as'), ('*.G', '*.g'), ()),
- 'AntlrCSharpLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()),
- 'AntlrCppLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()),
- 'AntlrJavaLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Java Target', ('antlr-java',), ('*.G', '*.g'), ()),
- 'AntlrLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR', ('antlr',), (), ()),
- 'AntlrObjectiveCLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With ObjectiveC Target', ('antlr-objc',), ('*.G', '*.g'), ()),
- 'AntlrPerlLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Perl Target', ('antlr-perl',), ('*.G', '*.g'), ()),
- 'AntlrPythonLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Python Target', ('antlr-python',), ('*.G', '*.g'), ()),
- 'AntlrRubyLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Ruby Target', ('antlr-ruby', 'antlr-rb'), ('*.G', '*.g'), ()),
- 'ApacheConfLexer': ('pip._vendor.pygments.lexers.configs', 'ApacheConf', ('apacheconf', 'aconf', 'apache'), ('.htaccess', 'apache.conf', 'apache2.conf'), ('text/x-apacheconf',)),
- 'AppleScriptLexer': ('pip._vendor.pygments.lexers.scripting', 'AppleScript', ('applescript',), ('*.applescript',), ()),
- 'ArduinoLexer': ('pip._vendor.pygments.lexers.c_like', 'Arduino', ('arduino',), ('*.ino',), ('text/x-arduino',)),
- 'ArrowLexer': ('pip._vendor.pygments.lexers.arrow', 'Arrow', ('arrow',), ('*.arw',), ()),
- 'ArturoLexer': ('pip._vendor.pygments.lexers.arturo', 'Arturo', ('arturo', 'art'), ('*.art',), ()),
- 'AscLexer': ('pip._vendor.pygments.lexers.asc', 'ASCII armored', ('asc', 'pem'), ('*.asc', '*.pem', 'id_dsa', 'id_ecdsa', 'id_ecdsa_sk', 'id_ed25519', 'id_ed25519_sk', 'id_rsa'), ('application/pgp-keys', 'application/pgp-encrypted', 'application/pgp-signature')),
- 'AspectJLexer': ('pip._vendor.pygments.lexers.jvm', 'AspectJ', ('aspectj',), ('*.aj',), ('text/x-aspectj',)),
- 'AsymptoteLexer': ('pip._vendor.pygments.lexers.graphics', 'Asymptote', ('asymptote', 'asy'), ('*.asy',), ('text/x-asymptote',)),
- 'AugeasLexer': ('pip._vendor.pygments.lexers.configs', 'Augeas', ('augeas',), ('*.aug',), ()),
- 'AutoItLexer': ('pip._vendor.pygments.lexers.automation', 'AutoIt', ('autoit',), ('*.au3',), ('text/x-autoit',)),
- 'AutohotkeyLexer': ('pip._vendor.pygments.lexers.automation', 'autohotkey', ('autohotkey', 'ahk'), ('*.ahk', '*.ahkl'), ('text/x-autohotkey',)),
- 'AwkLexer': ('pip._vendor.pygments.lexers.textedit', 'Awk', ('awk', 'gawk', 'mawk', 'nawk'), ('*.awk',), ('application/x-awk',)),
- 'BBCBasicLexer': ('pip._vendor.pygments.lexers.basic', 'BBC Basic', ('bbcbasic',), ('*.bbc',), ()),
- 'BBCodeLexer': ('pip._vendor.pygments.lexers.markup', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)),
- 'BCLexer': ('pip._vendor.pygments.lexers.algebra', 'BC', ('bc',), ('*.bc',), ()),
- 'BSTLexer': ('pip._vendor.pygments.lexers.bibtex', 'BST', ('bst', 'bst-pybtex'), ('*.bst',), ()),
- 'BareLexer': ('pip._vendor.pygments.lexers.bare', 'BARE', ('bare',), ('*.bare',), ()),
- 'BaseMakefileLexer': ('pip._vendor.pygments.lexers.make', 'Base Makefile', ('basemake',), (), ()),
- 'BashLexer': ('pip._vendor.pygments.lexers.shell', 'Bash', ('bash', 'sh', 'ksh', 'zsh', 'shell'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '*.zsh', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc', '.kshrc', 'kshrc', 'PKGBUILD'), ('application/x-sh', 'application/x-shellscript', 'text/x-shellscript')),
- 'BashSessionLexer': ('pip._vendor.pygments.lexers.shell', 'Bash Session', ('console', 'shell-session'), ('*.sh-session', '*.shell-session'), ('application/x-shell-session', 'application/x-sh-session')),
- 'BatchLexer': ('pip._vendor.pygments.lexers.shell', 'Batchfile', ('batch', 'bat', 'dosbatch', 'winbatch'), ('*.bat', '*.cmd'), ('application/x-dos-batch',)),
- 'BddLexer': ('pip._vendor.pygments.lexers.bdd', 'Bdd', ('bdd',), ('*.feature',), ('text/x-bdd',)),
- 'BefungeLexer': ('pip._vendor.pygments.lexers.esoteric', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)),
- 'BerryLexer': ('pip._vendor.pygments.lexers.berry', 'Berry', ('berry', 'be'), ('*.be',), ('text/x-berry', 'application/x-berry')),
- 'BibTeXLexer': ('pip._vendor.pygments.lexers.bibtex', 'BibTeX', ('bibtex', 'bib'), ('*.bib',), ('text/x-bibtex',)),
- 'BlitzBasicLexer': ('pip._vendor.pygments.lexers.basic', 'BlitzBasic', ('blitzbasic', 'b3d', 'bplus'), ('*.bb', '*.decls'), ('text/x-bb',)),
- 'BlitzMaxLexer': ('pip._vendor.pygments.lexers.basic', 'BlitzMax', ('blitzmax', 'bmax'), ('*.bmx',), ('text/x-bmx',)),
- 'BnfLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'BNF', ('bnf',), ('*.bnf',), ('text/x-bnf',)),
- 'BoaLexer': ('pip._vendor.pygments.lexers.boa', 'Boa', ('boa',), ('*.boa',), ()),
- 'BooLexer': ('pip._vendor.pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)),
- 'BoogieLexer': ('pip._vendor.pygments.lexers.verification', 'Boogie', ('boogie',), ('*.bpl',), ()),
- 'BrainfuckLexer': ('pip._vendor.pygments.lexers.esoteric', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)),
- 'BugsLexer': ('pip._vendor.pygments.lexers.modeling', 'BUGS', ('bugs', 'winbugs', 'openbugs'), ('*.bug',), ()),
- 'CAmkESLexer': ('pip._vendor.pygments.lexers.esoteric', 'CAmkES', ('camkes', 'idl4'), ('*.camkes', '*.idl4'), ()),
- 'CLexer': ('pip._vendor.pygments.lexers.c_cpp', 'C', ('c',), ('*.c', '*.h', '*.idc', '*.x[bp]m'), ('text/x-chdr', 'text/x-csrc', 'image/x-xbitmap', 'image/x-xpixmap')),
- 'CMakeLexer': ('pip._vendor.pygments.lexers.make', 'CMake', ('cmake',), ('*.cmake', 'CMakeLists.txt'), ('text/x-cmake',)),
- 'CObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'c-objdump', ('c-objdump',), ('*.c-objdump',), ('text/x-c-objdump',)),
- 'CPSALexer': ('pip._vendor.pygments.lexers.lisp', 'CPSA', ('cpsa',), ('*.cpsa',), ()),
- 'CSSUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'CSS+UL4', ('css+ul4',), ('*.cssul4',), ()),
- 'CSharpAspxLexer': ('pip._vendor.pygments.lexers.dotnet', 'aspx-cs', ('aspx-cs',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
- 'CSharpLexer': ('pip._vendor.pygments.lexers.dotnet', 'C#', ('csharp', 'c#', 'cs'), ('*.cs',), ('text/x-csharp',)),
- 'Ca65Lexer': ('pip._vendor.pygments.lexers.asm', 'ca65 assembler', ('ca65',), ('*.s',), ()),
- 'CadlLexer': ('pip._vendor.pygments.lexers.archetype', 'cADL', ('cadl',), ('*.cadl',), ()),
- 'CapDLLexer': ('pip._vendor.pygments.lexers.esoteric', 'CapDL', ('capdl',), ('*.cdl',), ()),
- 'CapnProtoLexer': ('pip._vendor.pygments.lexers.capnproto', "Cap'n Proto", ('capnp',), ('*.capnp',), ()),
- 'CarbonLexer': ('pip._vendor.pygments.lexers.carbon', 'Carbon', ('carbon',), ('*.carbon',), ('text/x-carbon',)),
- 'CbmBasicV2Lexer': ('pip._vendor.pygments.lexers.basic', 'CBM BASIC V2', ('cbmbas',), ('*.bas',), ()),
- 'CddlLexer': ('pip._vendor.pygments.lexers.cddl', 'CDDL', ('cddl',), ('*.cddl',), ('text/x-cddl',)),
- 'CeylonLexer': ('pip._vendor.pygments.lexers.jvm', 'Ceylon', ('ceylon',), ('*.ceylon',), ('text/x-ceylon',)),
- 'Cfengine3Lexer': ('pip._vendor.pygments.lexers.configs', 'CFEngine3', ('cfengine3', 'cf3'), ('*.cf',), ()),
- 'ChaiscriptLexer': ('pip._vendor.pygments.lexers.scripting', 'ChaiScript', ('chaiscript', 'chai'), ('*.chai',), ('text/x-chaiscript', 'application/x-chaiscript')),
- 'ChapelLexer': ('pip._vendor.pygments.lexers.chapel', 'Chapel', ('chapel', 'chpl'), ('*.chpl',), ()),
- 'CharmciLexer': ('pip._vendor.pygments.lexers.c_like', 'Charmci', ('charmci',), ('*.ci',), ()),
- 'CheetahHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Cheetah', ('html+cheetah', 'html+spitfire', 'htmlcheetah'), (), ('text/html+cheetah', 'text/html+spitfire')),
- 'CheetahJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Cheetah', ('javascript+cheetah', 'js+cheetah', 'javascript+spitfire', 'js+spitfire'), (), ('application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire')),
- 'CheetahLexer': ('pip._vendor.pygments.lexers.templates', 'Cheetah', ('cheetah', 'spitfire'), ('*.tmpl', '*.spt'), ('application/x-cheetah', 'application/x-spitfire')),
- 'CheetahXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')),
- 'CirruLexer': ('pip._vendor.pygments.lexers.webmisc', 'Cirru', ('cirru',), ('*.cirru',), ('text/x-cirru',)),
- 'ClayLexer': ('pip._vendor.pygments.lexers.c_like', 'Clay', ('clay',), ('*.clay',), ('text/x-clay',)),
- 'CleanLexer': ('pip._vendor.pygments.lexers.clean', 'Clean', ('clean',), ('*.icl', '*.dcl'), ()),
- 'ClojureLexer': ('pip._vendor.pygments.lexers.jvm', 'Clojure', ('clojure', 'clj'), ('*.clj', '*.cljc'), ('text/x-clojure', 'application/x-clojure')),
- 'ClojureScriptLexer': ('pip._vendor.pygments.lexers.jvm', 'ClojureScript', ('clojurescript', 'cljs'), ('*.cljs',), ('text/x-clojurescript', 'application/x-clojurescript')),
- 'CobolFreeformatLexer': ('pip._vendor.pygments.lexers.business', 'COBOLFree', ('cobolfree',), ('*.cbl', '*.CBL'), ()),
- 'CobolLexer': ('pip._vendor.pygments.lexers.business', 'COBOL', ('cobol',), ('*.cob', '*.COB', '*.cpy', '*.CPY'), ('text/x-cobol',)),
- 'CoffeeScriptLexer': ('pip._vendor.pygments.lexers.javascript', 'CoffeeScript', ('coffeescript', 'coffee-script', 'coffee'), ('*.coffee',), ('text/coffeescript',)),
- 'ColdfusionCFCLexer': ('pip._vendor.pygments.lexers.templates', 'Coldfusion CFC', ('cfc',), ('*.cfc',), ()),
- 'ColdfusionHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'Coldfusion HTML', ('cfm',), ('*.cfm', '*.cfml'), ('application/x-coldfusion',)),
- 'ColdfusionLexer': ('pip._vendor.pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()),
- 'Comal80Lexer': ('pip._vendor.pygments.lexers.comal', 'COMAL-80', ('comal', 'comal80'), ('*.cml', '*.comal'), ()),
- 'CommonLispLexer': ('pip._vendor.pygments.lexers.lisp', 'Common Lisp', ('common-lisp', 'cl', 'lisp'), ('*.cl', '*.lisp'), ('text/x-common-lisp',)),
- 'ComponentPascalLexer': ('pip._vendor.pygments.lexers.oberon', 'Component Pascal', ('componentpascal', 'cp'), ('*.cp', '*.cps'), ('text/x-component-pascal',)),
- 'CoqLexer': ('pip._vendor.pygments.lexers.theorem', 'Coq', ('coq',), ('*.v',), ('text/x-coq',)),
- 'CplintLexer': ('pip._vendor.pygments.lexers.cplint', 'cplint', ('cplint',), ('*.ecl', '*.prolog', '*.pro', '*.pl', '*.P', '*.lpad', '*.cpl'), ('text/x-cplint',)),
- 'CppLexer': ('pip._vendor.pygments.lexers.c_cpp', 'C++', ('cpp', 'c++'), ('*.cpp', '*.hpp', '*.c++', '*.h++', '*.cc', '*.hh', '*.cxx', '*.hxx', '*.C', '*.H', '*.cp', '*.CPP', '*.tpp'), ('text/x-c++hdr', 'text/x-c++src')),
- 'CppObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'cpp-objdump', ('cpp-objdump', 'c++-objdumb', 'cxx-objdump'), ('*.cpp-objdump', '*.c++-objdump', '*.cxx-objdump'), ('text/x-cpp-objdump',)),
- 'CrmshLexer': ('pip._vendor.pygments.lexers.dsls', 'Crmsh', ('crmsh', 'pcmk'), ('*.crmsh', '*.pcmk'), ()),
- 'CrocLexer': ('pip._vendor.pygments.lexers.d', 'Croc', ('croc',), ('*.croc',), ('text/x-crocsrc',)),
- 'CryptolLexer': ('pip._vendor.pygments.lexers.haskell', 'Cryptol', ('cryptol', 'cry'), ('*.cry',), ('text/x-cryptol',)),
- 'CrystalLexer': ('pip._vendor.pygments.lexers.crystal', 'Crystal', ('cr', 'crystal'), ('*.cr',), ('text/x-crystal',)),
- 'CsoundDocumentLexer': ('pip._vendor.pygments.lexers.csound', 'Csound Document', ('csound-document', 'csound-csd'), ('*.csd',), ()),
- 'CsoundOrchestraLexer': ('pip._vendor.pygments.lexers.csound', 'Csound Orchestra', ('csound', 'csound-orc'), ('*.orc', '*.udo'), ()),
- 'CsoundScoreLexer': ('pip._vendor.pygments.lexers.csound', 'Csound Score', ('csound-score', 'csound-sco'), ('*.sco',), ()),
- 'CssDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Django/Jinja', ('css+django', 'css+jinja'), ('*.css.j2', '*.css.jinja2'), ('text/css+django', 'text/css+jinja')),
- 'CssErbLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Ruby', ('css+ruby', 'css+erb'), (), ('text/css+ruby',)),
- 'CssGenshiLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Genshi Text', ('css+genshitext', 'css+genshi'), (), ('text/css+genshi',)),
- 'CssLexer': ('pip._vendor.pygments.lexers.css', 'CSS', ('css',), ('*.css',), ('text/css',)),
- 'CssPhpLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+PHP', ('css+php',), (), ('text/css+php',)),
- 'CssSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Smarty', ('css+smarty',), (), ('text/css+smarty',)),
- 'CudaLexer': ('pip._vendor.pygments.lexers.c_like', 'CUDA', ('cuda', 'cu'), ('*.cu', '*.cuh'), ('text/x-cuda',)),
- 'CypherLexer': ('pip._vendor.pygments.lexers.graph', 'Cypher', ('cypher',), ('*.cyp', '*.cypher'), ()),
- 'CythonLexer': ('pip._vendor.pygments.lexers.python', 'Cython', ('cython', 'pyx', 'pyrex'), ('*.pyx', '*.pxd', '*.pxi'), ('text/x-cython', 'application/x-cython')),
- 'DLexer': ('pip._vendor.pygments.lexers.d', 'D', ('d',), ('*.d', '*.di'), ('text/x-dsrc',)),
- 'DObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'd-objdump', ('d-objdump',), ('*.d-objdump',), ('text/x-d-objdump',)),
- 'DarcsPatchLexer': ('pip._vendor.pygments.lexers.diff', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()),
- 'DartLexer': ('pip._vendor.pygments.lexers.javascript', 'Dart', ('dart',), ('*.dart',), ('text/x-dart',)),
- 'Dasm16Lexer': ('pip._vendor.pygments.lexers.asm', 'DASM16', ('dasm16',), ('*.dasm16', '*.dasm'), ('text/x-dasm16',)),
- 'DaxLexer': ('pip._vendor.pygments.lexers.dax', 'Dax', ('dax',), ('*.dax',), ()),
- 'DebianControlLexer': ('pip._vendor.pygments.lexers.installers', 'Debian Control file', ('debcontrol', 'control'), ('control',), ()),
- 'DelphiLexer': ('pip._vendor.pygments.lexers.pascal', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas', '*.dpr'), ('text/x-pascal',)),
- 'DevicetreeLexer': ('pip._vendor.pygments.lexers.devicetree', 'Devicetree', ('devicetree', 'dts'), ('*.dts', '*.dtsi'), ('text/x-c',)),
- 'DgLexer': ('pip._vendor.pygments.lexers.python', 'dg', ('dg',), ('*.dg',), ('text/x-dg',)),
- 'DiffLexer': ('pip._vendor.pygments.lexers.diff', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')),
- 'DjangoLexer': ('pip._vendor.pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')),
- 'DockerLexer': ('pip._vendor.pygments.lexers.configs', 'Docker', ('docker', 'dockerfile'), ('Dockerfile', '*.docker'), ('text/x-dockerfile-config',)),
- 'DtdLexer': ('pip._vendor.pygments.lexers.html', 'DTD', ('dtd',), ('*.dtd',), ('application/xml-dtd',)),
- 'DuelLexer': ('pip._vendor.pygments.lexers.webmisc', 'Duel', ('duel', 'jbst', 'jsonml+bst'), ('*.duel', '*.jbst'), ('text/x-duel', 'text/x-jbst')),
- 'DylanConsoleLexer': ('pip._vendor.pygments.lexers.dylan', 'Dylan session', ('dylan-console', 'dylan-repl'), ('*.dylan-console',), ('text/x-dylan-console',)),
- 'DylanLexer': ('pip._vendor.pygments.lexers.dylan', 'Dylan', ('dylan',), ('*.dylan', '*.dyl', '*.intr'), ('text/x-dylan',)),
- 'DylanLidLexer': ('pip._vendor.pygments.lexers.dylan', 'DylanLID', ('dylan-lid', 'lid'), ('*.lid', '*.hdp'), ('text/x-dylan-lid',)),
- 'ECLLexer': ('pip._vendor.pygments.lexers.ecl', 'ECL', ('ecl',), ('*.ecl',), ('application/x-ecl',)),
- 'ECLexer': ('pip._vendor.pygments.lexers.c_like', 'eC', ('ec',), ('*.ec', '*.eh'), ('text/x-echdr', 'text/x-ecsrc')),
- 'EarlGreyLexer': ('pip._vendor.pygments.lexers.javascript', 'Earl Grey', ('earl-grey', 'earlgrey', 'eg'), ('*.eg',), ('text/x-earl-grey',)),
- 'EasytrieveLexer': ('pip._vendor.pygments.lexers.scripting', 'Easytrieve', ('easytrieve',), ('*.ezt', '*.mac'), ('text/x-easytrieve',)),
- 'EbnfLexer': ('pip._vendor.pygments.lexers.parsers', 'EBNF', ('ebnf',), ('*.ebnf',), ('text/x-ebnf',)),
- 'EiffelLexer': ('pip._vendor.pygments.lexers.eiffel', 'Eiffel', ('eiffel',), ('*.e',), ('text/x-eiffel',)),
- 'ElixirConsoleLexer': ('pip._vendor.pygments.lexers.erlang', 'Elixir iex session', ('iex',), (), ('text/x-elixir-shellsession',)),
- 'ElixirLexer': ('pip._vendor.pygments.lexers.erlang', 'Elixir', ('elixir', 'ex', 'exs'), ('*.ex', '*.eex', '*.exs', '*.leex'), ('text/x-elixir',)),
- 'ElmLexer': ('pip._vendor.pygments.lexers.elm', 'Elm', ('elm',), ('*.elm',), ('text/x-elm',)),
- 'ElpiLexer': ('pip._vendor.pygments.lexers.elpi', 'Elpi', ('elpi',), ('*.elpi',), ('text/x-elpi',)),
- 'EmacsLispLexer': ('pip._vendor.pygments.lexers.lisp', 'EmacsLisp', ('emacs-lisp', 'elisp', 'emacs'), ('*.el',), ('text/x-elisp', 'application/x-elisp')),
- 'EmailLexer': ('pip._vendor.pygments.lexers.email', 'E-mail', ('email', 'eml'), ('*.eml',), ('message/rfc822',)),
- 'ErbLexer': ('pip._vendor.pygments.lexers.templates', 'ERB', ('erb',), (), ('application/x-ruby-templating',)),
- 'ErlangLexer': ('pip._vendor.pygments.lexers.erlang', 'Erlang', ('erlang',), ('*.erl', '*.hrl', '*.es', '*.escript'), ('text/x-erlang',)),
- 'ErlangShellLexer': ('pip._vendor.pygments.lexers.erlang', 'Erlang erl session', ('erl',), ('*.erl-sh',), ('text/x-erl-shellsession',)),
- 'EvoqueHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Evoque', ('html+evoque',), ('*.html',), ('text/html+evoque',)),
- 'EvoqueLexer': ('pip._vendor.pygments.lexers.templates', 'Evoque', ('evoque',), ('*.evoque',), ('application/x-evoque',)),
- 'EvoqueXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Evoque', ('xml+evoque',), ('*.xml',), ('application/xml+evoque',)),
- 'ExeclineLexer': ('pip._vendor.pygments.lexers.shell', 'execline', ('execline',), ('*.exec',), ()),
- 'EzhilLexer': ('pip._vendor.pygments.lexers.ezhil', 'Ezhil', ('ezhil',), ('*.n',), ('text/x-ezhil',)),
- 'FSharpLexer': ('pip._vendor.pygments.lexers.dotnet', 'F#', ('fsharp', 'f#'), ('*.fs', '*.fsi', '*.fsx'), ('text/x-fsharp',)),
- 'FStarLexer': ('pip._vendor.pygments.lexers.ml', 'FStar', ('fstar',), ('*.fst', '*.fsti'), ('text/x-fstar',)),
- 'FactorLexer': ('pip._vendor.pygments.lexers.factor', 'Factor', ('factor',), ('*.factor',), ('text/x-factor',)),
- 'FancyLexer': ('pip._vendor.pygments.lexers.ruby', 'Fancy', ('fancy', 'fy'), ('*.fy', '*.fancypack'), ('text/x-fancysrc',)),
- 'FantomLexer': ('pip._vendor.pygments.lexers.fantom', 'Fantom', ('fan',), ('*.fan',), ('application/x-fantom',)),
- 'FelixLexer': ('pip._vendor.pygments.lexers.felix', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)),
- 'FennelLexer': ('pip._vendor.pygments.lexers.lisp', 'Fennel', ('fennel', 'fnl'), ('*.fnl',), ()),
- 'FiftLexer': ('pip._vendor.pygments.lexers.fift', 'Fift', ('fift', 'fif'), ('*.fif',), ()),
- 'FishShellLexer': ('pip._vendor.pygments.lexers.shell', 'Fish', ('fish', 'fishshell'), ('*.fish', '*.load'), ('application/x-fish',)),
- 'FlatlineLexer': ('pip._vendor.pygments.lexers.dsls', 'Flatline', ('flatline',), (), ('text/x-flatline',)),
- 'FloScriptLexer': ('pip._vendor.pygments.lexers.floscript', 'FloScript', ('floscript', 'flo'), ('*.flo',), ()),
- 'ForthLexer': ('pip._vendor.pygments.lexers.forth', 'Forth', ('forth',), ('*.frt', '*.fs'), ('application/x-forth',)),
- 'FortranFixedLexer': ('pip._vendor.pygments.lexers.fortran', 'FortranFixed', ('fortranfixed',), ('*.f', '*.F'), ()),
- 'FortranLexer': ('pip._vendor.pygments.lexers.fortran', 'Fortran', ('fortran', 'f90'), ('*.f03', '*.f90', '*.F03', '*.F90'), ('text/x-fortran',)),
- 'FoxProLexer': ('pip._vendor.pygments.lexers.foxpro', 'FoxPro', ('foxpro', 'vfp', 'clipper', 'xbase'), ('*.PRG', '*.prg'), ()),
- 'FreeFemLexer': ('pip._vendor.pygments.lexers.freefem', 'Freefem', ('freefem',), ('*.edp',), ('text/x-freefem',)),
- 'FuncLexer': ('pip._vendor.pygments.lexers.func', 'FunC', ('func', 'fc'), ('*.fc', '*.func'), ()),
- 'FutharkLexer': ('pip._vendor.pygments.lexers.futhark', 'Futhark', ('futhark',), ('*.fut',), ('text/x-futhark',)),
- 'GAPConsoleLexer': ('pip._vendor.pygments.lexers.algebra', 'GAP session', ('gap-console', 'gap-repl'), ('*.tst',), ()),
- 'GAPLexer': ('pip._vendor.pygments.lexers.algebra', 'GAP', ('gap',), ('*.g', '*.gd', '*.gi', '*.gap'), ()),
- 'GDScriptLexer': ('pip._vendor.pygments.lexers.gdscript', 'GDScript', ('gdscript', 'gd'), ('*.gd',), ('text/x-gdscript', 'application/x-gdscript')),
- 'GLShaderLexer': ('pip._vendor.pygments.lexers.graphics', 'GLSL', ('glsl',), ('*.vert', '*.frag', '*.geo'), ('text/x-glslsrc',)),
- 'GSQLLexer': ('pip._vendor.pygments.lexers.gsql', 'GSQL', ('gsql',), ('*.gsql',), ()),
- 'GasLexer': ('pip._vendor.pygments.lexers.asm', 'GAS', ('gas', 'asm'), ('*.s', '*.S'), ('text/x-gas',)),
- 'GcodeLexer': ('pip._vendor.pygments.lexers.gcodelexer', 'g-code', ('gcode',), ('*.gcode',), ()),
- 'GenshiLexer': ('pip._vendor.pygments.lexers.templates', 'Genshi', ('genshi', 'kid', 'xml+genshi', 'xml+kid'), ('*.kid',), ('application/x-genshi', 'application/x-kid')),
- 'GenshiTextLexer': ('pip._vendor.pygments.lexers.templates', 'Genshi Text', ('genshitext',), (), ('application/x-genshi-text', 'text/x-genshi')),
- 'GettextLexer': ('pip._vendor.pygments.lexers.textfmts', 'Gettext Catalog', ('pot', 'po'), ('*.pot', '*.po'), ('application/x-gettext', 'text/x-gettext', 'text/gettext')),
- 'GherkinLexer': ('pip._vendor.pygments.lexers.testing', 'Gherkin', ('gherkin', 'cucumber'), ('*.feature',), ('text/x-gherkin',)),
- 'GnuplotLexer': ('pip._vendor.pygments.lexers.graphics', 'Gnuplot', ('gnuplot',), ('*.plot', '*.plt'), ('text/x-gnuplot',)),
- 'GoLexer': ('pip._vendor.pygments.lexers.go', 'Go', ('go', 'golang'), ('*.go',), ('text/x-gosrc',)),
- 'GoloLexer': ('pip._vendor.pygments.lexers.jvm', 'Golo', ('golo',), ('*.golo',), ()),
- 'GoodDataCLLexer': ('pip._vendor.pygments.lexers.business', 'GoodData-CL', ('gooddata-cl',), ('*.gdc',), ('text/x-gooddata-cl',)),
- 'GosuLexer': ('pip._vendor.pygments.lexers.jvm', 'Gosu', ('gosu',), ('*.gs', '*.gsx', '*.gsp', '*.vark'), ('text/x-gosu',)),
- 'GosuTemplateLexer': ('pip._vendor.pygments.lexers.jvm', 'Gosu Template', ('gst',), ('*.gst',), ('text/x-gosu-template',)),
- 'GraphvizLexer': ('pip._vendor.pygments.lexers.graphviz', 'Graphviz', ('graphviz', 'dot'), ('*.gv', '*.dot'), ('text/x-graphviz', 'text/vnd.graphviz')),
- 'GroffLexer': ('pip._vendor.pygments.lexers.markup', 'Groff', ('groff', 'nroff', 'man'), ('*.[1-9]', '*.man', '*.1p', '*.3pm'), ('application/x-troff', 'text/troff')),
- 'GroovyLexer': ('pip._vendor.pygments.lexers.jvm', 'Groovy', ('groovy',), ('*.groovy', '*.gradle'), ('text/x-groovy',)),
- 'HLSLShaderLexer': ('pip._vendor.pygments.lexers.graphics', 'HLSL', ('hlsl',), ('*.hlsl', '*.hlsli'), ('text/x-hlsl',)),
- 'HTMLUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'HTML+UL4', ('html+ul4',), ('*.htmlul4',), ()),
- 'HamlLexer': ('pip._vendor.pygments.lexers.html', 'Haml', ('haml',), ('*.haml',), ('text/x-haml',)),
- 'HandlebarsHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Handlebars', ('html+handlebars',), ('*.handlebars', '*.hbs'), ('text/html+handlebars', 'text/x-handlebars-template')),
- 'HandlebarsLexer': ('pip._vendor.pygments.lexers.templates', 'Handlebars', ('handlebars',), (), ()),
- 'HaskellLexer': ('pip._vendor.pygments.lexers.haskell', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)),
- 'HaxeLexer': ('pip._vendor.pygments.lexers.haxe', 'Haxe', ('haxe', 'hxsl', 'hx'), ('*.hx', '*.hxsl'), ('text/haxe', 'text/x-haxe', 'text/x-hx')),
- 'HexdumpLexer': ('pip._vendor.pygments.lexers.hexdump', 'Hexdump', ('hexdump',), (), ()),
- 'HsailLexer': ('pip._vendor.pygments.lexers.asm', 'HSAIL', ('hsail', 'hsa'), ('*.hsail',), ('text/x-hsail',)),
- 'HspecLexer': ('pip._vendor.pygments.lexers.haskell', 'Hspec', ('hspec',), ('*Spec.hs',), ()),
- 'HtmlDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja', 'htmldjango'), ('*.html.j2', '*.htm.j2', '*.xhtml.j2', '*.html.jinja2', '*.htm.jinja2', '*.xhtml.jinja2'), ('text/html+django', 'text/html+jinja')),
- 'HtmlGenshiLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)),
- 'HtmlLexer': ('pip._vendor.pygments.lexers.html', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')),
- 'HtmlPhpLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+PHP', ('html+php',), ('*.phtml',), ('application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5')),
- 'HtmlSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Smarty', ('html+smarty',), (), ('text/html+smarty',)),
- 'HttpLexer': ('pip._vendor.pygments.lexers.textfmts', 'HTTP', ('http',), (), ()),
- 'HxmlLexer': ('pip._vendor.pygments.lexers.haxe', 'Hxml', ('haxeml', 'hxml'), ('*.hxml',), ()),
- 'HyLexer': ('pip._vendor.pygments.lexers.lisp', 'Hy', ('hylang',), ('*.hy',), ('text/x-hy', 'application/x-hy')),
- 'HybrisLexer': ('pip._vendor.pygments.lexers.scripting', 'Hybris', ('hybris', 'hy'), ('*.hy', '*.hyb'), ('text/x-hybris', 'application/x-hybris')),
- 'IDLLexer': ('pip._vendor.pygments.lexers.idl', 'IDL', ('idl',), ('*.pro',), ('text/idl',)),
- 'IconLexer': ('pip._vendor.pygments.lexers.unicon', 'Icon', ('icon',), ('*.icon', '*.ICON'), ()),
- 'IdrisLexer': ('pip._vendor.pygments.lexers.haskell', 'Idris', ('idris', 'idr'), ('*.idr',), ('text/x-idris',)),
- 'IgorLexer': ('pip._vendor.pygments.lexers.igor', 'Igor', ('igor', 'igorpro'), ('*.ipf',), ('text/ipf',)),
- 'Inform6Lexer': ('pip._vendor.pygments.lexers.int_fiction', 'Inform 6', ('inform6', 'i6'), ('*.inf',), ()),
- 'Inform6TemplateLexer': ('pip._vendor.pygments.lexers.int_fiction', 'Inform 6 template', ('i6t',), ('*.i6t',), ()),
- 'Inform7Lexer': ('pip._vendor.pygments.lexers.int_fiction', 'Inform 7', ('inform7', 'i7'), ('*.ni', '*.i7x'), ()),
- 'IniLexer': ('pip._vendor.pygments.lexers.configs', 'INI', ('ini', 'cfg', 'dosini'), ('*.ini', '*.cfg', '*.inf', '.editorconfig', '*.service', '*.socket', '*.device', '*.mount', '*.automount', '*.swap', '*.target', '*.path', '*.timer', '*.slice', '*.scope'), ('text/x-ini', 'text/inf')),
- 'IoLexer': ('pip._vendor.pygments.lexers.iolang', 'Io', ('io',), ('*.io',), ('text/x-iosrc',)),
- 'IokeLexer': ('pip._vendor.pygments.lexers.jvm', 'Ioke', ('ioke', 'ik'), ('*.ik',), ('text/x-iokesrc',)),
- 'IrcLogsLexer': ('pip._vendor.pygments.lexers.textfmts', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)),
- 'IsabelleLexer': ('pip._vendor.pygments.lexers.theorem', 'Isabelle', ('isabelle',), ('*.thy',), ('text/x-isabelle',)),
- 'JLexer': ('pip._vendor.pygments.lexers.j', 'J', ('j',), ('*.ijs',), ('text/x-j',)),
- 'JMESPathLexer': ('pip._vendor.pygments.lexers.jmespath', 'JMESPath', ('jmespath', 'jp'), ('*.jp',), ()),
- 'JSLTLexer': ('pip._vendor.pygments.lexers.jslt', 'JSLT', ('jslt',), ('*.jslt',), ('text/x-jslt',)),
- 'JagsLexer': ('pip._vendor.pygments.lexers.modeling', 'JAGS', ('jags',), ('*.jag', '*.bug'), ()),
- 'JasminLexer': ('pip._vendor.pygments.lexers.jvm', 'Jasmin', ('jasmin', 'jasminxt'), ('*.j',), ()),
- 'JavaLexer': ('pip._vendor.pygments.lexers.jvm', 'Java', ('java',), ('*.java',), ('text/x-java',)),
- 'JavascriptDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Django/Jinja', ('javascript+django', 'js+django', 'javascript+jinja', 'js+jinja'), ('*.js.j2', '*.js.jinja2'), ('application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja')),
- 'JavascriptErbLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Ruby', ('javascript+ruby', 'js+ruby', 'javascript+erb', 'js+erb'), (), ('application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby')),
- 'JavascriptGenshiLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Genshi Text', ('js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'), (), ('application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi')),
- 'JavascriptLexer': ('pip._vendor.pygments.lexers.javascript', 'JavaScript', ('javascript', 'js'), ('*.js', '*.jsm', '*.mjs', '*.cjs'), ('application/javascript', 'application/x-javascript', 'text/x-javascript', 'text/javascript')),
- 'JavascriptPhpLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+PHP', ('javascript+php', 'js+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')),
- 'JavascriptSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Smarty', ('javascript+smarty', 'js+smarty'), (), ('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')),
- 'JavascriptUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'Javascript+UL4', ('js+ul4',), ('*.jsul4',), ()),
- 'JclLexer': ('pip._vendor.pygments.lexers.scripting', 'JCL', ('jcl',), ('*.jcl',), ('text/x-jcl',)),
- 'JsgfLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'JSGF', ('jsgf',), ('*.jsgf',), ('application/jsgf', 'application/x-jsgf', 'text/jsgf')),
- 'JsonBareObjectLexer': ('pip._vendor.pygments.lexers.data', 'JSONBareObject', (), (), ()),
- 'JsonLdLexer': ('pip._vendor.pygments.lexers.data', 'JSON-LD', ('jsonld', 'json-ld'), ('*.jsonld',), ('application/ld+json',)),
- 'JsonLexer': ('pip._vendor.pygments.lexers.data', 'JSON', ('json', 'json-object'), ('*.json', 'Pipfile.lock'), ('application/json', 'application/json-object')),
- 'JsonnetLexer': ('pip._vendor.pygments.lexers.jsonnet', 'Jsonnet', ('jsonnet',), ('*.jsonnet', '*.libsonnet'), ()),
- 'JspLexer': ('pip._vendor.pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)),
- 'JuliaConsoleLexer': ('pip._vendor.pygments.lexers.julia', 'Julia console', ('jlcon', 'julia-repl'), (), ()),
- 'JuliaLexer': ('pip._vendor.pygments.lexers.julia', 'Julia', ('julia', 'jl'), ('*.jl',), ('text/x-julia', 'application/x-julia')),
- 'JuttleLexer': ('pip._vendor.pygments.lexers.javascript', 'Juttle', ('juttle',), ('*.juttle',), ('application/juttle', 'application/x-juttle', 'text/x-juttle', 'text/juttle')),
- 'KLexer': ('pip._vendor.pygments.lexers.q', 'K', ('k',), ('*.k',), ()),
- 'KalLexer': ('pip._vendor.pygments.lexers.javascript', 'Kal', ('kal',), ('*.kal',), ('text/kal', 'application/kal')),
- 'KconfigLexer': ('pip._vendor.pygments.lexers.configs', 'Kconfig', ('kconfig', 'menuconfig', 'linux-config', 'kernel-config'), ('Kconfig*', '*Config.in*', 'external.in*', 'standard-modules.in'), ('text/x-kconfig',)),
- 'KernelLogLexer': ('pip._vendor.pygments.lexers.textfmts', 'Kernel log', ('kmsg', 'dmesg'), ('*.kmsg', '*.dmesg'), ()),
- 'KokaLexer': ('pip._vendor.pygments.lexers.haskell', 'Koka', ('koka',), ('*.kk', '*.kki'), ('text/x-koka',)),
- 'KotlinLexer': ('pip._vendor.pygments.lexers.jvm', 'Kotlin', ('kotlin',), ('*.kt', '*.kts'), ('text/x-kotlin',)),
- 'KuinLexer': ('pip._vendor.pygments.lexers.kuin', 'Kuin', ('kuin',), ('*.kn',), ()),
- 'LSLLexer': ('pip._vendor.pygments.lexers.scripting', 'LSL', ('lsl',), ('*.lsl',), ('text/x-lsl',)),
- 'LassoCssLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Lasso', ('css+lasso',), (), ('text/css+lasso',)),
- 'LassoHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Lasso', ('html+lasso',), (), ('text/html+lasso', 'application/x-httpd-lasso', 'application/x-httpd-lasso[89]')),
- 'LassoJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Lasso', ('javascript+lasso', 'js+lasso'), (), ('application/x-javascript+lasso', 'text/x-javascript+lasso', 'text/javascript+lasso')),
- 'LassoLexer': ('pip._vendor.pygments.lexers.javascript', 'Lasso', ('lasso', 'lassoscript'), ('*.lasso', '*.lasso[89]'), ('text/x-lasso',)),
- 'LassoXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Lasso', ('xml+lasso',), (), ('application/xml+lasso',)),
- 'LeanLexer': ('pip._vendor.pygments.lexers.theorem', 'Lean', ('lean',), ('*.lean',), ('text/x-lean',)),
- 'LessCssLexer': ('pip._vendor.pygments.lexers.css', 'LessCss', ('less',), ('*.less',), ('text/x-less-css',)),
- 'LighttpdConfLexer': ('pip._vendor.pygments.lexers.configs', 'Lighttpd configuration file', ('lighttpd', 'lighty'), ('lighttpd.conf',), ('text/x-lighttpd-conf',)),
- 'LilyPondLexer': ('pip._vendor.pygments.lexers.lilypond', 'LilyPond', ('lilypond',), ('*.ly',), ()),
- 'LimboLexer': ('pip._vendor.pygments.lexers.inferno', 'Limbo', ('limbo',), ('*.b',), ('text/limbo',)),
- 'LiquidLexer': ('pip._vendor.pygments.lexers.templates', 'liquid', ('liquid',), ('*.liquid',), ()),
- 'LiterateAgdaLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Agda', ('literate-agda', 'lagda'), ('*.lagda',), ('text/x-literate-agda',)),
- 'LiterateCryptolLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Cryptol', ('literate-cryptol', 'lcryptol', 'lcry'), ('*.lcry',), ('text/x-literate-cryptol',)),
- 'LiterateHaskellLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Haskell', ('literate-haskell', 'lhaskell', 'lhs'), ('*.lhs',), ('text/x-literate-haskell',)),
- 'LiterateIdrisLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Idris', ('literate-idris', 'lidris', 'lidr'), ('*.lidr',), ('text/x-literate-idris',)),
- 'LiveScriptLexer': ('pip._vendor.pygments.lexers.javascript', 'LiveScript', ('livescript', 'live-script'), ('*.ls',), ('text/livescript',)),
- 'LlvmLexer': ('pip._vendor.pygments.lexers.asm', 'LLVM', ('llvm',), ('*.ll',), ('text/x-llvm',)),
- 'LlvmMirBodyLexer': ('pip._vendor.pygments.lexers.asm', 'LLVM-MIR Body', ('llvm-mir-body',), (), ()),
- 'LlvmMirLexer': ('pip._vendor.pygments.lexers.asm', 'LLVM-MIR', ('llvm-mir',), ('*.mir',), ()),
- 'LogosLexer': ('pip._vendor.pygments.lexers.objective', 'Logos', ('logos',), ('*.x', '*.xi', '*.xm', '*.xmi'), ('text/x-logos',)),
- 'LogtalkLexer': ('pip._vendor.pygments.lexers.prolog', 'Logtalk', ('logtalk',), ('*.lgt', '*.logtalk'), ('text/x-logtalk',)),
- 'LuaLexer': ('pip._vendor.pygments.lexers.scripting', 'Lua', ('lua',), ('*.lua', '*.wlua'), ('text/x-lua', 'application/x-lua')),
- 'MCFunctionLexer': ('pip._vendor.pygments.lexers.minecraft', 'MCFunction', ('mcfunction', 'mcf'), ('*.mcfunction',), ('text/mcfunction',)),
- 'MCSchemaLexer': ('pip._vendor.pygments.lexers.minecraft', 'MCSchema', ('mcschema',), ('*.mcschema',), ('text/mcschema',)),
- 'MIMELexer': ('pip._vendor.pygments.lexers.mime', 'MIME', ('mime',), (), ('multipart/mixed', 'multipart/related', 'multipart/alternative')),
- 'MIPSLexer': ('pip._vendor.pygments.lexers.mips', 'MIPS', ('mips',), ('*.mips', '*.MIPS'), ()),
- 'MOOCodeLexer': ('pip._vendor.pygments.lexers.scripting', 'MOOCode', ('moocode', 'moo'), ('*.moo',), ('text/x-moocode',)),
- 'MSDOSSessionLexer': ('pip._vendor.pygments.lexers.shell', 'MSDOS Session', ('doscon',), (), ()),
- 'Macaulay2Lexer': ('pip._vendor.pygments.lexers.macaulay2', 'Macaulay2', ('macaulay2',), ('*.m2',), ()),
- 'MakefileLexer': ('pip._vendor.pygments.lexers.make', 'Makefile', ('make', 'makefile', 'mf', 'bsdmake'), ('*.mak', '*.mk', 'Makefile', 'makefile', 'Makefile.*', 'GNUmakefile'), ('text/x-makefile',)),
- 'MakoCssLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Mako', ('css+mako',), (), ('text/css+mako',)),
- 'MakoHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Mako', ('html+mako',), (), ('text/html+mako',)),
- 'MakoJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Mako', ('javascript+mako', 'js+mako'), (), ('application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako')),
- 'MakoLexer': ('pip._vendor.pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)),
- 'MakoXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)),
- 'MaqlLexer': ('pip._vendor.pygments.lexers.business', 'MAQL', ('maql',), ('*.maql',), ('text/x-gooddata-maql', 'application/x-gooddata-maql')),
- 'MarkdownLexer': ('pip._vendor.pygments.lexers.markup', 'Markdown', ('markdown', 'md'), ('*.md', '*.markdown'), ('text/x-markdown',)),
- 'MaskLexer': ('pip._vendor.pygments.lexers.javascript', 'Mask', ('mask',), ('*.mask',), ('text/x-mask',)),
- 'MasonLexer': ('pip._vendor.pygments.lexers.templates', 'Mason', ('mason',), ('*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'), ('application/x-mason',)),
- 'MathematicaLexer': ('pip._vendor.pygments.lexers.algebra', 'Mathematica', ('mathematica', 'mma', 'nb'), ('*.nb', '*.cdf', '*.nbp', '*.ma'), ('application/mathematica', 'application/vnd.wolfram.mathematica', 'application/vnd.wolfram.mathematica.package', 'application/vnd.wolfram.cdf')),
- 'MatlabLexer': ('pip._vendor.pygments.lexers.matlab', 'Matlab', ('matlab',), ('*.m',), ('text/matlab',)),
- 'MatlabSessionLexer': ('pip._vendor.pygments.lexers.matlab', 'Matlab session', ('matlabsession',), (), ()),
- 'MaximaLexer': ('pip._vendor.pygments.lexers.maxima', 'Maxima', ('maxima', 'macsyma'), ('*.mac', '*.max'), ()),
- 'MesonLexer': ('pip._vendor.pygments.lexers.meson', 'Meson', ('meson', 'meson.build'), ('meson.build', 'meson_options.txt'), ('text/x-meson',)),
- 'MiniDLexer': ('pip._vendor.pygments.lexers.d', 'MiniD', ('minid',), (), ('text/x-minidsrc',)),
- 'MiniScriptLexer': ('pip._vendor.pygments.lexers.scripting', 'MiniScript', ('miniscript', 'ms'), ('*.ms',), ('text/x-miniscript', 'application/x-miniscript')),
- 'ModelicaLexer': ('pip._vendor.pygments.lexers.modeling', 'Modelica', ('modelica',), ('*.mo',), ('text/x-modelica',)),
- 'Modula2Lexer': ('pip._vendor.pygments.lexers.modula2', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)),
- 'MoinWikiLexer': ('pip._vendor.pygments.lexers.markup', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)),
- 'MonkeyLexer': ('pip._vendor.pygments.lexers.basic', 'Monkey', ('monkey',), ('*.monkey',), ('text/x-monkey',)),
- 'MonteLexer': ('pip._vendor.pygments.lexers.monte', 'Monte', ('monte',), ('*.mt',), ()),
- 'MoonScriptLexer': ('pip._vendor.pygments.lexers.scripting', 'MoonScript', ('moonscript', 'moon'), ('*.moon',), ('text/x-moonscript', 'application/x-moonscript')),
- 'MoselLexer': ('pip._vendor.pygments.lexers.mosel', 'Mosel', ('mosel',), ('*.mos',), ()),
- 'MozPreprocCssLexer': ('pip._vendor.pygments.lexers.markup', 'CSS+mozpreproc', ('css+mozpreproc',), ('*.css.in',), ()),
- 'MozPreprocHashLexer': ('pip._vendor.pygments.lexers.markup', 'mozhashpreproc', ('mozhashpreproc',), (), ()),
- 'MozPreprocJavascriptLexer': ('pip._vendor.pygments.lexers.markup', 'Javascript+mozpreproc', ('javascript+mozpreproc',), ('*.js.in',), ()),
- 'MozPreprocPercentLexer': ('pip._vendor.pygments.lexers.markup', 'mozpercentpreproc', ('mozpercentpreproc',), (), ()),
- 'MozPreprocXulLexer': ('pip._vendor.pygments.lexers.markup', 'XUL+mozpreproc', ('xul+mozpreproc',), ('*.xul.in',), ()),
- 'MqlLexer': ('pip._vendor.pygments.lexers.c_like', 'MQL', ('mql', 'mq4', 'mq5', 'mql4', 'mql5'), ('*.mq4', '*.mq5', '*.mqh'), ('text/x-mql',)),
- 'MscgenLexer': ('pip._vendor.pygments.lexers.dsls', 'Mscgen', ('mscgen', 'msc'), ('*.msc',), ()),
- 'MuPADLexer': ('pip._vendor.pygments.lexers.algebra', 'MuPAD', ('mupad',), ('*.mu',), ()),
- 'MxmlLexer': ('pip._vendor.pygments.lexers.actionscript', 'MXML', ('mxml',), ('*.mxml',), ()),
- 'MySqlLexer': ('pip._vendor.pygments.lexers.sql', 'MySQL', ('mysql',), (), ('text/x-mysql',)),
- 'MyghtyCssLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Myghty', ('css+myghty',), (), ('text/css+myghty',)),
- 'MyghtyHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Myghty', ('html+myghty',), (), ('text/html+myghty',)),
- 'MyghtyJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Myghty', ('javascript+myghty', 'js+myghty'), (), ('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+myghty')),
- 'MyghtyLexer': ('pip._vendor.pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)),
- 'MyghtyXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)),
- 'NCLLexer': ('pip._vendor.pygments.lexers.ncl', 'NCL', ('ncl',), ('*.ncl',), ('text/ncl',)),
- 'NSISLexer': ('pip._vendor.pygments.lexers.installers', 'NSIS', ('nsis', 'nsi', 'nsh'), ('*.nsi', '*.nsh'), ('text/x-nsis',)),
- 'NasmLexer': ('pip._vendor.pygments.lexers.asm', 'NASM', ('nasm',), ('*.asm', '*.ASM', '*.nasm'), ('text/x-nasm',)),
- 'NasmObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'objdump-nasm', ('objdump-nasm',), ('*.objdump-intel',), ('text/x-nasm-objdump',)),
- 'NemerleLexer': ('pip._vendor.pygments.lexers.dotnet', 'Nemerle', ('nemerle',), ('*.n',), ('text/x-nemerle',)),
- 'NesCLexer': ('pip._vendor.pygments.lexers.c_like', 'nesC', ('nesc',), ('*.nc',), ('text/x-nescsrc',)),
- 'NestedTextLexer': ('pip._vendor.pygments.lexers.configs', 'NestedText', ('nestedtext', 'nt'), ('*.nt',), ()),
- 'NewLispLexer': ('pip._vendor.pygments.lexers.lisp', 'NewLisp', ('newlisp',), ('*.lsp', '*.nl', '*.kif'), ('text/x-newlisp', 'application/x-newlisp')),
- 'NewspeakLexer': ('pip._vendor.pygments.lexers.smalltalk', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)),
- 'NginxConfLexer': ('pip._vendor.pygments.lexers.configs', 'Nginx configuration file', ('nginx',), ('nginx.conf',), ('text/x-nginx-conf',)),
- 'NimrodLexer': ('pip._vendor.pygments.lexers.nimrod', 'Nimrod', ('nimrod', 'nim'), ('*.nim', '*.nimrod'), ('text/x-nim',)),
- 'NitLexer': ('pip._vendor.pygments.lexers.nit', 'Nit', ('nit',), ('*.nit',), ()),
- 'NixLexer': ('pip._vendor.pygments.lexers.nix', 'Nix', ('nixos', 'nix'), ('*.nix',), ('text/x-nix',)),
- 'NodeConsoleLexer': ('pip._vendor.pygments.lexers.javascript', 'Node.js REPL console session', ('nodejsrepl',), (), ('text/x-nodejsrepl',)),
- 'NotmuchLexer': ('pip._vendor.pygments.lexers.textfmts', 'Notmuch', ('notmuch',), (), ()),
- 'NuSMVLexer': ('pip._vendor.pygments.lexers.smv', 'NuSMV', ('nusmv',), ('*.smv',), ()),
- 'NumPyLexer': ('pip._vendor.pygments.lexers.python', 'NumPy', ('numpy',), (), ()),
- 'ObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)),
- 'ObjectiveCLexer': ('pip._vendor.pygments.lexers.objective', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m', '*.h'), ('text/x-objective-c',)),
- 'ObjectiveCppLexer': ('pip._vendor.pygments.lexers.objective', 'Objective-C++', ('objective-c++', 'objectivec++', 'obj-c++', 'objc++'), ('*.mm', '*.hh'), ('text/x-objective-c++',)),
- 'ObjectiveJLexer': ('pip._vendor.pygments.lexers.javascript', 'Objective-J', ('objective-j', 'objectivej', 'obj-j', 'objj'), ('*.j',), ('text/x-objective-j',)),
- 'OcamlLexer': ('pip._vendor.pygments.lexers.ml', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)),
- 'OctaveLexer': ('pip._vendor.pygments.lexers.matlab', 'Octave', ('octave',), ('*.m',), ('text/octave',)),
- 'OdinLexer': ('pip._vendor.pygments.lexers.archetype', 'ODIN', ('odin',), ('*.odin',), ('text/odin',)),
- 'OmgIdlLexer': ('pip._vendor.pygments.lexers.c_like', 'OMG Interface Definition Language', ('omg-idl',), ('*.idl', '*.pidl'), ()),
- 'OocLexer': ('pip._vendor.pygments.lexers.ooc', 'Ooc', ('ooc',), ('*.ooc',), ('text/x-ooc',)),
- 'OpaLexer': ('pip._vendor.pygments.lexers.ml', 'Opa', ('opa',), ('*.opa',), ('text/x-opa',)),
- 'OpenEdgeLexer': ('pip._vendor.pygments.lexers.business', 'OpenEdge ABL', ('openedge', 'abl', 'progress'), ('*.p', '*.cls'), ('text/x-openedge', 'application/x-openedge')),
- 'OutputLexer': ('pip._vendor.pygments.lexers.special', 'Text output', ('output',), (), ()),
- 'PacmanConfLexer': ('pip._vendor.pygments.lexers.configs', 'PacmanConf', ('pacmanconf',), ('pacman.conf',), ()),
- 'PanLexer': ('pip._vendor.pygments.lexers.dsls', 'Pan', ('pan',), ('*.pan',), ()),
- 'ParaSailLexer': ('pip._vendor.pygments.lexers.parasail', 'ParaSail', ('parasail',), ('*.psi', '*.psl'), ('text/x-parasail',)),
- 'PawnLexer': ('pip._vendor.pygments.lexers.pawn', 'Pawn', ('pawn',), ('*.p', '*.pwn', '*.inc'), ('text/x-pawn',)),
- 'PegLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'PEG', ('peg',), ('*.peg',), ('text/x-peg',)),
- 'Perl6Lexer': ('pip._vendor.pygments.lexers.perl', 'Perl6', ('perl6', 'pl6', 'raku'), ('*.pl', '*.pm', '*.nqp', '*.p6', '*.6pl', '*.p6l', '*.pl6', '*.6pm', '*.p6m', '*.pm6', '*.t', '*.raku', '*.rakumod', '*.rakutest', '*.rakudoc'), ('text/x-perl6', 'application/x-perl6')),
- 'PerlLexer': ('pip._vendor.pygments.lexers.perl', 'Perl', ('perl', 'pl'), ('*.pl', '*.pm', '*.t', '*.perl'), ('text/x-perl', 'application/x-perl')),
- 'PhixLexer': ('pip._vendor.pygments.lexers.phix', 'Phix', ('phix',), ('*.exw',), ('text/x-phix',)),
- 'PhpLexer': ('pip._vendor.pygments.lexers.php', 'PHP', ('php', 'php3', 'php4', 'php5'), ('*.php', '*.php[345]', '*.inc'), ('text/x-php',)),
- 'PigLexer': ('pip._vendor.pygments.lexers.jvm', 'Pig', ('pig',), ('*.pig',), ('text/x-pig',)),
- 'PikeLexer': ('pip._vendor.pygments.lexers.c_like', 'Pike', ('pike',), ('*.pike', '*.pmod'), ('text/x-pike',)),
- 'PkgConfigLexer': ('pip._vendor.pygments.lexers.configs', 'PkgConfig', ('pkgconfig',), ('*.pc',), ()),
- 'PlPgsqlLexer': ('pip._vendor.pygments.lexers.sql', 'PL/pgSQL', ('plpgsql',), (), ('text/x-plpgsql',)),
- 'PointlessLexer': ('pip._vendor.pygments.lexers.pointless', 'Pointless', ('pointless',), ('*.ptls',), ()),
- 'PonyLexer': ('pip._vendor.pygments.lexers.pony', 'Pony', ('pony',), ('*.pony',), ()),
- 'PortugolLexer': ('pip._vendor.pygments.lexers.pascal', 'Portugol', ('portugol',), ('*.alg', '*.portugol'), ()),
- 'PostScriptLexer': ('pip._vendor.pygments.lexers.graphics', 'PostScript', ('postscript', 'postscr'), ('*.ps', '*.eps'), ('application/postscript',)),
- 'PostgresConsoleLexer': ('pip._vendor.pygments.lexers.sql', 'PostgreSQL console (psql)', ('psql', 'postgresql-console', 'postgres-console'), (), ('text/x-postgresql-psql',)),
- 'PostgresExplainLexer': ('pip._vendor.pygments.lexers.sql', 'PostgreSQL EXPLAIN dialect', ('postgres-explain',), ('*.explain',), ('text/x-postgresql-explain',)),
- 'PostgresLexer': ('pip._vendor.pygments.lexers.sql', 'PostgreSQL SQL dialect', ('postgresql', 'postgres'), (), ('text/x-postgresql',)),
- 'PovrayLexer': ('pip._vendor.pygments.lexers.graphics', 'POVRay', ('pov',), ('*.pov', '*.inc'), ('text/x-povray',)),
- 'PowerShellLexer': ('pip._vendor.pygments.lexers.shell', 'PowerShell', ('powershell', 'pwsh', 'posh', 'ps1', 'psm1'), ('*.ps1', '*.psm1'), ('text/x-powershell',)),
- 'PowerShellSessionLexer': ('pip._vendor.pygments.lexers.shell', 'PowerShell Session', ('pwsh-session', 'ps1con'), (), ()),
- 'PraatLexer': ('pip._vendor.pygments.lexers.praat', 'Praat', ('praat',), ('*.praat', '*.proc', '*.psc'), ()),
- 'ProcfileLexer': ('pip._vendor.pygments.lexers.procfile', 'Procfile', ('procfile',), ('Procfile',), ()),
- 'PrologLexer': ('pip._vendor.pygments.lexers.prolog', 'Prolog', ('prolog',), ('*.ecl', '*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)),
- 'PromQLLexer': ('pip._vendor.pygments.lexers.promql', 'PromQL', ('promql',), ('*.promql',), ()),
- 'PropertiesLexer': ('pip._vendor.pygments.lexers.configs', 'Properties', ('properties', 'jproperties'), ('*.properties',), ('text/x-java-properties',)),
- 'ProtoBufLexer': ('pip._vendor.pygments.lexers.dsls', 'Protocol Buffer', ('protobuf', 'proto'), ('*.proto',), ()),
- 'PsyshConsoleLexer': ('pip._vendor.pygments.lexers.php', 'PsySH console session for PHP', ('psysh',), (), ()),
- 'PugLexer': ('pip._vendor.pygments.lexers.html', 'Pug', ('pug', 'jade'), ('*.pug', '*.jade'), ('text/x-pug', 'text/x-jade')),
- 'PuppetLexer': ('pip._vendor.pygments.lexers.dsls', 'Puppet', ('puppet',), ('*.pp',), ()),
- 'PyPyLogLexer': ('pip._vendor.pygments.lexers.console', 'PyPy Log', ('pypylog', 'pypy'), ('*.pypylog',), ('application/x-pypylog',)),
- 'Python2Lexer': ('pip._vendor.pygments.lexers.python', 'Python 2.x', ('python2', 'py2'), (), ('text/x-python2', 'application/x-python2')),
- 'Python2TracebackLexer': ('pip._vendor.pygments.lexers.python', 'Python 2.x Traceback', ('py2tb',), ('*.py2tb',), ('text/x-python2-traceback',)),
- 'PythonConsoleLexer': ('pip._vendor.pygments.lexers.python', 'Python console session', ('pycon',), (), ('text/x-python-doctest',)),
- 'PythonLexer': ('pip._vendor.pygments.lexers.python', 'Python', ('python', 'py', 'sage', 'python3', 'py3'), ('*.py', '*.pyw', '*.pyi', '*.jy', '*.sage', '*.sc', 'SConstruct', 'SConscript', '*.bzl', 'BUCK', 'BUILD', 'BUILD.bazel', 'WORKSPACE', '*.tac'), ('text/x-python', 'application/x-python', 'text/x-python3', 'application/x-python3')),
- 'PythonTracebackLexer': ('pip._vendor.pygments.lexers.python', 'Python Traceback', ('pytb', 'py3tb'), ('*.pytb', '*.py3tb'), ('text/x-python-traceback', 'text/x-python3-traceback')),
- 'PythonUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'Python+UL4', ('py+ul4',), ('*.pyul4',), ()),
- 'QBasicLexer': ('pip._vendor.pygments.lexers.basic', 'QBasic', ('qbasic', 'basic'), ('*.BAS', '*.bas'), ('text/basic',)),
- 'QLexer': ('pip._vendor.pygments.lexers.q', 'Q', ('q',), ('*.q',), ()),
- 'QVToLexer': ('pip._vendor.pygments.lexers.qvt', 'QVTO', ('qvto', 'qvt'), ('*.qvto',), ()),
- 'QlikLexer': ('pip._vendor.pygments.lexers.qlik', 'Qlik', ('qlik', 'qlikview', 'qliksense', 'qlikscript'), ('*.qvs', '*.qvw'), ()),
- 'QmlLexer': ('pip._vendor.pygments.lexers.webmisc', 'QML', ('qml', 'qbs'), ('*.qml', '*.qbs'), ('application/x-qml', 'application/x-qt.qbs+qml')),
- 'RConsoleLexer': ('pip._vendor.pygments.lexers.r', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()),
- 'RNCCompactLexer': ('pip._vendor.pygments.lexers.rnc', 'Relax-NG Compact', ('rng-compact', 'rnc'), ('*.rnc',), ()),
- 'RPMSpecLexer': ('pip._vendor.pygments.lexers.installers', 'RPMSpec', ('spec',), ('*.spec',), ('text/x-rpm-spec',)),
- 'RacketLexer': ('pip._vendor.pygments.lexers.lisp', 'Racket', ('racket', 'rkt'), ('*.rkt', '*.rktd', '*.rktl'), ('text/x-racket', 'application/x-racket')),
- 'RagelCLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()),
- 'RagelCppLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in CPP Host', ('ragel-cpp',), ('*.rl',), ()),
- 'RagelDLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in D Host', ('ragel-d',), ('*.rl',), ()),
- 'RagelEmbeddedLexer': ('pip._vendor.pygments.lexers.parsers', 'Embedded Ragel', ('ragel-em',), ('*.rl',), ()),
- 'RagelJavaLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in Java Host', ('ragel-java',), ('*.rl',), ()),
- 'RagelLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel', ('ragel',), (), ()),
- 'RagelObjectiveCLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in Objective C Host', ('ragel-objc',), ('*.rl',), ()),
- 'RagelRubyLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in Ruby Host', ('ragel-ruby', 'ragel-rb'), ('*.rl',), ()),
- 'RawTokenLexer': ('pip._vendor.pygments.lexers.special', 'Raw token data', (), (), ('application/x-pygments-tokens',)),
- 'RdLexer': ('pip._vendor.pygments.lexers.r', 'Rd', ('rd',), ('*.Rd',), ('text/x-r-doc',)),
- 'ReasonLexer': ('pip._vendor.pygments.lexers.ml', 'ReasonML', ('reasonml', 'reason'), ('*.re', '*.rei'), ('text/x-reasonml',)),
- 'RebolLexer': ('pip._vendor.pygments.lexers.rebol', 'REBOL', ('rebol',), ('*.r', '*.r3', '*.reb'), ('text/x-rebol',)),
- 'RedLexer': ('pip._vendor.pygments.lexers.rebol', 'Red', ('red', 'red/system'), ('*.red', '*.reds'), ('text/x-red', 'text/x-red-system')),
- 'RedcodeLexer': ('pip._vendor.pygments.lexers.esoteric', 'Redcode', ('redcode',), ('*.cw',), ()),
- 'RegeditLexer': ('pip._vendor.pygments.lexers.configs', 'reg', ('registry',), ('*.reg',), ('text/x-windows-registry',)),
- 'ResourceLexer': ('pip._vendor.pygments.lexers.resource', 'ResourceBundle', ('resourcebundle', 'resource'), (), ()),
- 'RexxLexer': ('pip._vendor.pygments.lexers.scripting', 'Rexx', ('rexx', 'arexx'), ('*.rexx', '*.rex', '*.rx', '*.arexx'), ('text/x-rexx',)),
- 'RhtmlLexer': ('pip._vendor.pygments.lexers.templates', 'RHTML', ('rhtml', 'html+erb', 'html+ruby'), ('*.rhtml',), ('text/html+ruby',)),
- 'RideLexer': ('pip._vendor.pygments.lexers.ride', 'Ride', ('ride',), ('*.ride',), ('text/x-ride',)),
- 'RitaLexer': ('pip._vendor.pygments.lexers.rita', 'Rita', ('rita',), ('*.rita',), ('text/rita',)),
- 'RoboconfGraphLexer': ('pip._vendor.pygments.lexers.roboconf', 'Roboconf Graph', ('roboconf-graph',), ('*.graph',), ()),
- 'RoboconfInstancesLexer': ('pip._vendor.pygments.lexers.roboconf', 'Roboconf Instances', ('roboconf-instances',), ('*.instances',), ()),
- 'RobotFrameworkLexer': ('pip._vendor.pygments.lexers.robotframework', 'RobotFramework', ('robotframework',), ('*.robot', '*.resource'), ('text/x-robotframework',)),
- 'RqlLexer': ('pip._vendor.pygments.lexers.sql', 'RQL', ('rql',), ('*.rql',), ('text/x-rql',)),
- 'RslLexer': ('pip._vendor.pygments.lexers.dsls', 'RSL', ('rsl',), ('*.rsl',), ('text/rsl',)),
- 'RstLexer': ('pip._vendor.pygments.lexers.markup', 'reStructuredText', ('restructuredtext', 'rst', 'rest'), ('*.rst', '*.rest'), ('text/x-rst', 'text/prs.fallenstein.rst')),
- 'RtsLexer': ('pip._vendor.pygments.lexers.trafficscript', 'TrafficScript', ('trafficscript', 'rts'), ('*.rts',), ()),
- 'RubyConsoleLexer': ('pip._vendor.pygments.lexers.ruby', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)),
- 'RubyLexer': ('pip._vendor.pygments.lexers.ruby', 'Ruby', ('ruby', 'rb', 'duby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx', '*.duby', 'Gemfile', 'Vagrantfile'), ('text/x-ruby', 'application/x-ruby')),
- 'RustLexer': ('pip._vendor.pygments.lexers.rust', 'Rust', ('rust', 'rs'), ('*.rs', '*.rs.in'), ('text/rust', 'text/x-rust')),
- 'SASLexer': ('pip._vendor.pygments.lexers.sas', 'SAS', ('sas',), ('*.SAS', '*.sas'), ('text/x-sas', 'text/sas', 'application/x-sas')),
- 'SLexer': ('pip._vendor.pygments.lexers.r', 'S', ('splus', 's', 'r'), ('*.S', '*.R', '.Rhistory', '.Rprofile', '.Renviron'), ('text/S-plus', 'text/S', 'text/x-r-source', 'text/x-r', 'text/x-R', 'text/x-r-history', 'text/x-r-profile')),
- 'SMLLexer': ('pip._vendor.pygments.lexers.ml', 'Standard ML', ('sml',), ('*.sml', '*.sig', '*.fun'), ('text/x-standardml', 'application/x-standardml')),
- 'SNBTLexer': ('pip._vendor.pygments.lexers.minecraft', 'SNBT', ('snbt',), ('*.snbt',), ('text/snbt',)),
- 'SarlLexer': ('pip._vendor.pygments.lexers.jvm', 'SARL', ('sarl',), ('*.sarl',), ('text/x-sarl',)),
- 'SassLexer': ('pip._vendor.pygments.lexers.css', 'Sass', ('sass',), ('*.sass',), ('text/x-sass',)),
- 'SaviLexer': ('pip._vendor.pygments.lexers.savi', 'Savi', ('savi',), ('*.savi',), ()),
- 'ScalaLexer': ('pip._vendor.pygments.lexers.jvm', 'Scala', ('scala',), ('*.scala',), ('text/x-scala',)),
- 'ScamlLexer': ('pip._vendor.pygments.lexers.html', 'Scaml', ('scaml',), ('*.scaml',), ('text/x-scaml',)),
- 'ScdocLexer': ('pip._vendor.pygments.lexers.scdoc', 'scdoc', ('scdoc', 'scd'), ('*.scd', '*.scdoc'), ()),
- 'SchemeLexer': ('pip._vendor.pygments.lexers.lisp', 'Scheme', ('scheme', 'scm'), ('*.scm', '*.ss'), ('text/x-scheme', 'application/x-scheme')),
- 'ScilabLexer': ('pip._vendor.pygments.lexers.matlab', 'Scilab', ('scilab',), ('*.sci', '*.sce', '*.tst'), ('text/scilab',)),
- 'ScssLexer': ('pip._vendor.pygments.lexers.css', 'SCSS', ('scss',), ('*.scss',), ('text/x-scss',)),
- 'SedLexer': ('pip._vendor.pygments.lexers.textedit', 'Sed', ('sed', 'gsed', 'ssed'), ('*.sed', '*.[gs]sed'), ('text/x-sed',)),
- 'ShExCLexer': ('pip._vendor.pygments.lexers.rdf', 'ShExC', ('shexc', 'shex'), ('*.shex',), ('text/shex',)),
- 'ShenLexer': ('pip._vendor.pygments.lexers.lisp', 'Shen', ('shen',), ('*.shen',), ('text/x-shen', 'application/x-shen')),
- 'SieveLexer': ('pip._vendor.pygments.lexers.sieve', 'Sieve', ('sieve',), ('*.siv', '*.sieve'), ()),
- 'SilverLexer': ('pip._vendor.pygments.lexers.verification', 'Silver', ('silver',), ('*.sil', '*.vpr'), ()),
- 'SingularityLexer': ('pip._vendor.pygments.lexers.configs', 'Singularity', ('singularity',), ('*.def', 'Singularity'), ()),
- 'SlashLexer': ('pip._vendor.pygments.lexers.slash', 'Slash', ('slash',), ('*.sla',), ()),
- 'SlimLexer': ('pip._vendor.pygments.lexers.webmisc', 'Slim', ('slim',), ('*.slim',), ('text/x-slim',)),
- 'SlurmBashLexer': ('pip._vendor.pygments.lexers.shell', 'Slurm', ('slurm', 'sbatch'), ('*.sl',), ()),
- 'SmaliLexer': ('pip._vendor.pygments.lexers.dalvik', 'Smali', ('smali',), ('*.smali',), ('text/smali',)),
- 'SmalltalkLexer': ('pip._vendor.pygments.lexers.smalltalk', 'Smalltalk', ('smalltalk', 'squeak', 'st'), ('*.st',), ('text/x-smalltalk',)),
- 'SmartGameFormatLexer': ('pip._vendor.pygments.lexers.sgf', 'SmartGameFormat', ('sgf',), ('*.sgf',), ()),
- 'SmartyLexer': ('pip._vendor.pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)),
- 'SmithyLexer': ('pip._vendor.pygments.lexers.smithy', 'Smithy', ('smithy',), ('*.smithy',), ()),
- 'SnobolLexer': ('pip._vendor.pygments.lexers.snobol', 'Snobol', ('snobol',), ('*.snobol',), ('text/x-snobol',)),
- 'SnowballLexer': ('pip._vendor.pygments.lexers.dsls', 'Snowball', ('snowball',), ('*.sbl',), ()),
- 'SolidityLexer': ('pip._vendor.pygments.lexers.solidity', 'Solidity', ('solidity',), ('*.sol',), ()),
- 'SophiaLexer': ('pip._vendor.pygments.lexers.sophia', 'Sophia', ('sophia',), ('*.aes',), ()),
- 'SourcePawnLexer': ('pip._vendor.pygments.lexers.pawn', 'SourcePawn', ('sp',), ('*.sp',), ('text/x-sourcepawn',)),
- 'SourcesListLexer': ('pip._vendor.pygments.lexers.installers', 'Debian Sourcelist', ('debsources', 'sourceslist', 'sources.list'), ('sources.list',), ()),
- 'SparqlLexer': ('pip._vendor.pygments.lexers.rdf', 'SPARQL', ('sparql',), ('*.rq', '*.sparql'), ('application/sparql-query',)),
- 'SpiceLexer': ('pip._vendor.pygments.lexers.spice', 'Spice', ('spice', 'spicelang'), ('*.spice',), ('text/x-spice',)),
- 'SqlJinjaLexer': ('pip._vendor.pygments.lexers.templates', 'SQL+Jinja', ('sql+jinja',), ('*.sql', '*.sql.j2', '*.sql.jinja2'), ()),
- 'SqlLexer': ('pip._vendor.pygments.lexers.sql', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)),
- 'SqliteConsoleLexer': ('pip._vendor.pygments.lexers.sql', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)),
- 'SquidConfLexer': ('pip._vendor.pygments.lexers.configs', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)),
- 'SrcinfoLexer': ('pip._vendor.pygments.lexers.srcinfo', 'Srcinfo', ('srcinfo',), ('.SRCINFO',), ()),
- 'SspLexer': ('pip._vendor.pygments.lexers.templates', 'Scalate Server Page', ('ssp',), ('*.ssp',), ('application/x-ssp',)),
- 'StanLexer': ('pip._vendor.pygments.lexers.modeling', 'Stan', ('stan',), ('*.stan',), ()),
- 'StataLexer': ('pip._vendor.pygments.lexers.stata', 'Stata', ('stata', 'do'), ('*.do', '*.ado'), ('text/x-stata', 'text/stata', 'application/x-stata')),
- 'SuperColliderLexer': ('pip._vendor.pygments.lexers.supercollider', 'SuperCollider', ('supercollider', 'sc'), ('*.sc', '*.scd'), ('application/supercollider', 'text/supercollider')),
- 'SwiftLexer': ('pip._vendor.pygments.lexers.objective', 'Swift', ('swift',), ('*.swift',), ('text/x-swift',)),
- 'SwigLexer': ('pip._vendor.pygments.lexers.c_like', 'SWIG', ('swig',), ('*.swg', '*.i'), ('text/swig',)),
- 'SystemVerilogLexer': ('pip._vendor.pygments.lexers.hdl', 'systemverilog', ('systemverilog', 'sv'), ('*.sv', '*.svh'), ('text/x-systemverilog',)),
- 'TAPLexer': ('pip._vendor.pygments.lexers.testing', 'TAP', ('tap',), ('*.tap',), ()),
- 'TNTLexer': ('pip._vendor.pygments.lexers.tnt', 'Typographic Number Theory', ('tnt',), ('*.tnt',), ()),
- 'TOMLLexer': ('pip._vendor.pygments.lexers.configs', 'TOML', ('toml',), ('*.toml', 'Pipfile', 'poetry.lock'), ()),
- 'Tads3Lexer': ('pip._vendor.pygments.lexers.int_fiction', 'TADS 3', ('tads3',), ('*.t',), ()),
- 'TalLexer': ('pip._vendor.pygments.lexers.tal', 'Tal', ('tal', 'uxntal'), ('*.tal',), ('text/x-uxntal',)),
- 'TasmLexer': ('pip._vendor.pygments.lexers.asm', 'TASM', ('tasm',), ('*.asm', '*.ASM', '*.tasm'), ('text/x-tasm',)),
- 'TclLexer': ('pip._vendor.pygments.lexers.tcl', 'Tcl', ('tcl',), ('*.tcl', '*.rvt'), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')),
- 'TcshLexer': ('pip._vendor.pygments.lexers.shell', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)),
- 'TcshSessionLexer': ('pip._vendor.pygments.lexers.shell', 'Tcsh Session', ('tcshcon',), (), ()),
- 'TeaTemplateLexer': ('pip._vendor.pygments.lexers.templates', 'Tea', ('tea',), ('*.tea',), ('text/x-tea',)),
- 'TealLexer': ('pip._vendor.pygments.lexers.teal', 'teal', ('teal',), ('*.teal',), ()),
- 'TeraTermLexer': ('pip._vendor.pygments.lexers.teraterm', 'Tera Term macro', ('teratermmacro', 'teraterm', 'ttl'), ('*.ttl',), ('text/x-teratermmacro',)),
- 'TermcapLexer': ('pip._vendor.pygments.lexers.configs', 'Termcap', ('termcap',), ('termcap', 'termcap.src'), ()),
- 'TerminfoLexer': ('pip._vendor.pygments.lexers.configs', 'Terminfo', ('terminfo',), ('terminfo', 'terminfo.src'), ()),
- 'TerraformLexer': ('pip._vendor.pygments.lexers.configs', 'Terraform', ('terraform', 'tf', 'hcl'), ('*.tf', '*.hcl'), ('application/x-tf', 'application/x-terraform')),
- 'TexLexer': ('pip._vendor.pygments.lexers.markup', 'TeX', ('tex', 'latex'), ('*.tex', '*.aux', '*.toc'), ('text/x-tex', 'text/x-latex')),
- 'TextLexer': ('pip._vendor.pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)),
- 'ThingsDBLexer': ('pip._vendor.pygments.lexers.thingsdb', 'ThingsDB', ('ti', 'thingsdb'), ('*.ti',), ()),
- 'ThriftLexer': ('pip._vendor.pygments.lexers.dsls', 'Thrift', ('thrift',), ('*.thrift',), ('application/x-thrift',)),
- 'TiddlyWiki5Lexer': ('pip._vendor.pygments.lexers.markup', 'tiddler', ('tid',), ('*.tid',), ('text/vnd.tiddlywiki',)),
- 'TlbLexer': ('pip._vendor.pygments.lexers.tlb', 'Tl-b', ('tlb',), ('*.tlb',), ()),
- 'TodotxtLexer': ('pip._vendor.pygments.lexers.textfmts', 'Todotxt', ('todotxt',), ('todo.txt', '*.todotxt'), ('text/x-todo',)),
- 'TransactSqlLexer': ('pip._vendor.pygments.lexers.sql', 'Transact-SQL', ('tsql', 't-sql'), ('*.sql',), ('text/x-tsql',)),
- 'TreetopLexer': ('pip._vendor.pygments.lexers.parsers', 'Treetop', ('treetop',), ('*.treetop', '*.tt'), ()),
- 'TurtleLexer': ('pip._vendor.pygments.lexers.rdf', 'Turtle', ('turtle',), ('*.ttl',), ('text/turtle', 'application/x-turtle')),
- 'TwigHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Twig', ('html+twig',), ('*.twig',), ('text/html+twig',)),
- 'TwigLexer': ('pip._vendor.pygments.lexers.templates', 'Twig', ('twig',), (), ('application/x-twig',)),
- 'TypeScriptLexer': ('pip._vendor.pygments.lexers.javascript', 'TypeScript', ('typescript', 'ts'), ('*.ts',), ('application/x-typescript', 'text/x-typescript')),
- 'TypoScriptCssDataLexer': ('pip._vendor.pygments.lexers.typoscript', 'TypoScriptCssData', ('typoscriptcssdata',), (), ()),
- 'TypoScriptHtmlDataLexer': ('pip._vendor.pygments.lexers.typoscript', 'TypoScriptHtmlData', ('typoscripthtmldata',), (), ()),
- 'TypoScriptLexer': ('pip._vendor.pygments.lexers.typoscript', 'TypoScript', ('typoscript',), ('*.typoscript',), ('text/x-typoscript',)),
- 'UL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'UL4', ('ul4',), ('*.ul4',), ()),
- 'UcodeLexer': ('pip._vendor.pygments.lexers.unicon', 'ucode', ('ucode',), ('*.u', '*.u1', '*.u2'), ()),
- 'UniconLexer': ('pip._vendor.pygments.lexers.unicon', 'Unicon', ('unicon',), ('*.icn',), ('text/unicon',)),
- 'UnixConfigLexer': ('pip._vendor.pygments.lexers.configs', 'Unix/Linux config files', ('unixconfig', 'linuxconfig'), (), ()),
- 'UrbiscriptLexer': ('pip._vendor.pygments.lexers.urbi', 'UrbiScript', ('urbiscript',), ('*.u',), ('application/x-urbiscript',)),
- 'UsdLexer': ('pip._vendor.pygments.lexers.usd', 'USD', ('usd', 'usda'), ('*.usd', '*.usda'), ()),
- 'VBScriptLexer': ('pip._vendor.pygments.lexers.basic', 'VBScript', ('vbscript',), ('*.vbs', '*.VBS'), ()),
- 'VCLLexer': ('pip._vendor.pygments.lexers.varnish', 'VCL', ('vcl',), ('*.vcl',), ('text/x-vclsrc',)),
- 'VCLSnippetLexer': ('pip._vendor.pygments.lexers.varnish', 'VCLSnippets', ('vclsnippets', 'vclsnippet'), (), ('text/x-vclsnippet',)),
- 'VCTreeStatusLexer': ('pip._vendor.pygments.lexers.console', 'VCTreeStatus', ('vctreestatus',), (), ()),
- 'VGLLexer': ('pip._vendor.pygments.lexers.dsls', 'VGL', ('vgl',), ('*.rpf',), ()),
- 'ValaLexer': ('pip._vendor.pygments.lexers.c_like', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)),
- 'VbNetAspxLexer': ('pip._vendor.pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
- 'VbNetLexer': ('pip._vendor.pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet', 'lobas', 'oobas', 'sobas'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')),
- 'VelocityHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Velocity', ('html+velocity',), (), ('text/html+velocity',)),
- 'VelocityLexer': ('pip._vendor.pygments.lexers.templates', 'Velocity', ('velocity',), ('*.vm', '*.fhtml'), ()),
- 'VelocityXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Velocity', ('xml+velocity',), (), ('application/xml+velocity',)),
- 'VerilogLexer': ('pip._vendor.pygments.lexers.hdl', 'verilog', ('verilog', 'v'), ('*.v',), ('text/x-verilog',)),
- 'VhdlLexer': ('pip._vendor.pygments.lexers.hdl', 'vhdl', ('vhdl',), ('*.vhdl', '*.vhd'), ('text/x-vhdl',)),
- 'VimLexer': ('pip._vendor.pygments.lexers.textedit', 'VimL', ('vim',), ('*.vim', '.vimrc', '.exrc', '.gvimrc', '_vimrc', '_exrc', '_gvimrc', 'vimrc', 'gvimrc'), ('text/x-vim',)),
- 'WDiffLexer': ('pip._vendor.pygments.lexers.diff', 'WDiff', ('wdiff',), ('*.wdiff',), ()),
- 'WatLexer': ('pip._vendor.pygments.lexers.webassembly', 'WebAssembly', ('wast', 'wat'), ('*.wat', '*.wast'), ()),
- 'WebIDLLexer': ('pip._vendor.pygments.lexers.webidl', 'Web IDL', ('webidl',), ('*.webidl',), ()),
- 'WgslLexer': ('pip._vendor.pygments.lexers.wgsl', 'WebGPU Shading Language', ('wgsl',), ('*.wgsl',), ('text/wgsl',)),
- 'WhileyLexer': ('pip._vendor.pygments.lexers.whiley', 'Whiley', ('whiley',), ('*.whiley',), ('text/x-whiley',)),
- 'WikitextLexer': ('pip._vendor.pygments.lexers.markup', 'Wikitext', ('wikitext', 'mediawiki'), (), ('text/x-wiki',)),
- 'WoWTocLexer': ('pip._vendor.pygments.lexers.wowtoc', 'World of Warcraft TOC', ('wowtoc',), ('*.toc',), ()),
- 'WrenLexer': ('pip._vendor.pygments.lexers.wren', 'Wren', ('wren',), ('*.wren',), ()),
- 'X10Lexer': ('pip._vendor.pygments.lexers.x10', 'X10', ('x10', 'xten'), ('*.x10',), ('text/x-x10',)),
- 'XMLUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'XML+UL4', ('xml+ul4',), ('*.xmlul4',), ()),
- 'XQueryLexer': ('pip._vendor.pygments.lexers.webmisc', 'XQuery', ('xquery', 'xqy', 'xq', 'xql', 'xqm'), ('*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'), ('text/xquery', 'application/xquery')),
- 'XmlDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), ('*.xml.j2', '*.xml.jinja2'), ('application/xml+django', 'application/xml+jinja')),
- 'XmlErbLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Ruby', ('xml+ruby', 'xml+erb'), (), ('application/xml+ruby',)),
- 'XmlLexer': ('pip._vendor.pygments.lexers.html', 'XML', ('xml',), ('*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd', '*.wsdl', '*.wsf'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml')),
- 'XmlPhpLexer': ('pip._vendor.pygments.lexers.templates', 'XML+PHP', ('xml+php',), (), ('application/xml+php',)),
- 'XmlSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)),
- 'XorgLexer': ('pip._vendor.pygments.lexers.xorg', 'Xorg', ('xorg.conf',), ('xorg.conf',), ()),
- 'XppLexer': ('pip._vendor.pygments.lexers.dotnet', 'X++', ('xpp', 'x++'), ('*.xpp',), ()),
- 'XsltLexer': ('pip._vendor.pygments.lexers.html', 'XSLT', ('xslt',), ('*.xsl', '*.xslt', '*.xpl'), ('application/xsl+xml', 'application/xslt+xml')),
- 'XtendLexer': ('pip._vendor.pygments.lexers.jvm', 'Xtend', ('xtend',), ('*.xtend',), ('text/x-xtend',)),
- 'XtlangLexer': ('pip._vendor.pygments.lexers.lisp', 'xtlang', ('extempore',), ('*.xtm',), ()),
- 'YamlJinjaLexer': ('pip._vendor.pygments.lexers.templates', 'YAML+Jinja', ('yaml+jinja', 'salt', 'sls'), ('*.sls', '*.yaml.j2', '*.yml.j2', '*.yaml.jinja2', '*.yml.jinja2'), ('text/x-yaml+jinja', 'text/x-sls')),
- 'YamlLexer': ('pip._vendor.pygments.lexers.data', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',)),
- 'YangLexer': ('pip._vendor.pygments.lexers.yang', 'YANG', ('yang',), ('*.yang',), ('application/yang',)),
- 'ZeekLexer': ('pip._vendor.pygments.lexers.dsls', 'Zeek', ('zeek', 'bro'), ('*.zeek', '*.bro'), ()),
- 'ZephirLexer': ('pip._vendor.pygments.lexers.php', 'Zephir', ('zephir',), ('*.zep',), ()),
- 'ZigLexer': ('pip._vendor.pygments.lexers.zig', 'Zig', ('zig',), ('*.zig',), ('text/zig',)),
- 'apdlexer': ('pip._vendor.pygments.lexers.apdlexer', 'ANSYS parametric design language', ('ansys', 'apdl'), ('*.ans',), ()),
-}
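The table that ends above is pip's vendored copy of Pygments' lazy lexer index: each entry maps a lexer class to its module path, display name, command-line aliases, filename globs, and MIME types, so the class is only imported when first requested. As a hedged illustration of how such a table is consumed (using the standalone `pygments` package, since the `pip._vendor` copy is a private implementation detail and not meant to be imported directly), lookups normally go through helpers like `get_lexer_by_name` and `get_lexer_for_filename`:

```python
# Minimal sketch, assuming the standalone `pygments` package is installed.
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import get_lexer_by_name, get_lexer_for_filename

# Resolve through an alias ('py' is listed for PythonLexer in the table above).
lexer = get_lexer_by_name("py")
print(lexer.name)  # -> Python

# Resolve through a filename glob ('*.toml' is listed for TOMLLexer above).
lexer = get_lexer_for_filename("pyproject.toml")
print(lexer.name)  # -> TOML

print(highlight('name = "demo"\n', lexer, TerminalFormatter()))
```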
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/alias.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/alias.py
deleted file mode 100644
index 452a9244ea6766d8cf94425fb583583ef740baee..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/alias.py
+++ /dev/null
@@ -1,78 +0,0 @@
-from distutils.errors import DistutilsOptionError
-
-from setuptools.command.setopt import edit_config, option_base, config_file
-
-
-def shquote(arg):
- """Quote an argument for later parsing by shlex.split()"""
- for c in '"', "'", "\\", "#":
- if c in arg:
- return repr(arg)
- if arg.split() != [arg]:
- return repr(arg)
- return arg
-
-
-class alias(option_base):
- """Define a shortcut that invokes one or more commands"""
-
- description = "define a shortcut to invoke one or more commands"
- command_consumes_arguments = True
-
- user_options = [
- ('remove', 'r', 'remove (unset) the alias'),
- ] + option_base.user_options
-
- boolean_options = option_base.boolean_options + ['remove']
-
- def initialize_options(self):
- option_base.initialize_options(self)
- self.args = None
- self.remove = None
-
- def finalize_options(self):
- option_base.finalize_options(self)
- if self.remove and len(self.args) != 1:
- raise DistutilsOptionError(
- "Must specify exactly one argument (the alias name) when "
- "using --remove"
- )
-
- def run(self):
- aliases = self.distribution.get_option_dict('aliases')
-
- if not self.args:
- print("Command Aliases")
- print("---------------")
- for alias in aliases:
- print("setup.py alias", format_alias(alias, aliases))
- return
-
- elif len(self.args) == 1:
- alias, = self.args
- if self.remove:
- command = None
- elif alias in aliases:
- print("setup.py alias", format_alias(alias, aliases))
- return
- else:
- print("No alias definition found for %r" % alias)
- return
- else:
- alias = self.args[0]
- command = ' '.join(map(shquote, self.args[1:]))
-
- edit_config(self.filename, {'aliases': {alias: command}}, self.dry_run)
-
-
-def format_alias(name, aliases):
- source, command = aliases[name]
- if source == config_file('global'):
- source = '--global-config '
- elif source == config_file('user'):
- source = '--user-config '
- elif source == config_file('local'):
- source = ''
- else:
- source = '--filename=%r' % source
- return source + name + ' ' + command
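The deleted `alias` command stores its shortcuts in the `[aliases]` section of setup.cfg, and `shquote` above exists so that multi-word or quote-containing arguments survive a later `shlex.split()`. A small sketch of that round trip, with the function copied from the file above (the expected output assumes CPython's default `repr` quoting):

```python
# Sketch: quote arguments the way alias.shquote does, then check that
# shlex.split() recovers the original argument list.
import shlex


def shquote(arg):
    """Quote an argument for later parsing by shlex.split()"""
    for c in '"', "'", "\\", "#":
        if c in arg:
            return repr(arg)
    if arg.split() != [arg]:
        return repr(arg)
    return arg


args = ["pytest", "--maxfail=1", "tests/unit tests", "it's"]
command = " ".join(map(shquote, args))
print(command)                        # pytest --maxfail=1 'tests/unit tests' "it's"
print(shlex.split(command) == args)   # True
```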
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/upload_docs.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/upload_docs.py
deleted file mode 100644
index 077c9d2fcdc22ff0a6f8ea51bfd77695f81bcf5d..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/upload_docs.py
+++ /dev/null
@@ -1,215 +0,0 @@
-"""upload_docs
-
-Implements a Distutils 'upload_docs' subcommand (upload documentation to
-sites other than PyPi such as devpi).
-"""
-
-from base64 import standard_b64encode
-from distutils import log
-from distutils.errors import DistutilsOptionError
-import os
-import socket
-import zipfile
-import tempfile
-import shutil
-import itertools
-import functools
-import http.client
-import urllib.parse
-
-from .._importlib import metadata
-from ..warnings import SetuptoolsDeprecationWarning
-
-from .upload import upload
-
-
-def _encode(s):
- return s.encode('utf-8', 'surrogateescape')
-
-
-class upload_docs(upload):
- # override the default repository as upload_docs isn't
- # supported by Warehouse (and won't be).
- DEFAULT_REPOSITORY = 'https://pypi.python.org/pypi/'
-
- description = 'Upload documentation to sites other than PyPi such as devpi'
-
- user_options = [
- ('repository=', 'r',
- "url of repository [default: %s]" % upload.DEFAULT_REPOSITORY),
- ('show-response', None,
- 'display full response text from server'),
- ('upload-dir=', None, 'directory to upload'),
- ]
- boolean_options = upload.boolean_options
-
- def has_sphinx(self):
- return bool(
- self.upload_dir is None
- and metadata.entry_points(group='distutils.commands', name='build_sphinx')
- )
-
- sub_commands = [('build_sphinx', has_sphinx)]
-
- def initialize_options(self):
- upload.initialize_options(self)
- self.upload_dir = None
- self.target_dir = None
-
- def finalize_options(self):
- log.warn(
- "Upload_docs command is deprecated. Use Read the Docs "
- "(https://readthedocs.org) instead.")
- upload.finalize_options(self)
- if self.upload_dir is None:
- if self.has_sphinx():
- build_sphinx = self.get_finalized_command('build_sphinx')
- self.target_dir = dict(build_sphinx.builder_target_dirs)['html']
- else:
- build = self.get_finalized_command('build')
- self.target_dir = os.path.join(build.build_base, 'docs')
- else:
- self.ensure_dirname('upload_dir')
- self.target_dir = self.upload_dir
- self.announce('Using upload directory %s' % self.target_dir)
-
- def create_zipfile(self, filename):
- zip_file = zipfile.ZipFile(filename, "w")
- try:
- self.mkpath(self.target_dir) # just in case
- for root, dirs, files in os.walk(self.target_dir):
- if root == self.target_dir and not files:
- tmpl = "no files found in upload directory '%s'"
- raise DistutilsOptionError(tmpl % self.target_dir)
- for name in files:
- full = os.path.join(root, name)
- relative = root[len(self.target_dir):].lstrip(os.path.sep)
- dest = os.path.join(relative, name)
- zip_file.write(full, dest)
- finally:
- zip_file.close()
-
- def run(self):
- SetuptoolsDeprecationWarning.emit(
- "Deprecated command",
- """
- upload_docs is deprecated and will be removed in a future version.
- Instead, use tools like devpi and Read the Docs; or lower level tools like
- httpie and curl to interact directly with your hosting service API.
- """,
- due_date=(2023, 9, 26), # warning introduced in 27 Jul 2022
- )
-
- # Run sub commands
- for cmd_name in self.get_sub_commands():
- self.run_command(cmd_name)
-
- tmp_dir = tempfile.mkdtemp()
- name = self.distribution.metadata.get_name()
- zip_file = os.path.join(tmp_dir, "%s.zip" % name)
- try:
- self.create_zipfile(zip_file)
- self.upload_file(zip_file)
- finally:
- shutil.rmtree(tmp_dir)
-
- @staticmethod
- def _build_part(item, sep_boundary):
- key, values = item
- title = '\nContent-Disposition: form-data; name="%s"' % key
- # handle multiple entries for the same name
- if not isinstance(values, list):
- values = [values]
- for value in values:
- if isinstance(value, tuple):
- title += '; filename="%s"' % value[0]
- value = value[1]
- else:
- value = _encode(value)
- yield sep_boundary
- yield _encode(title)
- yield b"\n\n"
- yield value
- if value and value[-1:] == b'\r':
- yield b'\n' # write an extra newline (lurve Macs)
-
- @classmethod
- def _build_multipart(cls, data):
- """
- Build up the MIME payload for the POST data
- """
- boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'
- sep_boundary = b'\n--' + boundary.encode('ascii')
- end_boundary = sep_boundary + b'--'
- end_items = end_boundary, b"\n",
- builder = functools.partial(
- cls._build_part,
- sep_boundary=sep_boundary,
- )
- part_groups = map(builder, data.items())
- parts = itertools.chain.from_iterable(part_groups)
- body_items = itertools.chain(parts, end_items)
- content_type = 'multipart/form-data; boundary=%s' % boundary
- return b''.join(body_items), content_type
-
- def upload_file(self, filename):
- with open(filename, 'rb') as f:
- content = f.read()
- meta = self.distribution.metadata
- data = {
- ':action': 'doc_upload',
- 'name': meta.get_name(),
- 'content': (os.path.basename(filename), content),
- }
- # set up the authentication
- credentials = _encode(self.username + ':' + self.password)
- credentials = standard_b64encode(credentials).decode('ascii')
- auth = "Basic " + credentials
-
- body, ct = self._build_multipart(data)
-
- msg = "Submitting documentation to %s" % (self.repository)
- self.announce(msg, log.INFO)
-
- # build the Request
- # We can't use urllib2 since we need to send the Basic
- # auth right with the first request
- schema, netloc, url, params, query, fragments = \
- urllib.parse.urlparse(self.repository)
- assert not params and not query and not fragments
- if schema == 'http':
- conn = http.client.HTTPConnection(netloc)
- elif schema == 'https':
- conn = http.client.HTTPSConnection(netloc)
- else:
- raise AssertionError("unsupported schema " + schema)
-
- data = ''
- try:
- conn.connect()
- conn.putrequest("POST", url)
- content_type = ct
- conn.putheader('Content-type', content_type)
- conn.putheader('Content-length', str(len(body)))
- conn.putheader('Authorization', auth)
- conn.endheaders()
- conn.send(body)
- except socket.error as e:
- self.announce(str(e), log.ERROR)
- return
-
- r = conn.getresponse()
- if r.status == 200:
- msg = 'Server response (%s): %s' % (r.status, r.reason)
- self.announce(msg, log.INFO)
- elif r.status == 301:
- location = r.getheader('Location')
- if location is None:
- location = 'https://pythonhosted.org/%s/' % meta.get_name()
- msg = 'Upload successful. Visit %s' % location
- self.announce(msg, log.INFO)
- else:
- msg = 'Upload failed (%s): %s' % (r.status, r.reason)
- self.announce(msg, log.ERROR)
- if self.show_response:
- print('-' * 75, r.read(), '-' * 75)
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/README.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/README.md
deleted file mode 100644
index f560384045ab4f6bc2beabef1170308fca117eb3..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-## Unit Tests
-
-To run the unittests, do:
-```
-cd detectron2
-python -m unittest discover -v -s ./tests
-```
-
-There are also end-to-end inference & training tests, in [dev/run_*_tests.sh](../dev).
diff --git a/spaces/TheThanos/anything-v3.0_krn/utils.py b/spaces/TheThanos/anything-v3.0_krn/utils.py
deleted file mode 100644
index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000
--- a/spaces/TheThanos/anything-v3.0_krn/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def is_google_colab():
- try:
- import google.colab
- return True
-    except ImportError:
- return False
\ No newline at end of file
diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/process.py b/spaces/UserXTheUnknown/stablediffusion-infinity/process.py
deleted file mode 100644
index 5db1495ac8098c0260f5fdf5a60ca35a043b461c..0000000000000000000000000000000000000000
--- a/spaces/UserXTheUnknown/stablediffusion-infinity/process.py
+++ /dev/null
@@ -1,395 +0,0 @@
-"""
-https://github.com/Trinkle23897/Fast-Poisson-Image-Editing
-MIT License
-
-Copyright (c) 2022 Jiayi Weng
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-"""
-import os
-from abc import ABC, abstractmethod
-from typing import Any, Optional, Tuple
-
-import numpy as np
-
-from fpie import np_solver
-
-import scipy
-import scipy.signal
-
-CPU_COUNT = os.cpu_count() or 1
-DEFAULT_BACKEND = "numpy"
-ALL_BACKEND = ["numpy"]
-
-try:
- from fpie import numba_solver
- ALL_BACKEND += ["numba"]
- DEFAULT_BACKEND = "numba"
-except ImportError:
- numba_solver = None # type: ignore
-
-try:
- from fpie import taichi_solver
- ALL_BACKEND += ["taichi-cpu", "taichi-gpu"]
- DEFAULT_BACKEND = "taichi-cpu"
-except ImportError:
- taichi_solver = None # type: ignore
-
-# try:
-# from fpie import core_gcc # type: ignore
-# DEFAULT_BACKEND = "gcc"
-# ALL_BACKEND.append("gcc")
-# except ImportError:
-# core_gcc = None
-
-# try:
-# from fpie import core_openmp # type: ignore
-# DEFAULT_BACKEND = "openmp"
-# ALL_BACKEND.append("openmp")
-# except ImportError:
-# core_openmp = None
-
-# try:
-# from mpi4py import MPI
-
-# from fpie import core_mpi # type: ignore
-# ALL_BACKEND.append("mpi")
-# except ImportError:
-# MPI = None # type: ignore
-# core_mpi = None
-
-try:
- from fpie import core_cuda # type: ignore
- DEFAULT_BACKEND = "cuda"
- ALL_BACKEND.append("cuda")
-except ImportError:
- core_cuda = None
-
-
-class BaseProcessor(ABC):
- """API definition for processor class."""
-
- def __init__(
- self, gradient: str, rank: int, backend: str, core: Optional[Any]
- ):
- if core is None:
- error_msg = {
- "numpy":
- "Please run `pip install numpy`.",
- "numba":
- "Please run `pip install numba`.",
- "gcc":
- "Please install cmake and gcc in your operating system.",
- "openmp":
- "Please make sure your gcc is compatible with `-fopenmp` option.",
- "mpi":
- "Please install MPI and run `pip install mpi4py`.",
- "cuda":
- "Please make sure nvcc and cuda-related libraries are available.",
- "taichi":
- "Please run `pip install taichi`.",
- }
- print(error_msg[backend.split("-")[0]])
-
- raise AssertionError(f"Invalid backend {backend}.")
-
- self.gradient = gradient
- self.rank = rank
- self.backend = backend
- self.core = core
- self.root = rank == 0
-
- def mixgrad(self, a: np.ndarray, b: np.ndarray) -> np.ndarray:
- if self.gradient == "src":
- return a
- if self.gradient == "avg":
- return (a + b) / 2
- # mix gradient, see Equ. 12 in PIE paper
- mask = np.abs(a) < np.abs(b)
- a[mask] = b[mask]
- return a
-
- @abstractmethod
- def reset(
- self,
- src: np.ndarray,
- mask: np.ndarray,
- tgt: np.ndarray,
- mask_on_src: Tuple[int, int],
- mask_on_tgt: Tuple[int, int],
- ) -> int:
- pass
-
- def sync(self) -> None:
- self.core.sync()
-
- @abstractmethod
- def step(self, iteration: int) -> Optional[Tuple[np.ndarray, np.ndarray]]:
- pass
-
-
-class EquProcessor(BaseProcessor):
- """PIE Jacobi equation processor."""
-
- def __init__(
- self,
- gradient: str = "max",
- backend: str = DEFAULT_BACKEND,
- n_cpu: int = CPU_COUNT,
- min_interval: int = 100,
- block_size: int = 1024,
- ):
- core: Optional[Any] = None
- rank = 0
-
- if backend == "numpy":
- core = np_solver.EquSolver()
- elif backend == "numba" and numba_solver is not None:
- core = numba_solver.EquSolver()
- elif backend == "gcc":
- core = core_gcc.EquSolver()
- elif backend == "openmp" and core_openmp is not None:
- core = core_openmp.EquSolver(n_cpu)
- elif backend == "mpi" and core_mpi is not None:
- core = core_mpi.EquSolver(min_interval)
- rank = MPI.COMM_WORLD.Get_rank()
- elif backend == "cuda" and core_cuda is not None:
- core = core_cuda.EquSolver(block_size)
- elif backend.startswith("taichi") and taichi_solver is not None:
- core = taichi_solver.EquSolver(backend, n_cpu, block_size)
-
- super().__init__(gradient, rank, backend, core)
-
- def mask2index(
- self, mask: np.ndarray
- ) -> Tuple[np.ndarray, int, np.ndarray, np.ndarray]:
- x, y = np.nonzero(mask)
- max_id = x.shape[0] + 1
- index = np.zeros((max_id, 3))
- ids = self.core.partition(mask)
- ids[mask == 0] = 0 # reserve id=0 for constant
- index = ids[x, y].argsort()
- return ids, max_id, x[index], y[index]
-
- def reset(
- self,
- src: np.ndarray,
- mask: np.ndarray,
- tgt: np.ndarray,
- mask_on_src: Tuple[int, int],
- mask_on_tgt: Tuple[int, int],
- ) -> int:
- assert self.root
- # check validity
- # assert 0 <= mask_on_src[0] and 0 <= mask_on_src[1]
- # assert mask_on_src[0] + mask.shape[0] <= src.shape[0]
- # assert mask_on_src[1] + mask.shape[1] <= src.shape[1]
- # assert mask_on_tgt[0] + mask.shape[0] <= tgt.shape[0]
- # assert mask_on_tgt[1] + mask.shape[1] <= tgt.shape[1]
-
- if len(mask.shape) == 3:
- mask = mask.mean(-1)
- mask = (mask >= 128).astype(np.int32)
-
- # zero-out edge
- mask[0] = 0
- mask[-1] = 0
- mask[:, 0] = 0
- mask[:, -1] = 0
-
- x, y = np.nonzero(mask)
- x0, x1 = x.min() - 1, x.max() + 2
- y0, y1 = y.min() - 1, y.max() + 2
- mask_on_src = (x0 + mask_on_src[0], y0 + mask_on_src[1])
- mask_on_tgt = (x0 + mask_on_tgt[0], y0 + mask_on_tgt[1])
- mask = mask[x0:x1, y0:y1]
- ids, max_id, index_x, index_y = self.mask2index(mask)
-
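-    # gather each masked pixel and its four neighbors from both the source and target images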
- src_x, src_y = index_x + mask_on_src[0], index_y + mask_on_src[1]
- tgt_x, tgt_y = index_x + mask_on_tgt[0], index_y + mask_on_tgt[1]
-
- src_C = src[src_x, src_y].astype(np.float32)
- src_U = src[src_x - 1, src_y].astype(np.float32)
- src_D = src[src_x + 1, src_y].astype(np.float32)
- src_L = src[src_x, src_y - 1].astype(np.float32)
- src_R = src[src_x, src_y + 1].astype(np.float32)
- tgt_C = tgt[tgt_x, tgt_y].astype(np.float32)
- tgt_U = tgt[tgt_x - 1, tgt_y].astype(np.float32)
- tgt_D = tgt[tgt_x + 1, tgt_y].astype(np.float32)
- tgt_L = tgt[tgt_x, tgt_y - 1].astype(np.float32)
- tgt_R = tgt[tgt_x, tgt_y + 1].astype(np.float32)
-
- grad = self.mixgrad(src_C - src_L, tgt_C - tgt_L) \
- + self.mixgrad(src_C - src_R, tgt_C - tgt_R) \
- + self.mixgrad(src_C - src_U, tgt_C - tgt_U) \
- + self.mixgrad(src_C - src_D, tgt_C - tgt_D)
-
- A = np.zeros((max_id, 4), np.int32)
- X = np.zeros((max_id, 3), np.float32)
- B = np.zeros((max_id, 3), np.float32)
-
- X[1:] = tgt[index_x + mask_on_tgt[0], index_y + mask_on_tgt[1]]
- # four-way
- A[1:, 0] = ids[index_x - 1, index_y]
- A[1:, 1] = ids[index_x + 1, index_y]
- A[1:, 2] = ids[index_x, index_y - 1]
- A[1:, 3] = ids[index_x, index_y + 1]
- B[1:] = grad
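-    # fold the fixed target-image values of neighbors that lie outside the mask into the
-    # right-hand side B (the Dirichlet boundary condition of the Poisson system)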
- m = (mask[index_x - 1, index_y] == 0).astype(float).reshape(-1, 1)
- B[1:] += m * tgt[index_x + mask_on_tgt[0] - 1, index_y + mask_on_tgt[1]]
- m = (mask[index_x, index_y - 1] == 0).astype(float).reshape(-1, 1)
- B[1:] += m * tgt[index_x + mask_on_tgt[0], index_y + mask_on_tgt[1] - 1]
- m = (mask[index_x, index_y + 1] == 0).astype(float).reshape(-1, 1)
- B[1:] += m * tgt[index_x + mask_on_tgt[0], index_y + mask_on_tgt[1] + 1]
- m = (mask[index_x + 1, index_y] == 0).astype(float).reshape(-1, 1)
- B[1:] += m * tgt[index_x + mask_on_tgt[0] + 1, index_y + mask_on_tgt[1]]
-
- self.tgt = tgt.copy()
- self.tgt_index = (index_x + mask_on_tgt[0], index_y + mask_on_tgt[1])
- self.core.reset(max_id, A, X, B)
- return max_id
-
- def step(self, iteration: int) -> Optional[Tuple[np.ndarray, np.ndarray]]:
- result = self.core.step(iteration)
- if self.root:
- x, err = result
- self.tgt[self.tgt_index] = x[1:]
- return self.tgt, err
- return None
-
-
-class GridProcessor(BaseProcessor):
- """PIE grid processor."""
-
- def __init__(
- self,
- gradient: str = "max",
- backend: str = DEFAULT_BACKEND,
- n_cpu: int = CPU_COUNT,
- min_interval: int = 100,
- block_size: int = 1024,
- grid_x: int = 8,
- grid_y: int = 8,
- ):
- core: Optional[Any] = None
- rank = 0
-
- if backend == "numpy":
- core = np_solver.GridSolver()
- elif backend == "numba" and numba_solver is not None:
- core = numba_solver.GridSolver()
- elif backend == "gcc":
- core = core_gcc.GridSolver(grid_x, grid_y)
- elif backend == "openmp" and core_openmp is not None:
- core = core_openmp.GridSolver(grid_x, grid_y, n_cpu)
- elif backend == "mpi" and core_mpi is not None:
- core = core_mpi.GridSolver(min_interval)
- rank = MPI.COMM_WORLD.Get_rank()
- elif backend == "cuda" and core_cuda is not None:
- core = core_cuda.GridSolver(grid_x, grid_y)
- elif backend.startswith("taichi") and taichi_solver is not None:
- core = taichi_solver.GridSolver(
- grid_x, grid_y, backend, n_cpu, block_size
- )
-
- super().__init__(gradient, rank, backend, core)
-
- def reset(
- self,
- src: np.ndarray,
- mask: np.ndarray,
- tgt: np.ndarray,
- mask_on_src: Tuple[int, int],
- mask_on_tgt: Tuple[int, int],
- ) -> int:
- assert self.root
- # check validity
- # assert 0 <= mask_on_src[0] and 0 <= mask_on_src[1]
- # assert mask_on_src[0] + mask.shape[0] <= src.shape[0]
- # assert mask_on_src[1] + mask.shape[1] <= src.shape[1]
- # assert mask_on_tgt[0] + mask.shape[0] <= tgt.shape[0]
- # assert mask_on_tgt[1] + mask.shape[1] <= tgt.shape[1]
-
- if len(mask.shape) == 3:
- mask = mask.mean(-1)
- mask = (mask >= 128).astype(np.int32)
-
- # zero-out edge
- mask[0] = 0
- mask[-1] = 0
- mask[:, 0] = 0
- mask[:, -1] = 0
-
- x, y = np.nonzero(mask)
- x0, x1 = x.min() - 1, x.max() + 2
- y0, y1 = y.min() - 1, y.max() + 2
- mask = mask[x0:x1, y0:y1]
- max_id = np.prod(mask.shape)
-
- src_crop = src[mask_on_src[0] + x0:mask_on_src[0] + x1,
- mask_on_src[1] + y0:mask_on_src[1] + y1].astype(np.float32)
- tgt_crop = tgt[mask_on_tgt[0] + x0:mask_on_tgt[0] + x1,
- mask_on_tgt[1] + y0:mask_on_tgt[1] + y1].astype(np.float32)
- grad = np.zeros([*mask.shape, 3], np.float32)
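-    # accumulate mixed source/target gradients along the four axis directions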
- grad[1:] += self.mixgrad(
- src_crop[1:] - src_crop[:-1], tgt_crop[1:] - tgt_crop[:-1]
- )
- grad[:-1] += self.mixgrad(
- src_crop[:-1] - src_crop[1:], tgt_crop[:-1] - tgt_crop[1:]
- )
- grad[:, 1:] += self.mixgrad(
- src_crop[:, 1:] - src_crop[:, :-1], tgt_crop[:, 1:] - tgt_crop[:, :-1]
- )
- grad[:, :-1] += self.mixgrad(
- src_crop[:, :-1] - src_crop[:, 1:], tgt_crop[:, :-1] - tgt_crop[:, 1:]
- )
-
- grad[mask == 0] = 0
- if True:
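-      # zero the gradient on the mask's inner boundary ring: the 3x3 box filter
-      # marks masked pixels whose neighborhood is not fully inside the mask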
- kernel = [[1] * 3 for _ in range(3)]
- nmask = mask.copy()
- nmask[nmask > 0] = 1
- res = scipy.signal.convolve2d(
- nmask, kernel, mode="same", boundary="fill", fillvalue=1
- )
- res[nmask < 1] = 0
- res[res == 9] = 0
- res[res > 0] = 1
-      grad[res > 0] = 0
- # ylst, xlst = res.nonzero()
- # for y, x in zip(ylst, xlst):
- # grad[y,x]=0
- # for yi in range(-1,2):
- # for xi in range(-1,2):
- # grad[y+yi,x+xi]=0
- self.x0 = mask_on_tgt[0] + x0
- self.x1 = mask_on_tgt[0] + x1
- self.y0 = mask_on_tgt[1] + y0
- self.y1 = mask_on_tgt[1] + y1
- self.tgt = tgt.copy()
- self.core.reset(max_id, mask, tgt_crop, grad)
- return max_id
-
- def step(self, iteration: int) -> Optional[Tuple[np.ndarray, np.ndarray]]:
- result = self.core.step(iteration)
- if self.root:
- tgt, err = result
- self.tgt[self.x0:self.x1, self.y0:self.y1] = tgt
- return self.tgt, err
- return None
diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/__init__.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/__init__.py
deleted file mode 100644
index 2af819d61d589cfec2e0ca46612a7456f42b831a..0000000000000000000000000000000000000000
--- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copied from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-from .groundingdino import build_groundingdino
diff --git a/spaces/Widium/Style-Recreation/functions/core.py b/spaces/Widium/Style-Recreation/functions/core.py
deleted file mode 100644
index 026b1fb1d66072963bd92337c7b7b6a2e168d166..0000000000000000000000000000000000000000
--- a/spaces/Widium/Style-Recreation/functions/core.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# *************************************************************************** #
-# #
-# core.py #
-# #
-# By: Widium #
-# Github : https://github.com/widium #
-# #
-# Created: 2023/05/05 15:59:03 by Widium #
-# Updated: 2023/05/05 15:59:03 by Widium #
-# #
-# **************************************************************************** #
-
-import tensorflow as tf
-
-from .image import load_image_path
-from .image import tensor_to_image
-from .model import StyleRecreationModel
-
-EPOCHS = 135
-
-# Protect tf.function
-TENSOR_EAGERLY = True
-tf.config.run_functions_eagerly(TENSOR_EAGERLY)
-
-# **************************************************************************** #
-
-def style_generation(style_img_path : str):
- """
- Generate an image with the style of the given style image using StyleRecreationModel.
-
- Args:
- style_img_path (str): Path to the style image file.
-
- Returns:
- final_img (Image): Generated image with the style applied.
- total_time (float): Time taken to generate the styled image in seconds.
- """
-    if style_img_path is None:
- return (None, None)
-
- style_img = load_image_path(style_img_path)
-
- print(f"Input Image Shape : {style_img.shape}")
-
- model = StyleRecreationModel()
-
- style_generated, total_time = model.recreate_style(
- style_img_array=style_img,
- num_epochs=EPOCHS,
- )
-
- final_img = tensor_to_image(style_generated)
-
- return (final_img, total_time)
diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/tabular/models.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/tabular/models.py
deleted file mode 100644
index 2022c245c905b3213c974ef4a30b30eafe5ee77f..0000000000000000000000000000000000000000
--- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/tabular/models.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from ..torch_core import *
-from ..layers import *
-from ..basic_data import *
-from ..basic_train import *
-from ..train import ClassificationInterpretation
-
-__all__ = ['TabularModel']
-
-class TabularModel(Module):
- "Basic model for tabular data."
- def __init__(self, emb_szs:ListSizes, n_cont:int, out_sz:int, layers:Collection[int], ps:Collection[float]=None,
- emb_drop:float=0., y_range:OptRange=None, use_bn:bool=True, bn_final:bool=False):
- super().__init__()
- ps = ifnone(ps, [0]*len(layers))
- ps = listify(ps, layers)
- self.embeds = nn.ModuleList([embedding(ni, nf) for ni,nf in emb_szs])
- self.emb_drop = nn.Dropout(emb_drop)
- self.bn_cont = nn.BatchNorm1d(n_cont)
- n_emb = sum(e.embedding_dim for e in self.embeds)
- self.n_emb,self.n_cont,self.y_range = n_emb,n_cont,y_range
- sizes = self.get_sizes(layers, out_sz)
- actns = [nn.ReLU(inplace=True) for _ in range(len(sizes)-2)] + [None]
- layers = []
- for i,(n_in,n_out,dp,act) in enumerate(zip(sizes[:-1],sizes[1:],[0.]+ps,actns)):
- layers += bn_drop_lin(n_in, n_out, bn=use_bn and i!=0, p=dp, actn=act)
- if bn_final: layers.append(nn.BatchNorm1d(sizes[-1]))
- self.layers = nn.Sequential(*layers)
-
- def get_sizes(self, layers, out_sz):
- return [self.n_emb + self.n_cont] + layers + [out_sz]
-
- def forward(self, x_cat:Tensor, x_cont:Tensor) -> Tensor:
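-        # embed categorical inputs, concatenate with batch-normed continuous inputs, then run the MLP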
- if self.n_emb != 0:
- x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
- x = torch.cat(x, 1)
- x = self.emb_drop(x)
- if self.n_cont != 0:
- x_cont = self.bn_cont(x_cont)
- x = torch.cat([x, x_cont], 1) if self.n_emb != 0 else x_cont
- x = self.layers(x)
- if self.y_range is not None:
- x = (self.y_range[1]-self.y_range[0]) * torch.sigmoid(x) + self.y_range[0]
- return x
-
-@classmethod
-def _cl_int_from_learner(cls, learn:Learner, ds_type=DatasetType.Valid, activ:nn.Module=None):
- "Creates an instance of 'ClassificationInterpretation"
- preds = learn.get_preds(ds_type=ds_type, activ=activ, with_loss=True)
- return cls(learn, *preds, ds_type=ds_type)
-
-def _cl_int_plot_top_losses(self, k, largest:bool=True, return_table:bool=False)->Optional[plt.Figure]:
- "Generates a dataframe of 'top_losses' along with their prediction, actual, loss, and probability of the actual class."
- tl_val, tl_idx = self.top_losses(k, largest)
- classes = self.data.classes
- cat_names = self.data.x.cat_names
- cont_names = self.data.x.cont_names
- df = pd.DataFrame(columns=[['Prediction', 'Actual', 'Loss', 'Probability'] + cat_names + cont_names])
- for i, idx in enumerate(tl_idx):
- da, cl = self.data.dl(self.ds_type).dataset[idx]
- cl = int(cl)
- t1 = str(da)
- t1 = t1.split(';')
- arr = []
- arr.extend([classes[self.pred_class[idx]], classes[cl], f'{self.losses[idx]:.2f}',
- f'{self.preds[idx][cl]:.2f}'])
- for x in range(len(t1)-1):
- _, value = t1[x].rsplit(' ', 1)
- arr.append(value)
- df.loc[i] = arr
- display(df)
- return_fig = return_table
- if ifnone(return_fig, defaults.return_fig): return df
-
-
-ClassificationInterpretation.from_learner = _cl_int_from_learner
-ClassificationInterpretation.plot_top_losses = _cl_int_plot_top_losses
-
-def _learner_interpret(learn:Learner, ds_type:DatasetType = DatasetType.Valid):
- "Create a 'ClassificationInterpretation' object from 'learner' on 'ds_type'."
- return ClassificationInterpretation.from_learner(learn, ds_type=ds_type)
-
-Learner.interpret = _learner_interpret
diff --git a/spaces/Xhaheen/Hyper_Bot_openai/README.md b/spaces/Xhaheen/Hyper_Bot_openai/README.md
deleted file mode 100644
index 8a53e73dca924a500157d5e9523642f80afe0b9a..0000000000000000000000000000000000000000
--- a/spaces/Xhaheen/Hyper_Bot_openai/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Hyper Bot
-emoji: 🤖
-colorFrom: gray
-colorTo: yellow
-sdk: static
-pinned: false
-duplicated_from: Xhaheen/Hyper_Bot_ben
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/ONNXVITS_infer.py b/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/ONNXVITS_infer.py
deleted file mode 100644
index af04e614c8f1ac43faf363b1a9f6bfd667fbde21..0000000000000000000000000000000000000000
--- a/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/ONNXVITS_infer.py
+++ /dev/null
@@ -1,201 +0,0 @@
-import torch
-import commons
-import models
-
-import math
-from torch import nn
-from torch.nn import functional as F
-
-import modules
-import attentions
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- emotion_embedding):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emotion_embedding = emotion_embedding
-
- if self.n_vocab != 0:
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- if emotion_embedding:
- self.emo_proj = nn.Linear(1024, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, emotion_embedding=None):
- if self.n_vocab != 0:
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- if emotion_embedding is not None:
- print("emotion added")
- x = x + self.emo_proj(emotion_embedding.unsqueeze(1))
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
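-        # encode the spectrogram and sample the latent z with the reparameterization trick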
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class SynthesizerTrn(models.SynthesizerTrn):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- emotion_embedding=False,
- ONNX_dir="./ONNX_net/",
- **kwargs):
-
- super().__init__(
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=n_speakers,
- gin_channels=gin_channels,
- use_sdp=use_sdp,
- **kwargs
- )
- self.ONNX_dir = ONNX_dir
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- emotion_embedding)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None,
- emotion_embedding=None):
- from ONNXVITS_utils import runonnx
- with torch.no_grad():
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding)
-
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- # logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- logw = runonnx(f"{self.ONNX_dir}dp.onnx", x=x.numpy(), x_mask=x_mask.numpy(), g=g.numpy())
- logw = torch.from_numpy(logw[0])
-
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-        logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
-
- # z = self.flow(z_p, y_mask, g=g, reverse=True)
- z = runonnx(f"{self.ONNX_dir}flow.onnx", z_p=z_p.numpy(), y_mask=y_mask.numpy(), g=g.numpy())
- z = torch.from_numpy(z[0])
-
- # o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- o = runonnx(f"{self.ONNX_dir}dec.onnx", z_in=(z * y_mask)[:, :, :max_len].numpy(), g=g.numpy())
- o = torch.from_numpy(o[0])
-
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
\ No newline at end of file
diff --git a/spaces/Y-T-G/Blur-Anything/tracker/inference/kv_memory_store.py b/spaces/Y-T-G/Blur-Anything/tracker/inference/kv_memory_store.py
deleted file mode 100644
index ffe2378170e6a6dc905ca2567deafb66410827b4..0000000000000000000000000000000000000000
--- a/spaces/Y-T-G/Blur-Anything/tracker/inference/kv_memory_store.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import torch
-from typing import List
-
-
-class KeyValueMemoryStore:
- """
- Works for key/value pairs type storage
- e.g., working and long-term memory
- """
-
- """
- An object group is created when new objects enter the video
- Objects in the same group share the same temporal extent
- i.e., objects initialized in the same frame are in the same group
- For DAVIS/interactive, there is only one object group
- For YouTubeVOS, there can be multiple object groups
- """
-
- def __init__(self, count_usage: bool):
- self.count_usage = count_usage
-
- # keys are stored in a single tensor and are shared between groups/objects
- # values are stored as a list indexed by object groups
- self.k = None
- self.v = []
- self.obj_groups = []
- # for debugging only
- self.all_objects = []
-
- # shrinkage and selection are also single tensors
- self.s = self.e = None
-
- # usage
- if self.count_usage:
- self.use_count = self.life_count = None
-
- def add(self, key, value, shrinkage, selection, objects: List[int]):
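-        # fresh usage/life counters for the newly added keys (life starts near zero to avoid division by zero)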
- new_count = torch.zeros(
- (key.shape[0], 1, key.shape[2]), device=key.device, dtype=torch.float32
- )
- new_life = (
- torch.zeros(
- (key.shape[0], 1, key.shape[2]), device=key.device, dtype=torch.float32
- )
- + 1e-7
- )
-
- # add the key
- if self.k is None:
- self.k = key
- self.s = shrinkage
- self.e = selection
- if self.count_usage:
- self.use_count = new_count
- self.life_count = new_life
- else:
- self.k = torch.cat([self.k, key], -1)
- if shrinkage is not None:
- self.s = torch.cat([self.s, shrinkage], -1)
- if selection is not None:
- self.e = torch.cat([self.e, selection], -1)
- if self.count_usage:
- self.use_count = torch.cat([self.use_count, new_count], -1)
- self.life_count = torch.cat([self.life_count, new_life], -1)
-
- # add the value
- if objects is not None:
- # When objects is given, v is a tensor; used in working memory
- assert isinstance(value, torch.Tensor)
- # First consume objects that are already in the memory bank
- # cannot use set here because we need to preserve order
- # shift by one as background is not part of value
- remaining_objects = [obj - 1 for obj in objects]
- for gi, group in enumerate(self.obj_groups):
- for obj in group:
- # should properly raise an error if there are overlaps in obj_groups
- remaining_objects.remove(obj)
- self.v[gi] = torch.cat([self.v[gi], value[group]], -1)
-
- # If there are remaining objects, add them as a new group
- if len(remaining_objects) > 0:
- new_group = list(remaining_objects)
- self.v.append(value[new_group])
- self.obj_groups.append(new_group)
- self.all_objects.extend(new_group)
-
- assert (
- sorted(self.all_objects) == self.all_objects
- ), "Objects MUST be inserted in sorted order "
- else:
- # When objects is not given, v is a list that already has the object groups sorted
- # used in long-term memory
- assert isinstance(value, list)
- for gi, gv in enumerate(value):
- if gv is None:
- continue
- if gi < self.num_groups:
- self.v[gi] = torch.cat([self.v[gi], gv], -1)
- else:
- self.v.append(gv)
-
- def update_usage(self, usage):
- # increase all life count by 1
- # increase use of indexed elements
- if not self.count_usage:
- return
-
- self.use_count += usage.view_as(self.use_count)
- self.life_count += 1
-
- def sieve_by_range(self, start: int, end: int, min_size: int):
- # keep only the elements *outside* of this range (with some boundary conditions)
- # i.e., concat (a[:start], a[end:])
- # min_size is only used for values, we do not sieve values under this size
- # (because they are not consolidated)
-
- if end == 0:
- # negative 0 would not work as the end index!
- self.k = self.k[:, :, :start]
- if self.count_usage:
- self.use_count = self.use_count[:, :, :start]
- self.life_count = self.life_count[:, :, :start]
- if self.s is not None:
- self.s = self.s[:, :, :start]
- if self.e is not None:
- self.e = self.e[:, :, :start]
-
- for gi in range(self.num_groups):
- if self.v[gi].shape[-1] >= min_size:
- self.v[gi] = self.v[gi][:, :, :start]
- else:
- self.k = torch.cat([self.k[:, :, :start], self.k[:, :, end:]], -1)
- if self.count_usage:
- self.use_count = torch.cat(
- [self.use_count[:, :, :start], self.use_count[:, :, end:]], -1
- )
- self.life_count = torch.cat(
- [self.life_count[:, :, :start], self.life_count[:, :, end:]], -1
- )
- if self.s is not None:
- self.s = torch.cat([self.s[:, :, :start], self.s[:, :, end:]], -1)
- if self.e is not None:
- self.e = torch.cat([self.e[:, :, :start], self.e[:, :, end:]], -1)
-
- for gi in range(self.num_groups):
- if self.v[gi].shape[-1] >= min_size:
- self.v[gi] = torch.cat(
- [self.v[gi][:, :, :start], self.v[gi][:, :, end:]], -1
- )
-
- def remove_obsolete_features(self, max_size: int):
- # normalize with life duration
- usage = self.get_usage().flatten()
-
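-        # keep only the max_size most-used entries: find the usage cutoff among the
-        # (size - max_size) least-used elements and drop everything at or below it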
- values, _ = torch.topk(
- usage, k=(self.size - max_size), largest=False, sorted=True
- )
- survived = usage > values[-1]
-
- self.k = self.k[:, :, survived]
- self.s = self.s[:, :, survived] if self.s is not None else None
- # Long-term memory does not store ek so this should not be needed
- self.e = self.e[:, :, survived] if self.e is not None else None
- if self.num_groups > 1:
- raise NotImplementedError(
- """The current data structure does not support feature removal with
- multiple object groups (e.g., some objects start to appear later in the video)
- The indices for "survived" is based on keys but not all values are present for every key
- Basically we need to remap the indices for keys to values
- """
- )
- for gi in range(self.num_groups):
- self.v[gi] = self.v[gi][:, :, survived]
-
- self.use_count = self.use_count[:, :, survived]
- self.life_count = self.life_count[:, :, survived]
-
- def get_usage(self):
- # return normalized usage
- if not self.count_usage:
- raise RuntimeError("I did not count usage!")
- else:
- usage = self.use_count / self.life_count
- return usage
-
- def get_all_sliced(self, start: int, end: int):
- # return k, sk, ek, usage in order, sliced by start and end
-
- if end == 0:
- # negative 0 would not work as the end index!
- k = self.k[:, :, start:]
- sk = self.s[:, :, start:] if self.s is not None else None
- ek = self.e[:, :, start:] if self.e is not None else None
- usage = self.get_usage()[:, :, start:]
- else:
- k = self.k[:, :, start:end]
- sk = self.s[:, :, start:end] if self.s is not None else None
- ek = self.e[:, :, start:end] if self.e is not None else None
- usage = self.get_usage()[:, :, start:end]
-
- return k, sk, ek, usage
-
- def get_v_size(self, ni: int):
- return self.v[ni].shape[2]
-
- def engaged(self):
- return self.k is not None
-
- @property
- def size(self):
- if self.k is None:
- return 0
- else:
- return self.k.shape[-1]
-
- @property
- def num_groups(self):
- return len(self.v)
-
- @property
- def key(self):
- return self.k
-
- @property
- def value(self):
- return self.v
-
- @property
- def shrinkage(self):
- return self.s
-
- @property
- def selection(self):
- return self.e
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/utils/logging.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/utils/logging.py
deleted file mode 100644
index 8c1c77d10b2a6b06a0c57d4fdf1802e3bd5f705f..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/utils/logging.py
+++ /dev/null
@@ -1,340 +0,0 @@
-# coding=utf-8
-# Copyright 2020 Optuna, Hugging Face
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Logging utilities."""
-
-import logging
-import os
-import sys
-import threading
-from logging import CRITICAL # NOQA
-from logging import DEBUG # NOQA
-from logging import ERROR # NOQA
-from logging import FATAL # NOQA
-from logging import INFO # NOQA
-from logging import NOTSET # NOQA
-from logging import WARN # NOQA
-from logging import WARNING # NOQA
-from typing import Optional
-
-from tqdm import auto as tqdm_lib
-
-
-_lock = threading.Lock()
-_default_handler: Optional[logging.Handler] = None
-
-log_levels = {
- "debug": logging.DEBUG,
- "info": logging.INFO,
- "warning": logging.WARNING,
- "error": logging.ERROR,
- "critical": logging.CRITICAL,
-}
-
-_default_log_level = logging.WARNING
-
-_tqdm_active = True
-
-
-def _get_default_logging_level():
- """
- If DIFFUSERS_VERBOSITY env var is set to one of the valid choices return that as the new default level. If it is
- not - fall back to `_default_log_level`
- """
- env_level_str = os.getenv("DIFFUSERS_VERBOSITY", None)
- if env_level_str:
- if env_level_str in log_levels:
- return log_levels[env_level_str]
- else:
- logging.getLogger().warning(
- f"Unknown option DIFFUSERS_VERBOSITY={env_level_str}, "
- f"has to be one of: { ', '.join(log_levels.keys()) }"
- )
- return _default_log_level
-
-
-def _get_library_name() -> str:
- return __name__.split(".")[0]
-
-
-def _get_library_root_logger() -> logging.Logger:
- return logging.getLogger(_get_library_name())
-
-
-def _configure_library_root_logger() -> None:
- global _default_handler
-
- with _lock:
- if _default_handler:
- # This library has already configured the library root logger.
- return
- _default_handler = logging.StreamHandler() # Set sys.stderr as stream.
- _default_handler.flush = sys.stderr.flush
-
- # Apply our default configuration to the library root logger.
- library_root_logger = _get_library_root_logger()
- library_root_logger.addHandler(_default_handler)
- library_root_logger.setLevel(_get_default_logging_level())
- library_root_logger.propagate = False
-
-
-def _reset_library_root_logger() -> None:
- global _default_handler
-
- with _lock:
- if not _default_handler:
- return
-
- library_root_logger = _get_library_root_logger()
- library_root_logger.removeHandler(_default_handler)
- library_root_logger.setLevel(logging.NOTSET)
- _default_handler = None
-
-
-def get_log_levels_dict():
- return log_levels
-
-
-def get_logger(name: Optional[str] = None) -> logging.Logger:
- """
- Return a logger with the specified name.
-
- This function is not supposed to be directly accessed unless you are writing a custom diffusers module.
- """
-
- if name is None:
- name = _get_library_name()
-
- _configure_library_root_logger()
- return logging.getLogger(name)
-
-
-def get_verbosity() -> int:
- """
- Return the current level for the 🤗 Diffusers' root logger as an int.
-
- Returns:
- `int`: The logging level.
-
-
-
-    🤗 Diffusers has the following logging levels:
-
- - 50: `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL`
- - 40: `diffusers.logging.ERROR`
- - 30: `diffusers.logging.WARNING` or `diffusers.logging.WARN`
- - 20: `diffusers.logging.INFO`
- - 10: `diffusers.logging.DEBUG`
-
- """
-
- _configure_library_root_logger()
- return _get_library_root_logger().getEffectiveLevel()
-
-
-def set_verbosity(verbosity: int) -> None:
- """
- Set the verbosity level for the 🤗 Diffusers' root logger.
-
- Args:
- verbosity (`int`):
- Logging level, e.g., one of:
-
- - `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL`
- - `diffusers.logging.ERROR`
- - `diffusers.logging.WARNING` or `diffusers.logging.WARN`
- - `diffusers.logging.INFO`
- - `diffusers.logging.DEBUG`
- """
-
- _configure_library_root_logger()
- _get_library_root_logger().setLevel(verbosity)
-
-
-def set_verbosity_info():
- """Set the verbosity to the `INFO` level."""
- return set_verbosity(INFO)
-
-
-def set_verbosity_warning():
- """Set the verbosity to the `WARNING` level."""
- return set_verbosity(WARNING)
-
-
-def set_verbosity_debug():
- """Set the verbosity to the `DEBUG` level."""
- return set_verbosity(DEBUG)
-
-
-def set_verbosity_error():
- """Set the verbosity to the `ERROR` level."""
- return set_verbosity(ERROR)
-
-
-def disable_default_handler() -> None:
- """Disable the default handler of the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert _default_handler is not None
- _get_library_root_logger().removeHandler(_default_handler)
-
-
-def enable_default_handler() -> None:
- """Enable the default handler of the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert _default_handler is not None
- _get_library_root_logger().addHandler(_default_handler)
-
-
-def add_handler(handler: logging.Handler) -> None:
- """adds a handler to the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert handler is not None
- _get_library_root_logger().addHandler(handler)
-
-
-def remove_handler(handler: logging.Handler) -> None:
- """removes given handler from the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert handler is not None and handler not in _get_library_root_logger().handlers
- _get_library_root_logger().removeHandler(handler)
-
-
-def disable_propagation() -> None:
- """
- Disable propagation of the library log outputs. Note that log propagation is disabled by default.
- """
-
- _configure_library_root_logger()
- _get_library_root_logger().propagate = False
-
-
-def enable_propagation() -> None:
- """
- Enable propagation of the library log outputs. Please disable the HuggingFace Diffusers' default handler to prevent
- double logging if the root logger has been configured.
- """
-
- _configure_library_root_logger()
- _get_library_root_logger().propagate = True
-
-
-def enable_explicit_format() -> None:
- """
- Enable explicit formatting for every HuggingFace Diffusers' logger. The explicit formatter is as follows:
- ```
- [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
- ```
- All handlers currently bound to the root logger are affected by this method.
- """
- handlers = _get_library_root_logger().handlers
-
- for handler in handlers:
- formatter = logging.Formatter("[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s")
- handler.setFormatter(formatter)
-
-
-def reset_format() -> None:
- """
- Resets the formatting for HuggingFace Diffusers' loggers.
-
- All handlers currently bound to the root logger are affected by this method.
- """
- handlers = _get_library_root_logger().handlers
-
- for handler in handlers:
- handler.setFormatter(None)
-
-
-def warning_advice(self, *args, **kwargs):
- """
- This method is identical to `logger.warning()`, but if env var DIFFUSERS_NO_ADVISORY_WARNINGS=1 is set, this
- warning will not be printed
- """
- no_advisory_warnings = os.getenv("DIFFUSERS_NO_ADVISORY_WARNINGS", False)
- if no_advisory_warnings:
- return
- self.warning(*args, **kwargs)
-
-
-logging.Logger.warning_advice = warning_advice
-
-
-class EmptyTqdm:
- """Dummy tqdm which doesn't do anything."""
-
- def __init__(self, *args, **kwargs): # pylint: disable=unused-argument
- self._iterator = args[0] if args else None
-
- def __iter__(self):
- return iter(self._iterator)
-
- def __getattr__(self, _):
- """Return empty function."""
-
- def empty_fn(*args, **kwargs): # pylint: disable=unused-argument
- return
-
- return empty_fn
-
- def __enter__(self):
- return self
-
- def __exit__(self, type_, value, traceback):
- return
-
-
-class _tqdm_cls:
- def __call__(self, *args, **kwargs):
- if _tqdm_active:
- return tqdm_lib.tqdm(*args, **kwargs)
- else:
- return EmptyTqdm(*args, **kwargs)
-
- def set_lock(self, *args, **kwargs):
- self._lock = None
- if _tqdm_active:
- return tqdm_lib.tqdm.set_lock(*args, **kwargs)
-
- def get_lock(self):
- if _tqdm_active:
- return tqdm_lib.tqdm.get_lock()
-
-
-tqdm = _tqdm_cls()
-
-
-def is_progress_bar_enabled() -> bool:
- """Return a boolean indicating whether tqdm progress bars are enabled."""
- global _tqdm_active
- return bool(_tqdm_active)
-
-
-def enable_progress_bar():
- """Enable tqdm progress bar."""
- global _tqdm_active
- _tqdm_active = True
-
-
-def disable_progress_bar():
- """Disable tqdm progress bar."""
- global _tqdm_active
- _tqdm_active = False
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/unexpected-problems-bugs.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/unexpected-problems-bugs.md
deleted file mode 100644
index 5db8f22415ff5c857ce83fb0d3de68211f775080..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/unexpected-problems-bugs.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-name: "😩 Unexpected behaviors"
-about: Report unexpected behaviors when using detectron2
-title: Please read & provide the following
-
----
-
-If you do not know the root cause of the problem, please post according to this template:
-
-## Instructions To Reproduce the Issue:
-
-Check https://stackoverflow.com/help/minimal-reproducible-example for how to ask good questions.
-Simplify the steps to reproduce the issue using suggestions from the above link, and provide them below:
-
-1. Full runnable code or full changes you made:
-```
-If making changes to the project itself, please use output of the following command:
-git rev-parse HEAD; git diff
-
-
-```
-2. What exact command you run:
-3. __Full logs__ or other relevant observations:
-```
-
-```
-
-## Expected behavior:
-
-If there is no obvious crash in the "full logs" provided above,
-please tell us the expected behavior.
-
-If you expect a model to converge / work better, we do not help with such issues, unless
-a model fails to reproduce the results in detectron2 model zoo, or proves existence of bugs.
-
-## Environment:
-
-Paste the output of the following command:
-```
-wget -nc -nv https://github.com/facebookresearch/detectron2/raw/main/detectron2/utils/collect_env.py && python collect_env.py
-```
-
-If your issue looks like an installation issue / environment issue,
-please first check common issues in https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues
diff --git a/spaces/Zhenhong/text-to-image-Stable-Diffusion-demo/README.md b/spaces/Zhenhong/text-to-image-Stable-Diffusion-demo/README.md
deleted file mode 100644
index 1f938d1199c6c4a70063fe512fa5cbdde15358f2..0000000000000000000000000000000000000000
--- a/spaces/Zhenhong/text-to-image-Stable-Diffusion-demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stable Diffusion v1-5
-emoji: 🛬
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Zulqrnain/FAST_NU_PAST_PAPERS/app.py b/spaces/Zulqrnain/FAST_NU_PAST_PAPERS/app.py
deleted file mode 100644
index 788a768298cd9cdaddee888fe3c344a760c4409a..0000000000000000000000000000000000000000
--- a/spaces/Zulqrnain/FAST_NU_PAST_PAPERS/app.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import openpyxl
-import nltk
-import string
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.metrics.pairwise import cosine_similarity
-import os
-import gradio as gr
-
-
-def remove_stopwords_and_punctuation(text):
-    # load the stopword list, closing the file when done
-    with open('stopwords.txt', 'r') as f:
-        stopwords = [line.strip() for line in f]
-
-    # remove punctuation
-    no_punct = "".join([char for char in text if char not in string.punctuation])
-
-
- # remove stopwords
- words = no_punct.split()
- no_stopwords = [word for word in words if word.lower() not in stopwords]
-
- # rejoin the words without stopwords and punctuation
- clean_text = " ".join(no_stopwords)
-
- return clean_text
-
-
-def fastpastpapers(query,mylist,filenames):
- query=remove_stopwords_and_punctuation(query)
- tokens = query.split()
- if len(tokens) == 1:
- ngram_range = (1, 1) # Use unigrams
- elif len(tokens) == 2:
- ngram_range = (2, 2) # Use bigrams
- else:
- ngram_range = (3, 3) # Use trigrams
-
- # Compute tf-idf vectors for the documents using the selected n-gram range
- vectorizer = TfidfVectorizer(ngram_range=ngram_range)
- tfidf_vectors = vectorizer.fit_transform(mylist)
-
- # Compute cosine similarity matrix for all pairs of documents
- cosine_sim_matrix = cosine_similarity(tfidf_vectors)
-
- # Compute the tf-idf vector for the query
- query_vector = vectorizer.transform([query])
-
- # Calculate the cosine similarity between the query vector and each document vector
- cosine_similarities = cosine_similarity(query_vector, tfidf_vectors)[0]
-
- # Sort the documents based on their similarity score to the query
- document_scores = [(filenames[i], cosine_similarities[i]) for i in range(len(mylist))]
- document_scores.sort(key=lambda x: x[1], reverse=True)
-
- doclisrresult = []
- scorelist =[]
-
- # Print the ranked list of documents and their similarity scores
- for i, (document, score) in enumerate(document_scores):
- doclisrresult.append(document)
- scorelist.append(score)
- if i==25:
- break
-
-
- return doclisrresult,scorelist
-
-
-def check(list1, list2):
- # create a dictionary to keep track of seen elements
- seen = {}
- # create new lists to store unique elements
- new_list1 = []
- new_list2 = []
- # iterate over both lists simultaneously
- for file_name, file_data in zip(list1, list2):
- # check if file_name has been seen before
- if file_name not in seen:
- # if not, add it to the dictionary and new lists
- seen[file_name] = True
- new_list1.append(file_name)
- new_list2.append(file_data)
- # return the updated lists
- return new_list1, new_list2
-
-
-def pastpaperssearchengine(query):
-
- # Load the workbook
- workbook = openpyxl.load_workbook('complete data word+pdf.xlsx')
-
- # Select the first worksheet
- worksheet = workbook.worksheets[0]
-
- # Initialize empty lists
- filename_list = []
- data_list = []
-
- # Loop over the rows, starting from the second row (skipping the first row)
- for row in worksheet.iter_rows(min_row=2, values_only=True):
- # Append the first column value to the filename list
- filename_list.append(row[0])
- # Append the second column value to the data list
- data_list.append(row[1])
-
-
-
- filename_list,data_list=check(filename_list,data_list)
- l1,l2 =fastpastpapers(query,data_list,filename_list)
- #l1,l2=check(l1,l2)
- engineresult = list()
- for i in range(0,len(l1)):
- item = "document ="+str(l1[i])+" => : {score ="+str(l2[i])+"}\n"
- engineresult.insert(i,item)
-
-
- string_list = "\n".join(engineresult)
-
- return string_list
-
-
-demo=gr.Interface(fn=pastpaperssearchengine,
-                  inputs=gr.Textbox(label="Enter Phrase to Search in documents"),
-                  outputs=gr.Textbox(label="Results==>"),
- title="FAST NUCES Past papers search engine")
-demo.launch(debug=True)
-
-
diff --git a/spaces/aadnk/faster-whisper-webui/src/vad.py b/spaces/aadnk/faster-whisper-webui/src/vad.py
deleted file mode 100644
index e68ee7391e93f539a05d548601f2d87168bb1282..0000000000000000000000000000000000000000
--- a/spaces/aadnk/faster-whisper-webui/src/vad.py
+++ /dev/null
@@ -1,568 +0,0 @@
-from abc import ABC, abstractmethod
-from collections import Counter, deque
-import time
-
-from typing import Any, Deque, Iterator, List, Dict
-
-from pprint import pprint
-from src.hooks.progressListener import ProgressListener
-from src.hooks.subTaskProgressListener import SubTaskProgressListener
-from src.hooks.whisperProgressHook import create_progress_listener_handle
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-from src.segments import merge_timestamps
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback
-
-# Workaround for https://github.com/tensorflow/tensorflow/issues/48797
-try:
- import tensorflow as tf
-except ModuleNotFoundError:
- # Error handling
- pass
-
-import torch
-
-import ffmpeg
-import numpy as np
-
-from src.utils import format_timestamp
-from enum import Enum
-
-class NonSpeechStrategy(Enum):
-    """
-    Strategy for handling the non-speech parts of the audio.
-    """
-    SKIP = 1
-    """
-    Ignore non-speech segments.
-    """
-    CREATE_SEGMENT = 2
-    """
-    Just treat non-speech segments as speech.
-    """
-    EXPAND_SEGMENT = 3
-    """
-    Expand speech segments into subsequent non-speech segments.
-    """
-
-# Defaults for Silero
-SPEECH_TRESHOLD = 0.3
-
-# Minimum size of segments to process
-MIN_SEGMENT_DURATION = 1
-
-# The maximum time for texts from old segments to be used in the next segment
-MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled)
-PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this
-
-VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio
-
-class TranscriptionConfig(ABC):
- def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- self.non_speech_strategy = non_speech_strategy
- self.segment_padding_left = segment_padding_left
- self.segment_padding_right = segment_padding_right
- self.max_silent_period = max_silent_period
- self.max_merge_size = max_merge_size
- self.max_prompt_window = max_prompt_window
- self.initial_segment_index = initial_segment_index
-
-class PeriodicTranscriptionConfig(TranscriptionConfig):
- def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index)
- self.periodic_duration = periodic_duration
-
-class AbstractTranscription(ABC):
- def __init__(self, sampling_rate: int = 16000):
- self.sampling_rate = sampling_rate
-
-    def get_audio_segment(self, audio: str, start_time: str = None, duration: str = None):
-        return load_audio(audio, self.sampling_rate, start_time, duration)
-
- def is_transcribe_timestamps_fast(self):
- """
- Determine if get_transcribe_timestamps is fast enough to not need parallelization.
- """
- return False
-
- @abstractmethod
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- return
-
- def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method,
- after merging the given segments using the specified configuration.
-
- Parameters
- ----------
-        timestamps: List[Dict[str, Any]]
-            The start/end timestamp segments to merge.
-        config: TranscriptionConfig
-            The transcription configuration.
-        total_duration: float
-            The total duration of the audio, in seconds.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size,
- config.segment_padding_left, config.segment_padding_right)
-
- if config.non_speech_strategy != NonSpeechStrategy.SKIP:
- # Expand segments to include the gaps between them
- if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT):
-                # When we have a prompt window, we create speech segments between each segment if we exceed the merge size
- merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size)
- elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT:
- # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment)
- merged = self.expand_gaps(merged, total_duration=total_duration)
- else:
- raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy))
-
- print("Transcribing non-speech:")
- pprint(merged)
- return merged
-
- def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig,
- progressListener: ProgressListener = None):
- """
-        Transcribe the given audio file.
-
- Parameters
- ----------
- audio: str
- The audio file.
-        whisperCallable: AbstractWhisperCallback
-            A callback object to call to transcribe each segment.
-        config: TranscriptionConfig
-            The transcription configuration.
-
-        Returns
-        -------
-        A dictionary with the transcribed text, segments and detected language.
- """
-
- try:
- max_audio_duration = self.get_audio_duration(audio, config)
- timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration)
-
- # Get speech timestamps from full audio file
- merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration)
-
- # A deque of transcribed segments that is passed to the next segment as a prompt
- prompt_window = deque()
-
- print("Processing timestamps:")
- pprint(merged)
-
- result = {
- 'text': "",
- 'segments': [],
- 'language': ""
- }
- languageCounter = Counter()
- detected_language = None
-
- segment_index = config.initial_segment_index
-
- # Calculate progress
- progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0
- progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged])
-
- # For each time segment, run whisper
- for segment in merged:
- segment_index += 1
- segment_start = segment['start']
- segment_end = segment['end']
- segment_expand_amount = segment.get('expand_amount', 0)
- segment_gap = segment.get('gap', False)
-
- segment_duration = segment_end - segment_start
-
- if segment_duration < MIN_SEGMENT_DURATION:
- continue
-
- # Audio to run on Whisper
- segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration))
- # Previous segments to use as a prompt
- segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None
-
- # Detected language
- detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None
-
- print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ",
- segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language)
-
- perf_start_time = time.perf_counter()
-
- scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration,
- sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration)
- segment_result = whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener)
-
- perf_end_time = time.perf_counter()
- print("Whisper took {} seconds".format(perf_end_time - perf_start_time))
-
- adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration)
-
- # Propagate expand amount to the segments
- if (segment_expand_amount > 0):
- segment_without_expansion = segment_duration - segment_expand_amount
-
- for adjusted_segment in adjusted_segments:
- adjusted_segment_end = adjusted_segment['end']
-
- # Add expand amount if the segment got expanded
- if (adjusted_segment_end > segment_without_expansion):
- adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion
-
- # Append to output
- result['text'] += segment_result['text']
- result['segments'].extend(adjusted_segments)
-
- # Increment detected language
- if not segment_gap:
- languageCounter[segment_result['language']] += 1
-
- # Update prompt window
- self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config)
-
- if detected_language is not None:
- result['language'] = detected_language
- finally:
- # Notify progress listener that we are done
- if progressListener is not None:
- progressListener.on_finished()
-        return result
-
- def get_audio_duration(self, audio: str, config: TranscriptionConfig):
- return get_audio_duration(audio)
-
- def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig):
- if (config.max_prompt_window is not None and config.max_prompt_window > 0):
- # Add segments to the current prompt window (unless it is a speech gap)
- if not segment_gap:
- for segment in adjusted_segments:
- if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB:
- prompt_window.append(segment)
-
- while (len(prompt_window) > 0):
- first_end_time = prompt_window[0].get('end', 0)
- # Time expanded in the segments should be discounted from the prompt window
- first_expand_time = prompt_window[0].get('expand_amount', 0)
-
- if (first_end_time - first_expand_time < segment_end - config.max_prompt_window):
- prompt_window.popleft()
- else:
- break
-
- def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float):
- result = []
- last_end_time = 0
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- if (last_end_time != segment_start):
- delta = segment_start - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } )
-
- last_end_time = segment_end
- result.append(segment)
-
- # Also include total duration if specified
- if (total_duration is not None and last_end_time < total_duration):
-            delta = total_duration - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } )
-
- return result
-
- # Expand the end time of each segment to the start of the next segment
- def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- # Expand if the gap actually exists
- if (delta >= 0):
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
-
- result.append(current_segment)
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- if (last_segment['end'] < total_duration):
- last_segment = last_segment.copy()
- last_segment['end'] = total_duration
- result[-1] = last_segment
-
- return result
-
- def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- expanded = False
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- if (max_expand_size is not None and delta <= max_expand_size):
- # Just expand the current segment
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
- expanded = True
-
- result.append(current_segment)
-
- # Add a gap to the next segment if needed
- if (delta >= 0 and not expanded):
- result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } )
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- delta = total_duration - last_segment['end']
-
- if (delta > 0):
- if (max_expand_size is not None and delta <= max_expand_size):
- # Expand the last segment
- last_segment = last_segment.copy()
- last_segment['expand_amount'] = delta
- last_segment['end'] = total_duration
- result[-1] = last_segment
- else:
- result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } )
-
- return result
-
- def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None):
- result = []
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- # Filter segments?
- if (max_source_time is not None):
- if (segment_start > max_source_time):
- continue
- segment_end = min(max_source_time, segment_end)
-
- new_segment = segment.copy()
-
- # Add to start and end
- new_segment['start'] = segment_start + adjust_seconds
- new_segment['end'] = segment_end + adjust_seconds
-
- # Handle words
- if ('words' in new_segment):
- for word in new_segment['words']:
- # Adjust start and end
- word['start'] = word['start'] + adjust_seconds
- word['end'] = word['end'] + adjust_seconds
-
- result.append(new_segment)
- return result
-
- def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float):
- result = []
-
- for entry in timestamps:
- start = entry['start']
- end = entry['end']
-
- result.append({
- 'start': start * factor,
- 'end': end * factor
- })
- return result
-
-
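# Illustrative sketch (not in the original file): how the gap-handling helpers
# above behave on two toy speech segments inside a 30 second clip, using the
# concrete VadPeriodicTranscription defined further below in this file.
#
#     vad = VadPeriodicTranscription()
#     segments = [{'start': 2.0, 'end': 5.0}, {'start': 12.0, 'end': 15.0}]
#
#     vad.expand_gaps(segments, total_duration=30.0)
#     # -> [{'start': 0, 'end': 2.0, 'gap': True},
#     #     {'start': 2.0, 'end': 12.0, 'expand_amount': 7.0},
#     #     {'start': 12.0, 'end': 30.0}]
#
#     vad.fill_gaps(segments, total_duration=30.0, max_expand_size=3.0)
#     # -> gaps longer than 3 seconds stay as explicit {'gap': True} entries
#     #    instead of being folded into the neighbouring speech segment.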
-class VadSileroTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None):
- super().__init__(sampling_rate=sampling_rate)
- self.model = None
- self.cache = cache
- self._initialize_model()
-
- def _initialize_model(self):
- if (self.cache is not None):
- model_key = "VadSileroTranscription"
- self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model)
- print("Loaded Silerio model from cache.")
- else:
- self.model, self.get_speech_timestamps = self._create_model()
- print("Created Silerio model")
-
- def _create_model(self):
- model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
-
- # Silero does not benefit from multi-threading
- torch.set_num_threads(1) # JIT
- (get_speech_timestamps, _, _, _, _) = utils
-
- return model, get_speech_timestamps
-
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time))
- perf_start_time = time.perf_counter()
-
-        # Divide processing of the audio into chunks
- chunk_start = start_time
-
- while (chunk_start < end_time):
- chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK)
-
- print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration)))
- wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration))
-
- sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD)
- seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate)
- adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration)
-
- #pprint(adjusted)
-
- result.extend(adjusted)
- chunk_start += chunk_duration
-
- perf_end_time = time.perf_counter()
- print("VAD processing took {} seconds".format(perf_end_time - perf_start_time))
-
- return result
-
- def __getstate__(self):
- # We only need the sampling rate
- return { 'sampling_rate': self.sampling_rate }
-
- def __setstate__(self, state):
- self.sampling_rate = state['sampling_rate']
- self.model = None
- # Use the global cache
- self.cache = GLOBAL_MODEL_CACHE
- self._initialize_model()
-
-# A very simple VAD that just marks every N seconds as speech
-class VadPeriodicTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- def is_transcribe_timestamps_fast(self):
- # This is a very fast VAD - no need to parallelize it
- return True
-
- def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- # Generate a timestamp every N seconds
- start_timestamp = start_time
-
- while (start_timestamp < end_time):
- end_timestamp = min(start_timestamp + config.periodic_duration, end_time)
- segment_duration = end_timestamp - start_timestamp
-
- # Minimum duration is 1 second
- if (segment_duration >= 1):
- result.append( { 'start': start_timestamp, 'end': end_timestamp } )
-
- start_timestamp = end_timestamp
-
- return result
-
-def get_audio_duration(file: str):
- return float(ffmpeg.probe(file)["format"]["duration"])
-
-def load_audio(file: str, sample_rate: int = 16000,
- start_time: str = None, duration: str = None):
- """
- Open an audio file and read as mono waveform, resampling as necessary
-
- Parameters
- ----------
- file: str
- The audio file to open
-
-    sample_rate: int
- The sample rate to resample the audio if necessary
-
- start_time: str
- The start time, using the standard FFMPEG time duration syntax, or None to disable.
-
- duration: str
- The duration, using the standard FFMPEG time duration syntax, or None to disable.
-
- Returns
- -------
- A NumPy array containing the audio waveform, in float32 dtype.
- """
- try:
- inputArgs = {'threads': 0}
-
- if (start_time is not None):
- inputArgs['ss'] = start_time
- if (duration is not None):
- inputArgs['t'] = duration
-
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- out, _ = (
- ffmpeg.input(file, **inputArgs)
- .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate)
- .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True)
- )
- except ffmpeg.Error as e:
- raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}")
-
- return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
\ No newline at end of file
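As a quick orientation for the file above, here is a minimal usage sketch (not part of the repository) that asks the periodic VAD for fixed-length transcription windows; the audio path is a placeholder, and ffmpeg is assumed to be installed for get_audio_duration.

# Minimal sketch: split a recording into ~30 second windows with the periodic VAD.
config = PeriodicTranscriptionConfig(periodic_duration=30)
vad = VadPeriodicTranscription(sampling_rate=16000)

duration = get_audio_duration("example.mp3")  # placeholder path, requires ffmpeg
timestamps = vad.get_transcribe_timestamps("example.mp3", config, 0, duration)
# e.g. [{'start': 0, 'end': 30}, {'start': 30, 'end': 60}, ...]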
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/random_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/random_sampler.py
deleted file mode 100644
index f34b006e8bb0b55c74aa1c3b792f3664ada93162..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/random_sampler.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .base_sampler import BaseSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class RandomSampler(BaseSampler):
- """Random sampler.
-
- Args:
- num (int): Number of samples
- pos_fraction (float): Fraction of positive samples
-        neg_pos_ub (int, optional): Upper bound number of negative and
- positive samples. Defaults to -1.
- add_gt_as_proposals (bool, optional): Whether to add ground truth
- boxes as proposals. Defaults to True.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- **kwargs):
- from mmdet.core.bbox import demodata
- super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub,
- add_gt_as_proposals)
- self.rng = demodata.ensure_rng(kwargs.get('rng', None))
-
- def random_choice(self, gallery, num):
- """Random select some elements from the gallery.
-
- If `gallery` is a Tensor, the returned indices will be a Tensor;
- If `gallery` is a ndarray or list, the returned indices will be a
- ndarray.
-
- Args:
- gallery (Tensor | ndarray | list): indices pool.
- num (int): expected sample num.
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- assert len(gallery) >= num
-
- is_tensor = isinstance(gallery, torch.Tensor)
- if not is_tensor:
- if torch.cuda.is_available():
- device = torch.cuda.current_device()
- else:
- device = 'cpu'
- gallery = torch.tensor(gallery, dtype=torch.long, device=device)
- perm = torch.randperm(gallery.numel(), device=gallery.device)[:num]
- rand_inds = gallery[perm]
- if not is_tensor:
- rand_inds = rand_inds.cpu().numpy()
- return rand_inds
-
- def _sample_pos(self, assign_result, num_expected, **kwargs):
- """Randomly sample some positive samples."""
- pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False)
- if pos_inds.numel() != 0:
- pos_inds = pos_inds.squeeze(1)
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- return self.random_choice(pos_inds, num_expected)
-
- def _sample_neg(self, assign_result, num_expected, **kwargs):
- """Randomly sample some negative samples."""
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- return self.random_choice(neg_inds, num_expected)
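For reference, a small sketch (assuming torch and mmdet are installed, and using the RandomSampler class shown above; the constructor arguments are arbitrary) of how random_choice picks indices from a candidate pool.

import torch

# Illustrative only: draw 2 random indices out of a pool of 5 candidates.
sampler = RandomSampler(num=4, pos_fraction=0.5)
pool = torch.arange(5)
picked = sampler.random_choice(pool, 2)
print(picked)  # e.g. tensor([3, 0]); a Tensor because the pool was a Tensor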
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/cocoa.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/cocoa.py
deleted file mode 100644
index 30f01d65642e0af9b6205fa65cbcbb3df81030eb..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/cocoa.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Note: The display mode API used here is Mac OS 10.6 only.
-
-from ctypes import *
-
-from .base import Display, Screen, ScreenMode, Canvas
-
-from pyglet.libs.darwin.cocoapy import CGDirectDisplayID, quartz, cf
-from pyglet.libs.darwin.cocoapy import cfstring_to_string, cfarray_to_list
-
-
-class CocoaDisplay(Display):
-
- def get_screens(self):
- maxDisplays = 256
- activeDisplays = (CGDirectDisplayID * maxDisplays)()
- count = c_uint32()
- quartz.CGGetActiveDisplayList(maxDisplays, activeDisplays, byref(count))
- return [CocoaScreen(self, displayID) for displayID in list(activeDisplays)[:count.value]]
-
-
-class CocoaScreen(Screen):
-
- def __init__(self, display, displayID):
- bounds = quartz.CGDisplayBounds(displayID)
- # FIX ME:
- # Probably need to convert the origin coordinates depending on context:
- # http://www.cocoabuilder.com/archive/cocoa/233492-ns-cg-rect-conversion-and-screen-coordinates.html
- x, y = bounds.origin.x, bounds.origin.y
- width, height = bounds.size.width, bounds.size.height
- super(CocoaScreen, self).__init__(display, int(x), int(y), int(width), int(height))
- self._cg_display_id = displayID
- # Save the default mode so we can restore to it.
- self._default_mode = self.get_mode()
-
- # FIX ME:
- # This method is needed to get multi-monitor support working properly.
- # However the NSScreens.screens() message currently sends out a warning:
- # "*** -[NSLock unlock]: lock ( '(null)') unlocked when not locked"
- # on Snow Leopard and apparently causes python to crash on Lion.
- #
- # def get_nsscreen(self):
- # """Returns the NSScreen instance that matches our CGDirectDisplayID."""
- # NSScreen = ObjCClass('NSScreen')
- # # Get a list of all currently active NSScreens and then search through
- # # them until we find one that matches our CGDisplayID.
- # screen_array = NSScreen.screens()
- # count = screen_array.count()
- # for i in range(count):
- # nsscreen = screen_array.objectAtIndex_(i)
- # screenInfo = nsscreen.deviceDescription()
- # displayID = screenInfo.objectForKey_(get_NSString('NSScreenNumber'))
- # displayID = displayID.intValue()
- # if displayID == self._cg_display_id:
- # return nsscreen
- # return None
-
- def get_matching_configs(self, template):
- canvas = CocoaCanvas(self.display, self, None)
- return template.match(canvas)
-
- def get_modes(self):
- cgmodes = c_void_p(quartz.CGDisplayCopyAllDisplayModes(self._cg_display_id, None))
- modes = [CocoaScreenMode(self, cgmode) for cgmode in cfarray_to_list(cgmodes)]
- cf.CFRelease(cgmodes)
- return modes
-
- def get_mode(self):
- cgmode = c_void_p(quartz.CGDisplayCopyDisplayMode(self._cg_display_id))
- mode = CocoaScreenMode(self, cgmode)
- quartz.CGDisplayModeRelease(cgmode)
- return mode
-
- def set_mode(self, mode):
- assert mode.screen is self
- quartz.CGDisplayCapture(self._cg_display_id)
- quartz.CGDisplaySetDisplayMode(self._cg_display_id, mode.cgmode, None)
- self.width = mode.width
- self.height = mode.height
-
- def restore_mode(self):
- quartz.CGDisplaySetDisplayMode(self._cg_display_id, self._default_mode.cgmode, None)
- quartz.CGDisplayRelease(self._cg_display_id)
-
- def capture_display(self):
- quartz.CGDisplayCapture(self._cg_display_id)
-
- def release_display(self):
- quartz.CGDisplayRelease(self._cg_display_id)
-
-
-class CocoaScreenMode(ScreenMode):
-
- def __init__(self, screen, cgmode):
- super(CocoaScreenMode, self).__init__(screen)
- quartz.CGDisplayModeRetain(cgmode)
- self.cgmode = cgmode
- self.width = int(quartz.CGDisplayModeGetWidth(cgmode))
- self.height = int(quartz.CGDisplayModeGetHeight(cgmode))
- self.depth = self.getBitsPerPixel(cgmode)
- self.rate = quartz.CGDisplayModeGetRefreshRate(cgmode)
-
- def __del__(self):
- quartz.CGDisplayModeRelease(self.cgmode)
- self.cgmode = None
-
- def getBitsPerPixel(self, cgmode):
- # from /System/Library/Frameworks/IOKit.framework/Headers/graphics/IOGraphicsTypes.h
- IO8BitIndexedPixels = "PPPPPPPP"
- IO16BitDirectPixels = "-RRRRRGGGGGBBBBB"
- IO32BitDirectPixels = "--------RRRRRRRRGGGGGGGGBBBBBBBB"
-
- cfstring = c_void_p(quartz.CGDisplayModeCopyPixelEncoding(cgmode))
- pixelEncoding = cfstring_to_string(cfstring)
- cf.CFRelease(cfstring)
-
- if pixelEncoding == IO8BitIndexedPixels: return 8
- if pixelEncoding == IO16BitDirectPixels: return 16
- if pixelEncoding == IO32BitDirectPixels: return 32
- return 0
-
-
-class CocoaCanvas(Canvas):
-
- def __init__(self, display, screen, nsview):
- super(CocoaCanvas, self).__init__(display)
- self.screen = screen
- self.nsview = nsview
diff --git a/spaces/agueroooooooooo/Transport_Mode_Detector/data_enrich.py b/spaces/agueroooooooooo/Transport_Mode_Detector/data_enrich.py
deleted file mode 100644
index 1f7b5dc705ab7ece2697fe62e95efd90a4fd0a23..0000000000000000000000000000000000000000
--- a/spaces/agueroooooooooo/Transport_Mode_Detector/data_enrich.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import os
-import pickle
-from math import cos, sin, atan2
-
-import numpy as np
-from geopy import distance
-
-class DataEnrich:
-
- def __init__(self):
- pass
-
- def _load_raw_pickle(self):
- return pickle.load(open("data/raw_labeled.pkl","rb"))
-
- def consolidate_trajectories(self):
- raw_dfs = self._load_raw_pickle()
- trajectories = []
- for traj_of_person in raw_dfs:
- dfs_with_label = []
- for traj in traj_of_person:
- if "label" in traj.columns:
- traj = traj.replace(to_replace='None', value=np.nan).dropna()
- traj.reset_index(inplace=True)
- dfs_with_label.append(traj)
- if dfs_with_label:
- trajectories.extend(dfs_with_label)
- return trajectories
-
- def _calc_speed(self, distance, ts_a, ts_b):
- time_delta = ts_b - ts_a
- if time_delta.total_seconds() == 0:
- return 0
- return distance / time_delta.total_seconds() # m/s
-
- def _calc_accel(self, speed_a, speed_b, ts_a, ts_b):
- time_delta = ts_b - ts_a
- speed_delta = speed_b - speed_a
- if time_delta.total_seconds() == 0:
- return 0
- return speed_delta / time_delta.total_seconds() # m/s^2
-
- def _calc_jerk(self, acc_a, acc_b, ts_a, ts_b):
- time_delta = ts_b - ts_a
- acc_delta = acc_b - acc_a
- if time_delta.total_seconds() == 0:
- return 0
- return acc_delta / time_delta.total_seconds()
-
- def _calc_bearing_rate(self, bearing_a, bearing_b, ts_a, ts_b):
- time_delta = ts_b - ts_a
- bear_delta = bearing_b - bearing_a
- if time_delta.total_seconds() == 0:
- return 0
- return bear_delta / time_delta.total_seconds()
-
- def calc_dist_for_row(self, trajectory_frame, i):
- lat_1 = trajectory_frame["lat"][i-1]
- lat_2 = trajectory_frame["lat"][i]
- if lat_1 > 90:
- print("Faulty", lat_1)
- lat_1 /= 10
- if lat_2 > 90:
- print("Faulty", lat_2)
- lat_2 /= 10
-
- point_a = (lat_1, trajectory_frame["lon"][i-1])
- point_b = (lat_2, trajectory_frame["lon"][i])
- if point_a[0] == point_b[0] and point_a[1] == point_b[1]:
- trajectory_frame["dist"][i] = 0
- else:
- trajectory_frame["dist"][i] = distance.distance((point_a[0], point_a[1]), (point_b[0], point_b[1])).m
-
- def calc_speed_for_row(self, trajectory_frame, i):
- trajectory_frame["speed"][i] = self._calc_speed(trajectory_frame["dist"][i],
- trajectory_frame["datetime"][i-1],
- trajectory_frame["datetime"][i]
- )
-
- def calc_accel_for_row(self, trajectory_frame, i):
- trajectory_frame["accel"][i] = self._calc_accel(trajectory_frame["speed"][i-1],
- trajectory_frame["speed"][i],
- trajectory_frame["datetime"][i - 1],
- trajectory_frame["datetime"][i]
- )
-
- def set_sample_rate(self, trajectory_frame, min_sec_distance_between_points):
- i = 1
- indices_to_del = []
- deleted = 1
- while i < len(trajectory_frame)-deleted:
- ts1 = trajectory_frame["datetime"][i]
- ts2 = trajectory_frame["datetime"][i+deleted]
- delta = ts2-ts1
- if delta.seconds < min_sec_distance_between_points:
- deleted+=1
- indices_to_del.append(i)
- continue
- i+=deleted
- deleted = 1
- if indices_to_del:
- trajectory_frame.drop(trajectory_frame.index[indices_to_del],inplace=True)
- trajectory_frame.reset_index(inplace=True)
-
- def set_time_between_points(self, trajectory_frame, i):
- trajectory_frame["timedelta"][i] = (trajectory_frame["datetime"][i]-trajectory_frame["datetime"][i-1]).total_seconds()
-
- def calc_jerk_for_row(self, trajectory_frame, i):
- trajectory_frame["jerk"][i] = self._calc_jerk(trajectory_frame["accel"][i - 1],
- trajectory_frame["accel"][i],
- trajectory_frame["datetime"][i - 1],
- trajectory_frame["datetime"][i]
- )
-
- def calc_bearing_for_row(self, trajectory_frame, i):
- a_lat = trajectory_frame["lat"][i - 1]
- a_lon = trajectory_frame["lon"][i - 1]
- b_lat = trajectory_frame["lat"][i]
- b_lon = trajectory_frame["lon"][i]
- x = cos(b_lat) * sin(b_lon-a_lon)
- y = cos(a_lat) * sin(b_lat) - sin(a_lat) * cos(b_lat) * cos(b_lon-a_lon)
- trajectory_frame["bearing"][i] = atan2(x, y)
-
- def calc_bearing_rate_for_row(self, trajectory_frame, i):
- trajectory_frame["bearing_rate"][i] = self._calc_bearing_rate(trajectory_frame["bearing"][i - 1],
- trajectory_frame["bearing"][i],
- trajectory_frame["datetime"][i - 1],
- trajectory_frame["datetime"][i]
- )
-
- def calc_features_for_frame(self, traj_frame):
- traj_frame["dist"] = 0
- traj_frame["timedelta"] = 0
- traj_frame["speed"] = 0
- traj_frame["accel"] = 0
- traj_frame["jerk"] = 0
- traj_frame["bearing"] = 0
- traj_frame["bearing_rate"] = 0
-
- for i, elem in traj_frame.iterrows():
- if i == 0:
- continue
- self.set_time_between_points(traj_frame, i)
- self.calc_dist_for_row(traj_frame, i)
- self.calc_speed_for_row(traj_frame, i)
- self.calc_accel_for_row(traj_frame, i)
- self.calc_jerk_for_row(traj_frame, i)
- self.calc_bearing_for_row(traj_frame, i)
- self.calc_bearing_rate_for_row(traj_frame, i)
-
- def get_enriched_data(self, from_pickle):
- if from_pickle:
- if os.path.isfile("data/raw_enriched.pkl"):
- print("Reading raw_enriched.pkl")
- return pickle.load(open("data/raw_enriched.pkl", "rb"))
- else:
- print("No pickled enriched dataset, creating. This will take a while.")
- traj = self.consolidate_trajectories()
- for elem in traj:
- self.set_sample_rate(elem, 5)
- self.calc_features_for_frame(elem)
- print("Done, dumping")
- pickle.dump(traj, open("data/raw_enriched.pkl", "wb"))
-
- return traj
-
-
-if __name__ == '__main__':
- a=DataEnrich()
- z=a.get_enriched_data(False)
- print(z)
- print("DOneP")
-
-
-
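To make the kinematic features above concrete, a small worked example (illustrative; the GPS fixes are invented): two fixes taken 10 seconds apart and 50 metres apart give a speed of 50 m / 10 s = 5 m/s, and going from 5 m/s to 7 m/s over the next 10 seconds gives an acceleration of 0.2 m/s². The same arithmetic through the helpers above (geopy and numpy, imported at module level, are assumed to be installed):

from datetime import datetime, timedelta

enricher = DataEnrich()
t0 = datetime(2023, 1, 1, 12, 0, 0)
t1 = t0 + timedelta(seconds=10)

print(enricher._calc_speed(50.0, t0, t1))      # 50 m over 10 s -> 5.0 m/s
print(enricher._calc_accel(5.0, 7.0, t0, t1))  # (7 - 5) m/s over 10 s -> 0.2 m/s^2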
diff --git a/spaces/akhaliq/Kapao/utils/torch_utils.py b/spaces/akhaliq/Kapao/utils/torch_utils.py
deleted file mode 100644
index 04e1446bb908c0fad0990468c6eb426905b59767..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Kapao/utils/torch_utils.py
+++ /dev/null
@@ -1,350 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-PyTorch utils
-"""
-
-import datetime
-import logging
-import math
-import os
-import platform
-import subprocess
-import time
-from contextlib import contextmanager
-from copy import deepcopy
-from pathlib import Path
-
-import torch
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-
-LOGGER = logging.getLogger(__name__)
-
-
-@contextmanager
-def torch_distributed_zero_first(local_rank: int):
- """
- Decorator to make all processes in distributed training wait for each local_master to do something.
- """
- if local_rank not in [-1, 0]:
- dist.barrier(device_ids=[local_rank])
- yield
- if local_rank == 0:
- dist.barrier(device_ids=[0])
-
-
-def init_torch_seeds(seed=0):
- # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
- torch.manual_seed(seed)
- if seed == 0: # slower, more reproducible
- cudnn.benchmark, cudnn.deterministic = False, True
- else: # faster, less reproducible
- cudnn.benchmark, cudnn.deterministic = True, False
-
-
-def date_modified(path=__file__):
- # return human-readable file modification date, i.e. '2021-3-26'
- t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
- return f'{t.year}-{t.month}-{t.day}'
-
-
-def git_describe(path=Path(__file__).parent): # path must be a directory
- # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
- s = f'git -C {path} describe --tags --long --always'
- try:
- return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
- except subprocess.CalledProcessError as e:
- return '' # not a git repository
-
-
-def select_device(device='', batch_size=None):
- # device = 'cpu' or '0' or '0,1,2,3'
- s = f'YOLOv5 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
- device = str(device).strip().lower().replace('cuda:', '') # to string, 'cuda:0' to '0'
- cpu = device == 'cpu'
- if cpu:
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
- elif device: # non-cpu device requested
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
- assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
-
- cuda = not cpu and torch.cuda.is_available()
- if cuda:
- devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7
- n = len(devices) # device count
- if n > 1 and batch_size: # check batch_size is divisible by device_count
- assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
- space = ' ' * (len(s) + 1)
- for i, d in enumerate(devices):
- p = torch.cuda.get_device_properties(i)
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB
- else:
- s += 'CPU\n'
-
- LOGGER.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
- return torch.device('cuda:0' if cuda else 'cpu')
-
-
-def time_sync():
- # pytorch-accurate time
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- return time.time()
-
-
-def profile(input, ops, n=10, device=None):
- # YOLOv5 speed/memory/FLOPs profiler
- #
- # Usage:
- # input = torch.randn(16, 3, 640, 640)
- # m1 = lambda x: x * torch.sigmoid(x)
- # m2 = nn.SiLU()
- # profile(input, [m1, m2], n=100) # profile over 100 iterations
-
- results = []
- logging.basicConfig(format="%(message)s", level=logging.INFO)
- device = device or select_device()
- print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}"
- f"{'input':>24s}{'output':>24s}")
-
- for x in input if isinstance(input, list) else [input]:
- x = x.to(device)
- x.requires_grad = True
- for m in ops if isinstance(ops, list) else [ops]:
- m = m.to(device) if hasattr(m, 'to') else m # device
- m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m
- tf, tb, t = 0., 0., [0., 0., 0.] # dt forward, backward
- try:
- flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs
- except:
- flops = 0
-
- try:
- for _ in range(n):
- t[0] = time_sync()
- y = m(x)
- t[1] = time_sync()
- try:
- _ = (sum([yi.sum() for yi in y]) if isinstance(y, list) else y).sum().backward()
- t[2] = time_sync()
- except Exception as e: # no backward method
- print(e)
- t[2] = float('nan')
- tf += (t[1] - t[0]) * 1000 / n # ms per op forward
- tb += (t[2] - t[1]) * 1000 / n # ms per op backward
- mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB)
- s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
- s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
- p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
- print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}')
- results.append([p, flops, mem, tf, tb, s_in, s_out])
- except Exception as e:
- print(e)
- results.append(None)
- torch.cuda.empty_cache()
- return results
-
-
-def is_parallel(model):
- # Returns True if model is of type DP or DDP
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
-
-
-def de_parallel(model):
- # De-parallelize a model: returns single-GPU model if model is of type DP or DDP
- return model.module if is_parallel(model) else model
-
-
-def intersect_dicts(da, db, exclude=()):
- # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
- return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
-
-
-def initialize_weights(model):
- for m in model.modules():
- t = type(m)
- if t is nn.Conv2d:
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif t is nn.BatchNorm2d:
- m.eps = 1e-3
- m.momentum = 0.03
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
- m.inplace = True
-
-
-def find_modules(model, mclass=nn.Conv2d):
- # Finds layer indices matching module class 'mclass'
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
-
-
-def sparsity(model):
- # Return global model sparsity
- a, b = 0., 0.
- for p in model.parameters():
- a += p.numel()
- b += (p == 0).sum()
- return b / a
-
-
-def prune(model, amount=0.3):
- # Prune model to requested global sparsity
- import torch.nn.utils.prune as prune
- print('Pruning model... ', end='')
- for name, m in model.named_modules():
- if isinstance(m, nn.Conv2d):
- prune.l1_unstructured(m, name='weight', amount=amount) # prune
- prune.remove(m, 'weight') # make permanent
- print(' %.3g global sparsity' % sparsity(model))
-
-
-def fuse_conv_and_bn(conv, bn):
- # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
- fusedconv = nn.Conv2d(conv.in_channels,
- conv.out_channels,
- kernel_size=conv.kernel_size,
- stride=conv.stride,
- padding=conv.padding,
- groups=conv.groups,
- bias=True).requires_grad_(False).to(conv.weight.device)
-
- # prepare filters
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
-
- # prepare spatial bias
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
-
- return fusedconv
-
-
-def model_info(model, verbose=False, img_size=640):
- # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
- if verbose:
- print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
- for i, (name, p) in enumerate(model.named_parameters()):
- name = name.replace('module_list.', '')
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
-
- try: # FLOPs
- from thop import profile
- stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
- img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
- flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs
- img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
- fs = ', %.1f GFLOPs' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPs
- except (ImportError, Exception):
- fs = ''
-
- LOGGER.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
-
-
-def load_classifier(name='resnet101', n=2):
- # Loads a pretrained model reshaped to n-class output
- model = torchvision.models.__dict__[name](pretrained=True)
-
- # ResNet model properties
- # input_size = [3, 224, 224]
- # input_space = 'RGB'
- # input_range = [0, 1]
- # mean = [0.485, 0.456, 0.406]
- # std = [0.229, 0.224, 0.225]
-
- # Reshape output to n classes
- filters = model.fc.weight.shape[1]
- model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
- model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
- model.fc.out_features = n
- return model
-
-
-def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
- # scales img(bs,3,y,x) by ratio constrained to gs-multiple
- if ratio == 1.0:
- return img
- else:
- h, w = img.shape[2:]
- s = (int(h * ratio), int(w * ratio)) # new size
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
- if not same_shape: # pad/crop img
- h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
-
-
-def copy_attr(a, b, include=(), exclude=()):
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
- for k, v in b.__dict__.items():
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
- continue
- else:
- setattr(a, k, v)
-
-
-class EarlyStopping:
- # YOLOv5 simple early stopper
- def __init__(self, patience=30):
- self.best_fitness = 0.0 # i.e. mAP
- self.best_epoch = 0
- self.patience = patience or float('inf') # epochs to wait after fitness stops improving to stop
- self.possible_stop = False # possible stop may occur next epoch
-
- def __call__(self, epoch, fitness):
- if fitness >= self.best_fitness: # >= 0 to allow for early zero-fitness stage of training
- self.best_epoch = epoch
- self.best_fitness = fitness
- delta = epoch - self.best_epoch # epochs without improvement
- self.possible_stop = delta >= (self.patience - 1) # possible stop may occur next epoch
- stop = delta >= self.patience # stop training if patience exceeded
- if stop:
- LOGGER.info(f'EarlyStopping patience {self.patience} exceeded, stopping training.')
- return stop
-
-
-class ModelEMA:
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
- Keep a moving average of everything in the model state_dict (parameters and buffers).
- This is intended to allow functionality like
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
- A smoothed version of the weights is necessary for some training schemes to perform well.
- This class is sensitive where it is initialized in the sequence of model init,
- GPU assignment and distributed training wrappers.
- """
-
- def __init__(self, model, decay=0.9999, updates=0):
- # Create EMA
- self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
- # if next(model.parameters()).device.type != 'cpu':
- # self.ema.half() # FP16 EMA
- self.updates = updates # number of EMA updates
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
- for p in self.ema.parameters():
- p.requires_grad_(False)
-
- def update(self, model):
- # Update EMA parameters
- with torch.no_grad():
- self.updates += 1
- d = self.decay(self.updates)
-
- msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
- for k, v in self.ema.state_dict().items():
- if v.dtype.is_floating_point:
- v *= d
- v += (1. - d) * msd[k].detach()
-
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
- # Update EMA attributes
- copy_attr(self.ema, model, include, exclude)
diff --git a/spaces/akhaliq/hassanblend1.4/README.md b/spaces/akhaliq/hassanblend1.4/README.md
deleted file mode 100644
index b6e050507fd44eedd99571a5fdf484f90224036a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/hassanblend1.4/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Hassanblend1.4
-emoji: 📚
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/alandavidgrunberg/Cannes_Chatbot/app.py b/spaces/alandavidgrunberg/Cannes_Chatbot/app.py
deleted file mode 100644
index 196295fd5928a6cdcb70ed8229adcb0663a78562..0000000000000000000000000000000000000000
--- a/spaces/alandavidgrunberg/Cannes_Chatbot/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import gradio as gr
-import pandas as pd
-import time
-
-from langchain.llms import OpenAI
-from langchain.memory import ConversationBufferWindowMemory
-from langchain.chains import LLMChain
-
-from langchain.llms import OpenAI
-from langchain.agents import create_pandas_dataframe_agent, Tool, ZeroShotAgent, AgentExecutor
-from langchain.document_loaders import DirectoryLoader
-from langchain.indexes import VectorstoreIndexCreator
-from langchain.text_splitter import TokenTextSplitter
-
-### CREATING DATAFRAME AGENT:
-
-df = pd.read_csv('data/complete_data_one_hot.csv')
-# ^dataframe of all movies
-# English title, Original title, Director(s), Production country(ies), + 11 screening categories (one hot encoded)
-
-with open('data/df_agent_prefix.txt', 'r') as file:
- df_agent_prefix = file.read()
-# ^prefix is prompt that is fed to the bot prepending user's question every time agent used. See text file for content
-
-df_agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, prefix=df_agent_prefix, verbose=True)
-# ^create agent (tool for the bot to use) which can read dataframes in a virtual python repl
-
-
-### CREATING TEXT VECTORSTORES:
-
-wiki_film_loader = DirectoryLoader("data/film_summaries/from_wikipedia", glob="*.txt")
-# # ^loading movie summaries (pre-scraped from wikipedia)
-search_film_loader = DirectoryLoader("data/film_summaries/from_search", glob="*.txt")
- # ^loading more movie summaries (pre-scraped from google search top result)
-
-festival_info_loader = DirectoryLoader("data/festival_info", glob="*.txt")
- # ^loading festival info (pre-scraped from google search top result)
-
-film_summaries_index = VectorstoreIndexCreator(text_splitter=TokenTextSplitter(chunk_size=500, chunk_overlap=20)).from_loaders([wiki_film_loader, search_film_loader])
-# # ^creating vector index of movie summaries
-
-festival_info_index = VectorstoreIndexCreator(text_splitter=TokenTextSplitter(chunk_size=200, chunk_overlap=20)).from_loaders([festival_info_loader])
-# ^creating vector index of movie summaries
-
-
-
-### PUTTING TOOLBOX TOGETHER:
-
-tools = []
-
-tools.append(
- Tool(
- name="python_repl_ast",
- func=df_agent.run,
- description="Useful when you need to count movies, directors, countries, etc. at the upcoming Cannes Film Festival. Useful when asked 'How many' Do not use for finding film genres. Do not use for questions about juries or the red carpet.",
- verbose = True # change to false to not show agent 'thinking' through its actions, and just output final answer
- )
-)
-
-tools.append(
- Tool(
- name="film summaries",
- func=film_summaries_index.query,
- description="Useful when you are asked about the plot of a film at the upcoming Cannes Film Festival, the actors in the film, and the film's genre. Use for finding film genres. Do not use for questions about juries or the red carpet=.",
- verbose = True # change to false to not show agent 'thinking' through its actions, and just output final answer
- )
-)
-
-tools.append(
- Tool(
- name="festival general info",
- func=festival_info_index.query,
- description="Useful when you are asked for general info about the upcoming Cannes Film Festival, such as: When it will take place? Who will judge the films? Who is on the jury? Who was on the red carpet?",
- verbose = True # change to false to not show agent 'thinking' through its actions, and just output final answer
- )
-)
-# ^bot will pick which tool to use depending on the question asked and the tool description
-
-### BUILDING MEMORY CHAIN
-
-prefix = """Have a conversation with a human, answering the following questions about the upcoming Cannes Film Festival as best you can. You have access to the following tools:"""
-suffix = """Begin!"
-
-{chat_history}
-Question: {input}
-{agent_scratchpad}"""
-
-prompt = ZeroShotAgent.create_prompt(
- tools,
- prefix=prefix,
- suffix=suffix,
- input_variables=["input", "chat_history", "agent_scratchpad"]
-)
-memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=3)
-
-### CREATING MASTER AGENT CHAIN WITH MEMORY AND ACCESS TO TOOLBOX
-
-llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
-agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
-agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
-# ^agentchain ready for queries
-
-### CONNECTING TO GRADIO FRONTEND
-
-spacing = "
"
-header_content = "Hello there! I am a conversation bot trained on Cannes 2023 data a few weeks before the festival. I was designed to help cinephiles learn more before the big event. Ask me about the festival as if it hasn’t happened yet and you’d like to learn more. I’ll be happy to answer your questions.
"
-footer_content = "Check out my GitHub Repo to learn how I was created.
"
-
-with gr.Blocks(title="Cannes 2023 Q&A", theme="gradio/monochrome") as demo:
- spacer = gr.Markdown(spacing)
- header = gr.Markdown(header_content)
- chatbot = gr.Chatbot(label = 'Cannes Bot')
- textbox = gr.Textbox(label = 'Input:', value = 'Tell me about the upcoming festival!')
- button = gr.Button("Submit")
- clear = gr.ClearButton([textbox, chatbot])
- footer = gr.Markdown(footer_content)
- spacer = gr.Markdown(spacing)
-
- def user(user_message, history):
- return gr.update(value="", interactive=False), history + [[user_message, None]]
-
- def bot(history):
- bot_message = agent_chain.run(f"Answer the following question using the tools provided. Do not make up the answer if you can't find it using the tools. Always talk about the festival in the future tense, it hasn't happened yet. Question: {history[-1][0]}")
- # where the magic happens (connecting model)
- history[-1][1] = ""
- for character in bot_message:
- history[-1][1] += character
- time.sleep(0.02)
- yield history
-
- response = textbox.submit(user, inputs=[textbox, chatbot], outputs=[textbox, chatbot], queue=False).then(
- bot, inputs=chatbot, outputs=chatbot
- )
- response.then(lambda: gr.update(interactive=True), None, [textbox], queue=False)
-
- response = button.click(user, inputs=[textbox, chatbot], outputs=[textbox, chatbot], queue=False).then(
- bot, inputs=chatbot, outputs=chatbot
- )
- response.then(lambda: gr.update(interactive=True), None, [textbox], queue=False)
-
-demo.queue()
-demo.launch()
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/repr.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/repr.py
deleted file mode 100644
index 17147fd4be2efedeb625c2b58293d0588c2c5d64..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/repr.py
+++ /dev/null
@@ -1,151 +0,0 @@
-from functools import partial
-import inspect
-
-from typing import (
- Any,
- Callable,
- Iterable,
- List,
- Optional,
- overload,
- Union,
- Tuple,
- Type,
- TypeVar,
-)
-
-
-T = TypeVar("T")
-
-
-Result = Iterable[Union[Any, Tuple[Any], Tuple[str, Any], Tuple[str, Any, Any]]]
-RichReprResult = Result
-
-
-class ReprError(Exception):
- """An error occurred when attempting to build a repr."""
-
-
-@overload
-def auto(cls: Optional[T]) -> T:
- ...
-
-
-@overload
-def auto(*, angular: bool = False) -> Callable[[T], T]:
- ...
-
-
-def auto(
- cls: Optional[T] = None, *, angular: Optional[bool] = None
-) -> Union[T, Callable[[T], T]]:
- """Class decorator to create __repr__ from __rich_repr__"""
-
- def do_replace(cls: Type[T], angular: Optional[bool] = None) -> Type[T]:
- def auto_repr(self: Type[T]) -> str:
- """Create repr string from __rich_repr__"""
- repr_str: List[str] = []
- append = repr_str.append
-
- angular = getattr(self.__rich_repr__, "angular", False) # type: ignore
- for arg in self.__rich_repr__(): # type: ignore
- if isinstance(arg, tuple):
- if len(arg) == 1:
- append(repr(arg[0]))
- else:
- key, value, *default = arg
- if key is None:
- append(repr(value))
- else:
- if len(default) and default[0] == value:
- continue
- append(f"{key}={value!r}")
- else:
- append(repr(arg))
- if angular:
- return f"<{self.__class__.__name__} {' '.join(repr_str)}>"
- else:
- return f"{self.__class__.__name__}({', '.join(repr_str)})"
-
- def auto_rich_repr(self: Type[T]) -> Result:
- """Auto generate __rich_rep__ from signature of __init__"""
- try:
- signature = inspect.signature(self.__init__) ## type: ignore
- for name, param in signature.parameters.items():
- if param.kind == param.POSITIONAL_ONLY:
- yield getattr(self, name)
- elif param.kind in (
- param.POSITIONAL_OR_KEYWORD,
- param.KEYWORD_ONLY,
- ):
- if param.default == param.empty:
- yield getattr(self, param.name)
- else:
- yield param.name, getattr(self, param.name), param.default
- except Exception as error:
- raise ReprError(
- f"Failed to auto generate __rich_repr__; {error}"
- ) from None
-
- if not hasattr(cls, "__rich_repr__"):
- auto_rich_repr.__doc__ = "Build a rich repr"
- cls.__rich_repr__ = auto_rich_repr # type: ignore
-
- auto_repr.__doc__ = "Return repr(self)"
- cls.__repr__ = auto_repr # type: ignore
- if angular is not None:
- cls.__rich_repr__.angular = angular # type: ignore
- return cls
-
- if cls is None:
- return partial(do_replace, angular=angular) # type: ignore
- else:
- return do_replace(cls, angular=angular) # type: ignore
-
-
-@overload
-def rich_repr(cls: Optional[T]) -> T:
- ...
-
-
-@overload
-def rich_repr(*, angular: bool = False) -> Callable[[T], T]:
- ...
-
-
-def rich_repr(
- cls: Optional[T] = None, *, angular: bool = False
-) -> Union[T, Callable[[T], T]]:
- if cls is None:
- return auto(angular=angular)
- else:
- return auto(cls)
-
-
-if __name__ == "__main__":
-
- @auto
- class Foo:
- def __rich_repr__(self) -> Result:
- yield "foo"
- yield "bar", {"shopping": ["eggs", "ham", "pineapple"]}
- yield "buy", "hand sanitizer"
-
- foo = Foo()
- from pip._vendor.rich.console import Console
-
- console = Console()
-
- console.rule("Standard repr")
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
-
- console.rule("Angular repr")
- Foo.__rich_repr__.angular = True # type: ignore
-
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
diff --git a/spaces/ali-ghamdan/deoldify/fastai/text/transform.py b/spaces/ali-ghamdan/deoldify/fastai/text/transform.py
deleted file mode 100644
index 9948ddc5845305da51262521a9f5f47935a37ea5..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/deoldify/fastai/text/transform.py
+++ /dev/null
@@ -1,164 +0,0 @@
-"NLP data processing; tokenizes text and creates vocab indexes"
-from ..torch_core import *
-
-import spacy
-from spacy.symbols import ORTH
-
-__all__ = ['BaseTokenizer', 'SpacyTokenizer', 'Tokenizer', 'Vocab', 'fix_html', 'replace_all_caps', 'replace_rep', 'replace_wrep',
- 'rm_useless_spaces', 'spec_add_spaces', 'BOS', 'EOS', 'FLD', 'UNK', 'PAD', 'TK_MAJ', 'TK_UP', 'TK_REP', 'TK_REP', 'TK_WREP',
- 'deal_caps']
-
-BOS,EOS,FLD,UNK,PAD = 'xxbos','xxeos','xxfld','xxunk','xxpad'
-TK_MAJ,TK_UP,TK_REP,TK_WREP = 'xxmaj','xxup','xxrep','xxwrep'
-defaults.text_spec_tok = [UNK,PAD,BOS,EOS,FLD,TK_MAJ,TK_UP,TK_REP,TK_WREP]
-
-
-class BaseTokenizer():
- "Basic class for a tokenizer function."
- def __init__(self, lang:str): self.lang = lang
- def tokenizer(self, t:str) -> List[str]: return t.split(' ')
- def add_special_cases(self, toks:Collection[str]): pass
-
-class SpacyTokenizer(BaseTokenizer):
- "Wrapper around a spacy tokenizer to make it a `BaseTokenizer`."
- def __init__(self, lang:str):
- self.tok = spacy.blank(lang, disable=["parser","tagger","ner"])
-
- def tokenizer(self, t:str) -> List[str]:
- return [t.text for t in self.tok.tokenizer(t)]
-
- def add_special_cases(self, toks:Collection[str]):
- for w in toks:
- self.tok.tokenizer.add_special_case(w, [{ORTH: w}])
-
-def spec_add_spaces(t:str) -> str:
- "Add spaces around / and # in `t`. \n"
- return re.sub(r'([/#\n])', r' \1 ', t)
-
-def rm_useless_spaces(t:str) -> str:
- "Remove multiple spaces in `t`."
- return re.sub(' {2,}', ' ', t)
-
-def replace_rep(t:str) -> str:
- "Replace repetitions at the character level in `t`."
- def _replace_rep(m:Collection[str]) -> str:
- c,cc = m.groups()
- return f' {TK_REP} {len(cc)+1} {c} '
- re_rep = re.compile(r'(\S)(\1{3,})')
- return re_rep.sub(_replace_rep, t)
-
-def replace_wrep(t:str) -> str:
- "Replace word repetitions in `t`."
- def _replace_wrep(m:Collection[str]) -> str:
- c,cc = m.groups()
- return f' {TK_WREP} {len(cc.split())+1} {c} '
- re_wrep = re.compile(r'(\b\w+\W+)(\1{3,})')
- return re_wrep.sub(_replace_wrep, t)
-
-def fix_html(x:str) -> str:
- "List of replacements from html strings in `x`."
- re1 = re.compile(r' +')
- x = x.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace(
- 'nbsp;', ' ').replace('#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace(
-        '<br />', "\n").replace('\\"', '"').replace('<unk>',UNK).replace(' @.@ ','.').replace(
- ' @-@ ','-').replace(' @,@ ',',').replace('\\', ' \\ ')
- return re1.sub(' ', html.unescape(x))
-
-def replace_all_caps(x:Collection[str]) -> Collection[str]:
- "Replace tokens in ALL CAPS in `x` by their lower version and add `TK_UP` before."
- res = []
- for t in x:
- if t.isupper() and len(t) > 1: res.append(TK_UP); res.append(t.lower())
- else: res.append(t)
- return res
-
-def deal_caps(x:Collection[str]) -> Collection[str]:
- "Replace all Capitalized tokens in `x` by their lower version and add `TK_MAJ` before."
- res = []
- for t in x:
- if t == '': continue
- if t[0].isupper() and len(t) > 1 and t[1:].islower(): res.append(TK_MAJ)
- res.append(t.lower())
- return res
-
-defaults.text_pre_rules = [fix_html, replace_rep, replace_wrep, spec_add_spaces, rm_useless_spaces]
-defaults.text_post_rules = [replace_all_caps, deal_caps]
-
-class Tokenizer():
- "Put together rules and a tokenizer function to tokenize text with multiprocessing."
- def __init__(self, tok_func:Callable=SpacyTokenizer, lang:str='en', pre_rules:ListRules=None,
- post_rules:ListRules=None, special_cases:Collection[str]=None, n_cpus:int=None):
- self.tok_func,self.lang,self.special_cases = tok_func,lang,special_cases
- self.pre_rules = ifnone(pre_rules, defaults.text_pre_rules )
- self.post_rules = ifnone(post_rules, defaults.text_post_rules)
- self.special_cases = special_cases if special_cases else defaults.text_spec_tok
- self.n_cpus = ifnone(n_cpus, defaults.cpus)
-
- def __repr__(self) -> str:
- res = f'Tokenizer {self.tok_func.__name__} in {self.lang} with the following rules:\n'
- for rule in self.pre_rules: res += f' - {rule.__name__}\n'
- for rule in self.post_rules: res += f' - {rule.__name__}\n'
- return res
-
- def process_text(self, t:str, tok:BaseTokenizer) -> List[str]:
- "Process one text `t` with tokenizer `tok`."
- for rule in self.pre_rules: t = rule(t)
- toks = tok.tokenizer(t)
- for rule in self.post_rules: toks = rule(toks)
- return toks
-
- def _process_all_1(self, texts:Collection[str]) -> List[List[str]]:
- "Process a list of `texts` in one process."
- tok = self.tok_func(self.lang)
- if self.special_cases: tok.add_special_cases(self.special_cases)
- return [self.process_text(str(t), tok) for t in texts]
-
- def process_all(self, texts:Collection[str]) -> List[List[str]]:
- "Process a list of `texts`."
- if self.n_cpus <= 1: return self._process_all_1(texts)
- with ProcessPoolExecutor(self.n_cpus) as e:
- return sum(e.map(self._process_all_1, partition_by_cores(texts, self.n_cpus)), [])
-
-class Vocab():
- "Contain the correspondence between numbers and tokens and numericalize."
- def __init__(self, itos:Collection[str]):
- self.itos = itos
- self.stoi = collections.defaultdict(int,{v:k for k,v in enumerate(self.itos)})
-
- def numericalize(self, t:Collection[str]) -> List[int]:
- "Convert a list of tokens `t` to their ids."
- return [self.stoi[w] for w in t]
-
- def textify(self, nums:Collection[int], sep=' ') -> List[str]:
- "Convert a list of `nums` to their tokens."
- return sep.join([self.itos[i] for i in nums]) if sep is not None else [self.itos[i] for i in nums]
-
- def __getstate__(self):
- return {'itos':self.itos}
-
- def __setstate__(self, state:dict):
- self.itos = state['itos']
- self.stoi = collections.defaultdict(int,{v:k for k,v in enumerate(self.itos)})
-
- def save(self, path):
- "Save `self.itos` in `path`"
- pickle.dump(self.itos, open(path, 'wb'))
-
- @classmethod
- def create(cls, tokens:Tokens, max_vocab:int, min_freq:int) -> 'Vocab':
- "Create a vocabulary from a set of `tokens`."
- freq = Counter(p for o in tokens for p in o)
- itos = [o for o,c in freq.most_common(max_vocab) if c >= min_freq]
- for o in reversed(defaults.text_spec_tok):
- if o in itos: itos.remove(o)
- itos.insert(0, o)
- itos = itos[:max_vocab]
- if len(itos) < max_vocab: #Make sure vocab size is a multiple of 8 for fast mixed precision training
- while len(itos)%8 !=0: itos.append('xxfake')
- return cls(itos)
-
- @classmethod
- def load(cls, path):
- "Load the `Vocab` contained in `path`"
- itos = pickle.load(open(path, 'rb'))
- return cls(itos)
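
Taken together, the pre/post rules, `Tokenizer`, and `Vocab` above form a small text-numericalization pipeline. A minimal usage sketch, assuming fastai v1 is installed so that `Tokenizer` and `Vocab` are importable from `fastai.text`; the sample sentences are illustrative only:

```python
# Hypothetical end-to-end use of the Tokenizer/Vocab pair defined above (fastai v1 assumed).
from fastai.text import Tokenizer, Vocab

texts = ["I LOVED this movie!!!!", "The plot was soooo good #nospoilers"]

tok = Tokenizer(n_cpus=1)              # one process keeps the example deterministic
token_lists = tok.process_all(texts)   # pre rules -> SpacyTokenizer -> post rules
print(token_lists[0])                  # contains markers such as xxup / xxrep added by the rules above

vocab = Vocab.create(token_lists, max_vocab=60000, min_freq=1)
ids = vocab.numericalize(token_lists[0])
print(vocab.textify(ids))              # round-trips the ids back to a token string
```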
diff --git a/spaces/alitrack/ChatPDF/app.py b/spaces/alitrack/ChatPDF/app.py
deleted file mode 100644
index 94d557c41de506faad14592cdb121432348c9fab..0000000000000000000000000000000000000000
--- a/spaces/alitrack/ChatPDF/app.py
+++ /dev/null
@@ -1,282 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-@author:XuMing(xuming624@qq.com)
-@description:
-modified from https://github.com/imClumsyPanda/langchain-ChatGLM/blob/master/webui.py
-"""
-import gradio as gr
-import os
-import shutil
-from loguru import logger
-from chatpdf import ChatPDF
-import hashlib
-
-pwd_path = os.path.abspath(os.path.dirname(__file__))
-
-CONTENT_DIR = os.path.join(pwd_path, "content")
-logger.info(f"CONTENT_DIR: {CONTENT_DIR}")
-VECTOR_SEARCH_TOP_K = 3
-MAX_INPUT_LEN = 2048
-
-embedding_model_dict = {
- "text2vec-large": "GanymedeNil/text2vec-large-chinese",
- "text2vec-base": "shibing624/text2vec-base-chinese",
- "sentence-transformers": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
- "ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
- "ernie-base": "nghuyong/ernie-3.0-base-zh",
-
-}
-
-# supported LLM models
-llm_model_dict = {
- "chatglm-6b-int4": "THUDM/chatglm-6b-int4",
- "chatglm-6b-int4-qe": "THUDM/chatglm-6b-int4-qe",
- "chatglm-6b": "THUDM/chatglm-6b",
- "llama-7b": "decapoda-research/llama-7b-hf",
- "llama-13b": "decapoda-research/llama-13b-hf",
-}
-
-llm_model_dict_list = list(llm_model_dict.keys())
-embedding_model_dict_list = list(embedding_model_dict.keys())
-
-model = None
-
-
-def get_file_list():
- if not os.path.exists("content"):
- return []
- return [f for f in os.listdir("content") if
- f.endswith(".txt") or f.endswith(".pdf") or f.endswith(".docx") or f.endswith(".md")]
-
-
-file_list = get_file_list()
-
-
-def upload_file(file):
- if not os.path.exists(CONTENT_DIR):
- os.mkdir(CONTENT_DIR)
- filename = os.path.basename(file.name)
- shutil.move(file.name, os.path.join(CONTENT_DIR, filename))
-    # insert the newly uploaded file at the front of file_list
- file_list.insert(0, filename)
- return gr.Dropdown.update(choices=file_list, value=filename)
-
-
-def parse_text(text):
- """copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/"""
- lines = text.split("\n")
- lines = [line for line in lines if line != ""]
- count = 0
- for i, line in enumerate(lines):
- if "```" in line:
- count += 1
- items = line.split('`')
- if count % 2 == 1:
-                lines[i] = f'<pre><code class="language-{items[-1]}">'
-            else:
-                lines[i] = f'<br></code></pre>'
- else:
- if i > 0:
- if count % 2 == 1:
- line = line.replace("`", "\`")
- line = line.replace("<", "<")
- line = line.replace(">", ">")
- line = line.replace(" ", " ")
- line = line.replace("*", "*")
- line = line.replace("_", "_")
- line = line.replace("-", "-")
- line = line.replace(".", ".")
- line = line.replace("!", "!")
- line = line.replace("(", "(")
- line = line.replace(")", ")")
- line = line.replace("$", "$")
- lines[i] = "
" + line
- text = "".join(lines)
- return text
-
-
-def get_answer(query, index_path, history, topn=VECTOR_SEARCH_TOP_K, max_input_size=1024, only_chat=False):
- if model is None:
- return [None, "模型还未加载"], query
- if index_path and not only_chat:
- if not model.sim_model.corpus_embeddings:
- model.load_index(index_path)
- response, empty_history, reference_results = model.query(query=query, topn=topn, max_input_size=max_input_size)
-
- logger.debug(f"query: {query}, response with content: {response}")
- for i in range(len(reference_results)):
- r = reference_results[i]
- response += f"\n{r.strip()}"
- response = parse_text(response)
- history = history + [[query, response]]
- else:
-        # no file loaded; return only the generation model's reply
- response, empty_history = model.gen_model.chat(query)
- response = parse_text(response)
- history = history + [[query, response]]
- logger.debug(f"query: {query}, response: {response}")
- return history, ""
-
-
-def update_status(history, status):
- history = history + [[None, status]]
- logger.info(status)
- return history
-
-
-def reinit_model(llm_model, embedding_model, history):
- try:
- global model
- if model is not None:
- del model
- model = ChatPDF(
- sim_model_name_or_path=embedding_model_dict.get(
- embedding_model,
- "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
- ),
- gen_model_type=llm_model.split('-')[0],
- gen_model_name_or_path=llm_model_dict.get(llm_model, "THUDM/chatglm-6b-int4"),
- lora_model_name_or_path=None,
- )
-
- model_status = """模型已成功重新加载,请选择文件后点击"加载文件"按钮"""
- except Exception as e:
- model = None
- logger.error(e)
- model_status = """模型未成功重新加载,请重新选择后点击"加载模型"按钮"""
- return history + [[None, model_status]]
-
-
-def get_file_hash(fpath):
- return hashlib.md5(open(fpath, 'rb').read()).hexdigest()
-
-
-def get_vector_store(filepath, history, embedding_model):
- logger.info(filepath, history)
- index_path = None
- file_status = ''
- if model is not None:
-
- local_file_path = os.path.join(CONTENT_DIR, filepath)
-
- local_file_hash = get_file_hash(local_file_path)
- index_file_name = f"{filepath}.{embedding_model}.{local_file_hash}.index.json"
-
- local_index_path = os.path.join(CONTENT_DIR, index_file_name)
-
- if os.path.exists(local_index_path):
- model.load_index(local_index_path)
- index_path = local_index_path
- file_status = "文件已成功加载,请开始提问"
-
- elif os.path.exists(local_file_path):
- model.load_pdf_file(local_file_path)
- model.save_index(local_index_path)
- index_path = local_index_path
- if index_path:
- file_status = "文件索引并成功加载,请开始提问"
- else:
- file_status = "文件未成功加载,请重新上传文件"
- else:
- file_status = "模型未完成加载,请先在加载模型后再导入文件"
-
- return index_path, history + [[None, file_status]]
-
-
-def reset_chat(chatbot, state):
- return None, None
-
-
-def change_max_input_size(input_size):
- if model is not None:
- model.max_input_size = input_size
- return
-
-
-block_css = """.importantButton {
- background: linear-gradient(45deg, #7e0570,#5d1c99, #6e00ff) !important;
- border: none !important;
-}
-.importantButton:hover {
- background: linear-gradient(45deg, #ff00e0,#8500ff, #6e00ff) !important;
- border: none !important;
-}"""
-
-webui_title = """
-# 🎉ChatPDF WebUI🎉
-Link in: [https://github.com/shibing624/ChatPDF](https://github.com/shibing624/ChatPDF) PS: 2核CPU 16G内存机器,约2min一条😭
-"""
-
-init_message = """欢迎使用 ChatPDF Web UI,可以直接提问或上传文件后提问 """
-
-with gr.Blocks(css=block_css) as demo:
- index_path, file_status, model_status = gr.State(""), gr.State(""), gr.State("")
- gr.Markdown(webui_title)
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([[None, init_message], [None, None]],
- elem_id="chat-box",
- show_label=False).style(height=700)
- query = gr.Textbox(show_label=False,
- placeholder="请输入提问内容,按回车进行提交",
- ).style(container=False)
- clear_btn = gr.Button('🔄Clear!', elem_id='clear').style(full_width=True)
- with gr.Column(scale=1):
- llm_model = gr.Radio(llm_model_dict_list,
- label="LLM 模型",
- value=list(llm_model_dict.keys())[0],
- interactive=True)
- embedding_model = gr.Radio(embedding_model_dict_list,
- label="Embedding 模型",
- value=embedding_model_dict_list[0],
- interactive=True)
-
- load_model_button = gr.Button("重新加载模型")
-
- with gr.Row():
- only_chat = gr.Checkbox(False, label="不加载文件(纯聊天)")
-
- with gr.Row():
- topn = gr.Slider(1, 100, 20, step=1, label="最大搜索数量")
- max_input_size = gr.Slider(512, 4096, MAX_INPUT_LEN, step=10, label="摘要最大长度")
- with gr.Tab("select"):
- selectFile = gr.Dropdown(
- file_list,
- label="content file",
- interactive=True,
- value=file_list[0] if len(file_list) > 0 else None
- )
- with gr.Tab("upload"):
- file = gr.File(
- label="content file",
- file_types=['.txt', '.md', '.docx', '.pdf']
- )
- load_file_button = gr.Button("加载文件")
- max_input_size.change(
- change_max_input_size,
- inputs=max_input_size
- )
- load_model_button.click(
- reinit_model,
- show_progress=True,
- inputs=[llm_model, embedding_model, chatbot],
- outputs=chatbot
- )
-    # save the uploaded file into the content folder and refresh the dropdown
- file.upload(upload_file, inputs=file, outputs=selectFile)
- load_file_button.click(
- get_vector_store,
- show_progress=True,
- inputs=[selectFile, chatbot, embedding_model],
- outputs=[index_path, chatbot],
- )
- query.submit(
- get_answer,
- [query, index_path, chatbot, topn, max_input_size, only_chat],
- [chatbot, query],
- )
- clear_btn.click(reset_chat, [chatbot, query], [chatbot, query])
-
-demo.queue(concurrency_count=3).launch(
- server_name='0.0.0.0', share=False, inbrowser=False
-)
\ No newline at end of file
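
The Gradio callbacks above ultimately just drive the `ChatPDF` object. A headless sketch of the same flow — build an index once, cache it next to the source file, and query it — assuming the `chatpdf` package exposes the methods used above (`load_pdf_file`, `save_index`, `load_index`, `query`); model names and paths are illustrative:

```python
# Hypothetical headless version of the indexing/query flow wired up in the UI above.
import os
from chatpdf import ChatPDF

model = ChatPDF(
    sim_model_name_or_path="shibing624/text2vec-base-chinese",
    gen_model_type="chatglm",
    gen_model_name_or_path="THUDM/chatglm-6b-int4",
)

pdf_path = "content/sample.pdf"
index_path = pdf_path + ".index.json"
if os.path.exists(index_path):
    model.load_index(index_path)      # reuse cached embeddings
else:
    model.load_pdf_file(pdf_path)     # embed the document
    model.save_index(index_path)

response, _history, references = model.query(query="What are the key findings?", topn=3)
print(response)
for ref in references:
    print("-", ref.strip())
```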
diff --git a/spaces/allinaigc/GPTAdvanceTemp0801/app.py b/spaces/allinaigc/GPTAdvanceTemp0801/app.py
deleted file mode 100644
index 4ae33d60b1601bdc145d13e049a935b5962e2d7f..0000000000000000000000000000000000000000
--- a/spaces/allinaigc/GPTAdvanceTemp0801/app.py
+++ /dev/null
@@ -1,395 +0,0 @@
-'''
-Changes compared to v1:
-1. Added streaming output to the chatbot.
-2. Updated the layout and color scheme.
-3. Added prompts presented as a Tab.
-4. Improved chat-history memory (supports indexing back to -1).
-5. Uploaded prompt data collected from the web.
-6. Handled the exception where a maxtoken=4096 error brought the server down; the error is now shown in the output textbox.
-7. Changed the output to chatbot format so multi-turn conversation is possible; switched the controls from icons to buttons.
-8. Upgraded to the GPT-3.5-16K model.
-'''
-import gradio as gr
-import openai
-import requests
-import csv
-import os
-from rich import print
-import os
-# from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTSimpleVectorIndex, LLMPredictor, PromptHelper
-# from langchain.chat_models import ChatOpenAI
-# from llama_index import ServiceContext
-# from llama_index import download_loader
-import sys
-import time
-import pandas as pd
-# from langchain.chat_models import ChatOpenAI
-# import numpy as np
-# # from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTSimpleVectorIndex, LLMPredictor, PromptHelper #* working in the previous version.
-# ##* in the latest version: GPTSimpleVectorIndex was renamed to GPTVectorStoreIndex, try removing it from the end of your imports
-from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTVectorStoreIndex, LLMPredictor, PromptHelper
-from llama_index import StorageContext, load_index_from_storage, GPTVectorStoreIndex, LLMPredictor, PromptHelper
-from llama_index import ServiceContext, QuestionAnswerPrompt
-# import llama_index
-from llama_index import download_loader
-import sys
-import time
-import pandas as pd
-# import PyPDF2
-# from PyPDF2 import PdfReader
-# import PyPDF4
-# from PyPDF4 import PdfFileReader
-
-# prompt_templates = {"Default ChatGPT": ""}
-
-## Set the OpenAI API key here; in the Space it is stored as a secret.
-openai.api_key = os.environ['user_token'] ## working.
-os.environ["OPENAI_API_KEY"] = os.environ['user_token']
-
-
-bing_search_api_key = os.environ['bing_api_key']
-bing_search_endpoint = 'https://api.bing.microsoft.com/v7.0/search'
-
-def get_empty_state():
- return {"total_tokens": 0, "messages": []}
-
-# system_prompt = [{"role": "system", "content": 'you are a kind and helpful AI assistant'}]
-system_prompt = [{"role": "system", "content": '你是一个专业和友好的AI助手。'}]
-
-
-# prompt_templates = {
-# '默认角色': "你是一个专业的人工智能助手。",
-# '周报写作': "使用下面提供的文本作为中文周报的基础,生成一个简洁的摘要,突出最重要的内容。该报告应以 markdown 格式编写,并应易于阅读和理解,以满足一般受众的需要。特别是要注重提供对利益相关者和决策者有用的见解和分析。你也可以根据需要使用任何额外的信息或来源。",
-# '写作建议': "我希望你能充当一名人工智能写作导师。我将为你提供一个需要帮助提高写作水平的学生,你的任务是使用人工智能工具,如自然语言处理,给学生反馈如何提高他们的写作水平。你还应该利用你的修辞学知识和关于有效写作技巧的经验,以建议该学生如何以书面形式更好地表达他们的思想和观点。我的第一个要求是 [修改文本]",
-# '资料收集': "生成一份与 [主题] 有关的十大事实、统计数据和趋势的清单,包括其来源。",
-# '作家角色': "作为一名中文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请从编辑以下文本开始",
-# '写作标题生成器': "我想让你充当书面作品的标题生成器。我将向你提供一篇文章的主题和关键词,你将生成五个吸引人的标题。请保持标题简洁,不超过 20 个字,并确保保持其含义。答复时要利用题目的语言类型。我的第一个题目是 [文章内容]",
-# '调研报告助手': "请根据以下提示撰写一份【报告主题】调研报告。您可以根据您的研究领域自由发挥,但请确保您的报告具有以下特征:1. 具有明确的问题陈述和研究目的;2. 包含对现有文献和数据的全面分析和综述;3. 采用适当的方法和技术进行数据收集和分析;4. 提供准确的结论和建议,以回答研究问题并解决研究目的。",
-# }
-
-### Load the useful prompts collected from the web.
-raw_prompts = pd.read_excel("raw_prompts.xlsx", usecols=['category','prompt'], index_col='category')
-prompt_templates = raw_prompts.to_dict()['prompt']
-
-def on_prompt_template_change(prompt_template):
- if not isinstance(prompt_template, str): return
- # print(prompt_template)
- return prompt_templates[prompt_template]
-
-def search(query):
- # Construct a request
- # mkt = 'en-EN'
- mkt = 'zh-CN'
- params = {'q': query, 'mkt': mkt}
- headers = {'Ocp-Apim-Subscription-Key': bing_search_api_key}
-
- # Call the API
- try:
- response = requests.get(bing_search_endpoint, headers=headers, params=params)
- response.raise_for_status()
- json = response.json()
- return json["webPages"]["value"]
- # print("\nJSON Response:\n")
- # pprint(response.json())
- except Exception as e:
- raise e
-
-def submit_message(radio, chatbot_history, temperature, max_tokens,top_p,presence_penalty): ## working.
- input_prompt = chatbot_history
- # print("chat_history",chatbot_history)
-
-    ###NOTE: keep the last 2 turns of history; native ChatGPT context does not go further anyway.
- try:
- if chatbot_history[-1][1]:
- prompt = chatbot_history[-1][0] + chatbot_history[-1][1]
- # print('3333')
- elif chatbot_history[-2][1]:
- prompt = chatbot_history[-2][1] + "\n" + chatbot_history[-1][0]
- # print('2222')
- # print(chatbot_history[-2][0])
- elif chatbot_history[-3][1]:
- prompt = chatbot_history[-3][1] + "\n" + chatbot_history[-2][1] + "\n" + chatbot_history[-1][1] + "\n" + chatbot_history[-1][0]
- # print('1111')
- except Exception as e:
- # print(e)
- prompt = chatbot_history[-1][0]
- # print('4444')
-
-
- print('prompt now is:', prompt)
- prompt_msg = {"role": "user", "content": prompt}
-
- if radio == "联网增强模式":
- try:
- # global messages #! 通过制定messages可以在非增强模式中,记忆对话。
-
- history = []
- print('start the internet version of ChatGPT')
-
-            #NOTE: resetting messages forgets all previous history.
- messages = [
- # {"role": "system", "content": "You are a helpful and kind AI Assistant."},
- {"role": "system", "content": "你是一个专业和友好的AI助手。"},
- ]
-
- # input_message = chatbot_history[-1][0] ## 只有一轮对话的设置。
- input_message = prompt
- internet_search_result = search(input_message)
- search_prompt = [f"Source:\nTitle: {result['name']}\nURL: {result['url']}\nContent: {result['snippet']}" for result in internet_search_result]
- # print('content:\n', search_prompt[0])
- prompt = "基于如下的互联网公开信息, 回答问题:\n\n" + "\n\n".join(search_prompt[:3]) + "\n\n问题: " + input_message + "你需要注意的是回答问题时必须用提问的语言(如英文或者中文)来提示:'答案基于互联网公开信息。'" + "\n\n答案: " ## 限制了只有3个搜索结果。
- # prompt = "Use these sources to answer the question:\n\n" + "\n\n".join(search_prompt[0:3]) + "\n\nQuestion: " + input_message + "(注意:回答问题时请提示'以下答案基于互联网公开信息。')\n\n" + "\n\nAnswer: "
-
- # print('the internet prompt now is:\n', prompt)
- messages.append({"role": "user", "content": prompt})
-
- input_prompt[-1][1] = ""
-
- ## streaming version. typewriter effect, word by word output.
- # for resp in openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages, stream=True, max_tokens=2048, temperature=0.9):
- for resp in openai.ChatCompletion.create(model="gpt-3.5-turbo-16k", messages=messages, stream=True, max_tokens=4096, temperature=0.9):
-
-                #* the block below works in Gradio.
- answer = str(resp['choices'][0]['delta'].get('content'))
- if answer != "None":
- # history.append(answer)
- # result = "".join(history).strip() #* working!
-
- input_prompt[-1][1] += answer
-
- # yield result
- # yield [[prompt, result]] ## working in the Chatbot advance GPT version.
- yield input_prompt ## working in the Chatbot advance GPT version. `
-
- except Exception as e:
- print(e)
- error = str(e)
- messages = [{"role": "system", "content": "你是一个专业和友好的AI助手。"},]
- messages.append({"role": "user", "content": ""})
- input_prompt[-1][1] = error
-            yield input_prompt ## print the error into the output textbox.
- # messages = [{"role": "system", "content": "你是一个专业和友好的AI助手。"},] ## reset the memory of messages.
-
-    # stream version that attaches a local knowledge base.
- # elif radio == '接入本地知识库':
- # print('now starts the local KB version of ChatGPT')
- # max_input_size = 4096
- # # set number of output tokens
- # # num_outputs = 3000 #* working
- # num_outputs = 1000
- # # set maximum chunk overlap
- # max_chunk_overlap = -1000 #* working
- # # set chunk size limit
- # # chunk_size_limit = 600
- # chunk_size_limit = 6000 #* working
-
- # history = []
- # try:
- # if chatbot_history:
- # # ! 这里需要重新装载一下storage_context。
-
- # QA_PROMPT_TMPL = (
- # "We have provided context information below. \n"
- # "---------------------\n"
- # "{context_str}"
- # "\n---------------------\n"
- # "Given all this information, please answer the following questions,"
- # "You MUST use the SAME language as the question:\n"
- # "{query_str}\n")
- # QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
-
- # llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.8, model_name="gpt-3.5-turbo", max_tokens=8024,streaming=True))
- # prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
- # service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
-
- # # # index = load_index_from_storage(storage_context)
- # storage_context = StorageContext.from_defaults(persist_dir="./")
- # index = load_index_from_storage(storage_context,service_context=service_context)
- # # query_engine = index.as_query_engine(streaming=True, similarity_top_k=3, text_qa_template=QA_PROMPT)
- # # query_engine = index.as_query_engine(streaming=True)
- # query_engine = index.as_query_engine(streaming=True, text_qa_template=QA_PROMPT)
- # # reply = query_engine.query(input_prompt[-1][0]) ## 一轮会话
- # reply = query_engine.query(prompt) ## 多轮会话(三次历史记忆),
- # input_prompt[-1][1] = ""
-
- # for resp in reply.response_gen:
- # answer = resp
- # if answer != "None":
- # # history.append(answer)
- # # result = "".join(history).strip() #* working!
-
- # input_prompt[-1][1] += answer
-
- # # yield result
- # yield input_prompt
-
- # #TODO:好像在全新llama_index中,不需要以下的内容了,上面的函数已经可以完成任务了。
- # # #NOTE: reroute the original version of ChatGPT
- # # if ('context' in str(reply)) and ('Howerver' not in str(reply)):
- # # print("local KB doesn't find useful information")
- # # messages = [{"role": "system", "content": "You are a helpful and kind AI Assistant."},]
- # # messages.append({"role": "user", "content": input})
- # # chat = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
- # # reply = chat.choices[0].message.content
- # # messages.append({"role": "assistant", "content": reply})
-
- # # return reply
- # except Exception as e:
- # print(e)
- # error = str(e)
- # messages = [{"role": "system", "content": "你是一个专业和友好的AI助手。"},]
- # messages.append({"role": "user", "content": ""})
- # input_prompt[-1][1] = error
- # yield input_prompt ## 将错误打印到output的textbox里面。
-
- # return input_prompt
-
- else:
- print('start the default version of ChatGPT')
- system_prompt = [{"role": "system", "content": '你是一个专业和友好的AI助手。'}]
- history = []
-
-        # default GPT 3.5-turbo path.
-        # Chatbot version.
- try:
- ## no stream version.
- # completion_1 = openai.ChatCompletion.create(model="gpt-3.5-turbo",messages=system_prompt + [prompt_msg], temperature=0.7, max_tokens=1024)
- # history.append(prompt_msg)
- # history.append(completion_1.choices[0].message.to_dict())
- # print('completion_1:',completion_1.choices[0].message.content)
- # # state['total_tokens'] += completion_1['usage']['total_tokens']
-
- messages = system_prompt + [prompt_msg]
- input_prompt[-1][1] = ""
- for resp in openai.ChatCompletion.create(model="gpt-3.5-turbo-16k", messages=messages, stream=True, temperature=temperature, max_tokens=max_tokens,top_p=top_p,presence_penalty=presence_penalty):
- answer = str(resp['choices'][0]['delta'].get('content'))
- if answer != "None":
-
-                    ##NOTE: single-turn chat version.
- # resp_history.append(answer) #* working!
- # result = "".join(resp_history).strip() #* working!
- # yield [[prompt, result]] #* 记得这个格式。这只能单论聊天。
-
-                    ##* multi-turn chat version.
- input_prompt[-1][1] += answer
- yield input_prompt
-
- except Exception as e:
- print(e)
- error = str(e)
- messages = [{"role": "system", "content": "你是一个专业和友好的AI助手。"},]
- messages.append({"role": "user", "content": ""})
- input_prompt[-1][1] += error
-            yield input_prompt ## print the error into the output textbox.
-
- return input_prompt
-
-
-## Append the user's question to the chatbot history.
-def user(user_message, chat_history):
- # print('chat_history:', chat_history)
- return "", chat_history + [[user_message, None]]
-
-def clear_conversation():
- return gr.update(value=None, visible=True), None, "", get_empty_state()
- # return "", "", []
-
-css = """
-#mybutton {background-color: #CEFAFE; color: #06B6D4;}
-#textarea {-webkit-text-fill-color:black; -webkit-opacity: 1;}
-.message {font: 12px Arial, sans-serif, 'ui-sans-serif', Montserrat, 'system-ui';}
-"""
-# css = None
-
-with gr.Blocks(theme=gr.themes.Soft(primary_hue='sky', text_size='md'), css=css, title="ChatGPT人工智能工具") as demo:
- state = gr.State(get_empty_state())
- with gr.Row():
- with gr.Column(elem_id="col-container",scale=4):
- gr.Markdown("""## **欢迎使用ChatGPT人工智能** """, elem_id="header")
- gr.Markdown("""注意事项:
-
- 1. 推荐使用”默认模式“进行问题/任务提交(回答文字质量最佳),仅在需要查询2021年之后的信息或者中文垂直领域知识时才选择”联网增强模式“。
- 2. 目前ChatGPT本身不稳定会影响部分时段的使用体验,有输出问题时,刷新页面即可解决。如果问题持续存在,一般等待1-2个小时左右即可恢复。
- 3. 每次提交新问题时,须先点击”重启一轮新的对话“或直接刷新页面。以免答案与之前的问题关联。
-
- """)
-
- with gr.Row():
- with gr.Column():
- # gr.Markdown("""### 企业级大语言模型 """)
- chatbot = gr.Chatbot(elem_id="message").style(height=400) ## style来设置对话框高度。
- # output_message = gr.Textbox(label='大语言模型的回答',lines=10).style(show_copy_button=True) ## textbox version。style来设置对话框高度。
- # radio = gr.Radio(['默认模式', '联网增强模式','接入本地知识库'], label="ChatGPT模型运行模式")
- radio = gr.Radio(['默认模式', '联网增强模式'], value='默认模式',label="ChatGPT模型运行模式")
-
-                ## choose the control type (button or icon) as required.
- with gr.Row():
- with gr.Column(min_width=837):
- # with gr.Column(scale=8):
- input_message = gr.Textbox(lines=1, label="输入您的问题/任务", show_label=True, placeholder="在这里输入您的问题或任务按Enter提交,按Shift+Enter换行", visible=True).style(container=True, show_copy_button=True)
-
- with gr.Row():
- # with gr.Column(min_width=15):
- with gr.Column():
- # btn_clear_conversation = gr.Button("\u2716", variant="primary", visible=True).style(full_width=False, size="lg")
- btn_clear_conversation = gr.Button("重启一轮新的对话", variant="secondary", visible=True).style(full_width=True, size="lg")
- with gr.Column():
- # btn_stop = gr.Button("\u25FD", variant="primary", visible=True).style(full_width=False, size="lg")
- btn_stop = gr.Button("终止当前问题/任务", variant="secondary", visible=True).style(full_width=True, size="lg")
- with gr.Column():
- # btn_submit = gr.Button("\u2714", variant="primary", visible=True).style(full_width=False, size="lg")
- btn_submit = gr.Button("提交你的问题/任务或直接按Enter键", variant="primary", visible=True).style(full_width=True, size="lg")
-
- with gr.Column(scale=2):
- gr.Markdown("### **高级定制化选项**")
- # with gr.Accordion(label='模型参数设定', open=True):
-
- with gr.Tab('Prompt提示词模板'):
- prompt_template = gr.Dropdown(label="选择提示词类型:", value="调研报告助手",choices=list(prompt_templates.keys()))
- default_prompt_value = "请根据以下提示撰写一份【报告主题】调研报告。您可以根据您的研究领域自由发挥,但请确保您的报告具有以下特征:1. 具有明确的问题陈述和研究目的;2. 包含对现有文献和数据的全面分析和综述;3. 采用适当的方法和技术进行数据收集和分析;4. 提供准确的结论和建议,以回答研究问题并解决研究目的。"
- prompt_template_preview = gr.Textbox(label="提示词预设内容:", value=default_prompt_value, show_label=True, lines=15).style(show_copy_button=True) ## working.
-
-
- with gr.Tab(label='模型参数设定', elem_id='tab'):
- claim_value = str("ChatGPT具有多种高级设置选项来调整其模型。1. Temperature:温度调整文本的多样性。温度值越高,生成的文本越随机。2. Token:控制生成文本的长度。3. 'top_p':0.0到1.0 (默认 1.0) ,类似Temperature,也叫核采样。4.presence_penalty:惩罚原始文本中已经出现过的单词/短语,从而鼓励生成无重复的输出。"
- )
- claim = gr.Textbox(value=claim_value, type="text", show_label=False, lines=5).style(container=True)
- temperature = gr.Slider(minimum=0, maximum=2.0, value=0.7, step=0.1, label="Temperature参数",info="数值越高语句越灵活")
- max_tokens = gr.Slider(minimum=100, maximum=14096, value=8000, step=100,
- label="单次聊天最多Token数", info="平均1.12个token约等于1个汉字")
- top_p = gr.Slider(minimum=0, maximum=1, value=1, step=0.1, label="top_p参数",info="数值越低语句越固定")
- presence_penalty = gr.Slider(minimum=0, maximum=1, value=0.5, step=0.1, label="penalty参数",info="0没有惩罚,1完全禁止输出复制的单词")
-
-
-
- with gr.Tab('工作台'):
- output_record_1 = gr.TextArea(lines=5, label='记录1').style(show_copy_button=True)
- output_record_2 = gr.TextArea(lines=5, label='记录2').style(show_copy_button=True)
- output_record_3 = gr.TextArea(lines=5, label='记录3').style(show_copy_button=True)
-
- ## click + submit.
- btn_submit_event = btn_submit.click(user, [input_message, chatbot], [input_message, chatbot], queue=False).then(submit_message, [radio, chatbot,temperature,max_tokens,top_p,presence_penalty], chatbot)
- input_message.submit(user, [input_message, chatbot], [input_message, chatbot], queue=False).then(submit_message, [radio, chatbot,temperature,max_tokens,top_p,presence_penalty], chatbot)
- btn_clear_conversation.click(clear_conversation, [], [input_message, chatbot])
-
-    ## the stop button aborts the running submission.
- btn_stop.click(fn=None, inputs=None, outputs=None, cancels=[btn_submit_event])
-
- # gradio.Tab.select()
- prompt_template.change(on_prompt_template_change, inputs=[prompt_template], outputs=[prompt_template_preview])
-
- demo.load()
-
-# auth_list = (
-# ('1234', '1234'),
-# )
-
-### Username and password authentication
-# user_csv = pd.read_csv('auth_list.csv')
-# auth_list = [(x, y) for (x, y) in user_csv[['username', 'password']].values]
-
-# demo.launch(height='1200px', enable_queue=True, auth=auth_list, auth_message="欢迎使用ChatGPT")
-# demo.launch(height='1200px', enable_queue=True, share=False,server_name='0.0.0.0', server_port=8000)
-# demo.launch(height='1200px', enable_queue=True, share=False,server_name='0.0.0.0')
-demo.launch(height='1200px', enable_queue=True)
-demo.queue(concurrency_count=500)
\ No newline at end of file
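
Stripped of the UI plumbing, the pattern the app relies on throughout is: stream a ChatCompletion and grow the last chatbot turn one delta at a time. A reduced sketch using the legacy `openai` Python client (< 1.0) that the code above targets; the model name and system prompt mirror the values used above:

```python
# Minimal typewriter-style streaming, extracted from the submit_message logic above.
import openai

def stream_reply(chatbot_history, temperature=0.7, max_tokens=1024):
    """Yield chatbot_history repeatedly while the assistant's last turn grows."""
    messages = [
        {"role": "system", "content": "你是一个专业和友好的AI助手。"},
        {"role": "user", "content": chatbot_history[-1][0]},
    ]
    chatbot_history[-1][1] = ""
    for resp in openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k", messages=messages, stream=True,
        temperature=temperature, max_tokens=max_tokens,
    ):
        delta = resp["choices"][0]["delta"].get("content")
        if delta:
            chatbot_history[-1][1] += delta
            yield chatbot_history
```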
diff --git a/spaces/allknowingroger/Image-Models-Test163/app.py b/spaces/allknowingroger/Image-Models-Test163/app.py
deleted file mode 100644
index 27e1523a44c12006d58cf6f699b560a05a3931a4..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test163/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "digiplay/CoffeeDonut_v1",
- "digiplay/MiracleMixGlitter_v1",
- "thomasdavidwang/lora-trained-xl",
- "jtlowell/cozy_only",
- "Rish111104/my-rabbit",
- "Srit/my-exp",
- "Yntec/Splash",
- "pranaykoppula/vtonseconduser",
- "digiplay/AnyPastel",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)  # the dict is keyed by int indices
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
- # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/alwaysbetter1314/gradio-start/app.py b/spaces/alwaysbetter1314/gradio-start/app.py
deleted file mode 100644
index c94ac6551c965cf5d26d20dc6dc7091324536c2d..0000000000000000000000000000000000000000
--- a/spaces/alwaysbetter1314/gradio-start/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gradio as gr
-from transformers import *
-
-# title
-title = "抽取式问答"
-# description shown under the title; markdown is supported
-description = "输入上下文与问题后,点击submit按钮,可从上下文中抽取出答案,赶快试试吧!"
-# example inputs
-examples = [
- ["普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。", "著名诗歌《假如生活欺骗了你》的作者是"],
- ["普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。", "普希金创作的叙事诗叫什么"]
- ]
-# footer text; you can cite an article here; markdown is supported
-article = "感兴趣的小伙伴可以阅读[Transformers实用指南](https://zhuanlan.zhihu.com/p/548336726)"
-
-gr.Interface.from_pipeline(
- pipeline("question-answering", model="uer/roberta-base-chinese-extractive-qa"),
- title=title, description=description, examples=examples, article=article).launch()
\ No newline at end of file
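
For reference, the same pipeline can be called without the Gradio wrapper; a small sketch assuming `transformers` is installed (the context/question pair reuses the example above):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="uer/roberta-base-chinese-extractive-qa")
result = qa(
    context="这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》和《假如生活欺骗了你》等几十首抒情诗。",
    question="著名诗歌《假如生活欺骗了你》的作者是",
)
print(result["answer"], result["score"])   # extracted span plus its confidence
```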
diff --git a/spaces/amasgari06/ChatGPT4/app.py b/spaces/amasgari06/ChatGPT4/app.py
deleted file mode 100644
index 7e09e57ef928fd2451fd0ed1295d0994ca75d026..0000000000000000000000000000000000000000
--- a/spaces/amasgari06/ChatGPT4/app.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Huggingface provided GPT4 OpenAI API Key
-OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-
-#Inference function
-def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {OPENAI_API_KEY}"
- }
- print(f"system message is ^^ {system_msg}")
- if system_msg.strip() == '':
- initial_message = [{"role": "user", "content": f"{inputs}"},]
- multi_turn_message = []
- else:
- initial_message= [{"role": "system", "content": system_msg},
- {"role": "user", "content": f"{inputs}"},]
- multi_turn_message = [{"role": "system", "content": system_msg},]
-
- if chat_counter == 0 :
- payload = {
- "model": "gpt-4",
- "messages": initial_message ,
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
- print(f"chat_counter - {chat_counter}")
- else: #if chat_counter != 0 :
- messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},]
- for data in chatbot:
- user = {}
- user["role"] = "user"
- user["content"] = data[0]
- assistant = {}
- assistant["role"] = "assistant"
- assistant["content"] = data[1]
- messages.append(user)
- messages.append(assistant)
- temp = {}
- temp["role"] = "user"
- temp["content"] = inputs
- messages.append(temp)
- #messages
- payload = {
- "model": "gpt-4",
- "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature, #1.0,
- "top_p": top_p, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,}
-
- chat_counter+=1
-
- history.append(inputs)
- print(f"Logging : payload is - {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- print(f"Logging : response code - {response}")
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter+=1
- continue
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
- token_counter+=1
- yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history}
-
-#Resetting to blank
-def reset_textbox():
- return gr.update(value='')
-
-#to set a component as visible=False
-def set_visible_false():
- return gr.update(visible=False)
-
-#to set a component as visible=True
-def set_visible_true():
- return gr.update(visible=True)
-
-title = """🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming
"""
-
-#display message for themes feature
-theme_addon_msg = """🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme, and send it to the hub using simple theme.push_to_hub()
.
-
🏆Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - Gradio-Themes-Party🎨 🏆
-"""
-
-#Using info to add additional information about System message in GPT4
-system_msg_info = """A conversation could begin with a system message to gently instruct the assistant.
-System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'"""
-
-#Modifying existing Gradio Theme
-theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green",
- text_size=gr.themes.sizes.text_lg)
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""",
- theme=theme) as demo:
- gr.HTML(title)
- gr.HTML("""🔥This Huggingface Gradio Demo provides you full access to GPT4 API (4096 token limit). 🎉🥳🎉You don't need any OPENAI API key🙌""")
- gr.HTML(theme_addon_msg)
-    gr.HTML('''Duplicate the Space and run securely with your OpenAI API Key''')
-
- with gr.Column(elem_id = "col_container"):
- #GPT4 API Key is provided by Huggingface
- with gr.Accordion(label="System message:", open=False):
- system_msg = gr.Textbox(label="Instruct the AI Assistant to set its beaviour", info = system_msg_info, value="")
- accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False)
- chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot")
- inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter")
- state = gr.State([])
- with gr.Row():
- with gr.Column(scale=7):
- b1 = gr.Button().style(full_width=True)
- with gr.Column(scale=3):
- server_status_code = gr.Textbox(label="Status code from OpenAI server", )
-
- #top_p, temperature
- with gr.Accordion("Parameters", open=False):
- top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
- chat_counter = gr.Number(value=0, visible=False, precision=0)
-
- #Event handling
- inputs.submit( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
- b1.click( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
-
- inputs.submit(set_visible_false, [], [system_msg])
- b1.click(set_visible_false, [], [system_msg])
- inputs.submit(set_visible_true, [], [accordion_msg])
- b1.click(set_visible_true, [], [accordion_msg])
-
- b1.click(reset_textbox, [], [inputs])
- inputs.submit(reset_textbox, [], [inputs])
-
- #Examples
- with gr.Accordion(label="Examples for System message:", open=False):
- gr.Examples(
- examples = [["""You are an AI programming assistant.
-
- - Follow the user's requirements carefully and to the letter.
- - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail.
- - Then output the code in a single code block.
- - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. You answer everything with a joke and witty replies."""],
- ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."],
- ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."],
- ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."],
- ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."],
- ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."],
- ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."],
- ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."],
- ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."],
- ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."],
- ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."],
- ["You are a helpful assistant that provides detailed and accurate information."],
- ["You are an assistant that speaks like Shakespeare."],
- ["You are a friendly assistant who uses casual language and humor."],
- ["You are a financial advisor who gives expert advice on investments and budgeting."],
- ["You are a health and fitness expert who provides advice on nutrition and exercise."],
- ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."],
- ["You are a movie critic who shares insightful opinions on films and their themes."],
- ["You are a history enthusiast who loves to discuss historical events and figures."],
- ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."],
- ["You are an AI poet who can compose creative and evocative poems on any given topic."],],
- inputs = system_msg,)
-
-demo.queue(max_size=99, concurrency_count=20).launch(debug=True)
\ No newline at end of file
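
The chunk handling in `predict` follows the OpenAI server-sent-events format: every non-empty line starts with `data: ` and carries a JSON delta (hence the `chunk[6:]` slicing above). A stripped-down sketch of the same parsing, assuming the same endpoint, headers, and payload shape:

```python
# Minimal SSE-style streaming parser for the chat completions endpoint used above.
import json
import requests

def stream_completion(payload, headers, api_url="https://api.openai.com/v1/chat/completions"):
    """Yield the assistant's text as it accumulates from a streaming response."""
    partial = ""
    with requests.post(api_url, headers=headers, json=payload, stream=True) as response:
        for raw in response.iter_lines():
            if not raw:
                continue
            line = raw.decode()
            if not line.startswith("data: ") or line.endswith("[DONE]"):
                continue
            delta = json.loads(line[len("data: "):])["choices"][0]["delta"]
            if "content" in delta:
                partial += delta["content"]
                yield partial
```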
diff --git a/spaces/anzorq/sd-space-creator/app.py b/spaces/anzorq/sd-space-creator/app.py
deleted file mode 100644
index c738ef5ad7f8de72d7959c2ce6711d4017cbea0a..0000000000000000000000000000000000000000
--- a/spaces/anzorq/sd-space-creator/app.py
+++ /dev/null
@@ -1,255 +0,0 @@
-import os
-import subprocess
-from huggingface_hub import HfApi, upload_folder, whoami, list_models, hf_hub_download, upload_file
-import gradio as gr
-import requests
-
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def url_to_model_id(model_id_str):
- return model_id_str.split("/")[-2] + "/" + model_id_str.split("/")[-1] if model_id_str.startswith("https://huggingface.co/") else model_id_str
-
-def has_diffusion_model(model_id, token):
- api = HfApi(token=token)
- return any([f.endswith("diffusion_pytorch_model.bin") for f in api.list_repo_files(repo_id=model_id)])
-
-def get_my_model_names(token):
-
- try:
- author = whoami(token=token)
- model_infos = list_models(author=author["name"], use_auth_token=token)
-
-
- model_names = []
- for model_info in model_infos:
- model_id = model_info.modelId
- if has_diffusion_model(model_id, token):
- model_names.append(model_id)
-
- # if not model_names:
- # return [], Exception("No diffusion models found in your account.")
-
- return model_names, None
-
- except Exception as e:
- return [], e
-
-def on_token_change(token):
-
- if token:
- model_names, error = get_my_model_names(token)
- return gr.update(visible=not error), gr.update(choices=model_names, label="Select a model:"), error_str(error)
- else:
- return gr.update(visible=False), gr.update(choices=[], label="Select a model:"), None
-
-def on_load_model(user_model_id, other_model_id, token):
-
- if not user_model_id and not other_model_id:
- return None, None, None, None, gr.update(value=error_str("Please enter a model ID.")), None
-
- try:
- model_id = url_to_model_id(other_model_id) if other_model_id else user_model_id
- original_model_id = model_id
-
- if not has_diffusion_model(model_id, token):
- return None, None, None, None, gr.update(value=error_str("There are no diffusion weights in the model you selected.")), None
-
- user = whoami(token=token)
- model_id = user["name"] + "/" + model_id.split("/")[-1]
- title = " ".join([w.capitalize() for w in model_id.split("/")[-1].replace("-", " ").replace("_", " ").split(" ")])
-
- description = f"""Demo for {title} Stable Diffusion model."""
-
- return gr.update(visible=True), gr.update(value=model_id), gr.update(value=title), gr.update(value=description), None, original_model_id
-
- except Exception as e:
- return None, None, None, None, gr.update(value=error_str(e)), None
-
-def add_space_badge_to_model_card(model_id, token):
-
- readme_file = 'README.md'
- model_card = hf_hub_download(repo_id=model_id, filename=readme_file, token=token)
-
- with open(model_card, "r") as f:
- content = f.read()
-
- content = content.split("---\n")
- content[2] = "[](https://huggingface.co/spaces/" + model_id + ")\n" + content[2]
- content = "---\n".join(content)
-
- with open(readme_file, "w") as f:
- f.write(content)
-
- upload_file(
- path_or_fileobj=readme_file,
- path_in_repo=readme_file,
- repo_id=model_id,
- token=token,
- create_pr=True,
- commit_message="Add Space badge to model card",
- )
-
- os.remove(readme_file)
-
-def create_and_push(space_type, hardware, private_space, add_badge, other_model_name, radio_model_names, model_id, title, description, prefix, update, token, original_model_id):
-
- try:
-
- # 1. Create the new space
- api = HfApi(token=token)
- repo_url = api.create_repo(
- repo_id=model_id,
- exist_ok=update,
- repo_type="space",
- space_sdk="gradio",
- private=private_space
- )
- api_url = f'https://huggingface.co/api/spaces/{model_id}'
- headers = { "Authorization" : f"Bearer {token}"}
- # add HUGGING_FACE_HUB_TOKEN secret to new space
- requests.post(f'{api_url}/secrets', json={"key":"HUGGING_FACE_HUB_TOKEN","value":token}, headers=headers)
- # set new Space Hardware flavor
- requests.post(f'{api_url}/hardware', json={'flavor': hardware}, headers=headers)
-
- # 2. Replace the name, title, and description in the template
- with open("template/app_simple.py" if space_type == "Simple" else "template/app_advanced.py", "r") as f:
- app = f.read()
- app = app.replace("$model_id", url_to_model_id(other_model_name) if other_model_name else radio_model_names)
- app = app.replace("$title", title)
- app = app.replace("$description", description)
- app = app.replace("$prefix", prefix)
- app = app.replace("$space_id", whoami(token=token)["name"] + "/" + model_id.split("/")[-1])
-
- # 3. save the new app.py file
- with open("app.py", "w") as f:
- f.write(app)
-
- # 4. Upload the new app.py to the space
- api.upload_file(
- path_or_fileobj="app.py",
- path_in_repo="app.py",
- repo_id=model_id,
- token=token,
- repo_type="space",
- )
-
- # 5. Upload template/requirements.txt to the space
- if space_type == "Advanced":
- api.upload_file(
- path_or_fileobj="template/requirements.txt",
- path_in_repo="requirements.txt",
- repo_id=model_id,
- token=token,
- repo_type="space",
- )
-
- # 5. Delete the app.py file
- os.remove("app.py")
-
- # 6. Add the Space badge to the model card
- if add_badge:
- add_space_badge_to_model_card(original_model_id, token)
-
- return f"""
- Successfully created space at: {repo_url}
- Opened a PR to add the space badge: https://huggingface.co/{original_model_id}
- """
-
- except Exception as e:
- return error_str(e)
-
-
-DESCRIPTION = """### Create a gradio space for your Diffusers🧨 model
- With this space, you can easily create a gradio demo for your Diffusers model and share it with the community.
- """
- #
- # 1️⃣ Make sure you have created your hugging face account
- # 2️⃣ Generate a token here with write access
- # 3️⃣ Choose a stable diffusion base model, there are thousands of them here
- # 4️⃣ Choose Space type
- # 5️⃣ Choose the new Space Hardware
- # It is done.
- # """
-
-with gr.Blocks() as demo:
-
- gr.Markdown(DESCRIPTION)
- with gr.Row():
-
- with gr.Column(scale=11):
- with gr.Column():
- gr.Markdown("#### 1. Choose a model")
- input_token = gr.Textbox(
- max_lines=1,
- type="password",
- label="Enter your Hugging Face token",
- placeholder="WRITE permission is required!",
- )
- gr.Markdown("You can get a token [here](https://huggingface.co/settings/tokens)")
- with gr.Group(visible=False) as group_model:
- radio_model_names = gr.Radio(label="Your models:")
- other_model_name = gr.Textbox(label="Other model:", placeholder="URL or model id, e.g. username/model_name")
- btn_load = gr.Button(value="Load model")
-
- with gr.Column(scale=10):
- with gr.Column(visible=False) as group_create:
- gr.Markdown("#### 2. Enter details and create the space")
- name = gr.Textbox(label="Name", placeholder="e.g. diffusers-demo")
- title = gr.Textbox(label="Title", placeholder="e.g. Diffusers Demo")
- description = gr.Textbox(label="Description", placeholder="e.g. Demo for my awesome Diffusers model", lines=5)
- original_model_id = gr.Textbox(visible=False)
- prefix = gr.Textbox(label="Prefix tokens", placeholder="Tokens that are required to be present in the prompt, e.g. `rick and morty style`")
-
- gr.Markdown("""#### Choose space type
- - **Simple** - Runs on GPU using Hugging Face inference API, but you cannot control image generation parameters.
- - **Advanced** - Runs on CPU by default, with the option to upgrade to GPU. You can control image generation parameters: guidance, number of steps, image size, etc. Also supports **image-to-image** generation.""")
- space_type =gr.Radio(label="Space type", choices=["Simple", "Advanced"], value="Simple")
-
- update = gr.Checkbox(label="Update the space if it already exists?")
- private_space = gr.Checkbox(label="Private Space")
- add_badge = gr.Checkbox(label="Add Space badge to the model card (will open a PR)")
-
- gr.Markdown("Choose the new Space Hardware [check pricing page](https://huggingface.co/pricing#spaces), you need payment method to upgrade your Space hardware")
- hardware = gr.Dropdown(["cpu-basic","cpu-upgrade","t4-small","t4-medium","a10g-small","a10g-large"],value = "cpu-basic", label="Space Hardware")
-
- btn_create = gr.Button("Create the space")
-
- error_output = gr.Markdown(label="Output")
-
-
- input_token.change(
- fn=on_token_change,
- inputs=input_token,
- outputs=[group_model, radio_model_names, error_output],
- queue=False,
- scroll_to_output=True)
-
- btn_load.click(
- fn=on_load_model,
- inputs=[radio_model_names, other_model_name, input_token],
- outputs=[group_create, name, title, description, error_output, original_model_id],
- queue=False,
- scroll_to_output=True)
-
- btn_create.click(
- fn=create_and_push,
- inputs=[space_type, hardware, private_space, add_badge, other_model_name, radio_model_names, name, title, description, prefix, update, input_token, original_model_id],
- outputs=[error_output],
- scroll_to_output=True
- )
-
- # gr.Markdown("""
""")
- gr.HTML("""
-
- """)
-
-demo.queue()
-demo.launch(debug=True)
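
At its core, `create_and_push` renders a template and uploads it to a freshly created Space. A reduced sketch of that step, assuming a local `template/app_simple.py` containing the `$model_id`/`$title`/`$description` placeholders referenced above:

```python
# Hypothetical reduced version of the create-and-push step above.
from huggingface_hub import HfApi

def push_demo_space(space_id, model_id, title, description, token,
                    template="template/app_simple.py"):
    """Create (or reuse) a Space and upload an app.py rendered from the template."""
    api = HfApi(token=token)
    api.create_repo(repo_id=space_id, repo_type="space", space_sdk="gradio", exist_ok=True)

    with open(template) as f:
        app = f.read()
    app = (app.replace("$model_id", model_id)
              .replace("$title", title)
              .replace("$description", description))
    with open("app.py", "w") as f:
        f.write(app)

    api.upload_file(path_or_fileobj="app.py", path_in_repo="app.py",
                    repo_id=space_id, repo_type="space")
    return f"https://huggingface.co/spaces/{space_id}"
```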
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/errors.py b/spaces/aodianyun/stable-diffusion-webui/modules/errors.py
deleted file mode 100644
index 72c9c44497221eb814b402aa5859a3e6aaeaac00..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/errors.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import sys
-import traceback
-
-
-def print_error_explanation(message):
- lines = message.strip().split("\n")
- max_len = max([len(x) for x in lines])
-
- print('=' * max_len, file=sys.stderr)
- for line in lines:
- print(line, file=sys.stderr)
- print('=' * max_len, file=sys.stderr)
-
-
-def display(e: Exception, task):
- print(f"{task or 'error'}: {type(e).__name__}", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
-
- message = str(e)
- if "copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768])" in message:
- print_error_explanation("""
-The most likely cause of this is you are trying to load Stable Diffusion 2.0 model without specifying its config file.
-See https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20 for how to solve this.
- """)
-
-
-already_displayed = {}
-
-
-def display_once(e: Exception, task):
- if task in already_displayed:
- return
-
- display(e, task)
-
- already_displayed[task] = 1
-
-
-def run(code, task):
- try:
- code()
- except Exception as e:
-        display(e, task)
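
A short usage sketch of these helpers, as they might be invoked from elsewhere in the webui code base (the module path follows the file header above; the failing callable is illustrative):

```python
# Hypothetical caller of the error helpers defined above.
from modules import errors

def load_optional_extension():
    raise RuntimeError("extension not found")

errors.run(load_optional_extension, "loading optional extension")     # prints the traceback, execution continues
errors.display_once(ValueError("bad width"), "parsing resolution")    # reported only once per task key
errors.display_once(ValueError("bad width"), "parsing resolution")    # silently skipped the second time
```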
diff --git a/spaces/aodianyun/stable-diffusion-webui/scripts/prompt_matrix.py b/spaces/aodianyun/stable-diffusion-webui/scripts/prompt_matrix.py
deleted file mode 100644
index 51c70998866d4b0853a46e4de73d86c3d9ec9b93..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/scripts/prompt_matrix.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import math
-from collections import namedtuple
-from copy import copy
-import random
-
-import modules.scripts as scripts
-import gradio as gr
-
-from modules import images
-from modules.processing import process_images, Processed
-from modules.shared import opts, cmd_opts, state
-import modules.sd_samplers
-
-
-def draw_xy_grid(xs, ys, x_label, y_label, cell):
- res = []
-
- ver_texts = [[images.GridAnnotation(y_label(y))] for y in ys]
- hor_texts = [[images.GridAnnotation(x_label(x))] for x in xs]
-
- first_processed = None
-
- state.job_count = len(xs) * len(ys)
-
- for iy, y in enumerate(ys):
- for ix, x in enumerate(xs):
- state.job = f"{ix + iy * len(xs) + 1} out of {len(xs) * len(ys)}"
-
- processed = cell(x, y)
- if first_processed is None:
- first_processed = processed
-
- res.append(processed.images[0])
-
- grid = images.image_grid(res, rows=len(ys))
- grid = images.draw_grid_annotations(grid, res[0].width, res[0].height, hor_texts, ver_texts)
-
- first_processed.images = [grid]
-
- return first_processed
-
-
-class Script(scripts.Script):
- def title(self):
- return "Prompt matrix"
-
- def ui(self, is_img2img):
-        gr.HTML('<br />')
- with gr.Row():
- with gr.Column():
- put_at_start = gr.Checkbox(label='Put variable parts at start of prompt', value=False, elem_id=self.elem_id("put_at_start"))
- different_seeds = gr.Checkbox(label='Use different seed for each picture', value=False, elem_id=self.elem_id("different_seeds"))
- with gr.Column():
- prompt_type = gr.Radio(["positive", "negative"], label="Select prompt", elem_id=self.elem_id("prompt_type"), value="positive")
- variations_delimiter = gr.Radio(["comma", "space"], label="Select joining char", elem_id=self.elem_id("variations_delimiter"), value="comma")
- with gr.Column():
- margin_size = gr.Slider(label="Grid margins (px)", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id("margin_size"))
-
- return [put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size]
-
- def run(self, p, put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size):
- modules.processing.fix_seed(p)
-        # Raise error if prompt type is not positive or negative
- if prompt_type not in ["positive", "negative"]:
- raise ValueError(f"Unknown prompt type {prompt_type}")
- # Raise error if variations delimiter is not comma or space
- if variations_delimiter not in ["comma", "space"]:
- raise ValueError(f"Unknown variations delimiter {variations_delimiter}")
-
- prompt = p.prompt if prompt_type == "positive" else p.negative_prompt
- original_prompt = prompt[0] if type(prompt) == list else prompt
- positive_prompt = p.prompt[0] if type(p.prompt) == list else p.prompt
-
- delimiter = ", " if variations_delimiter == "comma" else " "
-
- all_prompts = []
- prompt_matrix_parts = original_prompt.split("|")
- combination_count = 2 ** (len(prompt_matrix_parts) - 1)
- for combination_num in range(combination_count):
- selected_prompts = [text.strip().strip(',') for n, text in enumerate(prompt_matrix_parts[1:]) if combination_num & (1 << n)]
-
- if put_at_start:
- selected_prompts = selected_prompts + [prompt_matrix_parts[0]]
- else:
- selected_prompts = [prompt_matrix_parts[0]] + selected_prompts
-
- all_prompts.append(delimiter.join(selected_prompts))
-
- p.n_iter = math.ceil(len(all_prompts) / p.batch_size)
- p.do_not_save_grid = True
-
- print(f"Prompt matrix will create {len(all_prompts)} images using a total of {p.n_iter} batches.")
-
- if prompt_type == "positive":
- p.prompt = all_prompts
- else:
- p.negative_prompt = all_prompts
- p.seed = [p.seed + (i if different_seeds else 0) for i in range(len(all_prompts))]
- p.prompt_for_display = positive_prompt
- processed = process_images(p)
-
- grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))
- grid = images.draw_prompt_matrix(grid, processed.images[0].width, processed.images[1].height, prompt_matrix_parts, margin_size)
- processed.images.insert(0, grid)
- processed.index_of_first_image = 1
- processed.infotexts.insert(0, processed.infotexts[0])
-
- if opts.grid_save:
- images.save_image(processed.images[0], p.outpath_grids, "prompt_matrix", extension=opts.grid_format, prompt=original_prompt, seed=processed.seed, grid=True, p=p)
-
- return processed
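
The bit-mask loop in `run` is easiest to follow with a tiny worked example. With a prompt of the form `base|optional one|optional two` there are two optional parts, so `combination_count = 2**2 = 4`, and each bit of `combination_num` toggles one part:

```python
# Standalone illustration of the prompt-matrix expansion performed in run() above.
prompt = "a photo of a dog|wearing a hat|in the snow"
parts = prompt.split("|")

all_prompts = []
for combination_num in range(2 ** (len(parts) - 1)):
    selected = [text.strip().strip(',') for n, text in enumerate(parts[1:])
                if combination_num & (1 << n)]
    all_prompts.append(", ".join([parts[0]] + selected))

print(all_prompts)
# ['a photo of a dog',
#  'a photo of a dog, wearing a hat',
#  'a photo of a dog, in the snow',
#  'a photo of a dog, wearing a hat, in the snow']
```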
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/melgan_multiscale_discriminator.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/melgan_multiscale_discriminator.py
deleted file mode 100644
index b4909f37c0c91c6fee8bb0baab98a8662039dea1..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/melgan_multiscale_discriminator.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from torch import nn
-
-from TTS.vocoder.models.melgan_discriminator import MelganDiscriminator
-
-
-class MelganMultiscaleDiscriminator(nn.Module):
- def __init__(
- self,
- in_channels=1,
- out_channels=1,
- num_scales=3,
- kernel_sizes=(5, 3),
- base_channels=16,
- max_channels=1024,
- downsample_factors=(4, 4, 4),
- pooling_kernel_size=4,
- pooling_stride=2,
- pooling_padding=2,
- groups_denominator=4,
- ):
- super().__init__()
-
- self.discriminators = nn.ModuleList(
- [
- MelganDiscriminator(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_sizes=kernel_sizes,
- base_channels=base_channels,
- max_channels=max_channels,
- downsample_factors=downsample_factors,
- groups_denominator=groups_denominator,
- )
- for _ in range(num_scales)
- ]
- )
-
- self.pooling = nn.AvgPool1d(
- kernel_size=pooling_kernel_size, stride=pooling_stride, padding=pooling_padding, count_include_pad=False
- )
-
- def forward(self, x):
- scores = []
- feats = []
- for disc in self.discriminators:
- score, feat = disc(x)
- scores.append(score)
- feats.append(feat)
- x = self.pooling(x)
- return scores, feats
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/tests/test_common.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/tests/test_common.py
deleted file mode 100644
index 49d7a18d551b9b97289b724ff0814a4964166e85..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/tests/test_common.py
+++ /dev/null
@@ -1,99 +0,0 @@
-"""Tests of functionality that should work in all vegalite versions"""
-
-import pytest
-
-import pandas as pd
-
-from .. import v3, v4
-
-
-@pytest.fixture
-def basic_spec():
- return {
- "data": {"url": "data.csv"},
- "mark": "line",
- "encoding": {
- "color": {"type": "nominal", "field": "color"},
- "x": {"type": "quantitative", "field": "xval"},
- "y": {"type": "ordinal", "field": "yval"},
- },
- }
-
-
-def make_final_spec(alt, basic_spec):
- theme = alt.themes.get()
- spec = theme()
- spec.update(basic_spec)
- return spec
-
-
-def make_basic_chart(alt):
- data = pd.DataFrame(
- {
- "a": ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
- "b": [28, 55, 43, 91, 81, 53, 19, 87, 52],
- }
- )
-
- return alt.Chart(data).mark_bar().encode(x="a", y="b")
-
-
-@pytest.mark.parametrize("alt", [v3, v4])
-def test_basic_chart_to_dict(alt, basic_spec):
- chart = (
- alt.Chart("data.csv")
- .mark_line()
- .encode(alt.X("xval:Q"), y=alt.Y("yval:O"), color="color:N")
- )
- dct = chart.to_dict()
-
- # schema should be in the top level
- assert dct.pop("$schema").startswith("http")
-
- # remainder of spec should match the basic spec
- assert dct == make_final_spec(alt, basic_spec)
-
-
-@pytest.mark.parametrize("alt", [v3, v4])
-def test_basic_chart_from_dict(alt, basic_spec):
- chart = alt.Chart.from_dict(basic_spec)
- dct = chart.to_dict()
-
- # schema should be in the top level
- assert dct.pop("$schema").startswith("http")
-
- # remainder of spec should match the basic spec
- assert dct == make_final_spec(alt, basic_spec)
-
-
-@pytest.mark.parametrize("alt", [v3, v4])
-def test_theme_enable(alt, basic_spec):
- active_theme = alt.themes.active
-
- try:
- alt.themes.enable("none")
-
- chart = alt.Chart.from_dict(basic_spec)
- dct = chart.to_dict()
-
- # schema should be in the top level
- assert dct.pop("$schema").startswith("http")
-
- # remainder of spec should match the basic spec
- # without any theme settings
- assert dct == basic_spec
- finally:
- # reset the theme to its initial value
- alt.themes.enable(active_theme)
-
-
-@pytest.mark.parametrize("alt", [v3, v4])
-def test_max_rows(alt):
- basic_chart = make_basic_chart(alt)
-
- with alt.data_transformers.enable("default"):
- basic_chart.to_dict() # this should not fail
-
- with alt.data_transformers.enable("default", max_rows=5):
- with pytest.raises(alt.MaxRowsError):
-            basic_chart.to_dict()  # this should raise MaxRowsError
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/text_compressor.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/text_compressor.py
deleted file mode 100644
index d699f2ea296f33cdc37ca152ab225d09cb04b5ea..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/text_compressor.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from enum import Enum
-
-
-class TextCompressionLevel(Enum):
- none = 0
- low = 1
- high = 2
-
-
-class TextCompressor(object):
- def __init__(
- self, level: TextCompressionLevel, max_input_byte_length: int = 2**16
- ):
- self.level = level
- self.max_input_length = max_input_byte_length
-
- def compress(self, text: str) -> bytes:
- if self.level == TextCompressionLevel.low:
- import zlib
-
- # zlib: built-in, fast
- return zlib.compress(text.encode(), level=0)
- elif self.level == TextCompressionLevel.high:
- try:
- import unishox2
-
- # unishox2: optimized for short text but slower
- except ImportError:
- raise ImportError(
- "Please install unishox2 for the text compression feature: "
- "pip install unishox2-py3"
- )
- assert len(text.encode()) <= self.max_input_length
- return unishox2.compress(text)[0]
- else:
- return text.encode()
-
- def decompress(self, compressed: bytes) -> str:
- if self.level == TextCompressionLevel.low:
- import zlib
-
- return zlib.decompress(compressed).decode()
- elif self.level == TextCompressionLevel.high:
- try:
- import unishox2
- except ImportError:
- raise ImportError(
- "Please install unishox2 for the text compression feature: "
- "pip install unishox2-py3"
- )
- return unishox2.decompress(compressed, self.max_input_length)
- else:
- return compressed.decode()
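-
-
-# Illustrative round trip (not part of the original module):
-#
-#   tc = TextCompressor(TextCompressionLevel.low)
-#   assert tc.decompress(tc.compress("hello world")) == "hello world"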
diff --git a/spaces/aryadytm/remove-photo-object/src/core.py b/spaces/aryadytm/remove-photo-object/src/core.py
deleted file mode 100644
index 9706f344d99877b9f8ea6d383ef030c0a4aebdfa..0000000000000000000000000000000000000000
--- a/spaces/aryadytm/remove-photo-object/src/core.py
+++ /dev/null
@@ -1,466 +0,0 @@
-import base64
-import json
-import os
-import re
-import time
-import uuid
-from io import BytesIO
-from pathlib import Path
-import cv2
-
-# For inpainting
-
-import numpy as np
-import pandas as pd
-import streamlit as st
-from PIL import Image
-from streamlit_drawable_canvas import st_canvas
-
-
-import argparse
-import io
-import multiprocessing
-from typing import Union
-
-import torch
-
-try:
- torch._C._jit_override_can_fuse_on_cpu(False)
- torch._C._jit_override_can_fuse_on_gpu(False)
- torch._C._jit_set_texpr_fuser_enabled(False)
- torch._C._jit_set_nvfuser_enabled(False)
-except:
- pass
-
-from src.helper import (
- download_model,
- load_img,
- norm_img,
- numpy_to_bytes,
- pad_img_to_modulo,
- resize_max_size,
-)
-
-NUM_THREADS = str(multiprocessing.cpu_count())
-
-os.environ["OMP_NUM_THREADS"] = NUM_THREADS
-os.environ["OPENBLAS_NUM_THREADS"] = NUM_THREADS
-os.environ["MKL_NUM_THREADS"] = NUM_THREADS
-os.environ["VECLIB_MAXIMUM_THREADS"] = NUM_THREADS
-os.environ["NUMEXPR_NUM_THREADS"] = NUM_THREADS
-if os.environ.get("CACHE_DIR"):
- os.environ["TORCH_HOME"] = os.environ["CACHE_DIR"]
-
-#BUILD_DIR = os.environ.get("LAMA_CLEANER_BUILD_DIR", "./lama_cleaner/app/build")
-
-# For Seam-carving
-
-from scipy import ndimage as ndi
-
-SEAM_COLOR = np.array([255, 200, 200]) # seam visualization color (BGR)
-SHOULD_DOWNSIZE = True # if True, downsize image for faster carving
-DOWNSIZE_WIDTH = 500 # resized image width if SHOULD_DOWNSIZE is True
-ENERGY_MASK_CONST = 100000.0 # large energy value for protective masking
-MASK_THRESHOLD = 10 # minimum pixel intensity for binary mask
-USE_FORWARD_ENERGY = True # if True, use forward energy algorithm
-
-device = torch.device("cpu")
-model_path = "./assets/big-lama.pt"
-model = torch.jit.load(model_path, map_location="cpu")
-model = model.to(device)
-model.eval()
-
-
-########################################
-# UTILITY CODE
-########################################
-
-
-def visualize(im, boolmask=None, rotate=False):
- vis = im.astype(np.uint8)
- if boolmask is not None:
- vis[np.where(boolmask == False)] = SEAM_COLOR
- if rotate:
- vis = rotate_image(vis, False)
- cv2.imshow("visualization", vis)
- cv2.waitKey(1)
- return vis
-
-def resize(image, width):
- dim = None
- h, w = image.shape[:2]
- dim = (width, int(h * width / float(w)))
- image = image.astype('float32')
- return cv2.resize(image, dim)
-
-def rotate_image(image, clockwise):
- k = 1 if clockwise else 3
- return np.rot90(image, k)
-
-
-########################################
-# ENERGY FUNCTIONS
-########################################
-
-def backward_energy(im):
- """
- Simple gradient magnitude energy map.
- """
- xgrad = ndi.convolve1d(im, np.array([1, 0, -1]), axis=1, mode='wrap')
- ygrad = ndi.convolve1d(im, np.array([1, 0, -1]), axis=0, mode='wrap')
-
- grad_mag = np.sqrt(np.sum(xgrad**2, axis=2) + np.sum(ygrad**2, axis=2))
-
- # vis = visualize(grad_mag)
- # cv2.imwrite("backward_energy_demo.jpg", vis)
-
- return grad_mag
-
-def forward_energy(im):
- """
- Forward energy algorithm as described in "Improved Seam Carving for Video Retargeting"
- by Rubinstein, Shamir, Avidan.
- Vectorized code adapted from
- https://github.com/axu2/improved-seam-carving.
- """
- h, w = im.shape[:2]
- im = cv2.cvtColor(im.astype(np.uint8), cv2.COLOR_BGR2GRAY).astype(np.float64)
-
- energy = np.zeros((h, w))
- m = np.zeros((h, w))
-
- U = np.roll(im, 1, axis=0)
- L = np.roll(im, 1, axis=1)
- R = np.roll(im, -1, axis=1)
-
- cU = np.abs(R - L)
- cL = np.abs(U - L) + cU
- cR = np.abs(U - R) + cU
-
- for i in range(1, h):
- mU = m[i-1]
- mL = np.roll(mU, 1)
- mR = np.roll(mU, -1)
-
- mULR = np.array([mU, mL, mR])
- cULR = np.array([cU[i], cL[i], cR[i]])
- mULR += cULR
-
- argmins = np.argmin(mULR, axis=0)
- m[i] = np.choose(argmins, mULR)
- energy[i] = np.choose(argmins, cULR)
-
- # vis = visualize(energy)
- # cv2.imwrite("forward_energy_demo.jpg", vis)
-
- return energy
-
-########################################
-# SEAM HELPER FUNCTIONS
-########################################
-
-def add_seam(im, seam_idx):
- """
- Add a vertical seam to a 3-channel color image at the indices provided
- by averaging the pixels values to the left and right of the seam.
- Code adapted from https://github.com/vivianhylee/seam-carving.
- """
- h, w = im.shape[:2]
- output = np.zeros((h, w + 1, 3))
- for row in range(h):
- col = seam_idx[row]
- for ch in range(3):
- if col == 0:
- p = np.mean(im[row, col: col + 2, ch])
- output[row, col, ch] = im[row, col, ch]
- output[row, col + 1, ch] = p
- output[row, col + 1:, ch] = im[row, col:, ch]
- else:
- p = np.mean(im[row, col - 1: col + 1, ch])
- output[row, : col, ch] = im[row, : col, ch]
- output[row, col, ch] = p
- output[row, col + 1:, ch] = im[row, col:, ch]
-
- return output
-
-def add_seam_grayscale(im, seam_idx):
- """
- Add a vertical seam to a grayscale image at the indices provided
- by averaging the pixels values to the left and right of the seam.
- """
- h, w = im.shape[:2]
- output = np.zeros((h, w + 1))
- for row in range(h):
- col = seam_idx[row]
- if col == 0:
- p = np.mean(im[row, col: col + 2])
- output[row, col] = im[row, col]
- output[row, col + 1] = p
- output[row, col + 1:] = im[row, col:]
- else:
- p = np.mean(im[row, col - 1: col + 1])
- output[row, : col] = im[row, : col]
- output[row, col] = p
- output[row, col + 1:] = im[row, col:]
-
- return output
-
-def remove_seam(im, boolmask):
- h, w = im.shape[:2]
- boolmask3c = np.stack([boolmask] * 3, axis=2)
- return im[boolmask3c].reshape((h, w - 1, 3))
-
-def remove_seam_grayscale(im, boolmask):
- h, w = im.shape[:2]
- return im[boolmask].reshape((h, w - 1))
-
-def get_minimum_seam(im, mask=None, remove_mask=None):
- """
- DP algorithm for finding the seam of minimum energy. Code adapted from
- https://karthikkaranth.me/blog/implementing-seam-carving-with-python/
- """
- h, w = im.shape[:2]
- energyfn = forward_energy if USE_FORWARD_ENERGY else backward_energy
- M = energyfn(im)
-
- if mask is not None:
- M[np.where(mask > MASK_THRESHOLD)] = ENERGY_MASK_CONST
-
- # give removal mask priority over protective mask by using larger negative value
- if remove_mask is not None:
- M[np.where(remove_mask > MASK_THRESHOLD)] = -ENERGY_MASK_CONST * 100
-
- seam_idx, boolmask = compute_shortest_path(M, im, h, w)
-
- return np.array(seam_idx), boolmask
-
-def compute_shortest_path(M, im, h, w):
- backtrack = np.zeros_like(M, dtype=np.int_)
-
-
- # populate DP matrix
- for i in range(1, h):
- for j in range(0, w):
- if j == 0:
- idx = np.argmin(M[i - 1, j:j + 2])
- backtrack[i, j] = idx + j
- min_energy = M[i-1, idx + j]
- else:
- idx = np.argmin(M[i - 1, j - 1:j + 2])
- backtrack[i, j] = idx + j - 1
- min_energy = M[i - 1, idx + j - 1]
-
- M[i, j] += min_energy
-
- # backtrack to find path
- seam_idx = []
- boolmask = np.ones((h, w), dtype=np.bool_)
- j = np.argmin(M[-1])
- for i in range(h-1, -1, -1):
- boolmask[i, j] = False
- seam_idx.append(j)
- j = backtrack[i, j]
-
- seam_idx.reverse()
- return seam_idx, boolmask
-
-########################################
-# MAIN ALGORITHM
-########################################
-
-def seams_removal(im, num_remove, mask=None, vis=False, rot=False):
- for _ in range(num_remove):
- seam_idx, boolmask = get_minimum_seam(im, mask)
- if vis:
- visualize(im, boolmask, rotate=rot)
- im = remove_seam(im, boolmask)
- if mask is not None:
- mask = remove_seam_grayscale(mask, boolmask)
- return im, mask
-
-
-def seams_insertion(im, num_add, mask=None, vis=False, rot=False):
- seams_record = []
- temp_im = im.copy()
- temp_mask = mask.copy() if mask is not None else None
-
- for _ in range(num_add):
- seam_idx, boolmask = get_minimum_seam(temp_im, temp_mask)
- if vis:
- visualize(temp_im, boolmask, rotate=rot)
-
- seams_record.append(seam_idx)
- temp_im = remove_seam(temp_im, boolmask)
- if temp_mask is not None:
- temp_mask = remove_seam_grayscale(temp_mask, boolmask)
-
- seams_record.reverse()
-
- for _ in range(num_add):
- seam = seams_record.pop()
- im = add_seam(im, seam)
- if vis:
- visualize(im, rotate=rot)
- if mask is not None:
- mask = add_seam_grayscale(mask, seam)
-
- # update the remaining seam indices
- for remaining_seam in seams_record:
- remaining_seam[np.where(remaining_seam >= seam)] += 2
-
- return im, mask
-
-########################################
-# MAIN DRIVER FUNCTIONS
-########################################
-
-def seam_carve(im, dy, dx, mask=None, vis=False):
- im = im.astype(np.float64)
- h, w = im.shape[:2]
- assert h + dy > 0 and w + dx > 0 and dy <= h and dx <= w
-
- if mask is not None:
- mask = mask.astype(np.float64)
-
- output = im
-
- if dx < 0:
- output, mask = seams_removal(output, -dx, mask, vis)
-
- elif dx > 0:
- output, mask = seams_insertion(output, dx, mask, vis)
-
- if dy < 0:
- output = rotate_image(output, True)
- if mask is not None:
- mask = rotate_image(mask, True)
- output, mask = seams_removal(output, -dy, mask, vis, rot=True)
- output = rotate_image(output, False)
-
- elif dy > 0:
- output = rotate_image(output, True)
- if mask is not None:
- mask = rotate_image(mask, True)
- output, mask = seams_insertion(output, dy, mask, vis, rot=True)
- output = rotate_image(output, False)
-
- return output
-
-
-def object_removal(im, rmask, mask=None, vis=False, horizontal_removal=False):
- im = im.astype(np.float64)
- rmask = rmask.astype(np.float64)
- if mask is not None:
- mask = mask.astype(np.float64)
- output = im
-
- h, w = im.shape[:2]
-
- if horizontal_removal:
- output = rotate_image(output, True)
- rmask = rotate_image(rmask, True)
- if mask is not None:
- mask = rotate_image(mask, True)
-
- while len(np.where(rmask > MASK_THRESHOLD)[0]) > 0:
- seam_idx, boolmask = get_minimum_seam(output, mask, rmask)
- if vis:
- visualize(output, boolmask, rotate=horizontal_removal)
- output = remove_seam(output, boolmask)
- rmask = remove_seam_grayscale(rmask, boolmask)
- if mask is not None:
- mask = remove_seam_grayscale(mask, boolmask)
-
- num_add = (h if horizontal_removal else w) - output.shape[1]
- output, mask = seams_insertion(output, num_add, mask, vis, rot=horizontal_removal)
- if horizontal_removal:
- output = rotate_image(output, False)
-
- return output
-
-
-
-def s_image(im,mask,vs,hs,mode="resize"):
- im = cv2.cvtColor(im, cv2.COLOR_RGBA2RGB)
- mask = 255-mask[:,:,3]
- h, w = im.shape[:2]
- if SHOULD_DOWNSIZE and w > DOWNSIZE_WIDTH:
- im = resize(im, width=DOWNSIZE_WIDTH)
- if mask is not None:
- mask = resize(mask, width=DOWNSIZE_WIDTH)
-
- # image resize mode
- if mode=="resize":
-        dy = hs  # reverse
-        dx = vs  # reverse
- assert dy is not None and dx is not None
- output = seam_carve(im, dy, dx, mask, False)
-
-
- # object removal mode
- elif mode=="remove":
- assert mask is not None
- output = object_removal(im, mask, None, False, True)
-
- return output
-
-
-##### Inpainting helper code
-
-def run(image, mask):
- """
- image: [C, H, W]
- mask: [1, H, W]
- return: BGR IMAGE
- """
- origin_height, origin_width = image.shape[1:]
- image = pad_img_to_modulo(image, mod=8)
- mask = pad_img_to_modulo(mask, mod=8)
-
- mask = (mask > 0) * 1
- image = torch.from_numpy(image).unsqueeze(0).to(device)
- mask = torch.from_numpy(mask).unsqueeze(0).to(device)
-
- start = time.time()
- with torch.no_grad():
- inpainted_image = model(image, mask)
-
- print(f"process time: {(time.time() - start)*1000}ms")
- cur_res = inpainted_image[0].permute(1, 2, 0).detach().cpu().numpy()
- cur_res = cur_res[0:origin_height, 0:origin_width, :]
- cur_res = np.clip(cur_res * 255, 0, 255).astype("uint8")
- cur_res = cv2.cvtColor(cur_res, cv2.COLOR_BGR2RGB)
- return cur_res
-
-
-def get_args_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", default=8080, type=int)
- parser.add_argument("--device", default="cuda", type=str)
- parser.add_argument("--debug", action="store_true")
- return parser.parse_args()
-
-
-def process_inpaint(image, mask):
- image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB)
- original_shape = image.shape
- interpolation = cv2.INTER_CUBIC
-
- #size_limit: Union[int, str] = request.form.get("sizeLimit", "1080")
- #if size_limit == "Original":
- size_limit = max(image.shape)
- #else:
- # size_limit = int(size_limit)
-
- print(f"Origin image shape: {original_shape}")
- image = resize_max_size(image, size_limit=size_limit, interpolation=interpolation)
- print(f"Resized image shape: {image.shape}")
- image = norm_img(image)
-
- mask = 255-mask[:,:,3]
- mask = resize_max_size(mask, size_limit=size_limit, interpolation=interpolation)
- mask = norm_img(mask)
-
- res_np_img = run(image, mask)
-
- return cv2.cvtColor(res_np_img, cv2.COLOR_BGR2RGB)
\ No newline at end of file
diff --git a/spaces/aseifert/ExplaiNER/html/index.md b/spaces/aseifert/ExplaiNER/html/index.md
deleted file mode 100644
index e3f9df9725f3904f1fca0e33b0cb96d311cedde0..0000000000000000000000000000000000000000
--- a/spaces/aseifert/ExplaiNER/html/index.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-title: "🏷️ ExplaiNER"
-subtitle: "Error Analysis for NER models & datasets"
----
-
-
-
-
-
-_Error Analysis is an important but often overlooked part of the data science project lifecycle, for which there is still very little tooling available. Practitioners tend to write throwaway code or, worse, skip this crucial step of understanding their models' errors altogether. This project tries to provide an extensive toolkit to probe any NER model/dataset combination, find labeling errors and understand the models' and datasets' limitations, leading the user on her way to further improvements._
-
-[Documentation](../doc/index.html) | [Slides](../presentation.pdf) | [Github](https://github.com/aseifert/ExplaiNER)
-
-
-## Getting started
-
-```bash
-# Install requirements
-pip install -r requirements.txt # you'll need Python 3.9+
-
-# Run
-make run
-```
-
-## Description
-
-Some interesting **visualization techniques** contained in this project:
-
-* customizable visualization of neural network activation, based on the embedding layer and the feed-forward layers of the selected transformer model. ([Alammar 2021](https://aclanthology.org/2021.acl-demo.30/))
-* customizable similarity map of a 2d projection of the model's final layer's hidden states, using various algorithms (a bit like the [Tensorflow Embedding Projector](https://projector.tensorflow.org/))
-* inline HTML representation of samples with token-level prediction + labels (my own; see below under 'Samples by loss' for more info)
-
-
-**Libraries** important to this project:
-
-* `streamlit` for demoing (custom multi-page feature hacked in, also using session state)
-* `plotly` and `matplotlib` for charting
-* `transformers` for providing the models, and `datasets` for, well, the datasets
-* a forked, slightly modified version of [`ecco`](https://github.com/jalammar/ecco) for visualizing the neural net activations
-* `sentence_transformers` for finding potential duplicates
-* `scikit-learn` for TruncatedSVD & PCA, `umap-learn` for UMAP
-
-
-## Application Sections
-
-
-Activations
-
-> A group of neurons tends to fire in response to commas and other punctuation. Other groups of neurons tend to fire in response to pronouns. Use this visualization to factorize neuron activity in individual FFNN layers or in the entire model.
-
-
-Hidden States
-
-> For every token in the dataset, we take its hidden state and project it onto a two-dimensional plane. Data points are colored by label/prediction, with disagreements marked by a small black border.
->
-> Using these projections you can visually identify data points that end up in the wrong neighborhood, indicating prediction/labeling errors.
-
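-A minimal sketch of the hidden-state projection idea (illustrative only, not the app's actual code; the checkpoint name is just an example):
-
-```python
-# Collect final-layer hidden states per token and project them to 2D with UMAP
-import torch
-import umap
-from transformers import AutoModelForTokenClassification, AutoTokenizer
-
-model_name = "dslim/bert-base-NER"  # any token-classification checkpoint works
-tok = AutoTokenizer.from_pretrained(model_name)
-mdl = AutoModelForTokenClassification.from_pretrained(model_name, output_hidden_states=True)
-
-enc = tok("John lives in Berlin", return_tensors="pt")
-with torch.no_grad():
-    out = mdl(**enc)
-
-hidden = out.hidden_states[-1][0]  # [num_tokens, hidden_dim]
-coords = umap.UMAP(n_components=2, n_neighbors=4).fit_transform(hidden.numpy())
-preds = out.logits.argmax(-1)[0]   # used to color the 2D points by predicted label
-```
-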
-
-Probing
-
-> A very direct and interactive way to test your model is by providing it with a list of text inputs and then inspecting the model outputs. The application features a multiline text field so the user can input multiple texts separated by newlines. For each text, the app will show a data frame containing the tokenized string, token predictions, probabilities and a visual indicator for low probability predictions -- these are the ones you should inspect first for prediction errors.
-
-
-Metrics
-
-> The metrics page contains precision, recall and f-score metrics as well as a confusion matrix over all the classes. By default, the confusion matrix is normalized. There's an option to zero out the diagonal, leaving only prediction errors (here it makes sense to turn off normalization, so you get raw error counts).
->
-> With the confusion matrix, you don't want any of the classes to end up in the bottom right quarter: those are frequent but error-prone.
-
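-The normalized and zeroed-diagonal views boil down to something like this (a sketch with toy labels, not the app's code):
-
-```python
-import numpy as np
-from sklearn.metrics import confusion_matrix
-
-y_true = ["O", "B-PER", "B-LOC", "O", "B-PER"]
-y_pred = ["O", "B-PER", "O", "O", "B-LOC"]
-
-cm = confusion_matrix(y_true, y_pred, labels=["O", "B-PER", "B-LOC"]).astype(float)
-cm /= cm.sum(axis=1, keepdims=True)  # row-normalize (default view)
-np.fill_diagonal(cm, 0.0)            # optionally zero the diagonal to leave only errors
-```
-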
-
-Misclassified
-
-> This page contains all misclassified examples and allows filtering by specific error types. Helps you get an understanding of the types of errors your model makes.
-
-
-Loss by Token/Label
-
-> Show count, mean and median loss per token and label.
->
-> Look out for tokens that have a big gap between mean and median, indicating systematic labeling issues.
-
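-The per-token statistics amount to a plain groupby (a sketch with made-up data):
-
-```python
-import pandas as pd
-
-# one row per token occurrence with its loss (illustrative values)
-df = pd.DataFrame({"token": ["in", "Berlin", "in", "Berlin"],
-                   "loss": [0.01, 2.30, 1.70, 0.05]})
-
-stats = df.groupby("token")["loss"].agg(["count", "mean", "median"])
-suspicious = stats[(stats["mean"] - stats["median"]).abs() > 0.5]  # big mean/median gap
-```
-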
-
-Samples by Loss
-
-> Show every example sorted by loss (descending) for close inspection.
->
-> Apart from a (token-based) dataframe view, there's also an HTML representation of the samples, which is very information-dense but really helpful, once you get used to reading it:
->
-> Every predicted entity (every token, really) gets a black border. The text color signifies the predicted label, with the first token of a sequence of tokens also showing the label's icon. If (and only if) the prediction is wrong, a small box after the entity (token) contains the correct target class, with a background color corresponding to that class.
->
-> For short texts, the dataframe view can be sufficient, but for longer texts the HTML view tends to be more useful.
-
-
-Random Samples
-
-> Show random samples. Simple method, but it often turns up interesting things.
-
-
-Find Duplicates
-
-> Find potential duplicates in the data using cosine similarity.
-
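-Roughly what the duplicate search looks like (a sketch assuming `sentence_transformers`; the checkpoint name is only an example):
-
-```python
-from sentence_transformers import SentenceTransformer, util
-
-model = SentenceTransformer("all-MiniLM-L6-v2")
-texts = ["EU rejects German call to boycott British lamb.",
-         "EU rejects German call to boycott British lamb .",
-         "Peter Blackburn"]
-
-emb = model.encode(texts, convert_to_tensor=True)
-sims = util.cos_sim(emb, emb)  # pairwise cosine similarities
-# pairs (i, j) with sims[i][j] close to 1.0 are potential duplicates
-```
-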
-
-Inspect
-
-> Inspect your whole dataset, either unfiltered or by id.
-
-
-Raw data
-
-> See the data as seen by your model.
-
-
-Debug
-
-> Debug info.
diff --git a/spaces/avid-ml/bias-detection/avidtools/__init__.py b/spaces/avid-ml/bias-detection/avidtools/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/awacke1/CB-GR-Chatbot-Blenderbot/app.py b/spaces/awacke1/CB-GR-Chatbot-Blenderbot/app.py
deleted file mode 100644
index a2ec61b6bacb0178644b42639f6e37e82ba67cce..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CB-GR-Chatbot-Blenderbot/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
-import torch
-import gradio as gr
-from datasets import load_dataset
-
-# PersistDataset -----
-import os
-import csv
-from gradio import inputs, outputs
-import huggingface_hub
-from huggingface_hub import Repository, hf_hub_download, upload_file
-from datetime import datetime
-
-# fastapi is where it's at: share your app, share your API
-import fastapi
-
-from typing import List, Dict
-import httpx
-import pandas as pd
-import datasets as ds
-
-UseMemory=True
-HF_TOKEN=os.environ.get("HF_TOKEN")
-
-def SaveResult(text, outputfileName):
- basedir = os.path.dirname(__file__)
- savePath = outputfileName
- print("Saving: " + text + " to " + savePath)
- from os.path import exists
- file_exists = exists(savePath)
- if file_exists:
- with open(outputfileName, "a") as f: #append
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- else:
- with open(outputfileName, "w") as f: #write
- f.write(str("time, message, text\n")) # one time only to get column headers for CSV file
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- return
-
-
-def store_message(name: str, message: str, outputfileName: str):
- basedir = os.path.dirname(__file__)
- savePath = outputfileName
-
-    # if file doesn't exist, create it with labels
- from os.path import exists
- file_exists = exists(savePath)
-
- if (file_exists==False):
- with open(savePath, "w") as f: #write
-            f.write(str("time, message, name\n")) # one time only to get column headers for CSV file
- if name and message:
- writer = csv.DictWriter(f, fieldnames=["time", "message", "name"])
- writer.writerow(
- {"time": str(datetime.now()), "message": message.strip(), "name": name.strip() }
- )
- df = pd.read_csv(savePath)
- df = df.sort_values(df.columns[0],ascending=False)
- else:
- if name and message:
- with open(savePath, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=[ "time", "message", "name", ])
- writer.writerow(
- {"time": str(datetime.now()), "message": message.strip(), "name": name.strip() }
- )
- df = pd.read_csv(savePath)
- df = df.sort_values(df.columns[0],ascending=False)
- return df
-
-mname = "facebook/blenderbot-400M-distill"
-model = BlenderbotForConditionalGeneration.from_pretrained(mname)
-tokenizer = BlenderbotTokenizer.from_pretrained(mname)
-
-def take_last_tokens(inputs, note_history, history):
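-    # Trim the running conversation to the most recent 128 tokens (and drop the oldest turn)
-    # so the Blenderbot input stays within its context window.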
- if inputs['input_ids'].shape[1] > 128:
- inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-128:].tolist()])
- inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-128:].tolist()])
- note_history = [' '.join(note_history[0].split(' ')[2:])]
- history = history[1:]
- return inputs, note_history, history
-
-def add_note_to_history(note, note_history):  # good example of non-async since we wait around until we know it went okay.
- note_history.append(note)
- note_history = ' '.join(note_history)
- return [note_history]
-
-title = "💬ChatBack🧠💾"
-description = """Chatbot with a persistent memory dataset, allowing a multi-agent AI system to access a shared dataset as a memory pool of stored interactions.
- Current Best SOTA Chatbot: https://huggingface.co/facebook/blenderbot-400M-distill?text=Hey+my+name+is+ChatBack%21+Are+you+ready+to+rock%3F """
-
-def get_base(filename):
- basedir = os.path.dirname(__file__)
- print(basedir)
- #loadPath = basedir + "\\" + filename # works on windows
- loadPath = basedir + filename
- print(loadPath)
- return loadPath
-
-def chat(message, history):
- history = history or []
- if history:
- history_useful = [' '.join([str(a[0])+' '+str(a[1]) for a in history])]
- else:
- history_useful = []
-
- history_useful = add_note_to_history(message, history_useful)
- inputs = tokenizer(history_useful, return_tensors="pt")
- inputs, history_useful, history = take_last_tokens(inputs, history_useful, history)
- reply_ids = model.generate(**inputs)
- response = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
- history_useful = add_note_to_history(response, history_useful)
- list_history = history_useful[0].split(' ')
- history.append((list_history[-2], list_history[-1]))
-
- df=pd.DataFrame()
-
- if UseMemory:
- #outputfileName = 'ChatbotMemory.csv'
- outputfileName = 'ChatbotMemory3.csv' # Test first time file create
- df = store_message(message, response, outputfileName) # Save to dataset
- basedir = get_base(outputfileName)
-
- return history, df, basedir
-
-
-with gr.Blocks() as demo:
-    gr.Markdown("🍰Gradio chatbot backed by dataframe CSV memory🎨")
-
- with gr.Row():
- t1 = gr.Textbox(lines=1, default="", label="Chat Text:")
- b1 = gr.Button("Respond and Retrieve Messages")
-
- with gr.Row(): # inputs and buttons
- s1 = gr.State([])
- df1 = gr.Dataframe(wrap=True, max_rows=1000, overflow_row_behaviour= "paginate")
- with gr.Row(): # inputs and buttons
- file = gr.File(label="File")
- s2 = gr.Markdown()
-
- b1.click(fn=chat, inputs=[t1, s1], outputs=[s1, df1, file])
-
-demo.launch(debug=True, show_error=True)
diff --git a/spaces/awacke1/VizLib-KeywordExtraction-Clustering-Translation/README.md b/spaces/awacke1/VizLib-KeywordExtraction-Clustering-Translation/README.md
deleted file mode 100644
index 30f89d7d73e94861d82922f58b8bff9af6bcfc83..0000000000000000000000000000000000000000
--- a/spaces/awacke1/VizLib-KeywordExtraction-Clustering-Translation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: VizLib KeywordExtraction Clustering Translation
-emoji: 📚
-colorFrom: purple
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/objects/Fire.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/objects/Fire.js
deleted file mode 100644
index 28109c1c7d2bd6ad4a6efe9bc07006d0f7f59b23..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/objects/Fire.js
+++ /dev/null
@@ -1,1075 +0,0 @@
-/**
- * @author Mike Piecuch / https://github.com/mikepiecuch
- *
- * Based on research paper "Real-Time Fluid Dynamics for Games" by Jos Stam
- * http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/GDC03.pdf
- *
- */
-
-THREE.Fire = function ( geometry, options ) {
-
- THREE.Mesh.call( this, geometry );
-
- this.type = 'Fire';
-
- this.clock = new THREE.Clock();
-
- options = options || {};
-
- var textureWidth = options.textureWidth || 512;
- var textureHeight = options.textureHeight || 512;
- var oneOverWidth = 1.0 / textureWidth;
- var oneOverHeight = 1.0 / textureHeight;
-
- var debug = ( options.debug === undefined ) ? false : options.debug;
- this.color1 = options.color1 || new THREE.Color( 0xffffff );
- this.color2 = options.color2 || new THREE.Color( 0xffa000 );
- this.color3 = options.color3 || new THREE.Color( 0x000000 );
- this.colorBias = ( options.colorBias === undefined ) ? 0.8 : options.colorBias;
- this.diffuse = ( options.diffuse === undefined ) ? 1.33 : options.diffuse;
- this.viscosity = ( options.viscosity === undefined ) ? 0.25 : options.viscosity;
- this.expansion = ( options.expansion === undefined ) ? - 0.25 : options.expansion;
- this.swirl = ( options.swirl === undefined ) ? 50.0 : options.swirl;
- this.burnRate = ( options.burnRate === undefined ) ? 0.3 : options.burnRate;
- this.drag = ( options.drag === undefined ) ? 0.35 : options.drag;
- this.airSpeed = ( options.airSpeed === undefined ) ? 6.0 : options.airSpeed;
- this.windVector = options.windVector || new THREE.Vector2( 0.0, 0.75 );
- this.speed = ( options.speed === undefined ) ? 500.0 : options.speed;
- this.massConservation = ( options.massConservation === undefined ) ? false : options.massConservation;
-
- var size = textureWidth * textureHeight;
- this.sourceData = new Uint8Array( 4 * size );
-
- this.clearSources = function () {
-
- for ( var y = 0; y < textureHeight; y ++ ) {
-
- for ( var x = 0; x < textureWidth; x ++ ) {
-
- var i = y * textureWidth + x;
- var stride = i * 4;
-
- this.sourceData[ stride ] = 0;
- this.sourceData[ stride + 1 ] = 0;
- this.sourceData[ stride + 2 ] = 0;
- this.sourceData[ stride + 3 ] = 0;
-
- }
-
- }
-
- this.sourceMaterial.uniforms[ "sourceMap" ].value = this.internalSource;
- this.sourceMaterial.needsUpdate = true;
-
- return this.sourceData;
-
- };
-
- this.addSource = function ( u, v, radius, density = null, windX = null, windY = null ) {
-
- var startX = Math.max( Math.floor( ( u - radius ) * textureWidth ), 0 );
- var startY = Math.max( Math.floor( ( v - radius ) * textureHeight ), 0 );
- var endX = Math.min( Math.floor( ( u + radius ) * textureWidth ), textureWidth );
- var endY = Math.min( Math.floor( ( v + radius ) * textureHeight ), textureHeight );
-
- for ( var y = startY; y < endY; y ++ ) {
-
- for ( var x = startX; x < endX; x ++ ) {
-
- var diffX = x * oneOverWidth - u;
- var diffY = y * oneOverHeight - v;
-
- if ( diffX * diffX + diffY * diffY < radius * radius ) {
-
- var i = y * textureWidth + x;
- var stride = i * 4;
-
- if ( density != null ) {
-
- this.sourceData[ stride ] = Math.min( Math.max( density, 0.0 ), 1.0 ) * 255;
-
- }
- if ( windX != null ) {
-
- var wind = Math.min( Math.max( windX, - 1.0 ), 1.0 );
- wind = ( wind < 0.0 ) ? Math.floor( wind * 127 ) + 255 : Math.floor( wind * 127 );
- this.sourceData[ stride + 1 ] = wind;
-
- }
- if ( windY != null ) {
-
- var wind = Math.min( Math.max( windY, - 1.0 ), 1.0 );
- wind = ( wind < 0.0 ) ? Math.floor( wind * 127 ) + 255 : Math.floor( wind * 127 );
- this.sourceData[ stride + 2 ] = wind;
-
- }
-
- }
-
- }
-
- }
-
- this.internalSource.needsUpdate = true;
-
- return this.sourceData;
-
- };
-
- // When setting source map, red channel is density. Green and blue channels
- // encode x and y velocity respectively as signed chars:
- // (0 -> 127 = 0.0 -> 1.0, 128 -> 255 = -1.0 -> 0.0 )
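-	// e.g. a velocity of +0.5 is stored as Math.floor( 0.5 * 127 ) = 63 and -0.5 as
-	// Math.floor( -0.5 * 127 ) + 255 = 191; the shaders decode with ( value - step( 0.5, value ) ) * 2.0.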
- this.setSourceMap = function ( texture ) {
-
- this.sourceMaterial.uniforms[ "sourceMap" ].value = texture;
-
- };
-
- var parameters = {
- minFilter: THREE.NearestFilter,
- magFilter: THREE.NearestFilter,
- depthBuffer: false,
- stencilBuffer: false
- };
-
-
- this.field0 = new THREE.WebGLRenderTarget( textureWidth, textureHeight, parameters );
-
- this.field0.background = new THREE.Color( 0x000000 );
-
- this.field1 = new THREE.WebGLRenderTarget( textureWidth, textureHeight, parameters );
-
-	this.field1.background = new THREE.Color( 0x000000 );
-
- this.fieldProj = new THREE.WebGLRenderTarget( textureWidth, textureHeight, parameters );
-
-	this.fieldProj.background = new THREE.Color( 0x000000 );
-
- if ( ! THREE.Math.isPowerOfTwo( textureWidth ) ||
- ! THREE.Math.isPowerOfTwo( textureHeight ) ) {
-
- this.field0.texture.generateMipmaps = false;
- this.field1.texture.generateMipmaps = false;
- this.fieldProj.texture.generateMipmaps = false;
-
- }
-
-
- this.fieldScene = new THREE.Scene();
- this.fieldScene.background = new THREE.Color( 0x000000 );
-
- this.orthoCamera = new THREE.OrthographicCamera( textureWidth / - 2, textureWidth / 2, textureHeight / 2, textureHeight / - 2, 1, 2 );
- this.orthoCamera.position.z = 1;
-
- this.fieldGeometry = new THREE.PlaneBufferGeometry( textureWidth, textureHeight );
-
- this.internalSource = new THREE.DataTexture( this.sourceData, textureWidth, textureHeight, THREE.RGBAFormat );
-
- // Source Shader
-
- var shader = THREE.Fire.SourceShader;
- this.sourceMaterial = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: false
- } );
-
- this.clearSources();
-
- this.sourceMesh = new THREE.Mesh( this.fieldGeometry, this.sourceMaterial );
- this.fieldScene.add( this.sourceMesh );
-
- // Diffuse Shader
-
- var shader = THREE.Fire.DiffuseShader;
- this.diffuseMaterial = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: false
- } );
-
- this.diffuseMaterial.uniforms[ "oneOverWidth" ].value = oneOverWidth;
- this.diffuseMaterial.uniforms[ "oneOverHeight" ].value = oneOverHeight;
-
- this.diffuseMesh = new THREE.Mesh( this.fieldGeometry, this.diffuseMaterial );
- this.fieldScene.add( this.diffuseMesh );
-
- // Drift Shader
-
- shader = THREE.Fire.DriftShader;
- this.driftMaterial = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: false
- } );
-
- this.driftMaterial.uniforms[ "oneOverWidth" ].value = oneOverWidth;
- this.driftMaterial.uniforms[ "oneOverHeight" ].value = oneOverHeight;
-
- this.driftMesh = new THREE.Mesh( this.fieldGeometry, this.driftMaterial );
- this.fieldScene.add( this.driftMesh );
-
- // Projection Shader 1
-
- shader = THREE.Fire.ProjectionShader1;
- this.projMaterial1 = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: false
- } );
-
- this.projMaterial1.uniforms[ "oneOverWidth" ].value = oneOverWidth;
- this.projMaterial1.uniforms[ "oneOverHeight" ].value = oneOverHeight;
-
- this.projMesh1 = new THREE.Mesh( this.fieldGeometry, this.projMaterial1 );
- this.fieldScene.add( this.projMesh1 );
-
- // Projection Shader 2
-
- shader = THREE.Fire.ProjectionShader2;
- this.projMaterial2 = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: false
- } );
-
-
- this.projMaterial2.uniforms[ "oneOverWidth" ].value = oneOverWidth;
- this.projMaterial2.uniforms[ "oneOverHeight" ].value = oneOverHeight;
-
- this.projMesh2 = new THREE.Mesh( this.fieldGeometry, this.projMaterial2 );
- this.fieldScene.add( this.projMesh2 );
-
- // Projection Shader 3
-
- shader = THREE.Fire.ProjectionShader3;
- this.projMaterial3 = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: false
- } );
-
-
- this.projMaterial3.uniforms[ "oneOverWidth" ].value = oneOverWidth;
- this.projMaterial3.uniforms[ "oneOverHeight" ].value = oneOverHeight;
-
- this.projMesh3 = new THREE.Mesh( this.fieldGeometry, this.projMaterial3 );
- this.fieldScene.add( this.projMesh3 );
-
- // Color Shader
-
- if ( debug ) {
-
- shader = THREE.Fire.DebugShader;
-
- } else {
-
- shader = THREE.Fire.ColorShader;
-
- }
- this.material = new THREE.ShaderMaterial( {
- uniforms: shader.uniforms,
- vertexShader: shader.vertexShader,
- fragmentShader: shader.fragmentShader,
- transparent: true
- } );
-
- this.material.uniforms[ "densityMap" ].value = this.field1.texture;
-
- this.configShaders = function ( dt ) {
-
- this.diffuseMaterial.uniforms[ "diffuse" ].value = dt * 0.05 * this.diffuse;
- this.diffuseMaterial.uniforms[ "viscosity" ].value = dt * 0.05 * this.viscosity;
- this.diffuseMaterial.uniforms[ "expansion" ].value = Math.exp( this.expansion * - 1.0 );
- this.diffuseMaterial.uniforms[ "swirl" ].value = Math.exp( this.swirl * - 0.1 );
- this.diffuseMaterial.uniforms[ "drag" ].value = Math.exp( this.drag * - 0.1 );
- this.diffuseMaterial.uniforms[ "burnRate" ].value = this.burnRate * dt * 0.01;
- this.driftMaterial.uniforms[ "windVector" ].value = this.windVector;
- this.driftMaterial.uniforms[ "airSpeed" ].value = dt * this.airSpeed * 0.001 * textureHeight;
- this.material.uniforms[ "color1" ].value = this.color1;
- this.material.uniforms[ "color2" ].value = this.color2;
- this.material.uniforms[ "color3" ].value = this.color3;
- this.material.uniforms[ "colorBias" ].value = this.colorBias;
-
- };
-
- this.clearDiffuse = function () {
-
- this.diffuseMaterial.uniforms[ "expansion" ].value = 1.0;
- this.diffuseMaterial.uniforms[ "swirl" ].value = 1.0;
- this.diffuseMaterial.uniforms[ "drag" ].value = 1.0;
- this.diffuseMaterial.uniforms[ "burnRate" ].value = 0.0;
-
- };
-
- this.swapTextures = function () {
-
- var swap = this.field0;
- this.field0 = this.field1;
- this.field1 = swap;
-
- };
-
- this.saveRenderState = function ( renderer ) {
-
- this.savedRenderTarget = renderer.getRenderTarget();
- this.savedVrEnabled = renderer.vr.enabled;
- this.savedShadowAutoUpdate = renderer.shadowMap.autoUpdate;
- this.savedAntialias = renderer.antialias;
- this.savedToneMapping = renderer.toneMapping;
-
- };
-
- this.restoreRenderState = function ( renderer ) {
-
- renderer.vr.enabled = this.savedVrEnabled;
- renderer.shadowMap.autoUpdate = this.savedShadowAutoUpdate;
- renderer.setRenderTarget( this.savedRenderTarget );
- renderer.antialias = this.savedAntialias;
- renderer.toneMapping = this.savedToneMapping;
-
- };
-
- this.renderSource = function ( renderer ) {
-
- this.sourceMesh.visible = true;
-
- this.sourceMaterial.uniforms[ "densityMap" ].value = this.field0.texture;
-
- renderer.setRenderTarget( this.field1 );
- renderer.render( this.fieldScene, this.orthoCamera );
-
- this.sourceMesh.visible = false;
-
- this.swapTextures();
-
- };
-
- this.renderDiffuse = function ( renderer ) {
-
- this.diffuseMesh.visible = true;
-
- this.diffuseMaterial.uniforms[ "densityMap" ].value = this.field0.texture;
-
- renderer.setRenderTarget( this.field1 );
- renderer.render( this.fieldScene, this.orthoCamera );
-
- this.diffuseMesh.visible = false;
-
- this.swapTextures();
-
- };
-
- this.renderDrift = function ( renderer ) {
-
- this.driftMesh.visible = true;
-
- this.driftMaterial.uniforms[ "densityMap" ].value = this.field0.texture;
-
- renderer.setRenderTarget( this.field1 );
- renderer.render( this.fieldScene, this.orthoCamera );
-
- this.driftMesh.visible = false;
-
- this.swapTextures();
-
- };
-
- this.renderProject = function ( renderer ) {
-
- // Projection pass 1
-
- this.projMesh1.visible = true;
-
- this.projMaterial1.uniforms[ "densityMap" ].value = this.field0.texture;
-
- renderer.setRenderTarget( this.fieldProj );
- renderer.render( this.fieldScene, this.orthoCamera );
-
- this.projMesh1.visible = false;
-
- this.projMaterial2.uniforms[ "densityMap" ].value = this.fieldProj.texture;
-
- // Projection pass 2
-
- this.projMesh2.visible = true;
-
- for ( var i = 0; i < 20; i ++ ) {
-
- renderer.setRenderTarget( this.field1 );
- renderer.render( this.fieldScene, this.orthoCamera );
-
- var temp = this.field1;
- this.field1 = this.fieldProj;
- this.fieldProj = temp;
-
- this.projMaterial2.uniforms[ "densityMap" ].value = this.fieldProj.texture;
-
- }
-
- this.projMesh2.visible = false;
-
- this.projMaterial3.uniforms[ "densityMap" ].value = this.field0.texture;
- this.projMaterial3.uniforms[ "projMap" ].value = this.fieldProj.texture;
-
- // Projection pass 3
-
- this.projMesh3.visible = true;
-
- renderer.setRenderTarget( this.field1 );
- renderer.render( this.fieldScene, this.orthoCamera );
-
- this.projMesh3.visible = false;
-
- this.swapTextures();
-
- };
-
- this.onBeforeRender = function ( renderer ) {
-
- var delta = this.clock.getDelta();
- if ( delta > 0.1 ) {
-
- delta = 0.1;
-
- }
- var dt = delta * ( this.speed * 0.1 );
-
- this.configShaders( dt );
-
- this.saveRenderState( renderer );
-
- renderer.vr.enabled = false; // Avoid camera modification and recursion
- renderer.shadowMap.autoUpdate = false; // Avoid re-computing shadows
- renderer.antialias = false;
- renderer.toneMapping = THREE.NoToneMapping;
-
- this.sourceMesh.visible = false;
- this.diffuseMesh.visible = false;
- this.driftMesh.visible = false;
- this.projMesh1.visible = false;
- this.projMesh2.visible = false;
- this.projMesh3.visible = false;
-
- this.renderSource( renderer );
-
- this.clearDiffuse();
- for ( var i = 0; i < 21; i ++ ) {
-
- this.renderDiffuse( renderer );
-
- }
- this.configShaders( dt );
- this.renderDiffuse( renderer );
-
- this.renderDrift( renderer );
-
- if ( this.massConservation ) {
-
- this.renderProject( renderer );
- this.renderProject( renderer );
-
- }
-
- // Final result out for coloring
-
- this.material.map = this.field1.texture;
- this.material.transparent = true;
-	this.material.minFilter = THREE.LinearFilter;
-	this.material.magFilter = THREE.LinearFilter;
-
- this.restoreRenderState( renderer );
-
- };
-
-};
-
-
-THREE.Fire.prototype = Object.create( THREE.Mesh.prototype );
-THREE.Fire.prototype.constructor = THREE.Fire;
-
-THREE.Fire.SourceShader = {
-
- uniforms: {
- 'sourceMap': {
- type: 't',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform sampler2D sourceMap;',
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' vec4 source = texture2D( sourceMap, vUv );',
- ' vec4 current = texture2D( densityMap, vUv );',
-
- ' vec2 v0 = (current.gb - step(0.5, current.gb)) * 2.0;',
- ' vec2 v1 = (source.gb - step(0.5, source.gb)) * 2.0;',
-
- ' vec2 newVel = v0 + v1;',
-
- ' newVel = clamp(newVel, -0.99, 0.99);',
- ' newVel = newVel * 0.5 + step(0.0, -newVel);',
-
- ' float newDensity = source.r + current.a;',
- ' float newTemp = source.r + current.r;',
-
- ' newDensity = clamp(newDensity, 0.0, 1.0);',
- ' newTemp = clamp(newTemp, 0.0, 1.0);',
-
- ' gl_FragColor = vec4(newTemp, newVel.xy, newDensity);',
-
- '}'
-
- ].join( "\n" )
-};
-
-
-THREE.Fire.DiffuseShader = {
-
- uniforms: {
- 'oneOverWidth': {
- type: 'f',
- value: null
- },
- 'oneOverHeight': {
- type: 'f',
- value: null
- },
- 'diffuse': {
- type: 'f',
- value: null
- },
- 'viscosity': {
- type: 'f',
- value: null
- },
- 'expansion': {
- type: 'f',
- value: null
- },
- 'swirl': {
- type: 'f',
- value: null
- },
- 'drag': {
- type: 'f',
- value: null
- },
- 'burnRate': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform float oneOverWidth;',
- 'uniform float oneOverHeight;',
- 'uniform float diffuse;',
- 'uniform float viscosity;',
- 'uniform float expansion;',
- 'uniform float swirl;',
- 'uniform float burnRate;',
- 'uniform float drag;',
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vec4 dC = texture2D( densityMap, vUv );',
- ' vec4 dL = texture2D( densityMap, vec2(vUv.x - oneOverWidth, vUv.y) );',
- ' vec4 dR = texture2D( densityMap, vec2(vUv.x + oneOverWidth, vUv.y) );',
- ' vec4 dU = texture2D( densityMap, vec2(vUv.x, vUv.y - oneOverHeight) );',
- ' vec4 dD = texture2D( densityMap, vec2(vUv.x, vUv.y + oneOverHeight) );',
- ' vec4 dUL = texture2D( densityMap, vec2(vUv.x - oneOverWidth, vUv.y - oneOverHeight) );',
- ' vec4 dUR = texture2D( densityMap, vec2(vUv.x + oneOverWidth, vUv.y - oneOverHeight) );',
- ' vec4 dDL = texture2D( densityMap, vec2(vUv.x - oneOverWidth, vUv.y + oneOverHeight) );',
- ' vec4 dDR = texture2D( densityMap, vec2(vUv.x + oneOverWidth, vUv.y + oneOverHeight) );',
-
- ' dC.yz = (dC.yz - step(0.5, dC.yz)) * 2.0;',
- ' dL.yz = (dL.yz - step(0.5, dL.yz)) * 2.0;',
- ' dR.yz = (dR.yz - step(0.5, dR.yz)) * 2.0;',
- ' dU.yz = (dU.yz - step(0.5, dU.yz)) * 2.0;',
- ' dD.yz = (dD.yz - step(0.5, dD.yz)) * 2.0;',
- ' dUL.yz = (dUL.yz - step(0.5, dUL.yz)) * 2.0;',
- ' dUR.yz = (dUR.yz - step(0.5, dUR.yz)) * 2.0;',
- ' dDL.yz = (dDL.yz - step(0.5, dDL.yz)) * 2.0;',
- ' dDR.yz = (dDR.yz - step(0.5, dDR.yz)) * 2.0;',
-
- ' vec4 result = (dC + vec4(diffuse, viscosity, viscosity, diffuse) * ( dL + dR + dU + dD + dUL + dUR + dDL + dDR )) / (1.0 + 8.0 * vec4(diffuse, viscosity, viscosity, diffuse)) - vec4(0.0, 0.0, 0.0, 0.001);',
-
- ' float temperature = result.r;',
- ' temperature = clamp(temperature - burnRate, 0.0, 1.0);',
-
- ' vec2 velocity = result.yz;',
-
- ' vec2 expansionVec = vec2(dL.w - dR.w, dU.w - dD.w);',
-
- ' vec2 swirlVec = vec2((dL.z - dR.z) * 0.5, (dU.y - dD.y) * 0.5);',
-
- ' velocity = velocity + (1.0 - expansion) * expansionVec + (1.0 - swirl) * swirlVec;',
-
- ' velocity = velocity - (1.0 - drag) * velocity;',
-
- ' gl_FragColor = vec4(temperature, velocity * 0.5 + step(0.0, -velocity), result.w);',
-
- ' gl_FragColor = gl_FragColor * step(oneOverWidth, vUv.x);',
- ' gl_FragColor = gl_FragColor * step(oneOverHeight, vUv.y);',
- ' gl_FragColor = gl_FragColor * step(vUv.x, 1.0 - oneOverWidth);',
- ' gl_FragColor = gl_FragColor * step(vUv.y, 1.0 - oneOverHeight);',
-
- '}'
-
- ].join( "\n" )
-};
-
-THREE.Fire.DriftShader = {
-
- uniforms: {
- 'oneOverWidth': {
- type: 'f',
- value: null
- },
- 'oneOverHeight': {
- type: 'f',
- value: null
- },
- 'windVector': {
- type: 'v2',
- value: new THREE.Vector2( 0.0, 0.0 )
- },
- 'airSpeed': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform float oneOverWidth;',
- 'uniform float oneOverHeight;',
- 'uniform vec2 windVector;',
- 'uniform float airSpeed;',
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' vec2 velocity = texture2D( densityMap, vUv ).gb;',
- ' velocity = (velocity - step(0.5, velocity)) * 2.0;',
-
- ' velocity = velocity + windVector;',
-
- ' vec2 sourcePos = vUv - airSpeed * vec2(oneOverWidth, oneOverHeight) * velocity;',
-
- ' vec2 units = sourcePos / vec2(oneOverWidth, oneOverHeight);',
-
- ' vec2 intPos = floor(units);',
- ' vec2 frac = units - intPos;',
- ' intPos = intPos * vec2(oneOverWidth, oneOverHeight);',
-
- ' vec4 dX0Y0 = texture2D( densityMap, intPos + vec2(0.0, -oneOverHeight) );',
- ' vec4 dX1Y0 = texture2D( densityMap, intPos + vec2(oneOverWidth, 0.0) );',
- ' vec4 dX0Y1 = texture2D( densityMap, intPos + vec2(0.0, oneOverHeight) );',
- ' vec4 dX1Y1 = texture2D( densityMap, intPos + vec2(oneOverWidth, oneOverHeight) );',
-
-
- ' dX0Y0.gb = (dX0Y0.gb - step(0.5, dX0Y0.gb)) * 2.0;',
- ' dX1Y0.gb = (dX1Y0.gb - step(0.5, dX1Y0.gb)) * 2.0;',
- ' dX0Y1.gb = (dX0Y1.gb - step(0.5, dX0Y1.gb)) * 2.0;',
- ' dX1Y1.gb = (dX1Y1.gb - step(0.5, dX1Y1.gb)) * 2.0;',
-
- ' vec4 source = mix(mix(dX0Y0, dX1Y0, frac.x), mix(dX0Y1, dX1Y1, frac.x), frac.y);',
-
- ' source.gb = source.gb * 0.5 + step(0.0, -source.gb);',
-
- ' gl_FragColor = source;',
-
- '}'
-
- ].join( "\n" )
-};
-
-
-THREE.Fire.ProjectionShader1 = {
-
- uniforms: {
- 'oneOverWidth': {
- type: 'f',
- value: null
- },
- 'oneOverHeight': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform float oneOverWidth;',
- 'uniform float oneOverHeight;',
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' float dL = texture2D( densityMap, vec2(vUv.x - oneOverWidth, vUv.y) ).g;',
- ' float dR = texture2D( densityMap, vec2(vUv.x + oneOverWidth, vUv.y) ).g;',
- ' float dU = texture2D( densityMap, vec2(vUv.x, vUv.y - oneOverHeight) ).b;',
- ' float dD = texture2D( densityMap, vec2(vUv.x, vUv.y + oneOverHeight) ).b;',
-
- ' dL = (dL - step(0.5, dL)) * 2.0;',
- ' dR = (dR - step(0.5, dR)) * 2.0;',
- ' dU = (dU - step(0.5, dU)) * 2.0;',
- ' dD = (dD - step(0.5, dD)) * 2.0;',
-
- ' float h = (oneOverWidth + oneOverHeight) * 0.5;',
- ' float div = -0.5 * h * (dR - dL + dD - dU);',
-
- ' gl_FragColor = vec4( 0.0, 0.0, div * 0.5 + step(0.0, -div), 0.0);',
-
- '}'
-
- ].join( "\n" )
-};
-
-
-THREE.Fire.ProjectionShader2 = {
-
- uniforms: {
- 'oneOverWidth': {
- type: 'f',
- value: null
- },
- 'oneOverHeight': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform float oneOverWidth;',
- 'uniform float oneOverHeight;',
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' float div = texture2D( densityMap, vUv ).b;',
- ' float pL = texture2D( densityMap, vec2(vUv.x - oneOverWidth, vUv.y) ).g;',
- ' float pR = texture2D( densityMap, vec2(vUv.x + oneOverWidth, vUv.y) ).g;',
- ' float pU = texture2D( densityMap, vec2(vUv.x, vUv.y - oneOverHeight) ).g;',
- ' float pD = texture2D( densityMap, vec2(vUv.x, vUv.y + oneOverHeight) ).g;',
-
- ' float divNorm = (div - step(0.5, div)) * 2.0;',
- ' pL = (pL - step(0.5, pL)) * 2.0;',
- ' pR = (pR - step(0.5, pR)) * 2.0;',
- ' pU = (pU - step(0.5, pU)) * 2.0;',
- ' pD = (pD - step(0.5, pD)) * 2.0;',
-
- ' float p = (divNorm + pR + pL + pD + pU) * 0.25;',
-
- ' gl_FragColor = vec4( 0.0, p * 0.5 + step(0.0, -p), div, 0.0);',
-
- '}'
-
- ].join( "\n" )
-};
-
-
-THREE.Fire.ProjectionShader3 = {
-
- uniforms: {
- 'oneOverWidth': {
- type: 'f',
- value: null
- },
- 'oneOverHeight': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- },
- 'projMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform float oneOverWidth;',
- 'uniform float oneOverHeight;',
- 'uniform sampler2D densityMap;',
- 'uniform sampler2D projMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' vec4 orig = texture2D(densityMap, vUv);',
-
- ' float pL = texture2D( projMap, vec2(vUv.x - oneOverWidth, vUv.y) ).g;',
- ' float pR = texture2D( projMap, vec2(vUv.x + oneOverWidth, vUv.y) ).g;',
- ' float pU = texture2D( projMap, vec2(vUv.x, vUv.y - oneOverHeight) ).g;',
- ' float pD = texture2D( projMap, vec2(vUv.x, vUv.y + oneOverHeight) ).g;',
-
- ' float uNorm = (orig.g - step(0.5, orig.g)) * 2.0;',
- ' float vNorm = (orig.b - step(0.5, orig.b)) * 2.0;',
-
- ' pL = (pL - step(0.5, pL)) * 2.0;',
- ' pR = (pR - step(0.5, pR)) * 2.0;',
- ' pU = (pU - step(0.5, pU)) * 2.0;',
- ' pD = (pD - step(0.5, pD)) * 2.0;',
-
- ' float h = (oneOverWidth + oneOverHeight) * 0.5;',
- ' float u = uNorm - (0.5 * (pR - pL) / h);',
- ' float v = vNorm - (0.5 * (pD - pU) / h);',
-
- ' gl_FragColor = vec4( orig.r, u * 0.5 + step(0.0, -u), v * 0.5 + step(0.0, -v), orig.a);',
-
- '}'
-
- ].join( "\n" )
-};
-
-THREE.Fire.ColorShader = {
-
- uniforms: {
- 'color1': {
- type: 'c',
- value: null
- },
- 'color2': {
- type: 'c',
- value: null
- },
- 'color3': {
- type: 'c',
- value: null
- },
- 'colorBias': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform vec3 color1;',
- 'uniform vec3 color2;',
- 'uniform vec3 color3;',
- 'uniform float colorBias;',
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' float density = texture2D( densityMap, vUv ).a;',
- ' float temperature = texture2D( densityMap, vUv ).r;',
-
- ' float bias = clamp(colorBias, 0.0001, 0.9999);',
-
- ' vec3 blend1 = mix(color3, color2, temperature / bias) * (1.0 - step(bias, temperature));',
- ' vec3 blend2 = mix(color2, color1, (temperature - bias) / (1.0 - bias) ) * step(bias, temperature);',
-
- ' gl_FragColor = vec4(blend1 + blend2, density);',
- '}'
-
- ].join( "\n" )
-};
-
-
-THREE.Fire.DebugShader = {
-
- uniforms: {
- 'color1': {
- type: 'c',
- value: null
- },
- 'color2': {
- type: 'c',
- value: null
- },
- 'color3': {
- type: 'c',
- value: null
- },
- 'colorBias': {
- type: 'f',
- value: null
- },
- 'densityMap': {
- type: 't',
- value: null
- }
- },
-
- vertexShader: [
- 'varying vec2 vUv;',
-
- 'void main() {',
-
- ' vUv = uv;',
-
- ' vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
- ' gl_Position = projectionMatrix * mvPosition;',
-
- '}'
-
- ].join( "\n" ),
-
- fragmentShader: [
- 'uniform sampler2D densityMap;',
-
- 'varying vec2 vUv;',
-
- 'void main() {',
- ' float density;',
- ' density = texture2D( densityMap, vUv ).a;',
-
- ' vec2 vel = texture2D( densityMap, vUv ).gb;',
-
- ' vel = (vel - step(0.5, vel)) * 2.0;',
-
- ' float r = density;',
- ' float g = max(abs(vel.x), density * 0.5);',
- ' float b = max(abs(vel.y), density * 0.5);',
- ' float a = max(density * 0.5, max(abs(vel.x), abs(vel.y)));',
-
- ' gl_FragColor = vec4(r, g, b, a);',
-
- '}'
-
- ].join( "\n" )
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/HorizontalBlurShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/HorizontalBlurShader.js
deleted file mode 100644
index a73c94bad63e4af895e03e8323df7e6765147a30..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/HorizontalBlurShader.js
+++ /dev/null
@@ -1,62 +0,0 @@
-/**
- * @author zz85 / http://www.lab4games.net/zz85/blog
- *
- * Two pass Gaussian blur filter (horizontal and vertical blur shaders)
- * - described in http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/
- * and used in http://www.cake23.de/traveling-wavefronts-lit-up.html
- *
- * - 9 samples per pass
- * - standard deviation 2.7
- * - "h" and "v" parameters should be set to "1 / width" and "1 / height"
- */
-
-THREE.HorizontalBlurShader = {
-
- uniforms: {
-
- "tDiffuse": { value: null },
- "h": { value: 1.0 / 512.0 }
-
- },
-
- vertexShader: [
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vUv = uv;",
- "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
-
- "}"
-
- ].join( "\n" ),
-
- fragmentShader: [
-
- "uniform sampler2D tDiffuse;",
- "uniform float h;",
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vec4 sum = vec4( 0.0 );",
-
- "sum += texture2D( tDiffuse, vec2( vUv.x - 4.0 * h, vUv.y ) ) * 0.051;",
- "sum += texture2D( tDiffuse, vec2( vUv.x - 3.0 * h, vUv.y ) ) * 0.0918;",
- "sum += texture2D( tDiffuse, vec2( vUv.x - 2.0 * h, vUv.y ) ) * 0.12245;",
- "sum += texture2D( tDiffuse, vec2( vUv.x - 1.0 * h, vUv.y ) ) * 0.1531;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y ) ) * 0.1633;",
- "sum += texture2D( tDiffuse, vec2( vUv.x + 1.0 * h, vUv.y ) ) * 0.1531;",
- "sum += texture2D( tDiffuse, vec2( vUv.x + 2.0 * h, vUv.y ) ) * 0.12245;",
- "sum += texture2D( tDiffuse, vec2( vUv.x + 3.0 * h, vUv.y ) ) * 0.0918;",
- "sum += texture2D( tDiffuse, vec2( vUv.x + 4.0 * h, vUv.y ) ) * 0.051;",
-
- "gl_FragColor = sum;",
-
- "}"
-
- ].join( "\n" )
-
-};
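-
-// Usage sketch, assuming the examples' post-processing classes (THREE.EffectComposer, THREE.ShaderPass)
-// are loaded and `composer` is an existing EffectComposer; per the note above, "h" is 1 / width:
-//
-//   var hblur = new THREE.ShaderPass( THREE.HorizontalBlurShader );
-//   hblur.uniforms[ "h" ].value = 1.0 / window.innerWidth;
-//   composer.addPass( hblur );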
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/core/EventDispatcher.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/core/EventDispatcher.d.ts
deleted file mode 100644
index 29fa97e806315beed02662784db6bfa81dffd37f..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/core/EventDispatcher.d.ts
+++ /dev/null
@@ -1,60 +0,0 @@
-import { Event } from './Face3';
-
-/**
- * JavaScript events for custom objects
- *
- * # Example
- * var Car = function () {
- *
- * EventDispatcher.call( this );
- * this.start = function () {
- *
- * this.dispatchEvent( { type: 'start', message: 'vroom vroom!' } );
- *
- * };
- *
- * };
- *
- * var car = new Car();
- * car.addEventListener( 'start', function ( event ) {
- *
- * alert( event.message );
- *
- * } );
- * car.start();
- *
- * @source src/core/EventDispatcher.js
- */
-export class EventDispatcher {
- /**
- * Creates eventDispatcher object. It needs to be call with '.call' to add the functionality to an object.
- */
- constructor();
-
- /**
- * Adds a listener to an event type.
- * @param type The type of event to listen to.
- * @param listener The function that gets called when the event is fired.
- */
- addEventListener(type: string, listener: (event: Event) => void): void;
-
- /**
- * Checks if listener is added to an event type.
- * @param type The type of event to listen to.
- * @param listener The function that gets called when the event is fired.
- */
- hasEventListener(type: string, listener: (event: Event) => void): boolean;
-
- /**
- * Removes a listener from an event type.
- * @param type The type of the listener that gets removed.
- * @param listener The listener function that gets removed.
- */
- removeEventListener(type: string, listener: (event: Event) => void): void;
-
- /**
- * Fire an event type.
- * @param type The type of event that gets fired.
- */
- dispatchEvent(event: { type: string; [attachment: string]: any }): void;
-}
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/shadow_frag.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/shadow_frag.glsl.js
deleted file mode 100644
index 457ab748c4daf3e3cf4fbeaaa4815250fa947094..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/shadow_frag.glsl.js
+++ /dev/null
@@ -1,20 +0,0 @@
-export default /* glsl */`
-uniform vec3 color;
-uniform float opacity;
-
-#include <common>
-#include <packing>
-#include <fog_pars_fragment>
-#include <bsdfs>
-#include <lights_pars_begin>
-#include <shadowmap_pars_fragment>
-#include <shadowmask_pars_fragment>
-
-void main() {
-
- gl_FragColor = vec4( color, opacity * ( 1.0 - getShadowMask() ) );
-
-	#include <fog_fragment>
-
-}
-`;
diff --git a/spaces/basicv8vc/learning-rate-scheduler-online/streamlit_app.py b/spaces/basicv8vc/learning-rate-scheduler-online/streamlit_app.py
deleted file mode 100644
index b951024906e2292785faf10437c2c19c859435aa..0000000000000000000000000000000000000000
--- a/spaces/basicv8vc/learning-rate-scheduler-online/streamlit_app.py
+++ /dev/null
@@ -1,649 +0,0 @@
-import time
-import re
-
-import streamlit as st
-import oneflow as flow
-
-import numpy as np
-import pandas as pd
-import altair as alt
-from altair import X, Y, Axis
-
-ConstantLR_CODE = """oneflow.optim.lr_scheduler.ConstantLR(
- optimizer: Optimizer,
- factor: float = 1.0 / 3,
- total_iters: int = 5,
- last_step: int = -1,
- verbose: bool = False
- )"""
-
-LinearLR_CODE = """oneflow.optim.lr_scheduler.LinearLR(
- optimizer: Optimizer,
- start_factor: float = 1.0 / 3,
- end_factor: float = 1.0,
- total_iters: int = 5,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-ExponentialLR_CODE = """oneflow.optim.lr_scheduler.ExponentialLR(
- optimizer: Optimizer,
- gamma: float,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-StepLR_CODE = """oneflow.optim.lr_scheduler.StepLR(
- optimizer: Optimizer,
- step_size: int,
- gamma: float = 0.1,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-MultiStepLR_CODE = """oneflow.optim.lr_scheduler.MultiStepLR(
- optimizer: Optimizer,
- milestones: list,
- gamma: float = 0.1,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-PolynomialLR_CODE = """oneflow.optim.lr_scheduler.PolynomialLR(
- optimizer,
- steps: int,
- end_learning_rate: float = 0.0001,
- power: float = 1.0,
- cycle: bool = False,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-CosineDecayLR_CODE = """oneflow.optim.lr_scheduler.CosineDecayLR(
- optimizer: Optimizer,
- decay_steps: int,
- alpha: float = 0.0,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-CosineAnnealingLR_CODE = """oneflow.optim.lr_scheduler.CosineAnnealingLR(
- optimizer: Optimizer,
- T_max: int,
- eta_min: float = 0.0,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-CosineAnnealingWarmRestarts_CODE = """oneflow.optim.lr_scheduler.CosineAnnealingWarmRestarts(
- optimizer: Optimizer,
- T_0: int,
- T_mult: int = 1,
- eta_min: float = 0.0,
- decay_rate: float = 1.0,
- restart_limit: int = 0,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-SequentialLR_CODE = """oneflow.optim.lr_scheduler.SequentialLR(
- optimizer: Optimizer,
- schedulers: Sequence[LRScheduler],
- milestones: Sequence[int],
- interval_rescaling: Union[Sequence[bool], bool] = False,
- last_step: int = -1,
- verbose: bool = False,
- )"""
-
-WarmupLR_CODE = """oneflow.optim.lr_scheduler.WarmupLR(
- scheduler_or_optimizer: Union[LRScheduler, Optimizer],
- warmup_factor: float = 1.0 / 3,
- warmup_iters: int = 5,
- warmup_method: str = "linear",
- warmup_prefix: bool = False,
- last_step=-1,
- verbose=False,
- )"""
-
-ReduceLROnPlateau_CODE = """oneflow.optim.lr_scheduler.ReduceLROnPlateau(
- optimizer,
- mode="min",
- factor=0.1,
- patience=10,
- threshold=1e-4,
- threshold_mode="rel",
- cooldown=0,
- min_lr=0,
- eps=1e-8,
- verbose=False,
- )"""
-
-IS_DISPLAY_CODE = False
-
-
-def _display(display_steps, steps, lrs):
- # altair
- line = ( # Creating an empty chart in the beginning when the page loads
- alt.Chart(pd.DataFrame({"last_step": [], "lr": []}))
- .mark_line(point={"filled": True, "fill": "red"})
- .encode(
- x=X(
- "last_step",
- axis=Axis(title="step"),
- scale=alt.Scale(domain=[0, steps[-1] + 2]),
- ),
- y=Y(
- "lr",
- axis=Axis(title="lr"),
- scale=alt.Scale(domain=[min(lrs) * 0.8, max(lrs) * 1.2]),
- ),
- color=alt.value("#FFAA00"),
- )
- .properties(width=600, height=400)
- .interactive()
- )
- bar_plot = st.altair_chart(line)
-
- for i in range(display_steps):
- df = pd.DataFrame({"last_step": steps[: i + 1], "lr": lrs[: i + 1]})
- line = (
- alt.Chart(df)
- .mark_line(point={"filled": True, "fill": "red"})
- .encode(
- x=X(
- "last_step",
- axis=Axis(title="step"),
- scale=alt.Scale(domain=[0, steps[-1] + 2]),
- ),
- y=Y(
- "lr",
- axis=Axis(title="lr"),
- scale=alt.Scale(domain=[min(lrs) * 0.8, max(lrs) * 1.2]),
- ),
- color=alt.value("#FFAA00"),
- )
- .properties(width=600, height=400)
- .interactive()
- )
- bar_plot.altair_chart(line)
-        # Brief pause so each newly added point renders visibly as the chart animates.
- time.sleep(0.5)
-
-
-# st.title("Learning Rate Scheduler Visualization")
-st.header("Learning Rate Scheduler Visualization")
-
-
-scheduler = st.selectbox(
- "Please choose one scheduler to display",
- (
- "ConstantLR",
- "LinearLR",
- "ExponentialLR",
- "StepLR",
- "MultiStepLR",
- "PolynomialLR",
- "CosineDecayLR",
- "CosineAnnealingLR",
- "CosineAnnealingWarmRestarts",
- # "LambdaLR",
- # "SequentialLR",
- # "WarmupLR",
- # "ChainedScheduler",
- # "ReduceLROnPlateau",
- ),
-)
-
-if scheduler == "ConstantLR":
- if IS_DISPLAY_CODE:
- st.code(ConstantLR_CODE, language="python")
- st.write("You can set argument values")
- factor = st.slider("factor:", 0.0, 1.0, 0.3)
- total_iters = st.slider("total_iters:", 0, 20, 5)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.ConstantLR(
- optimizer=optimizer, factor=factor, total_iters=total_iters
- )
- steps = []
- lrs = []
- display_steps = max(6, total_iters * 2)
- for i in range(display_steps):
- steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, steps, lrs)
-
-
-elif scheduler == "LinearLR":
- if IS_DISPLAY_CODE:
- st.code(LinearLR_CODE, language="python")
- st.write("You can set argument values")
- start_factor = st.slider("start_factor:", 0.0, 1.0, 0.3)
- end_factor = st.slider("end_factor:", 0.0, 1.0, 1.0)
- total_iters = st.slider("total_iters:", 0, 20, 5)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.LinearLR(
- optimizer=optimizer,
- start_factor=start_factor,
- end_factor=end_factor,
- total_iters=total_iters,
- )
- steps = []
- lrs = []
- display_steps = max(6, total_iters * 2)
- for i in range(display_steps):
- steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, steps, lrs)
-
-elif scheduler == "ExponentialLR":
- if IS_DISPLAY_CODE:
- st.code(ExponentialLR_CODE, language="python")
- st.write("You can set argument values")
- gamma = st.slider("gamma:", 0.0, 1.0, 0.9)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.ExponentialLR(
- optimizer=optimizer,
- gamma=gamma,
- )
- steps = []
- lrs = []
- display_steps = 20
- for i in range(display_steps):
- steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, steps, lrs)
-
-elif scheduler == "StepLR":
- if IS_DISPLAY_CODE:
- st.code(StepLR_CODE, language="python")
- st.write("You can set argument values")
- step_size = st.slider("step_size:", 0, 10, 2)
- gamma = st.slider("gamma:", 0.0, 1.0, 0.9)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.StepLR(
- optimizer=optimizer,
- step_size=step_size,
- gamma=gamma,
- )
- steps = []
- lrs = []
- display_steps = 20
- for i in range(display_steps):
- steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, steps, lrs)
-
-
-elif scheduler == "MultiStepLR":
- if IS_DISPLAY_CODE:
- st.code(MultiStepLR_CODE, language="python")
- st.write("You can set argument values")
-
- collect_numbers = lambda x: [int(i) for i in re.split("[^0-9]", x) if i != ""]
-    milestones = st.text_input("Please enter milestones")
- milestones = collect_numbers(milestones)
- if milestones is None or len(milestones) == 0:
- milestones = [5]
- gamma = st.slider("gamma:", 0.0, 1.0, 0.9)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.MultiStepLR(
- optimizer=optimizer,
- milestones=milestones,
- gamma=gamma,
- )
- steps = []
- lrs = []
- display_steps = milestones[-1] + 5
- for i in range(display_steps):
- steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, steps, lrs)
-
-elif scheduler == "PolynomialLR":
- if IS_DISPLAY_CODE:
- st.code(PolynomialLR_CODE, language="python")
- st.write("You can set argument values")
- steps = st.slider("steps:", 1, 10, 5)
- end_learning_rate = st.slider("end_learning_rate", 0.0, 1.0, 0.0001)
- power = st.slider("power", 0.0, 10.0, 1.0)
- cycle = st.checkbox(
- "cycle",
- )
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.PolynomialLR(
- optimizer=optimizer,
- steps=steps,
- end_learning_rate=end_learning_rate,
- power=power,
- cycle=cycle,
- )
- x_steps = []
- lrs = []
- display_steps = max(steps + 5, 10)
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-elif scheduler == "CosineDecayLR":
- if IS_DISPLAY_CODE:
- st.code(CosineDecayLR_CODE, language="python")
- st.write("You can set argument values")
- decay_steps = st.slider("decay_steps:", 0, 10, 5)
- alpha = st.slider("alpha", 0.0, 1.0, 0.0)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.CosineDecayLR(
- optimizer=optimizer,
- decay_steps=decay_steps,
- alpha=alpha,
- )
- x_steps = []
- lrs = []
- display_steps = max(decay_steps + 5, 10)
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-elif scheduler == "CosineAnnealingLR":
- if IS_DISPLAY_CODE:
- st.code(CosineAnnealingLR_CODE, language="python")
- st.write("You can set argument values")
- T_max = st.slider("T_max", 1, 20, 20)
- eta_min = st.slider("eta_min", 0.0, 1.0, 0.0)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.CosineAnnealingLR(
- optimizer=optimizer,
- T_max=T_max,
- eta_min=eta_min,
- )
- x_steps = []
- lrs = []
- display_steps = max(T_max + 5, 20)
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-elif scheduler == "CosineAnnealingWarmRestarts":
- if IS_DISPLAY_CODE:
- st.code(CosineAnnealingWarmRestarts_CODE, language="python")
- st.write("You can set argument values")
- T_0 = st.slider("T_0", 1, 20, 5)
- T_mult = st.slider("T_mult", 1, 5, 1)
- eta_min = st.slider("eta_min", 0.0, 1.0, 0.0)
- decay_rate = st.slider("decay_rate", 0.0, 1.0, 1.0)
- restart_limit = st.slider("restart_limit", 0, 5, 0)
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.CosineAnnealingWarmRestarts(
- optimizer=optimizer,
- T_0=T_0,
- T_mult=T_mult,
- eta_min=eta_min,
- decay_rate=decay_rate,
- restart_limit=restart_limit,
- )
- x_steps = []
- lrs = []
- display_steps = max(T_0 + 5, 20)
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-# elif scheduler == "LambdaLR":
-# code = """oneflow.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_step=-1, verbose=False)"""
-# st.code(code, language="python")
-
-elif scheduler == "SequentialLR":
- if IS_DISPLAY_CODE:
- st.code(SequentialLR_CODE, language="python")
- st.write("You can set argument values")
- schedulers = st.multiselect(
- "you can choose multiple schedulers",
- [
- "ConstantLR",
- "LinearLR",
- "ExponentialLR",
- "StepLR",
- "MultiStepLR",
- "PolynomialLR",
- "CosineDecayLR",
- "CosineAnnealingLR",
- "CosineAnnealingWarmRestarts",
- "ConstantLR",
- "LinearLR",
- "ExponentialLR",
- "StepLR",
- "MultiStepLR",
- "PolynomialLR",
- "CosineDecayLR",
- "CosineAnnealingLR",
- "CosineAnnealingWarmRestarts",
- ],
- )
- collect_numbers = lambda x: [int(i) for i in re.split("[^0-9]", x) if i != ""]
-    milestones = st.text_input("Please enter milestones")
- milestones = collect_numbers(milestones)
- interval_rescaling = st.checkbox("interval_rescaling")
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.SequentialLR(
- optimizer=optimizer,
- schedulers=schedulers,
- milestones=milestones,
- interval_rescaling=interval_rescaling,
- )
- x_steps = []
- lrs = []
- display_steps = max(milestones[-1] + 5, 20)
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-elif scheduler == "WarmupLR":
- if IS_DISPLAY_CODE:
- st.code(WarmupLR_CODE, language="python")
- scheduler_or_optimizer = st.selectbox(
- "choose one scheduler for scheduler_or_optimizer",
- [
- "ConstantLR",
- "LinearLR",
- "ExponentialLR",
- "StepLR",
- "MultiStepLR",
- "PolynomialLR",
- "CosineDecayLR",
- "CosineAnnealingLR",
- "CosineAnnealingWarmRestarts",
- ],
- )
- warmup_factor = st.slider("warmup_factor:", 0.0, 1.0, 0.3)
- warmup_iters = st.slider("warmup_iters:", 1, 10, 5)
- warmup_method = st.selectbox("warmup_method", ["linear", "constant"])
- warmup_prefix = st.checkbox("warmup_prefix")
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.WarmupLR(
-        # WarmupLR takes a single scheduler_or_optimizer argument; warm up the optimizer's lr directly
-        scheduler_or_optimizer=optimizer,
- warmup_factor=warmup_factor,
- warmup_iters=warmup_iters,
- warmup_method=warmup_method,
- warmup_prefix=warmup_prefix,
- )
- x_steps = []
- lrs = []
-    display_steps = max(warmup_iters + 5, 20)
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-
-elif scheduler == "ChainedScheduler":
- if IS_DISPLAY_CODE:
- code = """oneflow.optim.lr_scheduler.ChainedScheduler(schedulers)"""
- st.code(code, language="python")
- st.write("You can set argument values")
- schedulers = st.multiselect(
- "you can choose multiple schedulers",
- [
- "ConstantLR",
- "LinearLR",
- "ExponentialLR",
- "StepLR",
- "MultiStepLR",
- "PolynomialLR",
- "CosineDecayLR",
- "CosineAnnealingLR",
- "CosineAnnealingWarmRestarts",
- "ConstantLR",
- "LinearLR",
- "ExponentialLR",
- "StepLR",
- "MultiStepLR",
- "PolynomialLR",
- "CosineDecayLR",
- "CosineAnnealingLR",
- "CosineAnnealingWarmRestarts",
- ],
- )
- lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
- net = flow.nn.Linear(10, 2)
- optimizer = flow.optim.SGD(net.parameters(), lr=lr)
- scheduler = flow.optim.lr_scheduler.ChainedScheduler(
- optimizer=optimizer,
- schedulers=schedulers,
- )
- x_steps = []
- lrs = []
- display_steps = 20
- for i in range(display_steps):
- x_steps.append(i)
- lrs.append(scheduler.get_last_lr()[0])
- scheduler.step()
-
- col1, col2, col3 = st.columns(3)
- if col2.button("Display?"):
- _display(display_steps, x_steps, lrs)
-
-# elif scheduler == "ReduceLROnPlateau":
-# st.code(ReduceLROnPlateau_CODE, language="python")
-# st.write("You can set argument values")
-# mode = st.selectbox(
-# "mode",
-# [
-# "min",
-# "max",
-# ],
-# )
-# factor = st.slider("factor", 1e-5, 1.0 - 1e-5, 0.1)
-# patience = st.slider("patience", 1, 20, 10)
-# threshold = st.slider("threshold", 1e-4, 9e-4, 1e-4)
-# threshold_mode = st.selectbox("threshold_mode", ["rel", "abs"])
-# cooldown = st.slider("cooldown", 0, 10, 0)
-# min_lr = st.slider("min_lr", 0.0, 1.0, 0.0)
-# eps = st.slider("eps", 1e-8, 9e-8, 1e-8)
-# lr = st.slider("initial learning rate in Optimizer(e.g. SGD, Adam):", 0.0, 1.0, 0.1)
-
-# net = flow.nn.Linear(10, 2)
-# optimizer = flow.optim.SGD(net.parameters(), lr=lr)
-# scheduler = flow.optim.lr_scheduler.ReduceLROnPlateau(
-# optimizer=optimizer,
-# mode=mode,
-# factor=factor,
-# patience=patience,
-# threshold=threshold,
-# threshold_mode=threshold_mode,
-# cooldown=cooldown,
-# min_lr=min_lr,
-# eps=eps,
-# )
-# x_steps = []
-# lrs = []
-# display_steps = 25
-# for i in range(display_steps):
-# x_steps.append(i)
-# lrs.append(scheduler.get_last_lr()[0])
-# scheduler.step()
-
-# col1, col2, col3 = st.columns(3)
-# if col2.button("Display?"):
-# _display(display_steps, x_steps, lrs)
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326225639.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326225639.py
deleted file mode 100644
index 2e0a37be3ba26cc71d1a25ff33b06b64b6322c36..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326225639.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import os
-os.system("pip install gfpgan")
-
-os.system("pip freeze")
-#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .")
-import random
-import gradio as gr
-from PIL import Image
-import torch
-torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg')
-torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png')
-torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg')
-torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg')
-torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg')
-
-
-
-
-import cv2
-import glob
-import numpy as np
-from basicsr.utils import imwrite
-from gfpgan import GFPGANer
-
-import warnings
-warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. '
- 'If you really want to use it, please modify the corresponding codes.')
-bg_upsampler = None
-
-
-
-# set up GFPGAN restorer
-restorer = GFPGANer(
- model_path='GFPGANv1.3.pth',
- upscale=2,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=bg_upsampler)
-
-
-
-
-
-def inference(img):
- input_img = cv2.imread(img, cv2.IMREAD_COLOR)
- cropped_faces, restored_faces, restored_img = restorer.enhance(
- input_img, has_aligned=False, only_center_face=False, paste_back=True)
-
- return Image.fromarray(restored_faces[0][:,:,::-1])
-
-title = "GFP-GAN"
-description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once"
-article = "Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo"
-gr.Interface(
- inference,
- [gr.inputs.Image(type="filepath", label="Input")],
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[
- ['lincoln.jpg'],
- ['einstein.png'],
- ['edison.jpg'],
- ['Henry.jpg'],
- ['Frida.jpg']
- ]
- ).launch(enable_queue=True,cache_examples=True)
\ No newline at end of file
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/textual_inversion/preprocess.py b/spaces/bigjoker/stable-diffusion-webui/modules/textual_inversion/preprocess.py
deleted file mode 100644
index e1902115c97a076ace06e07f3a2e94085cb707cf..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/textual_inversion/preprocess.py
+++ /dev/null
@@ -1,230 +0,0 @@
-import os
-from PIL import Image, ImageOps
-import math
-import platform
-import sys
-import tqdm
-import time
-
-from modules import paths, shared, images, deepbooru
-from modules.shared import opts, cmd_opts
-from modules.textual_inversion import autocrop
-
-
-def preprocess(id_task, process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru=False, split_threshold=0.5, overlap_ratio=0.2, process_focal_crop=False, process_focal_crop_face_weight=0.9, process_focal_crop_entropy_weight=0.3, process_focal_crop_edges_weight=0.5, process_focal_crop_debug=False, process_multicrop=None, process_multicrop_mindim=None, process_multicrop_maxdim=None, process_multicrop_minarea=None, process_multicrop_maxarea=None, process_multicrop_objective=None, process_multicrop_threshold=None):
- try:
- if process_caption:
- shared.interrogator.load()
-
- if process_caption_deepbooru:
- deepbooru.model.start()
-
- preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug, process_multicrop, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold)
-
- finally:
-
- if process_caption:
- shared.interrogator.send_blip_to_ram()
-
- if process_caption_deepbooru:
- deepbooru.model.stop()
-
-
-def listfiles(dirname):
- return os.listdir(dirname)
-
-
-class PreprocessParams:
- src = None
- dstdir = None
- subindex = 0
- flip = False
- process_caption = False
- process_caption_deepbooru = False
- preprocess_txt_action = None
-
-
-def save_pic_with_caption(image, index, params: PreprocessParams, existing_caption=None):
- caption = ""
-
- if params.process_caption:
- caption += shared.interrogator.generate_caption(image)
-
- if params.process_caption_deepbooru:
- if len(caption) > 0:
- caption += ", "
- caption += deepbooru.model.tag_multi(image)
-
- filename_part = params.src
- filename_part = os.path.splitext(filename_part)[0]
- filename_part = os.path.basename(filename_part)
-
- basename = f"{index:05}-{params.subindex}-{filename_part}"
- image.save(os.path.join(params.dstdir, f"{basename}.png"))
-
- if params.preprocess_txt_action == 'prepend' and existing_caption:
- caption = existing_caption + ' ' + caption
- elif params.preprocess_txt_action == 'append' and existing_caption:
- caption = caption + ' ' + existing_caption
- elif params.preprocess_txt_action == 'copy' and existing_caption:
- caption = existing_caption
-
- caption = caption.strip()
-
- if len(caption) > 0:
- with open(os.path.join(params.dstdir, f"{basename}.txt"), "w", encoding="utf8") as file:
- file.write(caption)
-
- params.subindex += 1
-
-
-def save_pic(image, index, params, existing_caption=None):
- save_pic_with_caption(image, index, params, existing_caption=existing_caption)
-
- if params.flip:
- save_pic_with_caption(ImageOps.mirror(image), index, params, existing_caption=existing_caption)
-
-
-def split_pic(image, inverse_xy, width, height, overlap_ratio):
- if inverse_xy:
- from_w, from_h = image.height, image.width
- to_w, to_h = height, width
- else:
- from_w, from_h = image.width, image.height
- to_w, to_h = width, height
- h = from_h * to_w // from_w
- if inverse_xy:
- image = image.resize((h, to_w))
- else:
- image = image.resize((to_w, h))
-
- split_count = math.ceil((h - to_h * overlap_ratio) / (to_h * (1.0 - overlap_ratio)))
- y_step = (h - to_h) / (split_count - 1)
- for i in range(split_count):
- y = int(y_step * i)
- if inverse_xy:
- splitted = image.crop((y, 0, y + to_h, to_w))
- else:
- splitted = image.crop((0, y, to_w, y + to_h))
- yield splitted
-
-# not using torchvision.transforms.CenterCrop because it doesn't allow float regions
-def center_crop(image: Image, w: int, h: int):
- iw, ih = image.size
- if ih / h < iw / w:
- sw = w * ih / h
- box = (iw - sw) / 2, 0, iw - (iw - sw) / 2, ih
- else:
- sh = h * iw / w
- box = 0, (ih - sh) / 2, iw, ih - (ih - sh) / 2
- return image.resize((w, h), Image.Resampling.LANCZOS, box)
-
-
-def multicrop_pic(image: Image, mindim, maxdim, minarea, maxarea, objective, threshold):
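-    # Enumerate candidate crop sizes (w, h) on a 64 px grid within [mindim, maxdim], keep those whose
-    # area lies in [minarea, maxarea] and whose aspect-ratio error vs. the source image is <= threshold,
-    # then pick the largest-area or lowest-error candidate depending on `objective` (None if none qualify).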
- iw, ih = image.size
- err = lambda w, h: 1-(lambda x: x if x < 1 else 1/x)(iw/ih/(w/h))
- wh = max(((w, h) for w in range(mindim, maxdim+1, 64) for h in range(mindim, maxdim+1, 64)
- if minarea <= w * h <= maxarea and err(w, h) <= threshold),
- key= lambda wh: (wh[0]*wh[1], -err(*wh))[::1 if objective=='Maximize area' else -1],
- default=None
- )
- return wh and center_crop(image, *wh)
-
-
-def preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru=False, split_threshold=0.5, overlap_ratio=0.2, process_focal_crop=False, process_focal_crop_face_weight=0.9, process_focal_crop_entropy_weight=0.3, process_focal_crop_edges_weight=0.5, process_focal_crop_debug=False, process_multicrop=None, process_multicrop_mindim=None, process_multicrop_maxdim=None, process_multicrop_minarea=None, process_multicrop_maxarea=None, process_multicrop_objective=None, process_multicrop_threshold=None):
- width = process_width
- height = process_height
- src = os.path.abspath(process_src)
- dst = os.path.abspath(process_dst)
- split_threshold = max(0.0, min(1.0, split_threshold))
- overlap_ratio = max(0.0, min(0.9, overlap_ratio))
-
- assert src != dst, 'same directory specified as source and destination'
-
- os.makedirs(dst, exist_ok=True)
-
- files = listfiles(src)
-
- shared.state.job = "preprocess"
- shared.state.textinfo = "Preprocessing..."
- shared.state.job_count = len(files)
-
- params = PreprocessParams()
- params.dstdir = dst
- params.flip = process_flip
- params.process_caption = process_caption
- params.process_caption_deepbooru = process_caption_deepbooru
- params.preprocess_txt_action = preprocess_txt_action
-
- pbar = tqdm.tqdm(files)
- for index, imagefile in enumerate(pbar):
- params.subindex = 0
- filename = os.path.join(src, imagefile)
- try:
- img = Image.open(filename).convert("RGB")
- except Exception:
- continue
-
- description = f"Preprocessing [Image {index}/{len(files)}]"
- pbar.set_description(description)
- shared.state.textinfo = description
-
- params.src = filename
-
- existing_caption = None
- existing_caption_filename = os.path.splitext(filename)[0] + '.txt'
- if os.path.exists(existing_caption_filename):
- with open(existing_caption_filename, 'r', encoding="utf8") as file:
- existing_caption = file.read()
-
- if shared.state.interrupted:
- break
-
- if img.height > img.width:
- ratio = (img.width * height) / (img.height * width)
- inverse_xy = False
- else:
- ratio = (img.height * width) / (img.width * height)
- inverse_xy = True
-
- process_default_resize = True
-
- if process_split and ratio < 1.0 and ratio <= split_threshold:
- for splitted in split_pic(img, inverse_xy, width, height, overlap_ratio):
- save_pic(splitted, index, params, existing_caption=existing_caption)
- process_default_resize = False
-
- if process_focal_crop and img.height != img.width:
-
- dnn_model_path = None
- try:
- dnn_model_path = autocrop.download_and_cache_models(os.path.join(paths.models_path, "opencv"))
- except Exception as e:
- print("Unable to load face detection model for auto crop selection. Falling back to lower quality haar method.", e)
-
- autocrop_settings = autocrop.Settings(
- crop_width = width,
- crop_height = height,
- face_points_weight = process_focal_crop_face_weight,
- entropy_points_weight = process_focal_crop_entropy_weight,
- corner_points_weight = process_focal_crop_edges_weight,
- annotate_image = process_focal_crop_debug,
- dnn_model_path = dnn_model_path,
- )
- for focal in autocrop.crop_image(img, autocrop_settings):
- save_pic(focal, index, params, existing_caption=existing_caption)
- process_default_resize = False
-
- if process_multicrop:
- cropped = multicrop_pic(img, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold)
- if cropped is not None:
- save_pic(cropped, index, params, existing_caption=existing_caption)
- else:
- print(f"skipped {img.width}x{img.height} image {filename} (can't find suitable size within error threshold)")
- process_default_resize = False
-
- if process_default_resize:
- img = images.resize_image(1, img, width, height)
- save_pic(img, index, params, existing_caption=existing_caption)
-
- shared.state.nextjob()
diff --git a/spaces/bioriAsaeru/text-to-voice/Digital Image Processing Book By Poornima Thangam Free 28 A Practical Guide to Techniques and Applications.md b/spaces/bioriAsaeru/text-to-voice/Digital Image Processing Book By Poornima Thangam Free 28 A Practical Guide to Techniques and Applications.md
deleted file mode 100644
index 2aaf4f4cc12403bc3d7f1a7985b2fda6be3b1737..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Digital Image Processing Book By Poornima Thangam Free 28 A Practical Guide to Techniques and Applications.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Digital Image Processing Book By Poornima Thangam Free 28
Download File === https://urloso.com/2uyOoc
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Half Girlfriend Movie 3gp Free Download [UPDATED].md b/spaces/bioriAsaeru/text-to-voice/Half Girlfriend Movie 3gp Free Download [UPDATED].md
deleted file mode 100644
index 2fb1bd083f64edac2828e438db7214c6723ad23e..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Half Girlfriend Movie 3gp Free Download [UPDATED].md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-When Madhav attends Riya's birthday, he questions her about the nature of their relationship. Uncomfortable, Riya says that she is not his girlfriend, but they can maybe reach a compromise since they have reached halfway, and she offers to be his "Half Girlfriend." One afternoon after a game Madhav asks Riya if she would like to rest in his room in a boys only dorm, where (goaded by his peers and feeling humiliated by Riya's uncertainty) Madhav tries to force himself upon Riya. Upset and hurt, a few days later, Riya tells Madhav that she is leaving college and getting married. Madhav tries to stop her but she leaves.
-Free download Waptrick Half Girlfriend ft Rahul Mishra videos from Waptrick.com music video clip download site Watch new Tu Hi Hai clips and download free Half Girlfriend ft Rahul Mishra music videos at Waptrick.com
-Half Girlfriend movie 3gp free download
Download Zip › https://urloso.com/2uyO9J
-download Shraddha Boobs unlimited Movies and videos Download Here.Shraddha Boobs Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Ledstudio10 Serial [Extra Quality].md b/spaces/bioriAsaeru/text-to-voice/Ledstudio10 Serial [Extra Quality].md
deleted file mode 100644
index 17375a1df99c1abb681fc4175a9172d1e8531488..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Ledstudio10 Serial [Extra Quality].md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-How to Use Ledstudio10 Software for LED Display Screens
-Ledstudio10 is a software that allows you to control and configure LED display screens using LINSN technology. It is compatible with various types of LED controllers and modules, and it has many features and functions to help you create stunning visual effects. In this article, we will show you how to use Ledstudio10 software for LED display screens, and how to get the serial number and password for it.
-What is Ledstudio10 Software?
-Ledstudio10 is a software developed by LINSN technology company, which is one of the leading manufacturers of LED display controllers and accessories in China. Ledstudio10 is an upgraded version of the previous Ledstudio software, which has been widely used by LED display users around the world. Ledstudio10 has improved its performance, stability, compatibility, and user interface, making it more convenient and efficient for LED display operation and management.
-Ledstudio10 Serial
DOWNLOAD ✺✺✺ https://urloso.com/2uyR94
-What are the Features and Functions of Ledstudio10 Software?
-Ledstudio10 software has many features and functions that can help you control and configure your LED display screens. Some of the main features and functions are:
-
-- Intelligent Setup: This function allows you to automatically detect the parameters of your LED display screen, such as the resolution, scan mode, color depth, refresh rate, etc. You can also manually adjust these parameters according to your needs.
-- Display Connection: This function allows you to connect your LED display screen to your computer via Ethernet or USB cable. You can also use wireless devices such as Wi-Fi or 4G modules to connect your LED display screen remotely.
-- Hardware Setting: This function allows you to set up the hardware configuration of your LED display screen, such as the type and quantity of LED controllers, modules, power supplies, etc. You can also set up the brightness, contrast, color temperature, gamma correction, etc. of your LED display screen.
-- Software Setup: This function allows you to set up the software configuration of your LED display screen, such as the program mode, play mode, play time, play list, etc. You can also edit and manage the content that you want to display on your LED display screen, such as text, images, videos, animations, etc.
-- User Setup: This function allows you to set up the user permissions and passwords for your Ledstudio10 software. You can also backup and restore your Ledstudio10 data and settings.
-
-How to Get the Serial Number and Password for Ledstudio10 Software?
-To use Ledstudio10 software for your LED display screen, you need to have a serial number and a password. The serial number is used to activate your Ledstudio10 software on your computer. The password is used to access some functions of your Ledstudio10 software.
-The serial number and password for Ledstudio10 software are different from the previous versions of Ledstudio software. Here are the steps to get them:
-
-- Download Ledstudio10 software from this link: https://www.youtube.com/watch?v=5YfZUtNCb5M
-- Install Ledstudio10 software on your computer. You do not need to enter any serial number during the installation process.
-- Open Ledstudio10 software on your computer. You do not need any password to enter the main interface of Ledstudio10 software.
-- To access some functions of Ledstudio10 software, such as Intelligent Setup, Display Connection, Hardware Setting, Software Setup, etc., you do not need any password either. Just click on the corresponding icons on the main interface of Ledstudio10 software.
-- To access User Setup function of Ledstudio10 software, you need a password. The password is 168. Just enter 168 in the password box that pops up when you click on User Setup icon on the main interface of Ledstudio10 software.
-
-Conclusion
-Ledstudio10 is a powerful and user
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/demo/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/demo/README.md
deleted file mode 100644
index 133d8d38e5e9f5f44aca92c59f73309e166d7132..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/demo/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-## Detectron2 Demo
-
-We provide a command line tool to run a simple demo of builtin configs.
-The usage is explained in [GETTING_STARTED.md](../GETTING_STARTED.md).
-
-See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-)
-for a high-quality demo generated with this tool.
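-
-For example, to visualize a COCO Mask R-CNN model on a couple of images (the config file and
-model-zoo weights below are just one possible choice):
-
-```
-cd demo/
-python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
-  --input input1.jpg input2.jpg \
-  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
-```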
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/tools/prepare_coco_point_annotations_without_masks.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/tools/prepare_coco_point_annotations_without_masks.py
deleted file mode 100644
index e4aee2aedf2e62e2357f278417ac58c6b4ff264e..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/tools/prepare_coco_point_annotations_without_masks.py
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import copy
-import json
-import numpy as np
-import os
-import sys
-import pycocotools.mask as mask_utils
-
-from detectron2.utils.env import seed_all_rng
-from detectron2.utils.file_io import PathManager
-
-
-def get_point_annotations(input_filename, output_filename, num_points_per_instance):
- with PathManager.open(input_filename, "r") as f:
- coco_json = json.load(f)
-
- coco_annos = coco_json.pop("annotations")
- coco_points_json = copy.deepcopy(coco_json)
-
- imgs = {}
- for img in coco_json["images"]:
- imgs[img["id"]] = img
-
- new_annos = []
- for ann in coco_annos:
- # convert mask
- t = imgs[ann["image_id"]]
- h, w = t["height"], t["width"]
- segm = ann.pop("segmentation")
- if type(segm) == list:
- # polygon -- a single object might consist of multiple parts
- # we merge all parts into one mask rle code
- rles = mask_utils.frPyObjects(segm, h, w)
- rle = mask_utils.merge(rles)
- elif type(segm["counts"]) == list:
- # uncompressed RLE
- rle = mask_utils.frPyObjects(segm, h, w)
- else:
- # rle
- rle = segm
- mask = mask_utils.decode(rle)
- new_ann = copy.deepcopy(ann)
- # sample points in image coordinates
- box = ann["bbox"]
- point_coords_wrt_image = np.random.rand(num_points_per_instance, 2)
- point_coords_wrt_image[:, 0] = point_coords_wrt_image[:, 0] * box[2]
- point_coords_wrt_image[:, 1] = point_coords_wrt_image[:, 1] * box[3]
- point_coords_wrt_image[:, 0] += box[0]
- point_coords_wrt_image[:, 1] += box[1]
- # round to integer coordinates
- point_coords_wrt_image = np.floor(point_coords_wrt_image).astype(int)
- # get labels
- assert (point_coords_wrt_image >= 0).all(), (point_coords_wrt_image, mask.shape)
- assert (point_coords_wrt_image[:, 0] < w).all(), (point_coords_wrt_image, mask.shape)
- assert (point_coords_wrt_image[:, 1] < h).all(), (point_coords_wrt_image, mask.shape)
- point_labels = mask[point_coords_wrt_image[:, 1], point_coords_wrt_image[:, 0]]
- # store new annotations
- new_ann["point_coords"] = point_coords_wrt_image.tolist()
- new_ann["point_labels"] = point_labels.tolist()
- new_annos.append(new_ann)
- coco_points_json["annotations"] = new_annos
-
- with PathManager.open(output_filename, "w") as f:
- json.dump(coco_points_json, f)
-
- print("{} is modified and stored in {}.".format(input_filename, output_filename))
-
-
-if __name__ == "__main__":
- """
- Generate point-based supervision for COCO dataset.
-
- Usage:
- python tools/prepare_coco_point_annotations_without_masks.py \
- NUM_POINTS_PER_INSTANCE NUM_VERSIONS_WITH_DIFFERENT_SEED
-
- Example to generate point-based COCO dataset with 10 points per instance:
- python tools/prepare_coco_point_annotations_without_masks.py 10
- """
-
- # Fix random seed
- seed_all_rng(12345)
-
- assert len(sys.argv) >= 2, "Please provide number of points to sample per instance"
- dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "coco/annotations")
- num_points_per_instance = int(sys.argv[1])
- if len(sys.argv) == 3:
- repeat = int(sys.argv[2])
- else:
- repeat = 1
- s = "instances_train2017"
- for version in range(repeat):
- print(
- "Start sampling {} points per instance for annotations {}.".format(
- num_points_per_instance, s
- )
- )
- get_point_annotations(
- os.path.join(dataset_dir, "{}.json".format(s)),
- os.path.join(
- dataset_dir,
- "{}_n{}_v{}_without_masks.json".format(s, num_points_per_instance, version + 1),
- ),
- num_points_per_instance,
- )
diff --git a/spaces/cadige/03GR-Chatbot-Memory/app.py b/spaces/cadige/03GR-Chatbot-Memory/app.py
deleted file mode 100644
index 81a521248e8f7cdad40078742a14e97db5f9cc8b..0000000000000000000000000000000000000000
--- a/spaces/cadige/03GR-Chatbot-Memory/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
-import torch
-import gradio as gr
-
-
-# PersistDataset -----
-import os
-import csv
-import gradio as gr
-from gradio import inputs, outputs
-import huggingface_hub
-from huggingface_hub import Repository, hf_hub_download, upload_file
-from datetime import datetime
-DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/Carddata.csv"
-DATASET_REPO_ID = "awacke1/Carddata.csv"
-DATA_FILENAME = "Carddata.csv"
-DATA_DIRNAME = "data"
-DATA_FILE = os.path.join(DATA_DIRNAME, DATA_FILENAME)
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-SCRIPT = """
-
-"""
-
-try:
- hf_hub_download(
- repo_id=DATASET_REPO_ID,
- filename=DATA_FILENAME,
- cache_dir=DATA_DIRNAME,
- force_filename=DATA_FILENAME
- )
-except:
- print("file not found")
-repo = Repository(
- local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN
-)
-
-def generate_html() -> str:
- with open(DATA_FILE) as csvfile:
- reader = csv.DictReader(csvfile)
- rows = []
- for row in reader:
- rows.append(row)
- rows.reverse()
- if len(rows) == 0:
- return "no messages yet"
- else:
- html = ""
- for row in rows:
- html += ""
- html += f"{row['inputs']}"
- html += f"{row['outputs']}"
- html += ""
- html += ""
- return html
-
-def store_message(name: str, message: str):
- if name and message:
- with open(DATA_FILE, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"])
- writer.writerow(
- {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())}
- )
- commit_url = repo.push_to_hub()
- return ""
-
-iface = gr.Interface(
- store_message,
- [
- inputs.Textbox(placeholder="Your name"),
- inputs.Textbox(placeholder="Your message", lines=2),
- ],
- "html",
- css="""
- .message {background-color:cornflowerblue;color:white; padding:4px;margin:4px;border-radius:4px; }
- """,
- title="Reading/writing to a HuggingFace dataset repo from Spaces",
- description=f"This is a demo of how to do simple *shared data persistence* in a Gradio Space, backed by a dataset repo.",
- article=f"The dataset repo is [{DATASET_REPO_URL}]({DATASET_REPO_URL})",
-)
-
-
-mname = "facebook/blenderbot-400M-distill"
-model = BlenderbotForConditionalGeneration.from_pretrained(mname)
-tokenizer = BlenderbotTokenizer.from_pretrained(mname)
-
-def take_last_tokens(inputs, note_history, history):
- """Filter the last 128 tokens"""
- if inputs['input_ids'].shape[1] > 128:
- inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-128:].tolist()])
- inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-128:].tolist()])
- note_history = [' '.join(note_history[0].split(' ')[2:])]
- history = history[1:]
- return inputs, note_history, history
-
-def add_note_to_history(note, note_history):
- """Add a note to the historical information"""
- note_history.append(note)
- note_history = ' '.join(note_history)
- return [note_history]
-
-title = "Chatbot State of the Art now with Memory Saved to Dataset"
-description = """Chatbot With Memory"""
-
-def chat(message, history):
- history = history or []
- if history:
- history_useful = [' '.join([str(a[0])+' '+str(a[1]) for a in history])]
- else:
- history_useful = []
- history_useful = add_note_to_history(message, history_useful)
- inputs = tokenizer(history_useful, return_tensors="pt")
- inputs, history_useful, history = take_last_tokens(inputs, history_useful, history)
- reply_ids = model.generate(**inputs)
- response = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
- history_useful = add_note_to_history(response, history_useful)
- list_history = history_useful[0].split(' ')
- history.append((list_history[-2], list_history[-1]))
- store_message(message, response) # Save to dataset
- return history, history
-
-gr.Interface(
- fn=chat,
- theme="huggingface",
- css=".footer {display:none !important}",
- inputs=["text", "state"],
- outputs=["chatbot", "state"],
- title=title,
- allow_flagging="never",
- description=f"Gradio chatbot backed by memory in a dataset repository.",
- article=f"The dataset repo is [{DATASET_REPO_URL}]({DATASET_REPO_URL})"
- ).launch()
\ No newline at end of file
diff --git a/spaces/cahya/image-search/Dockerfile b/spaces/cahya/image-search/Dockerfile
deleted file mode 100644
index 3f0880796d65b4c996cdaa863ad5924fdd5fedcf..0000000000000000000000000000000000000000
--- a/spaces/cahya/image-search/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM python:3.8-slim-buster
-COPY . /app
-WORKDIR /app
-RUN pip install -r requirements.txt
-EXPOSE 8501
-ENTRYPOINT ["streamlit","run"]
-CMD ["app.py"]
\ No newline at end of file
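-
-# Usage sketch (the image tag is arbitrary):
-#   docker build -t image-search .
-#   docker run -p 8501:8501 image-search
-# The Streamlit app is then served at http://localhost:8501.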
diff --git a/spaces/candlend/vits-hoshimi/sovits/flask_api.py b/spaces/candlend/vits-hoshimi/sovits/flask_api.py
deleted file mode 100644
index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000
--- a/spaces/candlend/vits-hoshimi/sovits/flask_api.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import io
-import logging
-
-import soundfile
-import torch
-import torchaudio
-from flask import Flask, request, send_file
-from flask_cors import CORS
-
-from inference.infer_tool import Svc, RealTimeVC
-
-app = Flask(__name__)
-
-CORS(app)
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-
-@app.route("/voiceChangeModel", methods=["POST"])
-def voice_change_model():
- request_form = request.form
- wave_file = request.files.get("sample", None)
-    # Pitch-shift amount requested by the client
- f_pitch_change = float(request_form.get("fPitchChange", 0))
-    # Sample rate required by the DAW
- daw_sample = int(float(request_form.get("sampleRate", 0)))
- speaker_id = int(float(request_form.get("sSpeakId", 0)))
-    # Get the wav file from the HTTP request and convert it
- input_wav_path = io.BytesIO(wave_file.read())
-
-    # Run model inference
- if raw_infer:
- out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path)
- tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample)
- else:
- out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path)
- tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample)
-    # Return the audio
- out_wav_path = io.BytesIO()
- soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav")
- out_wav_path.seek(0)
- return send_file(out_wav_path, download_name="temp.wav", as_attachment=True)
-
-
-if __name__ == '__main__':
-    # If True, synthesize by direct slicing; if False, use cross-fading between slices.
-    # Setting the VST plugin's slice time to 0.3-0.5 s lowers latency; direct slicing can pop at the
-    # joints between slices, while cross-fading produces slight overlap in the audio.
-    # Choose whichever trade-off is acceptable, or raise the VST max slice time to 1 s; it is set to
-    # True here, accepting higher latency for more stable audio quality.
- raw_infer = True
-    # Each model corresponds to exactly one config file
- model_name = "logs/32k/G_174000-Copy1.pth"
- config_name = "configs/config.json"
- svc_model = Svc(model_name, config_name)
- svc = RealTimeVC()
-    # This must match the VST plugin settings; changing it is not recommended
- app.run(port=6842, host="0.0.0.0", debug=False, threaded=False)
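-
-# Example client request (field names mirror the request_form lookups above; values are illustrative):
-#   curl -X POST http://localhost:6842/voiceChangeModel \
-#        -F "sample=@input.wav" -F "fPitchChange=0" \
-#        -F "sampleRate=44100" -F "sSpeakId=0" -o out.wav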
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/cse/embedder.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/cse/embedder.py
deleted file mode 100644
index 7f52b06032ed97b2d652931646f0855ef342ada9..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/cse/embedder.py
+++ /dev/null
@@ -1,130 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import logging
-import numpy as np
-import pickle
-from enum import Enum
-from typing import Optional
-import torch
-from torch import nn
-
-from detectron2.config import CfgNode
-from detectron2.utils.file_io import PathManager
-
-from .vertex_direct_embedder import VertexDirectEmbedder
-from .vertex_feature_embedder import VertexFeatureEmbedder
-
-
-class EmbedderType(Enum):
- """
- Embedder type which defines how vertices are mapped into the embedding space:
- - "vertex_direct": direct vertex embedding
- - "vertex_feature": embedding vertex features
- """
-
- VERTEX_DIRECT = "vertex_direct"
- VERTEX_FEATURE = "vertex_feature"
-
-
-def create_embedder(embedder_spec: CfgNode, embedder_dim: int) -> nn.Module:
- """
- Create an embedder based on the provided configuration
-
- Args:
- embedder_spec (CfgNode): embedder configuration
- embedder_dim (int): embedding space dimensionality
- Return:
- An embedder instance for the specified configuration
- Raises ValueError, in case of unexpected embedder type
- """
- embedder_type = EmbedderType(embedder_spec.TYPE)
- if embedder_type == EmbedderType.VERTEX_DIRECT:
- embedder = VertexDirectEmbedder(
- num_vertices=embedder_spec.NUM_VERTICES,
- embed_dim=embedder_dim,
- )
- if embedder_spec.INIT_FILE != "":
- embedder.load(embedder_spec.INIT_FILE)
- elif embedder_type == EmbedderType.VERTEX_FEATURE:
- embedder = VertexFeatureEmbedder(
- num_vertices=embedder_spec.NUM_VERTICES,
- feature_dim=embedder_spec.FEATURE_DIM,
- embed_dim=embedder_dim,
- train_features=embedder_spec.FEATURES_TRAINABLE,
- )
- if embedder_spec.INIT_FILE != "":
- embedder.load(embedder_spec.INIT_FILE)
- else:
- raise ValueError(f"Unexpected embedder type {embedder_type}")
-
- if not embedder_spec.IS_TRAINABLE:
- embedder.requires_grad_(False)
-
- return embedder
-
-
-class Embedder(nn.Module):
- """
- Embedder module that serves as a container for embedders to use with different
- meshes. Extends Module to automatically save / load state dict.
- """
-
- DEFAULT_MODEL_CHECKPOINT_PREFIX = "roi_heads.embedder."
-
- def __init__(self, cfg: CfgNode):
- """
- Initialize mesh embedders. An embedder for mesh `i` is stored in a submodule
- "embedder_{i}".
-
- Args:
- cfg (CfgNode): configuration options
- """
- super(Embedder, self).__init__()
- self.mesh_names = set()
- embedder_dim = cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBED_SIZE
- logger = logging.getLogger(__name__)
- for mesh_name, embedder_spec in cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBEDDERS.items():
- logger.info(f"Adding embedder embedder_{mesh_name} with spec {embedder_spec}")
- self.add_module(f"embedder_{mesh_name}", create_embedder(embedder_spec, embedder_dim))
- self.mesh_names.add(mesh_name)
- if cfg.MODEL.WEIGHTS != "":
- self.load_from_model_checkpoint(cfg.MODEL.WEIGHTS)
-
- def load_from_model_checkpoint(self, fpath: str, prefix: Optional[str] = None):
- if prefix is None:
- prefix = Embedder.DEFAULT_MODEL_CHECKPOINT_PREFIX
- state_dict = None
- if fpath.endswith(".pkl"):
- with PathManager.open(fpath, "rb") as hFile:
- state_dict = pickle.load(hFile, encoding="latin1") # pyre-ignore[6]
- else:
- with PathManager.open(fpath, "rb") as hFile:
- # pyre-fixme[6]: For 1st param expected `Union[PathLike[typing.Any],
- # IO[bytes], str, BinaryIO]` but got `Union[IO[bytes], IO[str]]`.
- state_dict = torch.load(hFile, map_location=torch.device("cpu"))
- if state_dict is not None and "model" in state_dict:
- state_dict_local = {}
- for key in state_dict["model"]:
- if key.startswith(prefix):
- v_key = state_dict["model"][key]
- if isinstance(v_key, np.ndarray):
- v_key = torch.from_numpy(v_key)
- state_dict_local[key[len(prefix) :]] = v_key
- # non-strict loading to finetune on different meshes
- self.load_state_dict(state_dict_local, strict=False)
-
- def forward(self, mesh_name: str) -> torch.Tensor:
- """
- Produce vertex embeddings for the specific mesh; vertex embeddings are
- a tensor of shape [N, D] where:
- N = number of vertices
- D = number of dimensions in the embedding space
- Args:
- mesh_name (str): name of a mesh for which to obtain vertex embeddings
- Return:
- Vertex embeddings, a tensor of shape [N, D]
- """
- return getattr(self, f"embedder_{mesh_name}")()
-
- def has_embeddings(self, mesh_name: str) -> bool:
- return hasattr(self, f"embedder_{mesh_name}")
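
A small, self-contained sketch of the prefix-stripping pattern that load_from_model_checkpoint applies above: only keys under "roi_heads.embedder." are kept, numpy arrays are converted to tensors, and the result is loaded non-strictly. The checkpoint keys and shapes below are invented for illustration.

import numpy as np
import torch

checkpoint = {
    "model": {
        # hypothetical keys; only the embedder one survives the filter
        "roi_heads.embedder.embedder_smpl.embeddings": np.zeros((6890, 16), dtype=np.float32),
        "backbone.stem.conv1.weight": np.zeros((64, 3, 7, 7), dtype=np.float32),
    }
}

prefix = "roi_heads.embedder."
local_state = {}
for key, value in checkpoint["model"].items():
    if not key.startswith(prefix):
        continue                          # drop everything outside the embedder
    if isinstance(value, np.ndarray):
        value = torch.from_numpy(value)   # pickled detectron2 weights arrive as numpy arrays
    local_state[key[len(prefix):]] = value

print(sorted(local_state))                # ['embedder_smpl.embeddings']
# embedder.load_state_dict(local_state, strict=False)  # non-strict: other meshes may be missing
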
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/multihead_model.py b/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/multihead_model.py
deleted file mode 100644
index 346ad3dbb7c6561192c5f9563e19943ceca02a19..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/multihead_model.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import torch; torch.manual_seed(0)
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils
-import torch.distributions
-import matplotlib.pyplot as plt; plt.rcParams['figure.dpi'] = 200
-from src.cocktails.representation_learning.simple_model import SimpleNet
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-def get_activation(activation):
- if activation == 'tanh':
- activ = F.tanh
- elif activation == 'relu':
- activ = F.relu
- elif activation == 'mish':
- activ = F.mish
- elif activation == 'sigmoid':
- activ = F.sigmoid
- elif activation == 'leakyrelu':
- activ = F.leaky_relu
- elif activation == 'exp':
- activ = torch.exp
- else:
- raise ValueError
- return activ
-
-class IngredientEncoder(nn.Module):
- def __init__(self, input_dim, deepset_latent_dim, hidden_dims, activation, dropout):
- super(IngredientEncoder, self).__init__()
- self.linears = nn.ModuleList()
- self.dropouts = nn.ModuleList()
- dims = [input_dim] + hidden_dims + [deepset_latent_dim]
- for d_in, d_out in zip(dims[:-1], dims[1:]):
- self.linears.append(nn.Linear(d_in, d_out))
- self.dropouts.append(nn.Dropout(dropout))
- self.activation = get_activation(activation)
- self.n_layers = len(self.linears)
- self.layer_range = range(self.n_layers)
-
- def forward(self, x):
- for i_layer, layer, dropout in zip(self.layer_range, self.linears, self.dropouts):
- x = layer(x)
- if i_layer != self.n_layers - 1:
- x = self.activation(dropout(x))
- return x # do not use dropout on last layer?
-
-class DeepsetCocktailEncoder(nn.Module):
- def __init__(self, input_dim, deepset_latent_dim, hidden_dims_ing, activation,
- hidden_dims_cocktail, latent_dim, aggregation, dropout):
- super(DeepsetCocktailEncoder, self).__init__()
- self.input_dim = input_dim # dimension of ingredient representation + quantity
- self.ingredient_encoder = IngredientEncoder(input_dim, deepset_latent_dim, hidden_dims_ing, activation, dropout) # encode each ingredient separately
- self.deepset_latent_dim = deepset_latent_dim # dimension of the deepset aggregation
- self.aggregation = aggregation
- self.latent_dim = latent_dim
- # post aggregation network
- self.linears = nn.ModuleList()
- self.dropouts = nn.ModuleList()
- dims = [deepset_latent_dim] + hidden_dims_cocktail
- for d_in, d_out in zip(dims[:-1], dims[1:]):
- self.linears.append(nn.Linear(d_in, d_out))
- self.dropouts.append(nn.Dropout(dropout))
- self.FC_mean = nn.Linear(hidden_dims_cocktail[-1], latent_dim)
- self.FC_logvar = nn.Linear(hidden_dims_cocktail[-1], latent_dim)
- self.softplus = nn.Softplus()
-
- self.activation = get_activation(activation)
- self.n_layers = len(self.linears)
- self.layer_range = range(self.n_layers)
-
- def forward(self, nb_ingredients, x):
-
- # reshape x in (batch size * nb ingredients, dim_ing_rep)
- batch_size = x.shape[0]
- all_ingredients = []
- for i in range(batch_size):
- for j in range(nb_ingredients[i]):
- all_ingredients.append(x[i, self.input_dim * j: self.input_dim * (j + 1)].reshape(1, -1))
- x = torch.cat(all_ingredients, dim=0)
- # encode ingredients in parallel
- ingredients_encodings = self.ingredient_encoder(x)
- assert ingredients_encodings.shape == (torch.sum(nb_ingredients), self.deepset_latent_dim)
-
- # aggregate
- x = []
- index_first = 0
- for i in range(batch_size):
- index_last = index_first + nb_ingredients[i]
- # aggregate
- if self.aggregation == 'sum':
- x.append(torch.sum(ingredients_encodings[index_first:index_last], dim=0).reshape(1, -1))
- elif self.aggregation == 'mean':
- x.append(torch.mean(ingredients_encodings[index_first:index_last], dim=0).reshape(1, -1))
- else:
- raise ValueError
- index_first = index_last
- x = torch.cat(x, dim=0)
- assert x.shape[0] == batch_size
-
- for i_layer, layer, dropout in zip(self.layer_range, self.linears, self.dropouts):
- x = self.activation(dropout(layer(x)))
- mean = self.FC_mean(x)
- logvar = self.FC_logvar(x)
- return mean, logvar
-
-
-class MultiHeadModel(nn.Module):
- def __init__(self, encoder, auxiliaries_dict, activation, hidden_dims_decoder):
- super(MultiHeadModel, self).__init__()
- self.encoder = encoder
- self.latent_dim = self.encoder.output_dim
- self.auxiliaries_str = []
- self.auxiliaries = nn.ModuleList()
- for aux_str in sorted(auxiliaries_dict.keys()):
- if aux_str == 'taste_reps':
- self.taste_reps_decoder = SimpleNet(input_dim=self.latent_dim, hidden_dims=[], output_dim=auxiliaries_dict[aux_str]['dim_output'],
- activation=activation, dropout=0.0, final_activ=auxiliaries_dict[aux_str]['final_activ'])
- else:
- self.auxiliaries_str.append(aux_str)
- if aux_str == 'ingredients_quantities':
- hd = hidden_dims_decoder
- else:
- hd = []
- self.auxiliaries.append(SimpleNet(input_dim=self.latent_dim, hidden_dims=hd, output_dim=auxiliaries_dict[aux_str]['dim_output'],
- activation=activation, dropout=0.0, final_activ=auxiliaries_dict[aux_str]['final_activ']))
-
- def get_all_auxiliaries(self, x):
- return [aux(x) for aux in self.auxiliaries]
-
- def get_auxiliary(self, z, aux_str):
- if aux_str == 'taste_reps':
- return self.taste_reps_decoder(z)
- else:
- index = self.auxiliaries_str.index(aux_str)
- return self.auxiliaries[index](z)
-
- def forward(self, x, aux_str=None):
- z = self.encoder(x)
- if aux_str is not None:
- return z, self.get_auxiliary(z, aux_str), [aux_str]
- else:
- return z, self.get_all_auxiliaries(z), self.auxiliaries_str
-
-def get_multihead_model(input_dim, activation, hidden_dims_cocktail, latent_dim, dropout, auxiliaries_dict, hidden_dims_decoder):
- encoder = SimpleNet(input_dim, hidden_dims_cocktail, latent_dim, activation, dropout)
- model = MultiHeadModel(encoder, auxiliaries_dict, activation, hidden_dims_decoder)
- return model
\ No newline at end of file
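
To make the variable-length aggregation in DeepsetCocktailEncoder.forward easier to follow, here is a minimal sketch of the same segment sum/mean over per-ingredient encodings, with made-up shapes.

import torch

nb_ingredients = torch.tensor([2, 3])                    # sample 0 has 2 ingredients, sample 1 has 3
encodings = torch.randn(int(nb_ingredients.sum()), 8)    # (total ingredients, deepset_latent_dim)

aggregated = []
index_first = 0
for count in nb_ingredients.tolist():
    index_last = index_first + count
    # 'sum' aggregation; use .mean(dim=0, keepdim=True) for the 'mean' variant
    aggregated.append(encodings[index_first:index_last].sum(dim=0, keepdim=True))
    index_first = index_last
aggregated = torch.cat(aggregated, dim=0)                # (batch_size, deepset_latent_dim)
assert aggregated.shape == (2, 8)
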
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/MspImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/MspImagePlugin.py
deleted file mode 100644
index c6567b2ae626fd83ef21575a59374c922d5392a9..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/MspImagePlugin.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#
-# The Python Imaging Library.
-#
-# MSP file handling
-#
-# This is the format used by the Paint program in Windows 1 and 2.
-#
-# History:
-# 95-09-05 fl Created
-# 97-01-03 fl Read/write MSP images
-# 17-02-21 es Fixed RLE interpretation
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1995-97.
-# Copyright (c) Eric Soroos 2017.
-#
-# See the README file for information on usage and redistribution.
-#
-# More info on this format: https://archive.org/details/gg243631
-# Page 313:
-# Figure 205. Windows Paint Version 1: "DanM" Format
-# Figure 206. Windows Paint Version 2: "LinS" Format. Used in Windows V2.03
-#
-# See also: https://www.fileformat.info/format/mspaint/egff.htm
-
-import io
-import struct
-
-from . import Image, ImageFile
-from ._binary import i16le as i16
-from ._binary import o16le as o16
-
-#
-# read MSP files
-
-
-def _accept(prefix):
- return prefix[:4] in [b"DanM", b"LinS"]
-
-
-##
-# Image plugin for Windows MSP images. This plugin supports both
-# uncompressed (Windows 1.0) and RLE-compressed (Windows 2.0) images.
-
-
-class MspImageFile(ImageFile.ImageFile):
- format = "MSP"
- format_description = "Windows Paint"
-
- def _open(self):
- # Header
- s = self.fp.read(32)
- if not _accept(s):
- msg = "not an MSP file"
- raise SyntaxError(msg)
-
- # Header checksum
- checksum = 0
- for i in range(0, 32, 2):
- checksum = checksum ^ i16(s, i)
- if checksum != 0:
- msg = "bad MSP checksum"
- raise SyntaxError(msg)
-
- self.mode = "1"
- self._size = i16(s, 4), i16(s, 6)
-
- if s[:4] == b"DanM":
- self.tile = [("raw", (0, 0) + self.size, 32, ("1", 0, 1))]
- else:
- self.tile = [("MSP", (0, 0) + self.size, 32, None)]
-
-
-class MspDecoder(ImageFile.PyDecoder):
- # The algo for the MSP decoder is from
- # https://www.fileformat.info/format/mspaint/egff.htm
- # cc-by-attribution -- the material on that page is taken from the
- # Encyclopedia of Graphics File Formats and is licensed by
- # O'Reilly under the Creative Commons Attribution license
- #
- # For RLE-encoded files, the 32-byte header is followed by a scan
- # line map, encoded as one 16-bit word of encoded byte length per
- # line.
- #
- # NOTE: the encoded length of the line can be 0. This was not
- # handled in the previous version of this encoder, and there's no
- # mention of how to handle it in the documentation. From the few
- # examples I've seen, I've assumed that it is a fill of the
- # background color, in this case, white.
- #
- #
- # Pseudocode of the decoder:
- # Read a BYTE value as the RunType
- # If the RunType value is zero
- # Read next byte as the RunCount
- # Read the next byte as the RunValue
- # Write the RunValue byte RunCount times
- # If the RunType value is non-zero
- # Use this value as the RunCount
- # Read and write the next RunCount bytes literally
- #
- # e.g.:
- # 0x00 03 ff 05 00 01 02 03 04
- # would yield the bytes:
- # 0xff ff ff 00 01 02 03 04
- #
- # which are then interpreted as a bit packed mode '1' image
-
- _pulls_fd = True
-
- def decode(self, buffer):
- img = io.BytesIO()
- blank_line = bytearray((0xFF,) * ((self.state.xsize + 7) // 8))
- try:
- self.fd.seek(32)
- rowmap = struct.unpack_from(
- f"<{self.state.ysize}H", self.fd.read(self.state.ysize * 2)
- )
- except struct.error as e:
- msg = "Truncated MSP file in row map"
- raise OSError(msg) from e
-
- for x, rowlen in enumerate(rowmap):
- try:
- if rowlen == 0:
- img.write(blank_line)
- continue
- row = self.fd.read(rowlen)
- if len(row) != rowlen:
- msg = f"Truncated MSP file, expected {rowlen} bytes on row {x}"
- raise OSError(msg)
- idx = 0
- while idx < rowlen:
- runtype = row[idx]
- idx += 1
- if runtype == 0:
- (runcount, runval) = struct.unpack_from("Bc", row, idx)
- img.write(runval * runcount)
- idx += 2
- else:
- runcount = runtype
- img.write(row[idx : idx + runcount])
- idx += runcount
-
- except struct.error as e:
- msg = f"Corrupted MSP file in row {x}"
- raise OSError(msg) from e
-
- self.set_as_raw(img.getvalue(), ("1", 0, 1))
-
- return -1, 0
-
-
-Image.register_decoder("MSP", MspDecoder)
-
-
-#
-# write MSP files (uncompressed only)
-
-
-def _save(im, fp, filename):
- if im.mode != "1":
- msg = f"cannot write mode {im.mode} as MSP"
- raise OSError(msg)
-
- # create MSP header
- header = [0] * 16
-
- header[0], header[1] = i16(b"Da"), i16(b"nM") # version 1
- header[2], header[3] = im.size
- header[4], header[5] = 1, 1
- header[6], header[7] = 1, 1
- header[8], header[9] = im.size
-
- checksum = 0
- for h in header:
- checksum = checksum ^ h
- header[12] = checksum # FIXME: is this the right field?
-
- # header
- for h in header:
- fp.write(o16(h))
-
- # image body
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 32, ("1", 0, 1))])
-
-
-#
-# registry
-
-Image.register_open(MspImageFile.format, MspImageFile, _accept)
-Image.register_save(MspImageFile.format, _save)
-
-Image.register_extension(MspImageFile.format, ".msp")
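
The pseudocode in MspDecoder above describes the RLE scheme used by Windows 2.0 MSP files; the following standalone sketch decodes the example row from those comments, independent of PIL.

def decode_msp_row(row: bytes) -> bytes:
    """Decode one RLE scan line: RunType 0 repeats a byte, otherwise copy literally."""
    out = bytearray()
    idx = 0
    while idx < len(row):
        runtype = row[idx]
        idx += 1
        if runtype == 0:
            runcount, runval = row[idx], row[idx + 1]
            out += bytes([runval]) * runcount          # repeated run
            idx += 2
        else:
            out += row[idx:idx + runtype]              # literal run of `runtype` bytes
            idx += runtype
    return bytes(out)

encoded = bytes([0x00, 0x03, 0xFF, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04])
assert decode_msp_row(encoded) == bytes([0xFF, 0xFF, 0xFF, 0x00, 0x01, 0x02, 0x03, 0x04])
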
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/external.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/external.py
deleted file mode 100644
index 2d34f71ba8d290509329dd5fd008c56dc5d6a0d4..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/external.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import logging
-from typing import Optional, Sequence, Dict, Union
-from pathlib import Path
-
-from clickhouse_connect.driver.exceptions import ProgrammingError
-
-logger = logging.getLogger(__name__)
-
-
-class ExternalFile:
- # pylint: disable=too-many-branches
- def __init__(self,
- file_path: Optional[str] = None,
- file_name: Optional[str] = None,
- data: Optional[bytes] = None,
- fmt: Optional[str] = None,
- types: Optional[Union[str, Sequence[str]]] = None,
- structure: Optional[Union[str, Sequence[str]]] = None,
- mime_type: Optional[str] = None):
- if file_path:
- if data:
- raise ProgrammingError('Only data or file_path should be specified for external data, not both')
- try:
- with open(file_path, 'rb') as file:
- self.data = file.read()
- except OSError as ex:
- raise ProgrammingError(f'Failed to open file {file_path} for external data') from ex
- path_name = Path(file_path).name
- path_base = path_name.rsplit('.', maxsplit=1)[0]
- if not file_name:
- self.name = path_base
- self.file_name = path_name
- else:
- self.name = file_name.rsplit('.', maxsplit=1)[0]
- self.file_name = file_name
- if file_name != path_name and path_base != self.name:
- logger.warning('External data name %s and file_path %s use different names', file_name, path_name)
- elif data:
- if not file_name:
- raise ProgrammingError('Name is required for query external data')
- self.data = data
- self.name = file_name.rsplit('.', maxsplit=1)[0]
- self.file_name = file_name
- else:
- raise ProgrammingError('Either data or file_path must be specified for external data')
- if types:
- if structure:
- raise ProgrammingError('Only types or structure should be specified for external data, not both')
- self.structure = None
- if isinstance(types, str):
- self.types = types
- else:
- self.types = ','.join(types)
- elif structure:
- self.types = None
- if isinstance(structure, str):
- self.structure = structure
- else:
- self.structure = ','.join(structure)
- self.fmt = fmt
- self.mime_type = mime_type or 'application/octet-stream'
-
- @property
- def form_data(self) -> tuple:
- return self.file_name, self.data, self.mime_type
-
- @property
- def query_params(self) -> Dict[str, str]:
- params = {}
- for name, value in (('format', self.fmt),
- ('structure', self.structure),
- ('types', self.types)):
- if value:
- params[f'{self.name}_{name}'] = value
- return params
-
-
-class ExternalData:
- def __init__(self,
- file_path: Optional[str] = None,
- file_name: Optional[str] = None,
- data: Optional[bytes] = None,
- fmt: Optional[str] = None,
- types: Optional[Union[str, Sequence[str]]] = None,
- structure: Optional[Union[str, Sequence[str]]] = None,
- mime_type: Optional[str] = None):
- self.files: list[ExternalFile] = []
- if file_path or data:
- first_file = ExternalFile(file_path=file_path,
- file_name=file_name,
- data=data,
- fmt=fmt,
- types=types,
- structure=structure,
- mime_type=mime_type)
- self.files.append(first_file)
-
- def add_file(self,
- file_path: Optional[str] = None,
- file_name: Optional[str] = None,
- data: Optional[bytes] = None,
- fmt: Optional[str] = None,
- types: Optional[Union[str, Sequence[str]]] = None,
- structure: Optional[Union[str, Sequence[str]]] = None,
- mime_type: Optional[str] = None):
- self.files.append(ExternalFile(file_path=file_path,
- file_name=file_name,
- data=data,
- fmt=fmt,
- types=types,
- structure=structure,
- mime_type=mime_type))
-
- @property
- def form_data(self) -> Dict[str, tuple]:
- if not self.files:
- raise ProgrammingError('No external files set for external data')
- return {file.name: file.form_data for file in self.files}
-
- @property
- def query_params(self) -> Dict[str, str]:
- if not self.files:
- raise ProgrammingError('No external files set for external data')
- params = {}
- for file in self.files:
- params.update(file.query_params)
- return params
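
A short usage sketch for the ExternalData helper defined above, attaching an in-memory CSV and inspecting the per-file query parameters it builds; the file name, rows, and column structure are illustrative only.

from clickhouse_connect.driver.external import ExternalData

data = ExternalData(
    file_name="people.csv",
    data=b"1,Alice\n2,Bob\n",
    fmt="CSV",
    structure=["id UInt32", "name String"],
)

print(data.query_params)
# {'people_format': 'CSV', 'people_structure': 'id UInt32,name String'}
print(data.form_data["people"][0], data.form_data["people"][2])
# people.csv application/octet-stream
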
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Upload-3aa22eef.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Upload-3aa22eef.js
deleted file mode 100644
index 054bd44e7a272170fb9f866535ce8aa49a7e3ea2..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Upload-3aa22eef.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as H,e as I,s as J,a9 as L,N as A,O as V,K as o,U as F,p as W,M as B,Q as f,Y as m,af as b,ab as X,ac as Z,ad as x,z as $,v as ee,A as ae,a1 as le,B as te,F as y,h as ie}from"./index-f877dfd5.js";import{b as ne}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";function re(l){let a,n,r,c,g,u,i,k,z;const v=l[15].default,d=L(v,l,l[14],null);return{c(){a=A("div"),d&&d.c(),n=V(),r=A("input"),o(r,"type","file"),o(r,"accept",l[0]),r.multiple=c=l[4]==="multiple"||void 0,o(r,"webkitdirectory",g=l[4]==="directory"||void 0),o(r,"mozdirectory",u=l[4]==="directory"||void 0),o(r,"class","svelte-116rqfv"),o(a,"class","svelte-116rqfv"),F(a,"center",l[2]),F(a,"boundedheight",l[1]),F(a,"flex",l[3])},m(t,s){W(t,a,s),d&&d.m(a,null),B(a,n),B(a,r),l[23](r),i=!0,k||(z=[f(r,"change",l[8]),f(a,"drag",m(b(l[16]))),f(a,"dragstart",m(b(l[17]))),f(a,"dragend",m(b(l[18]))),f(a,"dragover",m(b(l[19]))),f(a,"dragenter",m(b(l[20]))),f(a,"dragleave",m(b(l[21]))),f(a,"drop",m(b(l[22]))),f(a,"click",l[7]),f(a,"drop",l[9]),f(a,"dragenter",l[6]),f(a,"dragleave",l[6])],k=!0)},p(t,[s]){d&&d.p&&(!i||s&16384)&&X(d,v,t,t[14],i?x(v,t[14],s,null):Z(t[14]),null),(!i||s&1)&&o(r,"accept",t[0]),(!i||s&16&&c!==(c=t[4]==="multiple"||void 0))&&(r.multiple=c),(!i||s&16&&g!==(g=t[4]==="directory"||void 0))&&o(r,"webkitdirectory",g),(!i||s&16&&u!==(u=t[4]==="directory"||void 0))&&o(r,"mozdirectory",u),(!i||s&4)&&F(a,"center",t[2]),(!i||s&2)&&F(a,"boundedheight",t[1]),(!i||s&8)&&F(a,"flex",t[3])},i(t){i||($(d,t),i=!0)},o(t){ee(d,t),i=!1},d(t){t&&ae(a),d&&d.d(t),l[23](null),k=!1,le(z)}}}function de(l,a,n){let{$$slots:r={},$$scope:c}=a,{filetype:g=null}=a,{include_file_metadata:u=!0}=a,{dragging:i=!1}=a,{boundedheight:k=!0}=a,{center:z=!0}=a,{flex:v=!0}=a,{file_count:d="single"}=a,{disable_click:t=!1}=a,{parse_to_data_url:s=!0}=a,w;const S=te(),C=()=>{n(10,i=!i)},E=()=>{t||(n(5,w.value="",w),w.click())},D=async e=>{let h=Array.from(e);if(!(!e.length||!window.FileReader)){if(d==="single"&&(h=[e[0]]),u)var T=h.map(_=>({name:_.name,size:_.size}));var p=[],U=[];s?U=await Promise.all(h.map(_=>ne(_))):U=h,u?s?p=U.map((_,q)=>({data:_,...T[q]})):p=U.map((_,q)=>({data:"",blob:_,...T[q]})):p=U,S("load",d==="single"?p[0]:p)}},K=async e=>{const h=e.target;h.files&&await D(h.files)},M=async e=>{n(10,i=!1),e.dataTransfer?.files&&await D(e.dataTransfer.files)};function N(e){y.call(this,l,e)}function O(e){y.call(this,l,e)}function P(e){y.call(this,l,e)}function Q(e){y.call(this,l,e)}function R(e){y.call(this,l,e)}function Y(e){y.call(this,l,e)}function j(e){y.call(this,l,e)}function G(e){ie[e?"unshift":"push"](()=>{w=e,n(5,w)})}return l.$$set=e=>{"filetype"in e&&n(0,g=e.filetype),"include_file_metadata"in e&&n(11,u=e.include_file_metadata),"dragging"in e&&n(10,i=e.dragging),"boundedheight"in e&&n(1,k=e.boundedheight),"center"in e&&n(2,z=e.center),"flex"in e&&n(3,v=e.flex),"file_count"in e&&n(4,d=e.file_count),"disable_click"in e&&n(12,t=e.disable_click),"parse_to_data_url"in e&&n(13,s=e.parse_to_data_url),"$$scope"in e&&n(14,c=e.$$scope)},[g,k,z,v,d,w,C,E,K,M,i,u,t,s,c,r,N,O,P,Q,R,Y,j,G]}class ue extends H{constructor(a){super(),I(this,a,de,re,J,{filetype:0,include_file_metadata:11,dragging:10,boundedheight:1,center:2,flex:3,file_count:4,disable_click:12,parse_to_data_url:13})}}export{ue as U};
-//# sourceMappingURL=Upload-3aa22eef.js.map
diff --git a/spaces/cihyFjudo/fairness-paper-search/Discografia De Palabra Miel Descarga Los Mejores lbumes En Alta Calidad.md b/spaces/cihyFjudo/fairness-paper-search/Discografia De Palabra Miel Descarga Los Mejores lbumes En Alta Calidad.md
deleted file mode 100644
index 5a54e435673dad4cfa6b695a04a8a79bbb7de0b8..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Discografia De Palabra Miel Descarga Los Mejores lbumes En Alta Calidad.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Discografia De Palabra Miel ((FREE))
Download Zip ✦✦✦ https://tinurli.com/2uwitY
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Passware Kit Enterprise 11.7 Crackl The Ultimate Solution for Lost or Forgotten Passwords.md b/spaces/cihyFjudo/fairness-paper-search/Passware Kit Enterprise 11.7 Crackl The Ultimate Solution for Lost or Forgotten Passwords.md
deleted file mode 100644
index 562e3184344852100fb2df347b5e2956c974a26a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Passware Kit Enterprise 11.7 Crackl The Ultimate Solution for Lost or Forgotten Passwords.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Passware Kit Enterprise 11.7 Crackl
Download ➡ https://tinurli.com/2uwi2C
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Samsung Fast Gsm Agere 1002 A Simple and Effective Way to Unlock Your Samsung Phone.md b/spaces/cihyFjudo/fairness-paper-search/Samsung Fast Gsm Agere 1002 A Simple and Effective Way to Unlock Your Samsung Phone.md
deleted file mode 100644
index 915c2b71f2cf024ce9466deb72baa405f21a0d39..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Samsung Fast Gsm Agere 1002 A Simple and Effective Way to Unlock Your Samsung Phone.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Samsung Fast Gsm Agere 1002
Download ✺ https://tinurli.com/2uwhS5
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/ck46/qg-qa/app.py b/spaces/ck46/qg-qa/app.py
deleted file mode 100644
index 81fc05923a36f409a86cd2d472133021c2263fd7..0000000000000000000000000000000000000000
--- a/spaces/ck46/qg-qa/app.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import re
-import streamlit as st
-from qg_pipeline import Pipeline
-
-## Load NLTK
-import nltk
-nltk.download('punkt')
-
-def preprocess_text(text):
- text = re.sub(r'\[[0-9]+\]', '', text)
- text = re.sub(r'[\s]{2,}', ' ', text)
- text = text.strip()
- return text
-
-# Model checkpoints used for question generation and answer generation
-q_model = 'ck46/t5-base-hotpot-qa-qg'
-a_model = 'ck46/t5-base-hotpot-qa-qg'
-
-st.header('Question-Answer Generation')
-st.write(f'Model: {q_model}')
-
-txt = st.text_area('Text for context')
-
-pipeline = Pipeline(
- q_model=q_model,
- q_tokenizer=q_model,
- a_model=a_model,
- a_tokenizer=a_model
-)
-
-if len(txt) >= 1:
- autocards = pipeline(preprocess_text(txt))
-else:
- autocards = []
-
-st.header('Generated question and answers')
-st.write(autocards)
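
For clarity, a quick self-contained illustration of what preprocess_text above does to Wikipedia-style input: bracketed citation markers are removed and runs of whitespace collapsed. The sample sentence is made up.

import re

def preprocess_text(text: str) -> str:
    text = re.sub(r'\[[0-9]+\]', '', text)   # drop citation markers like [12]
    text = re.sub(r'\s{2,}', ' ', text)      # collapse repeated whitespace
    return text.strip()

sample = "Paris is the capital of France.[12]   It hosted the 2024 Olympics.[3]"
print(preprocess_text(sample))
# Paris is the capital of France. It hosted the 2024 Olympics.
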
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/mpl_renderer.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/mpl_renderer.py
deleted file mode 100644
index dbcb5ca19a01e3ae000986673d66def23f9c2eac..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/mpl_renderer.py
+++ /dev/null
@@ -1,613 +0,0 @@
-from __future__ import annotations
-
-import io
-from typing import TYPE_CHECKING, Any, cast
-
-import matplotlib.collections as mcollections
-import matplotlib.pyplot as plt
-import numpy as np
-
-from contourpy import FillType, LineType
-from contourpy.util.mpl_util import filled_to_mpl_paths, lines_to_mpl_paths, mpl_codes_to_offsets
-from contourpy.util.renderer import Renderer
-
-if TYPE_CHECKING:
- from matplotlib.axes import Axes
- from matplotlib.figure import Figure
- from numpy.typing import ArrayLike
-
- import contourpy._contourpy as cpy
-
-
-class MplRenderer(Renderer):
- _axes: Axes
- _fig: Figure
- _want_tight: bool
-
- """Utility renderer using Matplotlib to render a grid of plots over the same (x, y) range.
-
- Args:
- nrows (int, optional): Number of rows of plots, default ``1``.
- ncols (int, optional): Number of columns of plots, default ``1``.
- figsize (tuple(float, float), optional): Figure size in inches, default ``(9, 9)``.
- show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``.
- backend (str, optional): Matplotlib backend to use or ``None`` for default backend.
- Default ``None``.
- gridspec_kw (dict, optional): Gridspec keyword arguments to pass to ``plt.subplots``,
- default None.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- show_frame: bool = True,
- backend: str | None = None,
- gridspec_kw: dict[str, Any] | None = None,
- ) -> None:
- if backend is not None:
- import matplotlib
- matplotlib.use(backend)
-
- kwargs = dict(figsize=figsize, squeeze=False, sharex=True, sharey=True)
- if gridspec_kw is not None:
- kwargs["gridspec_kw"] = gridspec_kw
- else:
- kwargs["subplot_kw"] = dict(aspect="equal")
-
- self._fig, axes = plt.subplots(nrows, ncols, **kwargs)
- self._axes = axes.flatten()
- if not show_frame:
- for ax in self._axes:
- ax.axis("off")
-
- self._want_tight = True
-
- def __del__(self) -> None:
- if hasattr(self, "_fig"):
- plt.close(self._fig)
-
- def _autoscale(self) -> None:
- # Using axes._need_autoscale attribute if need to autoscale before rendering after adding
- # lines/filled. Only want to autoscale once per axes regardless of how many lines/filled
- # added.
- for ax in self._axes:
- if getattr(ax, "_need_autoscale", False):
- ax.autoscale_view(tight=True)
- ax._need_autoscale = False
- if self._want_tight and len(self._axes) > 1:
- self._fig.tight_layout()
-
- def _get_ax(self, ax: Axes | int) -> Axes:
- if isinstance(ax, int):
- ax = self._axes[ax]
- return ax
-
- def filled(
- self,
- filled: cpy.FillReturn,
- fill_type: FillType,
- ax: Axes | int = 0,
- color: str = "C0",
- alpha: float = 0.7,
- ) -> None:
- """Plot filled contours on a single Axes.
-
- Args:
- filled (sequence of arrays): Filled contour data as returned by
- :func:`~contourpy.ContourGenerator.filled`.
- fill_type (FillType): Type of ``filled`` data, as returned by
- :attr:`~contourpy.ContourGenerator.fill_type`.
- ax (int or Maplotlib Axes, optional): Which axes to plot on, default ``0``.
- color (str, optional): Color to plot with. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default ``"C0"``.
- alpha (float, optional): Opacity to plot with, default ``0.7``.
- """
- ax = self._get_ax(ax)
- paths = filled_to_mpl_paths(filled, fill_type)
- collection = mcollections.PathCollection(
- paths, facecolors=color, edgecolors="none", lw=0, alpha=alpha)
- ax.add_collection(collection)
- ax._need_autoscale = True
-
- def grid(
- self,
- x: ArrayLike,
- y: ArrayLike,
- ax: Axes | int = 0,
- color: str = "black",
- alpha: float = 0.1,
- point_color: str | None = None,
- quad_as_tri_alpha: float = 0,
- ) -> None:
- """Plot quad grid lines on a single Axes.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Color to plot grid lines, default ``"black"``.
- alpha (float, optional): Opacity to plot lines with, default ``0.1``.
- point_color (str, optional): Color to plot grid points or ``None`` if grid points
- should not be plotted, default ``None``.
- quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default 0.
-
- Colors may be a string color or the letter ``"C"`` followed by an integer in the range
- ``"C0"`` to ``"C9"`` to use a color from the ``tab10`` colormap.
-
- Warning:
- ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked.
- """
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- kwargs = dict(color=color, alpha=alpha)
- ax.plot(x, y, x.T, y.T, **kwargs)
- if quad_as_tri_alpha > 0:
- # Assumes no quad mask.
- xmid = 0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])
- ymid = 0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])
- kwargs["alpha"] = quad_as_tri_alpha
- ax.plot(
- np.stack((x[:-1, :-1], xmid, x[1:, 1:])).reshape((3, -1)),
- np.stack((y[:-1, :-1], ymid, y[1:, 1:])).reshape((3, -1)),
- np.stack((x[1:, :-1], xmid, x[:-1, 1:])).reshape((3, -1)),
- np.stack((y[1:, :-1], ymid, y[:-1, 1:])).reshape((3, -1)),
- **kwargs)
- if point_color is not None:
- ax.plot(x, y, color=point_color, alpha=alpha, marker="o", lw=0)
- ax._need_autoscale = True
-
- def lines(
- self,
- lines: cpy.LineReturn,
- line_type: LineType,
- ax: Axes | int = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- ) -> None:
- """Plot contour lines on a single Axes.
-
- Args:
- lines (sequence of arrays): Contour line data as returned by
- :func:`~contourpy.ContourGenerator.lines`.
- line_type (LineType): Type of ``lines`` data, as returned by
- :attr:`~contourpy.ContourGenerator.line_type`.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Color to plot lines. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default ``"C0"``.
- alpha (float, optional): Opacity to plot lines with, default ``1.0``.
- linewidth (float, optional): Width of lines, default ``1``.
- """
- ax = self._get_ax(ax)
- paths = lines_to_mpl_paths(lines, line_type)
- collection = mcollections.PathCollection(
- paths, facecolors="none", edgecolors=color, lw=linewidth, alpha=alpha)
- ax.add_collection(collection)
- ax._need_autoscale = True
-
- def mask(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike | np.ma.MaskedArray[Any, Any],
- ax: Axes | int = 0,
- color: str = "black",
- ) -> None:
- """Plot masked out grid points as circles on a single Axes.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- z (masked array of shape (ny, nx)): z-values.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Circle color, default ``"black"``.
- """
- mask = np.ma.getmask(z) # type: ignore[no-untyped-call]
- if mask is np.ma.nomask:
- return
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- ax.plot(x[mask], y[mask], "o", c=color)
-
- def save(self, filename: str, transparent: bool = False) -> None:
- """Save plots to SVG or PNG file.
-
- Args:
- filename (str): Filename to save to.
- transparent (bool, optional): Whether background should be transparent, default
- ``False``.
- """
- self._autoscale()
- self._fig.savefig(filename, transparent=transparent)
-
- def save_to_buffer(self) -> io.BytesIO:
- """Save plots to an ``io.BytesIO`` buffer.
-
- Return:
- BytesIO: PNG image buffer.
- """
- self._autoscale()
- buf = io.BytesIO()
- self._fig.savefig(buf, format="png")
- buf.seek(0)
- return buf
-
- def show(self) -> None:
- """Show plots in an interactive window, in the usual Matplotlib manner.
- """
- self._autoscale()
- plt.show()
-
- def title(self, title: str, ax: Axes | int = 0, color: str | None = None) -> None:
- """Set the title of a single Axes.
-
- Args:
- title (str): Title text.
- ax (int or Matplotlib Axes, optional): Which Axes to set the title of, default ``0``.
- color (str, optional): Color to set title. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default is ``None`` which uses Matplotlib's default title color
- that depends on the stylesheet in use.
- """
- if color:
- self._get_ax(ax).set_title(title, color=color)
- else:
- self._get_ax(ax).set_title(title)
-
- def z_values(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Axes | int = 0,
- color: str = "green",
- fmt: str = ".1f",
- quad_as_tri: bool = False,
- ) -> None:
- """Show ``z`` values on a single Axes.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- z (array-like of shape (ny, nx)): z-values.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Color of added text. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default ``"green"``.
- fmt (str, optional): Format to display z-values, default ``".1f"``.
- quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centers
- of quads.
-
- Warning:
- ``quad_as_tri=True`` shows z-values for all quads, even if masked.
- """
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(ny):
- for i in range(nx):
- ax.text(x[j, i], y[j, i], f"{z[j, i]:{fmt}}", ha="center", va="center",
- color=color, clip_on=True)
- if quad_as_tri:
- for j in range(ny-1):
- for i in range(nx-1):
- xx = np.mean(x[j:j+2, i:i+2])
- yy = np.mean(y[j:j+2, i:i+2])
- zz = np.mean(z[j:j+2, i:i+2])
- ax.text(xx, yy, f"{zz:{fmt}}", ha="center", va="center", color=color,
- clip_on=True)
-
-
-class MplTestRenderer(MplRenderer):
- """Test renderer implemented using Matplotlib.
-
- No whitespace around plots and no spines/ticks displayed.
- Uses Agg backend, so can only save to file/buffer, cannot call ``show()``.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- ) -> None:
- gridspec = {
- "left": 0.01,
- "right": 0.99,
- "top": 0.99,
- "bottom": 0.01,
- "wspace": 0.01,
- "hspace": 0.01,
- }
- super().__init__(
- nrows, ncols, figsize, show_frame=True, backend="Agg", gridspec_kw=gridspec,
- )
-
- for ax in self._axes:
- ax.set_xmargin(0.0)
- ax.set_ymargin(0.0)
- ax.set_xticks([])
- ax.set_yticks([])
-
- self._want_tight = False
-
-
-class MplDebugRenderer(MplRenderer):
- """Debug renderer implemented using Matplotlib.
-
- Extends ``MplRenderer`` to add extra information to help in debugging such as markers, arrows,
- text, etc.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- show_frame: bool = True,
- ) -> None:
- super().__init__(nrows, ncols, figsize, show_frame)
-
- def _arrow(
- self,
- ax: Axes,
- line_start: cpy.CoordinateArray,
- line_end: cpy.CoordinateArray,
- color: str,
- alpha: float,
- arrow_size: float,
- ) -> None:
- mid = 0.5*(line_start + line_end)
- along = line_end - line_start
- along /= np.sqrt(np.dot(along, along)) # Unit vector.
- right = np.asarray((along[1], -along[0]))
- arrow = np.stack((
- mid - (along*0.5 - right)*arrow_size,
- mid + along*0.5*arrow_size,
- mid - (along*0.5 + right)*arrow_size,
- ))
- ax.plot(arrow[:, 0], arrow[:, 1], "-", c=color, alpha=alpha)
-
- def _filled_to_lists_of_points_and_offsets(
- self,
- filled: cpy.FillReturn,
- fill_type: FillType,
- ) -> tuple[list[cpy.PointArray], list[cpy.OffsetArray]]:
- if fill_type == FillType.OuterCode:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_OuterCode, filled)
- all_points = filled[0]
- all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1]]
- elif fill_type == FillType.ChunkCombinedCode:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedCode, filled)
- all_points = [points for points in filled[0] if points is not None]
- all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1] if codes is not None]
- elif fill_type == FillType.OuterOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_OuterOffset, filled)
- all_points = filled[0]
- all_offsets = filled[1]
- elif fill_type == FillType.ChunkCombinedOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled)
- all_points = [points for points in filled[0] if points is not None]
- all_offsets = [offsets for offsets in filled[1] if offsets is not None]
- elif fill_type == FillType.ChunkCombinedCodeOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled)
- all_points = []
- all_offsets = []
- for points, codes, outer_offsets in zip(*filled):
- if points is None:
- continue
- if TYPE_CHECKING:
- assert codes is not None and outer_offsets is not None
- all_points += np.split(points, outer_offsets[1:-1])
- all_codes = np.split(codes, outer_offsets[1:-1])
- all_offsets += [mpl_codes_to_offsets(codes) for codes in all_codes]
- elif fill_type == FillType.ChunkCombinedOffsetOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled)
- all_points = []
- all_offsets = []
- for points, offsets, outer_offsets in zip(*filled):
- if points is None:
- continue
- if TYPE_CHECKING:
- assert offsets is not None and outer_offsets is not None
- for i in range(len(outer_offsets)-1):
- offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1]
- all_points.append(points[offs[0]:offs[-1]])
- all_offsets.append(offs - offs[0])
- else:
- raise RuntimeError(f"Rendering FillType {fill_type} not implemented")
-
- return all_points, all_offsets
-
- def _lines_to_list_of_points(
- self, lines: cpy.LineReturn, line_type: LineType,
- ) -> list[cpy.PointArray]:
- if line_type == LineType.Separate:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_Separate, lines)
- all_lines = lines
- elif line_type == LineType.SeparateCode:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_SeparateCode, lines)
- all_lines = lines[0]
- elif line_type == LineType.ChunkCombinedCode:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_ChunkCombinedCode, lines)
- all_lines = []
- for points, codes in zip(*lines):
- if points is not None:
- if TYPE_CHECKING:
- assert codes is not None
- offsets = mpl_codes_to_offsets(codes)
- for i in range(len(offsets)-1):
- all_lines.append(points[offsets[i]:offsets[i+1]])
- elif line_type == LineType.ChunkCombinedOffset:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines)
- all_lines = []
- for points, all_offsets in zip(*lines):
- if points is not None:
- if TYPE_CHECKING:
- assert all_offsets is not None
- for i in range(len(all_offsets)-1):
- all_lines.append(points[all_offsets[i]:all_offsets[i+1]])
- else:
- raise RuntimeError(f"Rendering LineType {line_type} not implemented")
-
- return all_lines
-
- def filled(
- self,
- filled: cpy.FillReturn,
- fill_type: FillType,
- ax: Axes | int = 0,
- color: str = "C1",
- alpha: float = 0.7,
- line_color: str = "C0",
- line_alpha: float = 0.7,
- point_color: str = "C0",
- start_point_color: str = "red",
- arrow_size: float = 0.1,
- ) -> None:
- super().filled(filled, fill_type, ax, color, alpha)
-
- if line_color is None and point_color is None:
- return
-
- ax = self._get_ax(ax)
- all_points, all_offsets = self._filled_to_lists_of_points_and_offsets(filled, fill_type)
-
- # Lines.
- if line_color is not None:
- for points, offsets in zip(all_points, all_offsets):
- for start, end in zip(offsets[:-1], offsets[1:]):
- xys = points[start:end]
- ax.plot(xys[:, 0], xys[:, 1], c=line_color, alpha=line_alpha)
-
- if arrow_size > 0.0:
- n = len(xys)
- for i in range(n-1):
- self._arrow(ax, xys[i], xys[i+1], line_color, line_alpha, arrow_size)
-
- # Points.
- if point_color is not None:
- for points, offsets in zip(all_points, all_offsets):
- mask = np.ones(offsets[-1], dtype=bool)
- mask[offsets[1:]-1] = False # Exclude end points.
- if start_point_color is not None:
- start_indices = offsets[:-1]
- mask[start_indices] = False # Exclude start points.
- ax.plot(
- points[:, 0][mask], points[:, 1][mask], "o", c=point_color, alpha=line_alpha)
-
- if start_point_color is not None:
- ax.plot(points[:, 0][start_indices], points[:, 1][start_indices], "o",
- c=start_point_color, alpha=line_alpha)
-
- def lines(
- self,
- lines: cpy.LineReturn,
- line_type: LineType,
- ax: Axes | int = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- point_color: str = "C0",
- start_point_color: str = "red",
- arrow_size: float = 0.1,
- ) -> None:
- super().lines(lines, line_type, ax, color, alpha, linewidth)
-
- if arrow_size == 0.0 and point_color is None:
- return
-
- ax = self._get_ax(ax)
- all_lines = self._lines_to_list_of_points(lines, line_type)
-
- if arrow_size > 0.0:
- for line in all_lines:
- for i in range(len(line)-1):
- self._arrow(ax, line[i], line[i+1], color, alpha, arrow_size)
-
- if point_color is not None:
- for line in all_lines:
- start_index = 0
- end_index = len(line)
- if start_point_color is not None:
- ax.plot(line[0, 0], line[0, 1], "o", c=start_point_color, alpha=alpha)
- start_index = 1
- if line[0][0] == line[-1][0] and line[0][1] == line[-1][1]:
- end_index -= 1
- ax.plot(line[start_index:end_index, 0], line[start_index:end_index, 1], "o",
- c=color, alpha=alpha)
-
- def point_numbers(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Axes | int = 0,
- color: str = "red",
- ) -> None:
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(ny):
- for i in range(nx):
- quad = i + j*nx
- ax.text(x[j, i], y[j, i], str(quad), ha="right", va="top", color=color,
- clip_on=True)
-
- def quad_numbers(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Axes | int = 0,
- color: str = "blue",
- ) -> None:
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(1, ny):
- for i in range(1, nx):
- quad = i + j*nx
- xmid = x[j-1:j+1, i-1:i+1].mean()
- ymid = y[j-1:j+1, i-1:i+1].mean()
- ax.text(xmid, ymid, str(quad), ha="center", va="center", color=color, clip_on=True)
-
- def z_levels(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- lower_level: float,
- upper_level: float | None = None,
- ax: Axes | int = 0,
- color: str = "green",
- ) -> None:
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(ny):
- for i in range(nx):
- zz = z[j, i]
- if upper_level is not None and zz > upper_level:
- z_level = 2
- elif zz > lower_level:
- z_level = 1
- else:
- z_level = 0
- ax.text(x[j, i], y[j, i], z_level, ha="left", va="bottom", color=color,
- clip_on=True)
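
The docstrings above describe how MplRenderer is meant to be driven; a minimal end-to-end sketch follows, assuming contourpy's public contour_generator() API. Grid size, levels, and the output filename are placeholders.

import numpy as np
from contourpy import contour_generator
from contourpy.util.mpl_renderer import MplRenderer

x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
z = np.sin(4 * np.pi * x) * np.cos(4 * np.pi * y)

cont_gen = contour_generator(x, y, z)
renderer = MplRenderer(nrows=1, ncols=2, figsize=(8, 4))

renderer.grid(x, y, ax=0)                                              # quad grid lines
renderer.filled(cont_gen.filled(-0.5, 0.5), cont_gen.fill_type, ax=0)  # filled band
renderer.lines(cont_gen.lines(0.0), cont_gen.line_type, ax=1, color="C3")
renderer.title("filled", ax=0)
renderer.title("lines", ax=1)
renderer.save("contours.png")
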
diff --git a/spaces/cncn102/bingo1/next.config.js b/spaces/cncn102/bingo1/next.config.js
deleted file mode 100644
index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000
--- a/spaces/cncn102/bingo1/next.config.js
+++ /dev/null
@@ -1,38 +0,0 @@
-/** @type {import('next').NextConfig} */
-const nextConfig = {
- // output: 'export',
- // assetPrefix: '.',
- webpack: (config, { isServer }) => {
- if (!isServer) {
- config.resolve = {
- ...config.resolve,
- fallback: {
- 'bufferutil': false,
- 'utf-8-validate': false,
- http: false,
- https: false,
- stream: false,
- // fixes proxy-agent dependencies
- net: false,
- dns: false,
- tls: false,
- assert: false,
- // fixes next-i18next dependencies
- path: false,
- fs: false,
- // fixes mapbox dependencies
- events: false,
- // fixes sentry dependencies
- process: false
- }
- };
- }
- config.module.exprContextCritical = false;
-
- return config;
- },
-}
-
-module.exports = (...args) => {
- return nextConfig
-}
diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/transformer.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/transformer.py
deleted file mode 100644
index fcb8742dbdde6e80fd38b11d064211f6935aae76..0000000000000000000000000000000000000000
--- a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/transformer.py
+++ /dev/null
@@ -1,959 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR Transformer class.
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-import warnings
-from typing import Optional
-
-import torch
-import torch.utils.checkpoint as checkpoint
-from torch import Tensor, nn
-
-from groundingdino.util.misc import inverse_sigmoid
-
-from .fuse_modules import BiAttentionBlock
-from .ms_deform_attn import MultiScaleDeformableAttention as MSDeformAttn
-from .transformer_vanilla import TransformerEncoderLayer
-from .utils import (
- MLP,
- _get_activation_fn,
- _get_clones,
- gen_encoder_output_proposals,
- gen_sineembed_for_position,
- get_sine_pos_embed,
-)
-
-
-class Transformer(nn.Module):
- def __init__(
- self,
- d_model=256,
- nhead=8,
- num_queries=300,
- num_encoder_layers=6,
- num_unicoder_layers=0,
- num_decoder_layers=6,
- dim_feedforward=2048,
- dropout=0.0,
- activation="relu",
- normalize_before=False,
- return_intermediate_dec=False,
- query_dim=4,
- num_patterns=0,
- # for deformable encoder
- num_feature_levels=1,
- enc_n_points=4,
- dec_n_points=4,
- # init query
- learnable_tgt_init=False,
- # two stage
- two_stage_type="no", # ['no', 'standard', 'early', 'combine', 'enceachlayer', 'enclayer1']
- embed_init_tgt=False,
- # for text
- use_text_enhancer=False,
- use_fusion_layer=False,
- use_checkpoint=False,
- use_transformer_ckpt=False,
- use_text_cross_attention=False,
- text_dropout=0.1,
- fusion_dropout=0.1,
- fusion_droppath=0.0,
- ):
- super().__init__()
- self.num_feature_levels = num_feature_levels
- self.num_encoder_layers = num_encoder_layers
- self.num_unicoder_layers = num_unicoder_layers
- self.num_decoder_layers = num_decoder_layers
- self.num_queries = num_queries
- assert query_dim == 4
-
- # choose encoder layer type
- encoder_layer = DeformableTransformerEncoderLayer(
- d_model, dim_feedforward, dropout, activation, num_feature_levels, nhead, enc_n_points
- )
-
- if use_text_enhancer:
- text_enhance_layer = TransformerEncoderLayer(
- d_model=d_model,
- nhead=nhead // 2,
- dim_feedforward=dim_feedforward // 2,
- dropout=text_dropout,
- )
- else:
- text_enhance_layer = None
-
- if use_fusion_layer:
- feature_fusion_layer = BiAttentionBlock(
- v_dim=d_model,
- l_dim=d_model,
- embed_dim=dim_feedforward // 2,
- num_heads=nhead // 2,
- dropout=fusion_dropout,
- drop_path=fusion_droppath,
- )
- else:
- feature_fusion_layer = None
-
- encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
- assert encoder_norm is None
- self.encoder = TransformerEncoder(
- encoder_layer,
- num_encoder_layers,
- d_model=d_model,
- num_queries=num_queries,
- text_enhance_layer=text_enhance_layer,
- feature_fusion_layer=feature_fusion_layer,
- use_checkpoint=use_checkpoint,
- use_transformer_ckpt=use_transformer_ckpt,
- )
-
- # choose decoder layer type
- decoder_layer = DeformableTransformerDecoderLayer(
- d_model,
- dim_feedforward,
- dropout,
- activation,
- num_feature_levels,
- nhead,
- dec_n_points,
- use_text_cross_attention=use_text_cross_attention,
- )
-
- decoder_norm = nn.LayerNorm(d_model)
- self.decoder = TransformerDecoder(
- decoder_layer,
- num_decoder_layers,
- decoder_norm,
- return_intermediate=return_intermediate_dec,
- d_model=d_model,
- query_dim=query_dim,
- num_feature_levels=num_feature_levels,
- )
-
- self.d_model = d_model
- self.nhead = nhead
- self.dec_layers = num_decoder_layers
- self.num_queries = num_queries # useful for single stage model only
- self.num_patterns = num_patterns
- if not isinstance(num_patterns, int):
- warnings.warn("num_patterns should be int but got {}".format(type(num_patterns)))
- self.num_patterns = 0
-
- if num_feature_levels > 1:
- if self.num_encoder_layers > 0:
- self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))
- else:
- self.level_embed = None
-
- self.learnable_tgt_init = learnable_tgt_init
- assert learnable_tgt_init, "why not learnable_tgt_init"
- self.embed_init_tgt = embed_init_tgt
- if (two_stage_type != "no" and embed_init_tgt) or (two_stage_type == "no"):
- self.tgt_embed = nn.Embedding(self.num_queries, d_model)
- nn.init.normal_(self.tgt_embed.weight.data)
- else:
- self.tgt_embed = None
-
- # for two stage
- self.two_stage_type = two_stage_type
- assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format(
- two_stage_type
- )
- if two_stage_type == "standard":
- # anchor selection at the output of encoder
- self.enc_output = nn.Linear(d_model, d_model)
- self.enc_output_norm = nn.LayerNorm(d_model)
- self.two_stage_wh_embedding = None
-
- if two_stage_type == "no":
- self.init_ref_points(num_queries) # init self.refpoint_embed
-
- self.enc_out_class_embed = None
- self.enc_out_bbox_embed = None
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- for m in self.modules():
- if isinstance(m, MSDeformAttn):
- m._reset_parameters()
- if self.num_feature_levels > 1 and self.level_embed is not None:
- nn.init.normal_(self.level_embed)
-
- def get_valid_ratio(self, mask):
- _, H, W = mask.shape
- valid_H = torch.sum(~mask[:, :, 0], 1)
- valid_W = torch.sum(~mask[:, 0, :], 1)
- valid_ratio_h = valid_H.float() / H
- valid_ratio_w = valid_W.float() / W
- valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
- return valid_ratio
-
- def init_ref_points(self, use_num_queries):
- self.refpoint_embed = nn.Embedding(use_num_queries, 4)
-
- def forward(self, srcs, masks, refpoint_embed, pos_embeds, tgt, attn_mask=None, text_dict=None):
- """
- Input:
- - srcs: List of multi features [bs, ci, hi, wi]
- - masks: List of multi masks [bs, hi, wi]
- - refpoint_embed: [bs, num_dn, 4]. None in infer
- - pos_embeds: List of multi pos embeds [bs, ci, hi, wi]
- - tgt: [bs, num_dn, d_model]. None in infer
-
- """
- # prepare input for encoder
- src_flatten = []
- mask_flatten = []
- lvl_pos_embed_flatten = []
- spatial_shapes = []
- for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):
- bs, c, h, w = src.shape
- spatial_shape = (h, w)
- spatial_shapes.append(spatial_shape)
-
- src = src.flatten(2).transpose(1, 2) # bs, hw, c
- mask = mask.flatten(1) # bs, hw
- pos_embed = pos_embed.flatten(2).transpose(1, 2) # bs, hw, c
- if self.num_feature_levels > 1 and self.level_embed is not None:
- lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)
- else:
- lvl_pos_embed = pos_embed
- lvl_pos_embed_flatten.append(lvl_pos_embed)
- src_flatten.append(src)
- mask_flatten.append(mask)
- src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c
- mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw}
- lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) # bs, \sum{hxw}, c
- spatial_shapes = torch.as_tensor(
- spatial_shapes, dtype=torch.long, device=src_flatten.device
- )
- level_start_index = torch.cat(
- (spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1])
- )
- valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
-
- # two stage
- enc_topk_proposals = enc_refpoint_embed = None
-
- #########################################################
- # Begin Encoder
- #########################################################
- memory, memory_text = self.encoder(
- src_flatten,
- pos=lvl_pos_embed_flatten,
- level_start_index=level_start_index,
- spatial_shapes=spatial_shapes,
- valid_ratios=valid_ratios,
- key_padding_mask=mask_flatten,
- memory_text=text_dict["encoded_text"],
- text_attention_mask=~text_dict["text_token_mask"],
- # the mask is inverted with ~: False means use the token, True means it is padding
- position_ids=text_dict["position_ids"],
- text_self_attention_masks=text_dict["text_self_attention_masks"],
- )
- #########################################################
- # End Encoder
- # - memory: bs, \sum{hw}, c
- # - mask_flatten: bs, \sum{hw}
- # - lvl_pos_embed_flatten: bs, \sum{hw}, c
- # - enc_intermediate_output: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
- # - enc_intermediate_refpoints: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
- #########################################################
- text_dict["encoded_text"] = memory_text
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
- # if memory.isnan().any() | memory.isinf().any():
- # import ipdb; ipdb.set_trace()
-
- if self.two_stage_type == "standard":
- output_memory, output_proposals = gen_encoder_output_proposals(
- memory, mask_flatten, spatial_shapes
- )
- output_memory = self.enc_output_norm(self.enc_output(output_memory))
-
- if text_dict is not None:
- enc_outputs_class_unselected = self.enc_out_class_embed(output_memory, text_dict)
- else:
- enc_outputs_class_unselected = self.enc_out_class_embed(output_memory)
-
- topk_logits = enc_outputs_class_unselected.max(-1)[0]
- enc_outputs_coord_unselected = (
- self.enc_out_bbox_embed(output_memory) + output_proposals
- ) # (bs, \sum{hw}, 4) unsigmoid
- topk = self.num_queries
-
- topk_proposals = torch.topk(topk_logits, topk, dim=1)[1] # bs, nq
-
- # gather boxes
- refpoint_embed_undetach = torch.gather(
- enc_outputs_coord_unselected, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)
- ) # unsigmoid
- refpoint_embed_ = refpoint_embed_undetach.detach()
- init_box_proposal = torch.gather(
- output_proposals, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)
- ).sigmoid() # sigmoid
-
- # gather tgt
- tgt_undetach = torch.gather(
- output_memory, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, self.d_model)
- )
- if self.embed_init_tgt:
- tgt_ = (
- self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
- ) # nq, bs, d_model
- else:
- tgt_ = tgt_undetach.detach()
-
- if refpoint_embed is not None:
- refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1)
- tgt = torch.cat([tgt, tgt_], dim=1)
- else:
- refpoint_embed, tgt = refpoint_embed_, tgt_
-
- elif self.two_stage_type == "no":
- tgt_ = (
- self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
- ) # nq, bs, d_model
- refpoint_embed_ = (
- self.refpoint_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
- ) # nq, bs, 4
-
- if refpoint_embed is not None:
- refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1)
- tgt = torch.cat([tgt, tgt_], dim=1)
- else:
- refpoint_embed, tgt = refpoint_embed_, tgt_
-
- if self.num_patterns > 0:
- tgt_embed = tgt.repeat(1, self.num_patterns, 1)
- refpoint_embed = refpoint_embed.repeat(1, self.num_patterns, 1)
- tgt_pat = self.patterns.weight[None, :, :].repeat_interleave(
- self.num_queries, 1
- ) # 1, n_q*n_pat, d_model
- tgt = tgt_embed + tgt_pat
-
- init_box_proposal = refpoint_embed_.sigmoid()
-
- else:
- raise NotImplementedError("unknown two_stage_type {}".format(self.two_stage_type))
- #########################################################
- # End preparing tgt
- # - tgt: bs, NQ, d_model
- # - refpoint_embed(unsigmoid): bs, NQ, 4
- #########################################################
-
- #########################################################
- # Begin Decoder
- #########################################################
- hs, references = self.decoder(
- tgt=tgt.transpose(0, 1),
- memory=memory.transpose(0, 1),
- memory_key_padding_mask=mask_flatten,
- pos=lvl_pos_embed_flatten.transpose(0, 1),
- refpoints_unsigmoid=refpoint_embed.transpose(0, 1),
- level_start_index=level_start_index,
- spatial_shapes=spatial_shapes,
- valid_ratios=valid_ratios,
- tgt_mask=attn_mask,
- memory_text=text_dict["encoded_text"],
- text_attention_mask=~text_dict["text_token_mask"],
- # we invert the mask with ~: False means the token is used; True means the token is padding
- )
- #########################################################
- # End Decoder
- # hs: n_dec, bs, nq, d_model
- # references: n_dec+1, bs, nq, query_dim
- #########################################################
-
- #########################################################
- # Begin postprocess
- #########################################################
- if self.two_stage_type == "standard":
- hs_enc = tgt_undetach.unsqueeze(0)
- ref_enc = refpoint_embed_undetach.sigmoid().unsqueeze(0)
- else:
- hs_enc = ref_enc = None
- #########################################################
- # End postprocess
- # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or (n_enc, bs, nq, d_model) or None
- # ref_enc: (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or (n_enc, bs, nq, query_dim) or None
- #########################################################
-
- return hs, references, hs_enc, ref_enc, init_box_proposal
- # hs: (n_dec, bs, nq, d_model)
- # references: sigmoid coordinates. (n_dec+1, bs, nq, 4)
- # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or None
- # ref_enc: sigmoid coordinates. \
- # (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or None
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self,
- encoder_layer,
- num_layers,
- d_model=256,
- num_queries=300,
- enc_layer_share=False,
- text_enhance_layer=None,
- feature_fusion_layer=None,
- use_checkpoint=False,
- use_transformer_ckpt=False,
- ):
- """_summary_
-
- Args:
- encoder_layer (_type_): _description_
- num_layers (_type_): _description_
- norm (_type_, optional): _description_. Defaults to None.
- d_model (int, optional): _description_. Defaults to 256.
- num_queries (int, optional): _description_. Defaults to 300.
- enc_layer_share (bool, optional): _description_. Defaults to False.
-
- """
- super().__init__()
- # prepare layers
- self.layers = []
- self.text_layers = []
- self.fusion_layers = []
- if num_layers > 0:
- self.layers = _get_clones(encoder_layer, num_layers, layer_share=enc_layer_share)
-
- if text_enhance_layer is not None:
- self.text_layers = _get_clones(
- text_enhance_layer, num_layers, layer_share=enc_layer_share
- )
- if feature_fusion_layer is not None:
- self.fusion_layers = _get_clones(
- feature_fusion_layer, num_layers, layer_share=enc_layer_share
- )
- else:
- self.layers = []
- del encoder_layer
-
- if text_enhance_layer is not None:
- self.text_layers = []
- del text_enhance_layer
- if feature_fusion_layer is not None:
- self.fusion_layers = []
- del feature_fusion_layer
-
- self.query_scale = None
- self.num_queries = num_queries
- self.num_layers = num_layers
- self.d_model = d_model
-
- self.use_checkpoint = use_checkpoint
- self.use_transformer_ckpt = use_transformer_ckpt
-
- @staticmethod
- def get_reference_points(spatial_shapes, valid_ratios, device):
- reference_points_list = []
- for lvl, (H_, W_) in enumerate(spatial_shapes):
-
- ref_y, ref_x = torch.meshgrid(
- torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),
- torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device),
- )
- ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)
- ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)
- ref = torch.stack((ref_x, ref_y), -1)
- reference_points_list.append(ref)
- reference_points = torch.cat(reference_points_list, 1)
- reference_points = reference_points[:, :, None] * valid_ratios[:, None]
- return reference_points
-
- def forward(
- self,
- # for images
- src: Tensor,
- pos: Tensor,
- spatial_shapes: Tensor,
- level_start_index: Tensor,
- valid_ratios: Tensor,
- key_padding_mask: Tensor,
- # for texts
- memory_text: Tensor = None,
- text_attention_mask: Tensor = None,
- pos_text: Tensor = None,
- text_self_attention_masks: Tensor = None,
- position_ids: Tensor = None,
- ):
- """
- Input:
- - src: [bs, sum(hi*wi), 256]
- - pos: pos embed for src. [bs, sum(hi*wi), 256]
- - spatial_shapes: h,w of each level [num_level, 2]
- - level_start_index: [num_level] start point of level in sum(hi*wi).
- - valid_ratios: [bs, num_level, 2]
- - key_padding_mask: [bs, sum(hi*wi)]
-
- - memory_text: bs, n_text, 256
- - text_attention_mask: bs, n_text
- False for no padding; True for padding
- - pos_text: bs, n_text, 256
-
- - position_ids: bs, n_text
- Intermediate:
- - reference_points: [bs, sum(hi*wi), num_level, 2]
- Outputs:
- - output: [bs, sum(hi*wi), 256]
- """
-
- output = src
-
- # preparation and reshape
- if self.num_layers > 0:
- reference_points = self.get_reference_points(
- spatial_shapes, valid_ratios, device=src.device
- )
-
- if self.text_layers:
- # generate pos_text
- bs, n_text, text_dim = memory_text.shape
- if pos_text is None and position_ids is None:
- pos_text = (
- torch.arange(n_text, device=memory_text.device)
- .float()
- .unsqueeze(0)
- .unsqueeze(-1)
- .repeat(bs, 1, 1)
- )
- pos_text = get_sine_pos_embed(pos_text, num_pos_feats=256, exchange_xy=False)
- if position_ids is not None:
- pos_text = get_sine_pos_embed(
- position_ids[..., None], num_pos_feats=256, exchange_xy=False
- )
-
- # main process
- for layer_id, layer in enumerate(self.layers):
- # if output.isnan().any() or memory_text.isnan().any():
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
- if self.fusion_layers:
- if self.use_checkpoint:
- output, memory_text = checkpoint.checkpoint(
- self.fusion_layers[layer_id],
- output,
- memory_text,
- key_padding_mask,
- text_attention_mask,
- )
- else:
- output, memory_text = self.fusion_layers[layer_id](
- v=output,
- l=memory_text,
- attention_mask_v=key_padding_mask,
- attention_mask_l=text_attention_mask,
- )
-
- if self.text_layers:
- memory_text = self.text_layers[layer_id](
- src=memory_text.transpose(0, 1),
- src_mask=~text_self_attention_masks, # note we use ~ for mask here
- src_key_padding_mask=text_attention_mask,
- pos=(pos_text.transpose(0, 1) if pos_text is not None else None),
- ).transpose(0, 1)
-
- # main process
- if self.use_transformer_ckpt:
- output = checkpoint.checkpoint(
- layer,
- output,
- pos,
- reference_points,
- spatial_shapes,
- level_start_index,
- key_padding_mask,
- )
- else:
- output = layer(
- src=output,
- pos=pos,
- reference_points=reference_points,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- key_padding_mask=key_padding_mask,
- )
-
- return output, memory_text
-
-
-class TransformerDecoder(nn.Module):
- def __init__(
- self,
- decoder_layer,
- num_layers,
- norm=None,
- return_intermediate=False,
- d_model=256,
- query_dim=4,
- num_feature_levels=1,
- ):
- super().__init__()
- if num_layers > 0:
- self.layers = _get_clones(decoder_layer, num_layers)
- else:
- self.layers = []
- self.num_layers = num_layers
- self.norm = norm
- self.return_intermediate = return_intermediate
- assert return_intermediate, "support return_intermediate only"
- self.query_dim = query_dim
- assert query_dim in [2, 4], "query_dim should be 2/4 but {}".format(query_dim)
- self.num_feature_levels = num_feature_levels
-
- self.ref_point_head = MLP(query_dim // 2 * d_model, d_model, d_model, 2)
- self.query_pos_sine_scale = None
-
- self.query_scale = None
- self.bbox_embed = None
- self.class_embed = None
-
- self.d_model = d_model
-
- self.ref_anchor_head = None
-
- def forward(
- self,
- tgt,
- memory,
- tgt_mask: Optional[Tensor] = None,
- memory_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- refpoints_unsigmoid: Optional[Tensor] = None, # num_queries, bs, 2
- # for memory
- level_start_index: Optional[Tensor] = None, # num_levels
- spatial_shapes: Optional[Tensor] = None, # num_levels, 2
- valid_ratios: Optional[Tensor] = None,
- # for text
- memory_text: Optional[Tensor] = None,
- text_attention_mask: Optional[Tensor] = None,
- ):
- """
- Input:
- - tgt: nq, bs, d_model
- - memory: hw, bs, d_model
- - pos: hw, bs, d_model
- - refpoints_unsigmoid: nq, bs, 2/4
- - valid_ratios: bs, nlevel, 2; spatial_shapes: nlevel, 2
- """
- output = tgt
-
- intermediate = []
- reference_points = refpoints_unsigmoid.sigmoid()
- ref_points = [reference_points]
-
- for layer_id, layer in enumerate(self.layers):
-
- if reference_points.shape[-1] == 4:
- reference_points_input = (
- reference_points[:, :, None]
- * torch.cat([valid_ratios, valid_ratios], -1)[None, :]
- ) # nq, bs, nlevel, 4
- else:
- assert reference_points.shape[-1] == 2
- reference_points_input = reference_points[:, :, None] * valid_ratios[None, :]
- query_sine_embed = gen_sineembed_for_position(
- reference_points_input[:, :, 0, :]
- ) # nq, bs, 256*2
-
- # conditional query
- raw_query_pos = self.ref_point_head(query_sine_embed) # nq, bs, 256
- pos_scale = self.query_scale(output) if self.query_scale is not None else 1
- query_pos = pos_scale * raw_query_pos
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
- # if query_pos.isnan().any() | query_pos.isinf().any():
- # import ipdb; ipdb.set_trace()
-
- # main process
- output = layer(
- tgt=output,
- tgt_query_pos=query_pos,
- tgt_query_sine_embed=query_sine_embed,
- tgt_key_padding_mask=tgt_key_padding_mask,
- tgt_reference_points=reference_points_input,
- memory_text=memory_text,
- text_attention_mask=text_attention_mask,
- memory=memory,
- memory_key_padding_mask=memory_key_padding_mask,
- memory_level_start_index=level_start_index,
- memory_spatial_shapes=spatial_shapes,
- memory_pos=pos,
- self_attn_mask=tgt_mask,
- cross_attn_mask=memory_mask,
- )
- if output.isnan().any() | output.isinf().any():
- print(f"output layer_id {layer_id} is nan")
- try:
- num_nan = output.isnan().sum().item()
- num_inf = output.isinf().sum().item()
- print(f"num_nan {num_nan}, num_inf {num_inf}")
- except Exception as e:
- print(e)
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
- # import ipdb; ipdb.set_trace()
-
- # iter update
- if self.bbox_embed is not None:
- # box_holder = self.bbox_embed(output)
- # box_holder[..., :self.query_dim] += inverse_sigmoid(reference_points)
- # new_reference_points = box_holder[..., :self.query_dim].sigmoid()
-
- reference_before_sigmoid = inverse_sigmoid(reference_points)
- delta_unsig = self.bbox_embed[layer_id](output)
- outputs_unsig = delta_unsig + reference_before_sigmoid
- new_reference_points = outputs_unsig.sigmoid()
-
- reference_points = new_reference_points.detach()
- # if layer_id != self.num_layers - 1:
- ref_points.append(new_reference_points)
-
- intermediate.append(self.norm(output))
-
- return [
- [itm_out.transpose(0, 1) for itm_out in intermediate],
- [itm_refpoint.transpose(0, 1) for itm_refpoint in ref_points],
- ]
-
-
-class DeformableTransformerEncoderLayer(nn.Module):
- def __init__(
- self,
- d_model=256,
- d_ffn=1024,
- dropout=0.1,
- activation="relu",
- n_levels=4,
- n_heads=8,
- n_points=4,
- ):
- super().__init__()
-
- # self attention
- self.self_attn = MSDeformAttn(
- embed_dim=d_model,
- num_levels=n_levels,
- num_heads=n_heads,
- num_points=n_points,
- batch_first=True,
- )
- self.dropout1 = nn.Dropout(dropout)
- self.norm1 = nn.LayerNorm(d_model)
-
- # ffn
- self.linear1 = nn.Linear(d_model, d_ffn)
- self.activation = _get_activation_fn(activation, d_model=d_ffn)
- self.dropout2 = nn.Dropout(dropout)
- self.linear2 = nn.Linear(d_ffn, d_model)
- self.dropout3 = nn.Dropout(dropout)
- self.norm2 = nn.LayerNorm(d_model)
-
- @staticmethod
- def with_pos_embed(tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_ffn(self, src):
- src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
- src = src + self.dropout3(src2)
- src = self.norm2(src)
- return src
-
- def forward(
- self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None
- ):
- # self attention
- # import ipdb; ipdb.set_trace()
- src2 = self.self_attn(
- query=self.with_pos_embed(src, pos),
- reference_points=reference_points,
- value=src,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- key_padding_mask=key_padding_mask,
- )
- src = src + self.dropout1(src2)
- src = self.norm1(src)
-
- # ffn
- src = self.forward_ffn(src)
-
- return src
-
-
-class DeformableTransformerDecoderLayer(nn.Module):
- def __init__(
- self,
- d_model=256,
- d_ffn=1024,
- dropout=0.1,
- activation="relu",
- n_levels=4,
- n_heads=8,
- n_points=4,
- use_text_feat_guide=False,
- use_text_cross_attention=False,
- ):
- super().__init__()
-
- # cross attention
- self.cross_attn = MSDeformAttn(
- embed_dim=d_model,
- num_levels=n_levels,
- num_heads=n_heads,
- num_points=n_points,
- batch_first=True,
- )
- self.dropout1 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.norm1 = nn.LayerNorm(d_model)
-
- # cross attention text
- if use_text_cross_attention:
- self.ca_text = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
- self.catext_dropout = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.catext_norm = nn.LayerNorm(d_model)
-
- # self attention
- self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
- self.dropout2 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.norm2 = nn.LayerNorm(d_model)
-
- # ffn
- self.linear1 = nn.Linear(d_model, d_ffn)
- self.activation = _get_activation_fn(activation, d_model=d_ffn, batch_dim=1)
- self.dropout3 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.linear2 = nn.Linear(d_ffn, d_model)
- self.dropout4 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.norm3 = nn.LayerNorm(d_model)
-
- self.key_aware_proj = None
- self.use_text_feat_guide = use_text_feat_guide
- assert not use_text_feat_guide
- self.use_text_cross_attention = use_text_cross_attention
-
- def rm_self_attn_modules(self):
- self.self_attn = None
- self.dropout2 = None
- self.norm2 = None
-
- @staticmethod
- def with_pos_embed(tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_ffn(self, tgt):
- with torch.cuda.amp.autocast(enabled=False):
- tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout4(tgt2)
- tgt = self.norm3(tgt)
- return tgt
-
- def forward(
- self,
- # for tgt
- tgt: Optional[Tensor], # nq, bs, d_model
- tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos))
- tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. Sine(pos)
- tgt_key_padding_mask: Optional[Tensor] = None,
- tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4
- memory_text: Optional[Tensor] = None, # bs, num_token, d_model
- text_attention_mask: Optional[Tensor] = None, # bs, num_token
- # for memory
- memory: Optional[Tensor] = None, # hw, bs, d_model
- memory_key_padding_mask: Optional[Tensor] = None,
- memory_level_start_index: Optional[Tensor] = None, # num_levels
- memory_spatial_shapes: Optional[Tensor] = None, # num_levels, 2
- memory_pos: Optional[Tensor] = None, # pos for memory
- # sa
- self_attn_mask: Optional[Tensor] = None, # mask used for self-attention
- cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention
- ):
- """
- Input:
- - tgt/tgt_query_pos: nq, bs, d_model
- -
- """
- assert cross_attn_mask is None
-
- # self attention
- if self.self_attn is not None:
- # import ipdb; ipdb.set_trace()
- q = k = self.with_pos_embed(tgt, tgt_query_pos)
- tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0]
- tgt = tgt + self.dropout2(tgt2)
- tgt = self.norm2(tgt)
-
- if self.use_text_cross_attention:
- tgt2 = self.ca_text(
- self.with_pos_embed(tgt, tgt_query_pos),
- memory_text.transpose(0, 1),
- memory_text.transpose(0, 1),
- key_padding_mask=text_attention_mask,
- )[0]
- tgt = tgt + self.catext_dropout(tgt2)
- tgt = self.catext_norm(tgt)
-
- tgt2 = self.cross_attn(
- query=self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1),
- reference_points=tgt_reference_points.transpose(0, 1).contiguous(),
- value=memory.transpose(0, 1),
- spatial_shapes=memory_spatial_shapes,
- level_start_index=memory_level_start_index,
- key_padding_mask=memory_key_padding_mask,
- ).transpose(0, 1)
- tgt = tgt + self.dropout1(tgt2)
- tgt = self.norm1(tgt)
-
- # ffn
- tgt = self.forward_ffn(tgt)
-
- return tgt
-
-
-def build_transformer(args):
- return Transformer(
- d_model=args.hidden_dim,
- dropout=args.dropout,
- nhead=args.nheads,
- num_queries=args.num_queries,
- dim_feedforward=args.dim_feedforward,
- num_encoder_layers=args.enc_layers,
- num_decoder_layers=args.dec_layers,
- normalize_before=args.pre_norm,
- return_intermediate_dec=True,
- query_dim=args.query_dim,
- activation=args.transformer_activation,
- num_patterns=args.num_patterns,
- num_feature_levels=args.num_feature_levels,
- enc_n_points=args.enc_n_points,
- dec_n_points=args.dec_n_points,
- learnable_tgt_init=True,
- # two stage
- two_stage_type=args.two_stage_type, # ['no', 'standard', 'early']
- embed_init_tgt=args.embed_init_tgt,
- use_text_enhancer=args.use_text_enhancer,
- use_fusion_layer=args.use_fusion_layer,
- use_checkpoint=args.use_checkpoint,
- use_transformer_ckpt=args.use_transformer_ckpt,
- use_text_cross_attention=args.use_text_cross_attention,
- text_dropout=args.text_dropout,
- fusion_dropout=args.fusion_dropout,
- fusion_droppath=args.fusion_droppath,
- )
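The deleted GroundingDINO-style transformer module above ends with `build_transformer(args)`, which pulls every hyperparameter from a single attribute namespace. Below is a minimal, hypothetical sketch of how that namespace could be assembled; the attribute names are taken from the `build_transformer` call above, but the concrete values and the commented-out import path are illustrative assumptions, not the project's actual configuration.

```python
from argparse import Namespace

# Illustrative values only; the real configuration ships with the original repo.
args = Namespace(
    hidden_dim=256,
    dropout=0.0,
    nheads=8,
    num_queries=900,
    dim_feedforward=2048,
    enc_layers=6,
    dec_layers=6,
    pre_norm=False,
    query_dim=4,
    transformer_activation="relu",
    num_patterns=0,
    num_feature_levels=4,
    enc_n_points=4,
    dec_n_points=4,
    two_stage_type="standard",
    embed_init_tgt=True,
    use_text_enhancer=True,
    use_fusion_layer=True,
    use_checkpoint=False,
    use_transformer_ckpt=False,
    use_text_cross_attention=True,
    text_dropout=0.0,
    fusion_dropout=0.0,
    fusion_droppath=0.1,
)

# Assuming the file above were importable as a module (hypothetical path):
# from transformer import build_transformer
# transformer = build_transformer(args)
```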
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/hpeldsp_arm.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/hpeldsp_arm.h
deleted file mode 100644
index 5f3c7741c1e141350b75beae6ee36a72206b5d3f..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/hpeldsp_arm.h
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Copyright (c) 2009 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_ARM_HPELDSP_ARM_H
-#define AVCODEC_ARM_HPELDSP_ARM_H
-
-#include "libavcodec/hpeldsp.h"
-
-void ff_hpeldsp_init_armv6(HpelDSPContext *c, int flags);
-void ff_hpeldsp_init_neon(HpelDSPContext *c, int flags);
-
-#endif /* AVCODEC_ARM_HPELDSP_ARM_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_ps.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_ps.h
deleted file mode 100644
index 5c35761fbc8440f9432131bd3f707820cea4d9c0..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_ps.h
+++ /dev/null
@@ -1,171 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * H.264 parameter set handling
- */
-
-#ifndef AVCODEC_H264_PS_H
-#define AVCODEC_H264_PS_H
-
-#include <stdint.h>
-
-#include "libavutil/buffer.h"
-#include "libavutil/pixfmt.h"
-#include "libavutil/rational.h"
-
-#include "avcodec.h"
-#include "get_bits.h"
-#include "h264.h"
-#include "h2645_vui.h"
-
-#define MAX_SPS_COUNT 32
-#define MAX_PPS_COUNT 256
-#define MAX_LOG2_MAX_FRAME_NUM (12 + 4)
-
-/**
- * Sequence parameter set
- */
-typedef struct SPS {
- unsigned int sps_id;
- int profile_idc;
- int level_idc;
- int chroma_format_idc;
- int transform_bypass; ///< qpprime_y_zero_transform_bypass_flag
- int log2_max_frame_num; ///< log2_max_frame_num_minus4 + 4
- int poc_type; ///< pic_order_cnt_type
- int log2_max_poc_lsb; ///< log2_max_pic_order_cnt_lsb_minus4
- int delta_pic_order_always_zero_flag;
- int offset_for_non_ref_pic;
- int offset_for_top_to_bottom_field;
- int poc_cycle_length; ///< num_ref_frames_in_pic_order_cnt_cycle
- int ref_frame_count; ///< num_ref_frames
- int gaps_in_frame_num_allowed_flag;
- int mb_width; ///< pic_width_in_mbs_minus1 + 1
- ///< (pic_height_in_map_units_minus1 + 1) * (2 - frame_mbs_only_flag)
- int mb_height;
- int frame_mbs_only_flag;
- int mb_aff; ///< mb_adaptive_frame_field_flag
- int direct_8x8_inference_flag;
- int crop; ///< frame_cropping_flag
-
- /* those 4 are already in luma samples */
- unsigned int crop_left; ///< frame_cropping_rect_left_offset
- unsigned int crop_right; ///< frame_cropping_rect_right_offset
- unsigned int crop_top; ///< frame_cropping_rect_top_offset
- unsigned int crop_bottom; ///< frame_cropping_rect_bottom_offset
- int vui_parameters_present_flag;
- H2645VUI vui;
-
- int timing_info_present_flag;
- uint32_t num_units_in_tick;
- uint32_t time_scale;
- int fixed_frame_rate_flag;
- int32_t offset_for_ref_frame[256];
- int bitstream_restriction_flag;
- int num_reorder_frames;
- int scaling_matrix_present;
- uint8_t scaling_matrix4[6][16];
- uint8_t scaling_matrix8[6][64];
- int nal_hrd_parameters_present_flag;
- int vcl_hrd_parameters_present_flag;
- int pic_struct_present_flag;
- int time_offset_length;
- int cpb_cnt; ///< See H.264 E.1.2
- int initial_cpb_removal_delay_length; ///< initial_cpb_removal_delay_length_minus1 + 1
- int cpb_removal_delay_length; ///< cpb_removal_delay_length_minus1 + 1
- int dpb_output_delay_length; ///< dpb_output_delay_length_minus1 + 1
- int bit_depth_luma; ///< bit_depth_luma_minus8 + 8
- int bit_depth_chroma; ///< bit_depth_chroma_minus8 + 8
- int residual_color_transform_flag; ///< residual_colour_transform_flag
- int constraint_set_flags; ///< constraint_set[0-3]_flag
- uint8_t data[4096];
- size_t data_size;
-} SPS;
-
-/**
- * Picture parameter set
- */
-typedef struct PPS {
- unsigned int sps_id;
- int cabac; ///< entropy_coding_mode_flag
- int pic_order_present; ///< pic_order_present_flag
- int slice_group_count; ///< num_slice_groups_minus1 + 1
- int mb_slice_group_map_type;
- unsigned int ref_count[2]; ///< num_ref_idx_l0/1_active_minus1 + 1
- int weighted_pred; ///< weighted_pred_flag
- int weighted_bipred_idc;
- int init_qp; ///< pic_init_qp_minus26 + 26
- int init_qs; ///< pic_init_qs_minus26 + 26
- int chroma_qp_index_offset[2];
- int deblocking_filter_parameters_present; ///< deblocking_filter_parameters_present_flag
- int constrained_intra_pred; ///< constrained_intra_pred_flag
- int redundant_pic_cnt_present; ///< redundant_pic_cnt_present_flag
- int transform_8x8_mode; ///< transform_8x8_mode_flag
- uint8_t scaling_matrix4[6][16];
- uint8_t scaling_matrix8[6][64];
- uint8_t chroma_qp_table[2][QP_MAX_NUM+1]; ///< pre-scaled (with chroma_qp_index_offset) version of qp_table
- int chroma_qp_diff;
- uint8_t data[4096];
- size_t data_size;
-
- uint32_t dequant4_buffer[6][QP_MAX_NUM + 1][16];
- uint32_t dequant8_buffer[6][QP_MAX_NUM + 1][64];
- uint32_t(*dequant4_coeff[6])[16];
- uint32_t(*dequant8_coeff[6])[64];
-
- AVBufferRef *sps_ref;
- const SPS *sps;
-} PPS;
-
-typedef struct H264ParamSets {
- AVBufferRef *sps_list[MAX_SPS_COUNT];
- AVBufferRef *pps_list[MAX_PPS_COUNT];
-
- AVBufferRef *pps_ref;
- /* currently active parameters sets */
- const PPS *pps;
- const SPS *sps;
-
- int overread_warning_printed[2];
-} H264ParamSets;
-
-/**
- * compute profile from sps
- */
-int ff_h264_get_profile(const SPS *sps);
-
-/**
- * Decode SPS
- */
-int ff_h264_decode_seq_parameter_set(GetBitContext *gb, AVCodecContext *avctx,
- H264ParamSets *ps, int ignore_truncation);
-
-/**
- * Decode PPS
- */
-int ff_h264_decode_picture_parameter_set(GetBitContext *gb, AVCodecContext *avctx,
- H264ParamSets *ps, int bit_length);
-
-/**
- * Uninit H264 param sets structure.
- */
-void ff_h264_ps_uninit(H264ParamSets *ps);
-
-#endif /* AVCODEC_H264_PS_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Free Chess Books in PDF Learn from the Masters.md b/spaces/congsaPfin/Manga-OCR/logs/Free Chess Books in PDF Learn from the Masters.md
deleted file mode 100644
index 607bb8230f6ef51f66e9c5eec216bff625372432..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Free Chess Books in PDF Learn from the Masters.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-Free Download Chess Books: How to Learn and Play Chess Online
- Chess is one of the oldest and most popular games in the world. It is a game of strategy, logic, and creativity that challenges your mind and improves your cognitive skills. Chess can help you develop perspective, memory, focus, creativity, planning, problem-solving, self-awareness, and calmness under pressure.
- If you want to learn how to play chess or improve your chess skills, you might be interested in finding some free chess books online. There are many websites that offer free chess ebooks in PDF format that you can download or read online. These books cover various aspects of chess, such as the rules, the pieces, the openings, the tactics, the strategy, the endgames, and more.
-free download chess books
-Download File === https://urlca.com/2uO6r0
- In this article, we will show you some of the best websites where you can find free chess books online and recommend some of the most useful and interesting ones to download. Whether you are a beginner or an advanced player, you will surely find something that suits your level and interest.
- Where to Find Free Chess Books Online
- There are many websites that offer free chess books online, but not all of them are reliable or easy to use. Some of them may have broken links, low-quality scans, or outdated information. To save you time and hassle, we have selected some of the best websites that provide high-quality and relevant chess books for free.
- Project Gutenberg
- Project Gutenberg is a library with over 70,000 free ebooks that you can download or read online. It has a collection of classic chess books by famous authors such as José Raúl Capablanca, Edward Lasker, Emanuel Lasker, Wilhelm Steinitz, Paul Morphy, and more. You can find these books by searching for "chess" in the website or by browsing this category: Chess (Bookshelf).
- InfoBooks
- InfoBooks is a website that provides free ebooks on various topics, including sports. It has a list of 20+ free chess books in PDF format that you can download or read online. These books cover different aspects of chess, such as the fundamentals, the progressive chess, the strategy, the handbook, the open games, the rules, and more. You can find these books by visiting this page: 20+ Chess Books for Free! [PDF].
- Chess Stack Exchange
- Chess Stack Exchange is a question-and-answer site for serious players and enthusiasts of chess. It has a community of experts and amateurs who share their knowledge and experience on various chess topics. One of the questions asked on this site was "where can I find free chess books?". The answer provided several useful resources for finding free chess books online, such as 1000exercices.com, pdfdrive.com, epdf.pub, Google Books, and Internet Archive. You can read the full answer by clicking this link: where can I find free chess books?.
- Some of the Best Free Chess Books to Download
- Now that you know where to find free chess books online, you might be wondering which ones to download. Of course, this depends on your level and preference, but here are some of our recommendations based on popularity and quality.
- Chess Fundamentals by José Raúl Capablanca
- This is one of the most famous and influential chess books ever written. It was written by José Raúl Capablanca, who was the world chess champion from 1921 to 1927 and one of the greatest players of all time. In this book, he explains the basic principles and techniques of chess in a clear and concise way. He covers topics such as the endgame, the middlegame, the openings, general strategy, tactics, and common mistakes. He also provides many examples and exercises to illustrate his points. This book is suitable for beginners and intermediate players who want to learn from a master.
-free chess books pdf
-free chess ebooks online
-free chess books for beginners
-free chess books project gutenberg
-free chess books infobooks
-free download chess strategy books
-free download chess tactics books
-free download chess endgame books
-free download chess opening books
-free download chess puzzles books
-free download chess fundamentals by capablanca
-free download chess handbook by vision academy
-free download chess for kids by activity village
-free download chess and mathematics exercises for schools
-free download chess rules by various authors
-free download chess laws by fide
-free download learn and master progressive chess by matej guid
-free download beginner and intermediate chess by chicago chess foundation
-free download open games by chesskids academy
-free download journey through chess by richard james
-free download teach your child chess in ten easy lessons by stephen colding
-free download how to play chess by michael crowe
-free download rules of chess by eric schiller
-free download japanese chess (shogi) books
-free download 1000 exercises in shogi by yoshio kimura and richard bozulich
-free download shogi for beginners by john fairbairn
-free download the art of shogi by tony hosking
-free download better moves for better shogi by aono teruichi and john fairbairn
-free download modern joseki and fuseki vol. 1 by sakata eio and richard bozulich
-free download modern joseki and fuseki vol. 2 by sakata eio and richard bozulich
-free download the middle game of go by sakata eio and james davies
-free download the endgame of go by sakata eio and james davies
-free download tesuji and anti-suji of go by sakata eio and james davies
-free download the game of go by arthur smith and james davies
-free download go for beginners by kaoru iwamoto and james davies
-free download graded go problems for beginners vol. 1 by kano yoshinori and richard bozulich
-free download graded go problems for beginners vol. 2 by kano yoshinori and richard bozulich
-free download graded go problems for beginners vol. 3 by kano yoshinori and richard bozulich
-free download graded go problems for beginners vol. 4 by kano yoshinori and richard bozulich
-free download get strong at the opening by richard bozulich and rob van zeijst
-free download get strong at joseki vol. 1 by richard bozulich and rob van zeijst
-free download get strong at joseki vol. 2 by richard bozulich and rob van zeijst
-free download get strong at joseki vol. 3 by richard bozulich and rob van zeijst
-free download get strong at invading by richard bozulich and rob van zeijst
-free download get strong at attacking by richard bozulich and rob van zeijst
-free download get strong at tesuji by richard bozulich and rob van zeijst
-free download get strong at the endgame by richard bozulich and rob van zeijst
- Logical Chess: Move by Move by Irving Chernev
- This is another classic chess book that is highly recommended by many chess players and teachers. It was written by Irving Chernev, who was a prolific chess author and an expert player. In this book, he analyzes 33 master games in detail and explains every move with simple and logical reasoning. He shows how each move contributes to the overall plan and strategy of the game. He also points out the mistakes and blunders made by both sides and how to avoid them. This book is ideal for beginners and intermediate players who want to improve their understanding and decision-making skills.
- Modern Chess Strategy by Ludek Pachman
- This is a comprehensive and advanced chess book that covers all aspects of modern chess strategy. It was written by Ludek Pachman, who was a grandmaster and a leading theoretician of his time. In this book, he explains the principles and concepts of chess strategy in depth and with clarity. He covers topics such as the center, the pawn structure, the pieces, the initiative, the attack, the defense, the exchange, the endgame, and more. He also provides many examples and diagrams to illustrate his points. This book is suitable for intermediate and advanced players who want to master the art of chess strategy.
- Conclusion
- Chess is a fascinating and rewarding game that can enrich your life in many ways. It can help you develop your mental abilities, your creativity, your personality, and your enjoyment. If you want to learn how to play chess or improve your chess skills, you can benefit from reading some free chess books online. There are many websites that offer free chess ebooks in PDF format that you can download or read online. We have shown you some of the best websites where you can find free chess books online and recommended some of the most useful and interesting ones to download. Whether you are a beginner or an advanced player, you will surely find something that suits your level and interest.
- To improve your chess skills, you should not only read books but also practice regularly. You can practice online or offline with other players or with computer programs. You can also watch videos or listen to podcasts that teach you chess tips and tricks. You can also join a chess club or a community where you can meet other chess enthusiasts and learn from them.
- Chess is a game that requires constant learning and improvement. The more you play, the more you learn, and the more you enjoy. We hope that this article has helped you find some free chess books online that will help you on your chess journey.
- FAQs
- What are some of the benefits of playing chess?
- Some of the benefits of playing chess are:
-
-- It improves your memory, concentration, logic, creativity, problem-solving, planning, self-awareness, and calmness under pressure.
-- It enhances your academic performance, especially in math, science, and language.
-- It boosts your confidence, self-esteem, social skills, and emotional intelligence.
-- It reduces stress, anxiety, depression, and boredom.
-- It provides entertainment, fun, challenge, and satisfaction.
-
- How long does it take to learn chess?
- There is no definitive answer to this question as it depends on many factors such as your age, your interest, your motivation, your aptitude, your method of learning, your frequency of practice, your level of difficulty, etc. However, some general guidelines are:
-
-- You can learn the basic rules of chess in a few hours or days.
-- You can learn the basic moves and strategies of chess in a few weeks or months.
-- You can learn the advanced techniques and theories of chess in a few years or decades.
-- You can never stop learning chess as there is always something new to discover or improve.
-
- What are some of the best websites to play chess online?
- Some of the best websites to play chess online are:
-Lichess.org: This is a free and open-source website for playing chess online. It has a simple and user-friendly interface and offers various features such as live and correspondence games, puzzles, studies, analysis, tournaments, teams, forums, and more.
-Chess24.com: This is a premium website for playing chess online. It has a modern and sleek interface and offers various features such as live and correspondence games, puzzles, lessons, articles, videos, tournaments, events, news, and more.
-Chessbase.com: This is a professional website for playing chess online. It has a sophisticated and powerful interface and offers various features such as live and correspondence games, puzzles, database, analysis, training, coaching, news, and more.
-
- What are some of the best chess apps for mobile devices?
- Some of the best chess apps for mobile devices are:
-
-- Chess.com: This is the mobile version of the Chess.com website. It has the same features and functions as the website and allows you to play chess online or offline with other players or with computer programs.
-- Lichess: This is the mobile version of the Lichess.org website. It has the same features and functions as the website and allows you to play chess online or offline with other players or with computer programs.
-- Chess Tactics Pro: This is a chess app that focuses on improving your chess tactics. It has thousands of puzzles for different levels and themes that you can solve online or offline.
-- Magnus Trainer: This is a chess app that helps you learn chess from the world champion Magnus Carlsen. It has hundreds of lessons, games, quizzes, and exercises that cover various aspects of chess.
-- DroidFish: This is a chess app that uses the powerful Stockfish engine to analyze your games and moves. It has a simple and intuitive interface and allows you to play chess online or offline with other players or with computer programs.
-
- How can I download free chess books in PDF format?
- To download free chess books in PDF format, you can follow these steps:
-
-- Visit one of the websites that offer free chess books online, such as Project Gutenberg, InfoBooks, Chess Stack Exchange, or others.
-- Search for the book that you want to download by using keywords or browsing categories.
-- Click on the book title or the download link to open the book in PDF format.
-- Save the book to your device by clicking on the download button or using the right-click menu.
-- Enjoy reading the book on your device or print it out if you prefer.
-
-197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Enjoy Real Gangster Crime 2 with Unlimited Money MOD APK.md b/spaces/congsaPfin/Manga-OCR/logs/How to Enjoy Real Gangster Crime 2 with Unlimited Money MOD APK.md
deleted file mode 100644
index d451c6250e1ca5b9304e7258dff12bfcefd8846b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Enjoy Real Gangster Crime 2 with Unlimited Money MOD APK.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-Real Gangster Crime 2: A Review of the Game and How to Get Unlimited Money Mod APK
-If you are a fan of action, adventure, and simulation games, you might have heard of Real Gangster Crime 2. This game is a sequel to the popular Real Gangster Crime, which lets you explore a city full of gang wars, police chases, and crime simulators. In this article, we will review the game and show you how to get unlimited money mod apk for it.
-real gangster crime 2 unlimited money mod apk
-DOWNLOAD –––––>>> https://urlca.com/2uOgae
-What is Real Gangster Crime 2?
-Real Gangster Crime 2 is a free action game developed by Naxeex Studio. It is available for Android devices on Google Play Store. The game has over 10 million downloads and a rating of 4.1 stars out of 5. The game is rated Mature 17+ for violence, blood, and drug references.
-Features of the game
-The game has many features that make it fun and exciting to play. Some of them are:
-
-- A great new city with sand beaches, great architecture, and tourist attractions
-- Multiple profit tasks with cool rewards
-- A variety of weapons, vehicles, and outfits to choose from
-- A helicopter to observe the city from above
-- A choice of factions to join and fight against
-
-Gameplay and graphics
-The gameplay of Real Gangster Crime 2 is similar to other open-world games like GTA. You can roam around the city, complete missions, fight enemies, steal cars, and cause chaos. You can also customize your character and upgrade your skills. The game has realistic physics and ragdoll effects that make the action more thrilling. The graphics of the game are decent, but not very impressive. The city looks colorful and detailed, but some textures are low-quality and some animations are stiff. The sound effects and music are also average, but they fit the theme of the game well.
-Pros and cons
-Like any other game, Real Gangster Crime 2 has its pros and cons. Here are some of them:
-
-Pros | Cons
-Free to play | Contains ads and in-app purchases
-Easy to control | Sometimes buggy and laggy
-Addictive and fun | Repetitive and boring after a while
-Diverse and dynamic | Lacks depth and story
-
- What is unlimited money mod apk?
-A mod apk is a modified version of an original app that has some features unlocked or added. An unlimited money mod apk is a mod apk that gives you unlimited money or coins in the game. This means you can buy anything you want without worrying about running out of cash.
-Benefits of using mod apk
-Using a mod apk can have some benefits for your gaming experience. Some of them are:
-
-- You can enjoy the game without any limitations or restrictions
-- You can access premium items and features that are otherwise unavailable or expensive
-- You can enhance your skills and performance in the game
-- You can have more fun and excitement in the game
-
- Risks of using mod apk
-However, using a mod apk can also have some risks for your device and account. Some of them are:
-
-- You can get banned or suspended from the game for violating the terms of service
-- You can expose your device to malware or viruses that can harm your data or privacy
-- You can lose your progress or account if the mod apk is not compatible or updated
-- You can ruin the original gameplay and challenge of the game
-
- How to download and install mod apk
-If you still want to try the unlimited money mod apk for Real Gangster Crime 2, you need to follow these steps:
-
-
-- Find a reliable and safe source for the mod apk. You can search online or use the link below
-- Download the mod apk file to your device. Make sure you have enough storage space and a stable internet connection
-- Enable the installation of unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on
-- Locate the mod apk file on your device and tap on it to install it. Follow the instructions on the screen and wait for the installation to finish
-- Launch the game and enjoy the unlimited money mod apk
-
- Conclusion
-Real Gangster Crime 2 is a fun and addictive action game that lets you experience the life of a gangster in a new city. The game has many features, but also some drawbacks. If you want to enhance your gaming experience, you can try the unlimited money mod apk, but be aware of the risks involved. We hope this article helped you learn more about the game and how to get the mod apk.
- Summary of the main points
-In this article, we have covered:
-
-- What is Real Gangster Crime 2 and what are its features, gameplay, graphics, pros, and cons
-- What is unlimited money mod apk and what are its benefits and risks
-- How to download and install unlimited money mod apk for Real Gangster Crime 2
-
- Recommendations for the game and mod apk
-Here are some recommendations for playing the game and using the mod apk:
-
-- Play the game responsibly and do not engage in illegal or harmful activities in real life
-- Use the mod apk at your own risk and discretion. Do not use it for cheating or harming other players
-- Backup your data and device before installing the mod apk. Update the mod apk regularly to avoid compatibility issues
-- Support the developers of the game by buying in-app purchases or watching ads if you like the game
-- Have fun and enjoy the game!
-
- FAQs
-Here are some frequently asked questions about Real Gangster Crime 2 and unlimited money mod apk:
- Q: Is Real Gangster Crime 2 offline or online?
-A: Real Gangster Crime 2 is an offline game that does not require an internet connection to play. However, some features like ads or in-app purchases may require an internet connection.
- Q: How can I get more money in Real Gangster Crime 2 without using mod apk?
-A: You can get more money in Real Gangster Crime 2 by completing missions, stealing cars, robbing people, or finding hidden cash around the city. You can also watch ads or buy money with real money.
- Q: Is unlimited money mod apk safe to use?
-A: Unlimited money mod apk is not safe to use as it can cause problems for your device and account. It can also violate the terms of service of the game and get you banned or suspended. Use it at your own risk.
- Q: Can I play Real Gangster Crime 2 on PC?
-A: Yes, you can play Real Gangster Crime 2 on PC by using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC. Some popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. However, playing on PC may affect your performance and experience.
- Q: What are some similar games to Real Gangster Crime 2?
-197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Free Rewards and Remove Ads in Fill The Fridge Mod APK.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Free Rewards and Remove Ads in Fill The Fridge Mod APK.md
deleted file mode 100644
index 347532337a42025a8da15d6d8762ca7c1cbadd76..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Free Rewards and Remove Ads in Fill The Fridge Mod APK.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-Fill in the Fridge Mod APK: A Fun and Easy Way to Play the Game
- Do you love playing casual games that test your creativity and logic? Do you enjoy filling up your fridge with delicious food and drinks? If you answered yes to these questions, then you might want to try out Fill in the Fridge, a popular game that lets you do just that. But what if you want to make the game more fun and easy? Well, you can do that by using Fill in the Fridge Mod APK, a modified version of the game that gives you unlimited money and other advantages. In this article, we will tell you everything you need to know about Fill in the Fridge Mod APK, including what it is, how to download and install it, and how to play it. Let's get started!
-fill in the fridge mod apk
Download Zip ⭐ https://urlca.com/2uO4FZ
- What is Fill in the Fridge?
- Fill in the Fridge is a casual game developed by SayGames, a famous developer of addictive and entertaining games. The game is available for both Android and iOS devices, and has been downloaded over 10 million times on Google Play Store alone. The game has a rating of 4.1 out of 5 stars, based on more than 100,000 reviews.
- The gameplay of Fill in the Fridge
- The gameplay of Fill in the Fridge is simple and straightforward. You have a fridge with empty slots, and you have to fill them up with food and drinks. You can drag and drop items from a conveyor belt into the fridge, but you have to be careful not to waste any space or overlap any items. You also have to follow some rules, such as placing items of the same color or shape together, or avoiding items that are not suitable for the fridge, such as hot dogs or ice cream cones. You have to complete each level within a limited time, and you can earn coins and stars based on your performance.
- The features of Fill in the Fridge
- Fill in the Fridge has many features that make it an enjoyable and relaxing game. Some of these features are:
-
-- Beautiful graphics and animations: The game has colorful and realistic graphics that make the food and drinks look appetizing and tempting. The game also has smooth and fluid animations that make the gameplay more dynamic and fun.
-- Various levels and challenges: The game has hundreds of levels with different layouts and difficulties. You can unlock new items and fridges as you progress through the game, and face new challenges and surprises along the way.
-- Funny sound effects and music: The game has amusing sound effects that match the actions and reactions of the items. The game also has cheerful and catchy music that creates a positive and lively atmosphere.
-- Easy controls and interface: The game has simple and intuitive controls that allow you to drag and drop items with ease. The game also has a user-friendly interface that shows you your score, time, coins, stars, hints, and settings.
-
- What is Fill in the Fridge Mod APK?
- Fill in the Fridge Mod APK is a modified version of Fill in the Fridge APK, allowing you to easily complete all tasks and requests in the game. Instead of spending a lot of time and money to achieve rewards, you can use Fill in the Fridge Mod APK to reach your goals in a shorter time. Launch Fill in the Fridge Mod APK: Once the installation is complete, you can find the Fill in the Fridge Mod APK icon on your device's home screen or app drawer. Tap on it and enjoy playing the game with unlimited money and unlocked items and fridges.
-
-
- The precautions to take before downloading and installing Fill in the Fridge Mod APK
- Before you download and install Fill in the Fridge Mod APK, you should take some precautions to avoid any problems or issues that may arise. Some of these precautions are:
-fill the fridge game mod apk
-fill the fridge 3d mod apk
-fill the fridge mod apk download
-fill the fridge mod apk unlimited money
-fill the fridge mod apk latest version
-fill the fridge mod apk android 1
-fill the fridge mod apk no ads
-fill the fridge mod apk free rewards
-fill the fridge mod apk hack
-fill the fridge mod apk offline
-fill the fridge simulation game mod apk
-fill the fridge puzzle game mod apk
-fill the fridge realistic 3d design mod apk
-fill the fridge unlock hundreds of items mod apk
-fill the fridge organize everything your way mod apk
-fill the fridge enjoy the feeling of satisfaction mod apk
-fill the fridge put the items into an empty fridge mod apk
-fill the fridge casual game mod apk
-fill the fridge relaxing game mod apk
-fill the fridge fun game mod apk
-fill the fridge premium game mod apk
-fill the fridge pro game mod apk
-fill the fridge full game mod apk
-fill the fridge cracked game mod apk
-fill the fridge free game mod apk
-download fill in the fridge mod apk for android
-download fill in the fridge mod apk for ios
-download fill in the fridge mod apk for pc
-download fill in the fridge mod apk for windows 10
-download fill in the fridge mod apk for mac
-how to install fill in the fridge mod apk
-how to play fill in the fridge mod apk
-how to update fill in the fridge mod apk
-how to get free rewards in fill in the fridge mod apk
-how to unlock all items in fill in the fridge mod apk
-how to remove ads in fill in the fridge mod apk
-how to hack fill in the fridge mod apk
-how to get unlimited money in fill in the fridge mod apk
-how to get latest version of fill in the fridge mod apk
-how to get 3d design in fill in the fridge mod apk
-best tips and tricks for fill in the fridge mod apk
-best guide and walkthrough for fill in the fridge mod apk
-best review and rating for fill in the fridge mod apk[^1^]
-best alternative and similar games to fill in the fridge mod apk[^1^]
-
-- Backup your data: You should backup your data, such as your game progress, settings, and preferences, before you install Fill in the Fridge Mod APK. This will help you restore your data in case something goes wrong or you want to switch back to the original version of the game.
-- Disable antivirus programs: You should disable any antivirus programs or firewalls that may interfere with the download and installation of Fill in the Fridge Mod APK. These programs may detect Fill in the Fridge Mod APK as a threat and block it from running on your device. You can enable them again after you have successfully installed Fill in the Fridge Mod APK.
-- Uninstall the original version of the game: You should uninstall the original version of Fill in the Fridge from your device before you install Fill in the Fridge Mod APK. This will prevent any conflicts or errors that may occur due to having two versions of the same game on your device.
-
- How to play Fill in the Fridge Mod APK?
- Playing Fill in the Fridge Mod APK is similar to playing the original version of the game, except that you have more money and options to choose from. You can use these advantages to make the game more fun and easy for yourself. Here are some tips and tricks on how to play Fill in the Fridge Mod APK:
- The tips and tricks to play Fill in the Fridge Mod APK
- Some of the tips and tricks that can help you play Fill in the Fridge Mod APK better are:
-
-- Use hints wisely: With Fill in the Fridge Mod APK, you can buy unlimited hints that can show you where to place an item or how to fill up a fridge. However, you should not rely on them too much, as they can make the game less challenging and interesting. You should use them only when you are stuck or confused, and try to figure out the solution by yourself first.
-- Try different items and fridges: With Fill in the Fridge Mod APK, you can access all the items and fridges that are available in the game. You should try different combinations of items and fridges, and see how they affect your score and gameplay. You can also experiment with different themes and styles, such as fruits, vegetables, desserts, drinks, etc.
-- Avoid wasting space or overlapping items: With Fill in the Fridge Mod APK, you can skip any level that you find too hard or boring. However, you should still try to play each level as best as you can, and avoid wasting space or overlapping items in your fridge. This will help you improve your skills and logic, and also earn more coins and stars.
-
- The challenges and rewards to play Fill in the Fridge Mod APK
- Some of the challenges and rewards that you can encounter while playing Fill in the Fridge Mod APK are:
-
-- New levels and modes: The game has new levels and modes that are added regularly by the developer. These levels and modes have different layouts, rules, and difficulties that can challenge your creativity and logic. You can also compete with other players online and see who can fill up their fridges faster and better.
-- Achievements and leaderboards: The game has various achievements that you can unlock by completing certain tasks or goals in the game. These achievements can show your progress and performance in the game, and also give you extra coins and stars. You can also check your rank on the global leaderboards and see how you compare with other players around the world.
-- Cool graphics and sounds: The game has cool graphics and sounds that make it more enjoyable and immersive. You can see realistic animations of food and drinks moving on a conveyor belt or falling into a fridge. You can also hear funny sound effects of items popping, sizzling, or splashing. The game also has upbeat music that matches the mood of each level.
-
- Conclusion
- In conclusion, Fill in the Fridge Mod APK is a fun and easy way to play the game of Fill in the Fridge, a casual game that tests your creativity and logic. You can use Fill in the Fridge Mod APK to get unlimited money and unlocked items and fridges, and enjoy the game without any ads or restrictions. However, you should also be careful of the potential risks, lack of updates, and lack of challenge that come with using Fill in the Fridge Mod APK. You should always download and install Fill in the Fridge Mod APK from a trusted source, backup your data, disable antivirus programs, and uninstall the original version of the game before using it. You should also use hints wisely, try different items and fridges, and avoid wasting space or overlapping items while playing the game. You can also face new levels and modes, unlock achievements and leaderboards, and enjoy cool graphics and sounds while playing the game. We hope that this article has helped you learn more about Fill in the Fridge Mod APK, and that you have fun filling up your fridges with delicious food and drinks.
- FAQs
- Here are some frequently asked questions about Fill in the Fridge Mod APK:
-
-- Q: Is Fill in the Fridge Mod APK safe to use?
-- A: Fill in the Fridge Mod APK is not an official version of the game, and it may contain viruses, malware, or other harmful elements that can damage your device or compromise your security. You should always download and install Fill in the Fridge Mod APK from a trusted source and scan it with an antivirus program before using it.
-- Q: How can I update Fill in the Fridge Mod APK?
-- A: Fill in the Fridge Mod APK may not be compatible with the latest version of the game, and it may not receive regular updates or bug fixes from the developer. Therefore, you should always check for updates and download the latest version of Fill in the Fridge Mod APK from a reliable website.
-- Q: How can I restore my data if I switch back to the original version of the game?
-- A: You should backup your data, such as your game progress, settings, and preferences, before you install Fill in the Fridge Mod APK. This will help you restore your data in case you want to switch back to the original version of the game. You can use a cloud service or a local storage device to backup your data.
-- Q: How can I contact the developer of Fill in the Fridge?
-- A: You can contact the developer of Fill in the Fridge by visiting their official website here, or by sending them an email at support@saygames.by.
-- Q: How can I rate and review Fill in the Fridge?
-- A: You can rate and review Fill in the Fridge by visiting its page on Google Play Store here, or on App Store here. You can also share your feedback and suggestions with other players on social media platforms such as Facebook, Twitter, or Instagram.
-
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/VIMAGE 3D live photo animation APK The best app for making your photos come to life.md b/spaces/congsaPfin/Manga-OCR/logs/VIMAGE 3D live photo animation APK The best app for making your photos come to life.md
deleted file mode 100644
index 41e71e349548443dd9f0a810b3d321117d065fb0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/VIMAGE 3D live photo animation APK The best app for making your photos come to life.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-VIMAGE 3D Live Photo Animation APK: How to Turn Your Photos into Cinemagraphs
- Have you ever wanted to make your photos come alive with motion and sound? If so, you might be interested in VIMAGE 3D Live Photo Animation APK, a cinemagraph creator app that lets you animate your images and add hundreds of moving effects, presets, filters, and overlays onto them. In this article, we will show you what VIMAGE is, why you should use it, how to use it, and some tips and tricks to make your cinemagraphs amazing.
- What is VIMAGE 3D Live Photo Animation APK?
- VIMAGE 3D Live Photo Animation APK is an app that allows you to create cinemagraphs, which are photos that contain a subtle motion loop. Cinemagraphs are a popular form of visual storytelling that can capture attention and evoke emotions. With VIMAGE, you can easily turn any photo into a cinemagraph by adding one or more effects that animate a part of the image. You can also add sounds, texts, filters, and overlays to enhance your cinemagraph.
-vimage 3d live photo animation apk
DOWNLOAD »»» https://urlca.com/2uOaXC
- VIMAGE has many features that make it a powerful and versatile cinemagraph creator app. Some of them are:
-
-- New AI-Sky feature: You can select, change, and animate the sky in your photo in seconds. You can choose from over 100 presets of different skies, such as sunny, cloudy, rainy, stormy, sunset, night, etc.
-- 3D picture animation feature: You can create a parallax animation effect by tilting your phone or using your finger. This feature adds depth and realism to your cinemagraph.
-- Add custom sounds: You can add sound effects or music to your cinemagraph to make it more immersive and expressive. You can choose from the built-in library or upload your own sounds.
-- Tell your story with text: You can add custom texts to your cinemagraph to convey a message or a caption. You can customize the font, size, color, alignment, and animation of the text.
-- Add up to 10 different effects: You can add up to 10 different fully customizable effects onto a single photo. You can choose from over 200 effects in various categories, such as nature, light, fire, water, smoke, animals, etc.
-- Export in high quality: You can export your cinemagraph in high quality up to 2560p. You can also choose the format (GIF or video) and the resolution of your output.
-- Use the Flow or Stretch animator to customize the motion of your photo: The Flow animator lets you draw the direction of the motion, while the Stretch animator lets you stretch or shrink the photo along an axis.
-- Adjust the animation speed, direction, and loop mode: You can fine-tune the animation of your cinemagraph by adjusting the speed, direction, and loop mode of the effects. You can also reverse the animation or make it bounce.
-- Apply filters and overlays: You can apply various filters and overlays to your cinemagraph to change its mood and style. You can choose from over 70 filters and overlays, such as vintage, noir, sepia, glitch, etc.
-- Share your cinemagraph with the world: You can share your cinemagraph with the VIMAGE community and get feedback and inspiration from other users. You can also share your cinemagraph on social media platforms, such as Instagram, Facebook, TikTok, etc.
-
- Why use VIMAGE 3D Live Photo Animation APK?
- VIMAGE 3D Live Photo Animation APK is a great app for anyone who wants to create stunning cinemagraphs with ease and fun. Here are some of the reasons why you should use VIMAGE:
- Engage your audience with moving pictures
- Cinemagraphs are a powerful way to capture attention and convey emotions. They are more dynamic than static photos, but less distracting than videos. They can create a sense of wonder, curiosity, nostalgia, or excitement in your viewers. Cinemagraphs are perfect for social media posts, stories, ads, blogs, websites, or any other digital platform where you want to stand out and impress your audience.
- Express your creativity with hundreds of effects and presets
- VIMAGE gives you the freedom to express your creativity and turn your photos into art. You can choose from hundreds of effects and presets that suit your theme and style. You can also mix and match different effects and customize them to your liking. You can create anything from realistic to surreal cinemagraphs with VIMAGE.
- Share your art with the VIMAGE community and beyond
- VIMAGE is not just an app, but also a community of passionate cinemagraph makers. You can join the VIMAGE community and discover amazing cinemagraphs from other users. You can also share your own cinemagraphs and get feedback and support from the community. You can also participate in contests and challenges and win prizes and recognition. Moreover, you can share your cinemagraphs on other platforms and reach a wider audience.
- How to use VIMAGE 3D Live Photo Animation APK?
- Creating cinemagraphs with VIMAGE is easy and fun. Here is a step-by-step guide to help you get started:
- Download and install the app from Google Play or AppBrain
- The first step is to download and install the app on your Android device. You can find the app on Google Play or AppBrain by searching for "VIMAGE 3D Live Photo Animation APK". The app is free to download and use, but it contains ads and in-app purchases. You can remove the ads and unlock more features by upgrading to the premium version.
- Choose a photo from your gallery or the stock library
- The next step is to choose a photo that you want to animate. You can either select a photo from your device's gallery or use one of the stock photos provided by VIMAGE. The app supports various formats, such as JPG, PNG, GIF, etc. You can also take a photo with your camera within the app.
- Add effects, filters, overlays, sounds, and texts to your photo
- The fun part begins here. You can now add various elements to your photo to make it come alive. You can tap on the "+" button at the bottom of the screen to access the menu of effects, filters, overlays, sounds, and texts. You can browse through different categories of effects and choose one or more that you like. You can also search for specific effects by using keywords.
- Once you select an effect, you can drag it onto your photo and place it where you want it. You can also resize, rotate, flip, or delete it by using the buttons at the top of the screen. You can repeat this process for as many effects as you want.
- You can also apply filters and overlays to change the mood and style of your photo. You can adjust the intensity of the filters and overlays by using the slider at the bottom of the screen.
- You can also add sounds and texts to your photo by tapping on the icons at the bottom left corner of the screen. You can choose from the built-in library of sounds or upload your own sounds. You can also add custom texts and customize their font, size, color, alignment, and animation.
- Adjust the animation speed, direction, and loop mode
- After adding all the elements to your photo, you can adjust the animation of your cinemagraph by tapping on the play button at the top right corner of the screen. You can see how your cinemagraph looks and make any changes if needed. You can also adjust the speed, direction, and loop mode of the effects by tapping on them and using the buttons at the bottom of the screen. You can also reverse the animation or make it bounce by using the icons at the top of the screen.
- Export and share your cinemagraph as a GIF or video
- Once you are happy with your cinemagraph, you can export it as a GIF or video by tapping on the export button at the top right corner of the screen. You can choose the format, resolution, and quality of your output. You can also add a watermark or a logo to your cinemagraph if you want. The app will save your cinemagraph to your device's gallery and also to your VIMAGE profile.
- You can also share your cinemagraph with the VIMAGE community and get feedback and inspiration from other users. You can also share your cinemagraph on social media platforms, such as Instagram, Facebook, TikTok, etc. by using the share button at the bottom right corner of the screen.
- Tips and tricks for using VIMAGE 3D Live Photo Animation APK
- To make your cinemagraphs more amazing and professional, here are some tips and tricks that you can use:
- Use the AI-Sky feature to change the sky in your photo
- If you want to change the mood and atmosphere of your photo, you can use the AI-Sky feature to change the sky in your photo in seconds. You can choose from over 100 presets of different skies, such as sunny, cloudy, rainy, stormy, sunset, night, etc. The app will automatically detect and replace the sky in your photo with a realistic animation. You can also adjust the brightness, contrast, saturation, and hue of the sky to match your photo.
- Use the 3D picture animation feature to create a parallax effect
- If you want to add depth and realism to your photo, you can use the 3D picture animation feature to create a parallax effect. This feature allows you to tilt your phone or use your finger to move your photo in 3D space. The app will create a perspective shift that makes your photo look like it has layers. You can also adjust the sensitivity and angle of the tilt to control the effect.
- Use the Flow or Stretch animator to customize the motion of your photo
- If you want to create a custom motion for your photo, you can use the Flow or Stretch animator to draw the direction or shape of the motion. The Flow animator lets you draw a path that the photo will follow, while the Stretch animator lets you draw a curve that the photo will bend along. You can also adjust the speed, direction, and loop mode of the animation.
- Use the color, hue, brightness, and contrast tools to blend the effects with your photo
- If you want to make your effects look more natural and harmonious with your photo, you can use the color, hue, brightness, and contrast tools to adjust the appearance of the effects. You can access these tools by tapping on an effect and using the buttons at the bottom of the screen. You can also use the eraser tool to erase parts of the effect that you don't want.
- Use the crop tool to fit your cinemagraph to different aspect ratios
- If you want to fit your cinemagraph to different aspect ratios, such as square, portrait, landscape, etc., you can use the crop tool to change the size and shape of your photo. You can access the crop tool by tapping on the icon at the top left corner of the screen. You can also rotate or flip your photo by using the icons at the top of the screen.
- Conclusion and FAQs
- VIMAGE 3D Live Photo Animation APK is a fantastic app that lets you create stunning cinemagraphs with ease and fun. You can animate your photos and add hundreds of moving effects, presets, filters, overlays, sounds, and texts to them. You can also adjust the animation speed, direction, and loop mode of the effects. You can export your cinemagraphs in high quality and share them with the VIMAGE community and other platforms. You can also use some tips and tricks to make your cinemagraphs more amazing and professional.
- If you have any questions about VIMAGE 3D Live Photo Animation APK, here are some frequently asked questions and their answers:
- Q: How much does VIMAGE 3D Live Photo Animation APK cost?
-A: The app is free to download and use, but it contains ads and in-app purchases. You can remove the ads and unlock more features by upgrading to the premium version. The premium version costs $19.99 per year or $2.99 per month.
- Q: What are the minimum requirements for VIMAGE 3D Live Photo Animation APK?
-A: The app requires Android 5.0 or higher and at least 100 MB of free storage space.
- Q: How can I contact VIMAGE 3D Live Photo Animation APK support?
-A: You can contact VIMAGE support by sending an email to support@vimageapp.com or by using the feedback option within the app.
- Q: How can I learn more about VIMAGE 3D Live Photo Animation APK?
-A: You can learn more about VIMAGE by visiting their official website at https://vimageapp.com/ or by following their social media accounts on Instagram, Facebook, Twitter, YouTube, etc.
- Q: How can I join the VIMAGE community?
-A: You can join the VIMAGE community by creating a profile within the app and sharing your cinemagraphs with other users. You can also participate in contests and challenges and win prizes and recognition.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Diskwarrior 5 Serial Number 222 Why You Need This Powerful Tool to Repair and Optimize Your Mac.md b/spaces/contluForse/HuggingGPT/assets/Diskwarrior 5 Serial Number 222 Why You Need This Powerful Tool to Repair and Optimize Your Mac.md
deleted file mode 100644
index a0113c792af55ccd6a34a6e989adcb40d27f4eda..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Diskwarrior 5 Serial Number 222 Why You Need This Powerful Tool to Repair and Optimize Your Mac.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-A-Dock
v2.7
OOX932433
v2.6.7
KIRI39639
v2.5
PHRK44550
v2.4
DOCK44625
v2.3.2
WXGQ65772
v2.3.0
KRAK52350
v2.3fc2
(see tip)
v2.2.1
DOCK44625
v2.1.3
123419864
077719965
v2.0.1 Deutsch
name: fill it or leave it blank
code: AUQG34638
#
PKFK51750
v1.2.1
Name: (any or Cendryom)
Organization: (any)
Registration Code: 222220000
v1.x
Name: HotSix
Code: 607
v1.0
Code : 000018432
A-Dock 2.3fc2
1. Install version 2.3fc2
2. Restart
3. Download 2.2.2 to your desktop (from
)
4. Open the 2.2.2 control panel
5. Register using the old serial #:
DOCK44625
6. That's it! Once you open the 2.3fc2
control panel, you'll
see you're registered
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Diskwarrior 5 Serial Number 222
DOWNLOAD … https://ssurll.com/2uzxTq
-aClock
v2.5.2
8365qre14
8365qrel4
In order to enter the serial number, you must hold down the option key when pressing the Register button
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-ActionLine
v1.6
code: 089711234
code: 08971xxxx
(x any number 0-9)
v1.5
code: 069610000
code: 06961xxxx
(x any number 0-9)
v1.0
06961xxxx
(x any number 0-9)
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Add/Strip
v3.4
7314840
v3.4.x
Crack
Open "Edit Add/Strip" with Resorcerer
Open CODE 1, Anon 53
Anon53+0086: _SysBeep
Anon53+0088: bra Anon53+$049E --> Change to NOP (w/Resorcerer Patch
Menu)
Anon53+008C: subq.w #$4,SP
You can then open "Add/Strip" with "Edit Add/Strip", choose Personalize from
the Customize menu, and register with any number.
link: -strip-34.hqx.txt
link: -strip-322.html
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Adobe Products
Adobe softwares WARNING!!
Before installing illustrator 9 or
Photoshop 6.0 cut off your
connection. Before entering any
information into the
personalization dialog (serial
number, name, etc.).
Install the software then go into
System Folder > Application Support
> Adobe > Web : and then compress
the following files :
> AdobeOnline Inventory
> Adoberegistrationeng.html or Adoberegistrationenu.html
> Adoberegistrationfra.html
> AdobeWeb.dll
> AOM
You can now open your apps while
your connection is on!! Those
!#$@ty modules in illustrator 9 or
Photoshop 6.0 send directly your
registration number and products
informations to "Adobe's girls". so
beware!!
As recently reported by
MacInTouch.com, these modules send
your registration name and number
directly to Adobe.
Make sure to read the privacy
statement by Adobe. This is where
they inform you of the registration
number being sent.
From a reliable Adobe source
The format is the following:
PPLVVVZZXXXXXX-CCC (Single License)
PPLVVVZZXXXXXX-NNN-CCC (Multi License)
PP: Product Identifier
L: Language Identifier
W = US
E = English International
F = French
G = German
I = Italian
P = Spanish
J = Japanese
T = Chinese
K = Korean
VVV: Product Version
ZZ: Package ID/Media Type X = NFR 1 = CD
U = Upgrade 2 = CD (Bundle, I think)
B = Bundle 3 = 3,5" Floppy
R = Regular 5 = 5,25" Floppy
E = Evaluation 7 = CD
P = ?
XXXXXX: Sequence Number, 6 digits
NNN: Number of licenses
CCC: Checksum
When calculating the checksum with Adobe Checksum 2.1 (included), you must
fill the Header field with the 8 first characters of the SN (PPLVVVZZ), the
Lower and Upper fields with the Sequence Number (6 digits (XXXXXX)), and the
Users field with NNN (Number of licenses).
Some Mac Products Prefixes (Product Identifier):
Acrobat Pro < 3.0 : AN
Acrobat WorkGroup 2.x : DE
Acrobat Pro . 3.0 : AE
Acrobat Distiller
-AHOY!
v1.2.x
(see tip)
needs number generator
The algorythm of "AHOY!"
The format is:
Code: AY-xxxx-01
Reg#: xxxxx
This exchange table is:
a=B b=C c=D d=E e=F f=G g=H h=I i=J j=K k=L l=M m=N
n=O o=A p=B q=C r=D s=E t=F u=G v=H w=I x=J y=K z=L
This swap is:
AY- a b c d -01 -> * b c d a
- - - - - - - -
| | | | | | | |
| | | +------------|-|-+ |
| | +--------------|-+ |
| +----------------+ |
+------------------------+
Example:
The code is making random at start up.
But, this algorithm is very simple.
For example, if it made AY-kiri-01 at start up,
look at the exchange table:
k=L, i=J, r=D, i=J
yes, kiri is now exchanged LJDJ.
next, execute the swap:
AY-kiri-01 -> *JDJL
* is wildcard, so anything in it (Must be Uppercase).
Now Reg# is AJDJL, BJDJL, CJDJL .....ZJDJL.
link:
BSNG
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Air Combat
v1.2
Change 'CODE' 9
Offset $46CE from $6624 to $6024
and any number you enter in the Query-Dialog will work.
v1.01E
CRACK: removes pasword protection: change CODE 9 at Offset 44F0 from 660C to 4E71 and at Offset 44FA from 6700 to 6000
v1.0J
EAM-3004
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Aladdin DragStrip
v3.7.1
name : (any)
serial: 66666
code : hhhhhd
v3.7.1J
Name :urajam
serial:74200
code :jwABBd
link: _mac.html
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-
-Aladdin MacHeadlines
v1.7
code: 11929968-0009-HOTSIX16
Look at the preferences window, there is a field called "Registration or License".
Enter the serial and make sure you have marked the checkbox left from the
field, then click the ok button at the bottom of the window. That's all.
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Alien Attack
v#
Serial: MUA9JLDMAZ39
Name: HotSix
Key : 4234-QWA2-FPQH-3232-2NUG
Before you register!
Inside your Preferences folder you'll find a file named "Finder Future Prefs". Open this file with BBEdit or SimpleText and change the serial to the one above.
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-AlphaMania
v1.0.1
150703
it may run ONLY with director's serial:
DRM500-50272-87072-29378
v?
102257
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Ambrosia Software
Ok, Lets start. As you
probably know Ambrosia
Software serial numbers expire
30 days after they are issued
in an attempt to curb piracy.
I figured
that it is very easy to use
Ambrosia's expired serial
numbers. The article says that
the code once entered into the
app is good forever. What you
must do is find out when the
serial that you have was
posted/confirmed working (the
date). Once you have this set
your date back on your
computer until the software
accepts the code. Once you
have successfully registered
simply set the date back on
your computer to the current
date and enjoy. This should
work unless of course the
serial number you are trying
is blocked.
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Anarchie Pro
v3.8
( see --> Interarchy )
v3.7
Name: Akio
Code: C68829GKEBMXKDTE6I
Name: [k]rkckWorks
Code: 9BFSY85WYUGKFWUHR6
Name: The Rants
Code: E69MAFRIVHFAKXKKTU
Name: A User of Surfer's Gay Serials
Code: 2224338D2N6JCEDWG2
(see Tip)
v3.6
Name: Inpher
Code: 2224378DGU888X34VY
Name: Da M!
Code: 2224368F3PLZWJKDPK
Name: Da M!
Code: 2224368F3PQZWOKDLK
Name: Da M!
Code: 2224368I3P6ZW5KDKK
v3.5
name: MacsRule
code: 2224378F6XRYMJCOQ6
name: Inpher
code: 2224378CGUY88E34FY
v3.0
name: Macintosh
code: 2224358CUXYUME4OFS
name: I see It, I try it !
code: 2224348C9UYV8EA4F8
Anarchie 3.7 Serial Hack:
Open Anarchie in resedit.
Open resource STR# and scroll down to "Evil Serial".
Remove all the text strings and save Anarchie.
Just register with a 3.6 serial from surfers serials! Presto!
link:
BSNG
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Andromeda 3D Series II
v2.0.1
5M20304240-0816
5M20605157-1400
5M30400120-0441
5M20304526-3390
0P20000000-0051
v1.0
xM20xxxxxx (x any number 0-9)
all 0's are zeros
link:
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
-Apple Quicktime
v5.0.2
Name : Pablo
Org. :
Code : PU4W-CWNN-CKUU-KR4K-A845
#gives "Future Pro Player Edition"
Name: Apple
Org.:
Code: 10db-c756-8a9c-a85c-dead
#gives "3.0/4.0 Pro Player Edition"
Name: MACOS QA
Org.:
Code: WT8Q-UQPJ-PAEU-P3RT-CA8D
#gives "5.0 Pro Player Edition"
#
Name : Mac User
Code : P4JX-8AJJ-TEET-XJPP-41A6
Name : QuickTime
Code : WATT-RUEM-4PME-XEJM-0C29
Name : No Windows
Code : ARU2-4TPU-RMQ8-WE84-B781
Name : Freeware
Code : MQR4-PUP8-PUJA-UGMX-B781
Name : System Part
Code : UUX3-2Q43-UPAQ-W8AR-B781
Name : Value Pack
Code : 488U-AWWT-R3G2-28JQ-B781
Name : Private
Code : JUMG-A82U-X2J8-GAQR-B781
Name : Open Source
Code : P8JE-WJUT-PRXT-GGTQ-9897
Name : Low Cost
Code : M4WR-WGEM-TER2-ERJT-9897
Name : Apples Finest
Code : 228J-4R8P-XMQT-QUM2-9897
v5.0.1
Name: Apple
Org.:
Code: 10db-c756-8a9c-a85c-dead
v5 for PC
Name: NSA_CRACKERZ_TEAM
Org.: NCT
Code: WUWM-GPPJ-T4GA-W2T3-5678
v5beta
Name: MACOS QA
Org.: Leave this blank
Code: WT8Q-UQPJ-PAEU-P3RT-CA8D
Name: PPC
Org: BUG
SN: 48F7-A869-FC3C-41E4-1234
Name: ZZZZZ
Code: 5A18-A82C-E81D-23FB-57AF
(old serials still work)
v4.1.3 Pro
Name: Apple
Code: 10DB-C756-8A9C-A85C-DEAD
v4.0.3 Pro
Name: Apple
Code: 10DB-C756-8A9C-A85C-DEAD
v4.0J Pro
Name: MoonDark
Code: DE70-D250-2DBA-A153-E882
v4.0b18
Name : Hotline user
Comany: I think you can use anything here, if not use nothing
Code : 4FF8-7A84-3424-3C26-9830
v4.0b11
name: QuickTime Developer
code: AJMG-QXJR-PRRJ-GUP4-QT4!
v3.0 Pro
Name: PPC
Org: BUG
SN: 48F7-A869-FC3C-41E4-1234
Name: Anonymous
SN: F7F9-D8CD-7CE6-1677-4321
Name: MoonDark
SN: 22C6-3A5A-D2CD-8D2A-FFFF
Name: Apple
SN: BD21-A97C-6910-6C23-FFFF
Name: Apple
SN: 10DB-C756-8A9C-A85C-DEAD
Name: Undefined
SN: 4AED-19ED-094F-1048-4321
v4.0: according to Apple, the same registration number used for QT 3
Pro works on QT4. In fact, if you've already got QT3 Pro installed and
install QT4 Pro over it, you'll find the same registration is
automatically used by QT4 Pro.
link:
BSNG
[ download.com | Versiontracker | c|net | Google | theSNITCH | hlsearch ]
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cozyanduofen/bingo/src/lib/isomorphic/browser.ts b/spaces/cozyanduofen/bingo/src/lib/isomorphic/browser.ts
deleted file mode 100644
index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000
--- a/spaces/cozyanduofen/bingo/src/lib/isomorphic/browser.ts
+++ /dev/null
@@ -1,11 +0,0 @@
-'use client'
-
-const debug = console.info.bind(console)
-
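-// Thin alias over the browser-native WebSocket: any extra constructor
-// arguments (e.g. Node-style options) are ignored, since super() is
-// called with only the address.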
-class WebSocketAlias extends WebSocket {
- constructor(address: string | URL, ...args: any) {
- super(address)
- }
-}
-
-export default { fetch, WebSocket: WebSocketAlias, debug }
diff --git a/spaces/crobbi/LipNet/README.md b/spaces/crobbi/LipNet/README.md
deleted file mode 100644
index 1d924afc7a50b042cbe75afe182d417e7f2ece94..0000000000000000000000000000000000000000
--- a/spaces/crobbi/LipNet/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LipNet
-emoji: 👁
-colorFrom: purple
-colorTo: red
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: streamlitapp.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cvlab/zero123-live/app.py b/spaces/cvlab/zero123-live/app.py
deleted file mode 100644
index 9fcd9d9dbdaf7802278dab617a2cd9188f6c806d..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123-live/app.py
+++ /dev/null
@@ -1,666 +0,0 @@
-'''
-conda activate zero123
-cd zero123
-python gradio_new.py 0
-'''
-
-import diffusers # 0.12.1
-import math
-import fire
-import gradio as gr
-import lovely_numpy
-import lovely_tensors
-import numpy as np
-import os
-import plotly.express as px
-import plotly.graph_objects as go
-import rich
-import sys
-import time
-import torch
-from contextlib import nullcontext
-from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
-from einops import rearrange
-from functools import partial
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.util import create_carvekit_interface, load_and_preprocess, instantiate_from_config
-from lovely_numpy import lo
-from omegaconf import OmegaConf
-from PIL import Image
-from rich import print
-from transformers import AutoFeatureExtractor
-from torch import autocast
-from torchvision import transforms
-
-
-_SHOW_DESC = True
-_SHOW_INTERMEDIATE = False
-# _SHOW_INTERMEDIATE = True
-_GPU_INDEX = 0
-# _GPU_INDEX = 2
-
-# _TITLE = 'Zero-Shot Control of Camera Viewpoints within a Single Image'
-_TITLE = 'Zero-1-to-3: Zero-shot One Image to 3D Object'
-
-# This demo allows you to generate novel viewpoints of an object depicted in an input image using a fine-tuned version of Stable Diffusion.
-_DESCRIPTION = '''
-This live demo allows you to control camera rotation and thereby generate novel viewpoints of an object within a single image.
-It is based on Stable Diffusion. Check out our [project webpage](https://zero123.cs.columbia.edu/) and [paper](https://arxiv.org/pdf/2303.11328.pdf) if you want to learn more about the method!
-Note that this model is not intended for images of humans or faces, and is unlikely to work well for them.
-'''
-
-_ARTICLE = 'See uses.md'
-
-
-def load_model_from_config(config, ckpt, device, verbose=False):
- print(f'Loading model from {ckpt}')
- pl_sd = torch.load(ckpt, map_location='cpu')
- if 'global_step' in pl_sd:
- print(f'Global Step: {pl_sd["global_step"]}')
- sd = pl_sd['state_dict']
- model = instantiate_from_config(config.model)
- m, u = model.load_state_dict(sd, strict=False)
- if len(m) > 0 and verbose:
- print('missing keys:')
- print(m)
- if len(u) > 0 and verbose:
- print('unexpected keys:')
- print(u)
-
- model.to(device)
- model.eval()
- return model
-
-
-@torch.no_grad()
-def sample_model(input_im, model, sampler, precision, h, w, ddim_steps, n_samples, scale,
- ddim_eta, x, y, z):
- precision_scope = autocast if precision == 'autocast' else nullcontext
- with precision_scope('cuda'):
- with model.ema_scope():
- c = model.get_learned_conditioning(input_im).tile(n_samples, 1, 1)
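-            # Relative camera transform conditioning: [polar offset in radians,
-            # sin(azimuth), cos(azimuth), radius offset], repeated for every sample
-            # and concatenated to the image embedding before cc_projection.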
- T = torch.tensor([math.radians(x), math.sin(
- math.radians(y)), math.cos(math.radians(y)), z])
- T = T[None, None, :].repeat(n_samples, 1, 1).to(c.device)
- c = torch.cat([c, T], dim=-1)
- c = model.cc_projection(c)
- cond = {}
- cond['c_crossattn'] = [c]
- c_concat = model.encode_first_stage((input_im.to(c.device))).mode().detach()
- cond['c_concat'] = [model.encode_first_stage((input_im.to(c.device))).mode().detach()
- .repeat(n_samples, 1, 1, 1)]
- if scale != 1.0:
- uc = {}
- uc['c_concat'] = [torch.zeros(n_samples, 4, h // 8, w // 8).to(c.device)]
- uc['c_crossattn'] = [torch.zeros_like(c).to(c.device)]
- else:
- uc = None
-
- shape = [4, h // 8, w // 8]
- samples_ddim, _ = sampler.sample(S=ddim_steps,
- conditioning=cond,
- batch_size=n_samples,
- shape=shape,
- verbose=False,
- unconditional_guidance_scale=scale,
- unconditional_conditioning=uc,
- eta=ddim_eta,
- x_T=None)
- print(samples_ddim.shape)
- # samples_ddim = torch.nn.functional.interpolate(samples_ddim, 64, mode='nearest', antialias=False)
- x_samples_ddim = model.decode_first_stage(samples_ddim)
- return torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0).cpu()
-
-
-class CameraVisualizer:
- def __init__(self, gradio_plot):
- self._gradio_plot = gradio_plot
- self._fig = None
- self._polar = 0.0
- self._azimuth = 0.0
- self._radius = 0.0
- self._raw_image = None
- self._8bit_image = None
- self._image_colorscale = None
-
- def polar_change(self, value):
- self._polar = value
- # return self.update_figure()
-
- def azimuth_change(self, value):
- self._azimuth = value
- # return self.update_figure()
-
- def radius_change(self, value):
- self._radius = value
- # return self.update_figure()
-
- def encode_image(self, raw_image):
- '''
- :param raw_image (H, W, 3) array of uint8 in [0, 255].
- '''
- # https://stackoverflow.com/questions/60685749/python-plotly-how-to-add-an-image-to-a-3d-scatter-plot
-
- dum_img = Image.fromarray(np.ones((3, 3, 3), dtype='uint8')).convert('P', palette='WEB')
- idx_to_color = np.array(dum_img.getpalette()).reshape((-1, 3))
-
- self._raw_image = raw_image
- self._8bit_image = Image.fromarray(raw_image).convert('P', palette='WEB', dither=None)
- # self._8bit_image = Image.fromarray(raw_image.clip(0, 254)).convert(
- # 'P', palette='WEB', dither=None)
- self._image_colorscale = [
- [i / 255.0, 'rgb({}, {}, {})'.format(*rgb)] for i, rgb in enumerate(idx_to_color)]
-
- # return self.update_figure()
-
- def update_figure(self):
- fig = go.Figure()
-
- if self._raw_image is not None:
- (H, W, C) = self._raw_image.shape
-
- x = np.zeros((H, W))
- (y, z) = np.meshgrid(np.linspace(-1.0, 1.0, W), np.linspace(1.0, -1.0, H) * H / W)
- print('x:', lo(x))
- print('y:', lo(y))
- print('z:', lo(z))
-
- fig.add_trace(go.Surface(
- x=x, y=y, z=z,
- surfacecolor=self._8bit_image,
- cmin=0,
- cmax=255,
- colorscale=self._image_colorscale,
- showscale=False,
- lighting_diffuse=1.0,
- lighting_ambient=1.0,
- lighting_fresnel=1.0,
- lighting_roughness=1.0,
- lighting_specular=0.3))
-
- scene_bounds = 3.5
- base_radius = 2.5
- zoom_scale = 1.5 # Note that input radius offset is in [-0.5, 0.5].
- fov_deg = 50.0
- edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]
-
- input_cone = calc_cam_cone_pts_3d(
- 0.0, 0.0, base_radius, fov_deg) # (5, 3).
- output_cone = calc_cam_cone_pts_3d(
- self._polar, self._azimuth, base_radius + self._radius * zoom_scale, fov_deg) # (5, 3).
- # print('input_cone:', lo(input_cone).v)
- # print('output_cone:', lo(output_cone).v)
-
- for (cone, clr, legend) in [(input_cone, 'green', 'Input view'),
- (output_cone, 'blue', 'Target view')]:
-
- for (i, edge) in enumerate(edges):
- (x1, x2) = (cone[edge[0], 0], cone[edge[1], 0])
- (y1, y2) = (cone[edge[0], 1], cone[edge[1], 1])
- (z1, z2) = (cone[edge[0], 2], cone[edge[1], 2])
- fig.add_trace(go.Scatter3d(
- x=[x1, x2], y=[y1, y2], z=[z1, z2], mode='lines',
- line=dict(color=clr, width=3),
- name=legend, showlegend=(i == 0)))
- # text=(legend if i == 0 else None),
- # textposition='bottom center'))
- # hoverinfo='text',
- # hovertext='hovertext'))
-
- # Add label.
- if cone[0, 2] <= base_radius / 2.0:
- fig.add_trace(go.Scatter3d(
- x=[cone[0, 0]], y=[cone[0, 1]], z=[cone[0, 2] - 0.05], showlegend=False,
- mode='text', text=legend, textposition='bottom center'))
- else:
- fig.add_trace(go.Scatter3d(
- x=[cone[0, 0]], y=[cone[0, 1]], z=[cone[0, 2] + 0.05], showlegend=False,
- mode='text', text=legend, textposition='top center'))
-
- # look at center of scene
- fig.update_layout(
- # width=640,
- # height=480,
- # height=400,
- height=360,
- autosize=True,
- hovermode=False,
- margin=go.layout.Margin(l=0, r=0, b=0, t=0),
- showlegend=True,
- legend=dict(
- yanchor='bottom',
- y=0.01,
- xanchor='right',
- x=0.99,
- ),
- scene=dict(
- aspectmode='manual',
- aspectratio=dict(x=1, y=1, z=1.0),
- camera=dict(
- eye=dict(x=base_radius - 1.6, y=0.0, z=0.6),
- center=dict(x=0.0, y=0.0, z=0.0),
- up=dict(x=0.0, y=0.0, z=1.0)),
- xaxis_title='',
- yaxis_title='',
- zaxis_title='',
- xaxis=dict(
- range=[-scene_bounds, scene_bounds],
- showticklabels=False,
- showgrid=True,
- zeroline=False,
- showbackground=True,
- showspikes=False,
- showline=False,
- ticks=''),
- yaxis=dict(
- range=[-scene_bounds, scene_bounds],
- showticklabels=False,
- showgrid=True,
- zeroline=False,
- showbackground=True,
- showspikes=False,
- showline=False,
- ticks=''),
- zaxis=dict(
- range=[-scene_bounds, scene_bounds],
- showticklabels=False,
- showgrid=True,
- zeroline=False,
- showbackground=True,
- showspikes=False,
- showline=False,
- ticks='')))
-
- self._fig = fig
- return fig
-
-
-def preprocess_image(models, input_im, preprocess):
- '''
- :param input_im (PIL Image).
- :return input_im (H, W, 3) array in [0, 1].
- '''
-
- print('old input_im:', input_im.size)
- start_time = time.time()
-
- if preprocess:
- input_im = load_and_preprocess(models['carvekit'], input_im)
- input_im = (input_im / 255.0).astype(np.float32)
- # (H, W, 3) array in [0, 1].
-
- else:
- input_im = input_im.resize([256, 256], Image.Resampling.LANCZOS)
- input_im = np.asarray(input_im, dtype=np.float32) / 255.0
- # (H, W, 4) array in [0, 1].
-
- # old method: thresholding background, very important
- # input_im[input_im[:, :, -1] <= 0.9] = [1., 1., 1., 1.]
-
- # new method: apply correct method of compositing to avoid sudden transitions / thresholding
- # (smoothly transition foreground to white background based on alpha values)
- alpha = input_im[:, :, 3:4]
- white_im = np.ones_like(input_im)
- input_im = alpha * input_im + (1.0 - alpha) * white_im
-
- input_im = input_im[:, :, 0:3]
- # (H, W, 3) array in [0, 1].
-
- print(f'Infer foreground mask (preprocess_image) took {time.time() - start_time:.3f}s.')
- print('new input_im:', lo(input_im))
-
- return input_im
-
-
-def main_run(models, device, cam_vis, return_what,
- x=0.0, y=0.0, z=0.0,
- raw_im=None, preprocess=True,
- scale=3.0, n_samples=4, ddim_steps=50, ddim_eta=1.0,
- precision='fp32', h=256, w=256):
- '''
- :param raw_im (PIL Image).
- '''
-
- raw_im.thumbnail([1536, 1536], Image.Resampling.LANCZOS)
- safety_checker_input = models['clip_fe'](raw_im, return_tensors='pt').to(device)
- (image, has_nsfw_concept) = models['nsfw'](
- images=np.ones((1, 3)), clip_input=safety_checker_input.pixel_values)
- print('has_nsfw_concept:', has_nsfw_concept)
- if np.any(has_nsfw_concept):
- print('NSFW content detected.')
- to_return = [None] * 10
- description = ('### Unfortunately, '
- 'potential NSFW content was detected, '
- 'which is not supported by our model. '
- 'Please try again with a different image. ')
- if 'angles' in return_what:
- to_return[0] = 0.0
- to_return[1] = 0.0
- to_return[2] = 0.0
- to_return[3] = description
- else:
- to_return[0] = description
- return to_return
-
- else:
- print('Safety check passed.')
-
- input_im = preprocess_image(models, raw_im, preprocess)
-
- # if np.random.rand() < 0.3:
- # description = ('Unfortunately, a human, a face, or potential NSFW content was detected, '
- # 'which is not supported by our model.')
- # if vis_only:
- # return (None, None, description)
- # else:
- # return (None, None, None, description)
-
- show_in_im1 = (input_im * 255.0).astype(np.uint8)
- show_in_im2 = Image.fromarray(show_in_im1)
-
- if 'rand' in return_what:
- x = int(np.round(np.arcsin(np.random.uniform(-1.0, 1.0)) * 160.0 / np.pi)) # [-80, 80].
- y = int(np.round(np.random.uniform(-150.0, 150.0)))
- z = 0.0
-
- cam_vis.polar_change(x)
- cam_vis.azimuth_change(y)
- cam_vis.radius_change(z)
- cam_vis.encode_image(show_in_im1)
- new_fig = cam_vis.update_figure()
-
- if 'vis' in return_what:
- description = ('The viewpoints are visualized on the top right. '
- 'Click Run Generation to update the results on the bottom right.')
-
- if 'angles' in return_what:
- return (x, y, z, description, new_fig, show_in_im2)
- else:
- return (description, new_fig, show_in_im2)
-
- elif 'gen' in return_what:
- input_im = transforms.ToTensor()(input_im).unsqueeze(0).to(device)
- input_im = input_im * 2 - 1
- input_im = transforms.functional.resize(input_im, [h, w])
-
- sampler = DDIMSampler(models['turncam'])
- # used_x = -x # NOTE: Polar makes more sense in Basile's opinion this way!
- used_x = x # NOTE: Set this way for consistency.
- x_samples_ddim = sample_model(input_im, models['turncam'], sampler, precision, h, w,
- ddim_steps, n_samples, scale, ddim_eta, used_x, y, z)
-
- output_ims = []
- for x_sample in x_samples_ddim:
- x_sample = 255.0 * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
- output_ims.append(Image.fromarray(x_sample.astype(np.uint8)))
-
- description = None
-
- if 'angles' in return_what:
- return (x, y, z, description, new_fig, show_in_im2, output_ims)
- else:
- return (description, new_fig, show_in_im2, output_ims)
-
-
-def calc_cam_cone_pts_3d(polar_deg, azimuth_deg, radius_m, fov_deg):
- '''
- :param polar_deg (float).
- :param azimuth_deg (float).
- :param radius_m (float).
- :param fov_deg (float).
- :return (5, 3) array of float with (x, y, z).
- '''
- polar_rad = np.deg2rad(polar_deg)
- azimuth_rad = np.deg2rad(azimuth_deg)
- fov_rad = np.deg2rad(fov_deg)
- polar_rad = -polar_rad # NOTE: Inverse of how used_x relates to x.
-
- # Camera pose center:
- cam_x = radius_m * np.cos(azimuth_rad) * np.cos(polar_rad)
- cam_y = radius_m * np.sin(azimuth_rad) * np.cos(polar_rad)
- cam_z = radius_m * np.sin(polar_rad)
-
- # Obtain four corners of camera frustum, assuming it is looking at origin.
- # First, obtain camera extrinsics (rotation matrix only):
- camera_R = np.array([[np.cos(azimuth_rad) * np.cos(polar_rad),
- -np.sin(azimuth_rad),
- -np.cos(azimuth_rad) * np.sin(polar_rad)],
- [np.sin(azimuth_rad) * np.cos(polar_rad),
- np.cos(azimuth_rad),
- -np.sin(azimuth_rad) * np.sin(polar_rad)],
- [np.sin(polar_rad),
- 0.0,
- np.cos(polar_rad)]])
- # print('camera_R:', lo(camera_R).v)
-
-    # Rotate the frustum corner directions from camera space into world space:
- corn1 = [-1.0, np.tan(fov_rad / 2.0), np.tan(fov_rad / 2.0)]
- corn2 = [-1.0, -np.tan(fov_rad / 2.0), np.tan(fov_rad / 2.0)]
- corn3 = [-1.0, -np.tan(fov_rad / 2.0), -np.tan(fov_rad / 2.0)]
- corn4 = [-1.0, np.tan(fov_rad / 2.0), -np.tan(fov_rad / 2.0)]
- corn1 = np.dot(camera_R, corn1)
- corn2 = np.dot(camera_R, corn2)
- corn3 = np.dot(camera_R, corn3)
- corn4 = np.dot(camera_R, corn4)
-
- # Now attach as offset to actual 3D camera position:
- corn1 = np.array(corn1) / np.linalg.norm(corn1, ord=2)
- corn_x1 = cam_x + corn1[0]
- corn_y1 = cam_y + corn1[1]
- corn_z1 = cam_z + corn1[2]
- corn2 = np.array(corn2) / np.linalg.norm(corn2, ord=2)
- corn_x2 = cam_x + corn2[0]
- corn_y2 = cam_y + corn2[1]
- corn_z2 = cam_z + corn2[2]
- corn3 = np.array(corn3) / np.linalg.norm(corn3, ord=2)
- corn_x3 = cam_x + corn3[0]
- corn_y3 = cam_y + corn3[1]
- corn_z3 = cam_z + corn3[2]
- corn4 = np.array(corn4) / np.linalg.norm(corn4, ord=2)
- corn_x4 = cam_x + corn4[0]
- corn_y4 = cam_y + corn4[1]
- corn_z4 = cam_z + corn4[2]
-
- xs = [cam_x, corn_x1, corn_x2, corn_x3, corn_x4]
- ys = [cam_y, corn_y1, corn_y2, corn_y3, corn_y4]
- zs = [cam_z, corn_z1, corn_z2, corn_z3, corn_z4]
-
- return np.array([xs, ys, zs]).T
-
-
-def run_demo(
- device_idx=_GPU_INDEX,
- ckpt='105000.ckpt',
- config='configs/sd-objaverse-finetune-c_concat-256.yaml'):
-
- print('sys.argv:', sys.argv)
- if len(sys.argv) > 1:
- print('old device_idx:', device_idx)
- device_idx = int(sys.argv[1])
- print('new device_idx:', device_idx)
-
- device = f'cuda:{device_idx}'
- config = OmegaConf.load(config)
-
- # Instantiate all models beforehand for efficiency.
- models = dict()
- print('Instantiating LatentDiffusion...')
- models['turncam'] = load_model_from_config(config, ckpt, device=device)
- print('Instantiating Carvekit HiInterface...')
- models['carvekit'] = create_carvekit_interface()
- print('Instantiating StableDiffusionSafetyChecker...')
- models['nsfw'] = StableDiffusionSafetyChecker.from_pretrained(
- 'CompVis/stable-diffusion-safety-checker').to(device)
- print('Instantiating AutoFeatureExtractor...')
- models['clip_fe'] = AutoFeatureExtractor.from_pretrained(
- 'CompVis/stable-diffusion-safety-checker')
-
- # Reduce NSFW false positives.
- # NOTE: At the time of writing, and for diffusers 0.12.1, the default parameters are:
- # models['nsfw'].concept_embeds_weights:
- # [0.1800, 0.1900, 0.2060, 0.2100, 0.1950, 0.1900, 0.1940, 0.1900, 0.1900, 0.2200, 0.1900,
- # 0.1900, 0.1950, 0.1984, 0.2100, 0.2140, 0.2000].
- # models['nsfw'].special_care_embeds_weights:
- # [0.1950, 0.2000, 0.2200].
- # We multiply all by some factor > 1 to make them less likely to be triggered.
- models['nsfw'].concept_embeds_weights *= 1.07
- models['nsfw'].special_care_embeds_weights *= 1.07
-
- with open('instructions.md', 'r') as f:
- article = f.read()
-
- # NOTE: Examples must match inputs
- # [polar_slider, azimuth_slider, radius_slider, image_block,
- # preprocess_chk, scale_slider, samples_slider, steps_slider].
- example_fns = ['1_blue_arm.png', '2_cybercar.png', '3_sushi.png', '4_blackarm.png',
- '5_cybercar.png', '6_burger.png', '7_london.png', '8_motor.png']
- num_examples = len(example_fns)
- example_fps = [os.path.join(os.path.dirname(__file__), 'configs', x) for x in example_fns]
- example_angles = [(-40.0, -65.0, 0.0), (-30.0, 90.0, 0.0), (45.0, -15.0, 0.0), (-75.0, 100.0, 0.0),
- (-40.0, -75.0, 0.0), (-45.0, 0.0, 0.0), (-55.0, 90.0, 0.0), (-20.0, 125.0, 0.0)]
- examples_full = [[*example_angles[i], example_fps[i], True, 3, 4, 50] for i in range(num_examples)]
- print('examples_full:', examples_full)
-
- # Compose demo layout & data flow.
- demo = gr.Blocks(title=_TITLE)
-
- with demo:
- gr.Markdown('# ' + _TITLE)
- gr.Markdown(_DESCRIPTION)
-
- with gr.Row():
- with gr.Column(scale=0.9, variant='panel'):
-
- image_block = gr.Image(type='pil', image_mode='RGBA',
- label='Input image of single object')
- preprocess_chk = gr.Checkbox(
- True, label='Preprocess image automatically (remove background and recenter object)')
- # info='If enabled, the uploaded image will be preprocessed to remove the background and recenter the object by cropping and/or padding as necessary. '
- # 'If disabled, the image will be used as-is, *BUT* a fully transparent or white background is required.'),
-
- gr.Markdown('*Try camera position presets:*')
- with gr.Row():
- left_btn = gr.Button('View from the Left', variant='primary')
- above_btn = gr.Button('View from Above', variant='primary')
- right_btn = gr.Button('View from the Right', variant='primary')
- with gr.Row():
- random_btn = gr.Button('Random Rotation', variant='primary')
- below_btn = gr.Button('View from Below', variant='primary')
- behind_btn = gr.Button('View from Behind', variant='primary')
-
- gr.Markdown('*Control camera position manually:*')
- polar_slider = gr.Slider(
- -90, 90, value=0, step=5, label='Polar angle (vertical rotation in degrees)')
- # info='Positive values move the camera down, while negative values move the camera up.')
- azimuth_slider = gr.Slider(
- -180, 180, value=0, step=5, label='Azimuth angle (horizontal rotation in degrees)')
- # info='Positive values move the camera right, while negative values move the camera left.')
- radius_slider = gr.Slider(
- -0.5, 0.5, value=0.0, step=0.1, label='Zoom (relative distance from center)')
- # info='Positive values move the camera further away, while negative values move the camera closer.')
-
- samples_slider = gr.Slider(1, 8, value=4, step=1,
- label='Number of samples to generate')
-
- with gr.Accordion('Advanced options', open=False):
- scale_slider = gr.Slider(0, 30, value=3, step=1,
- label='Diffusion guidance scale')
- steps_slider = gr.Slider(5, 200, value=75, step=5,
- label='Number of diffusion inference steps')
-
- with gr.Row():
- vis_btn = gr.Button('Visualize Angles', variant='secondary')
- run_btn = gr.Button('Run Generation', variant='primary')
-
- desc_output = gr.Markdown(
- 'The results will appear on the right.', visible=_SHOW_DESC)
-
- with gr.Column(scale=1.1, variant='panel'):
-
- vis_output = gr.Plot(
- label='Relationship between input (green) and output (blue) camera poses')
-
- gen_output = gr.Gallery(label='Generated images from specified new viewpoint')
- gen_output.style(grid=2)
-
- preproc_output = gr.Image(type='pil', image_mode='RGB',
- label='Preprocessed input image', visible=_SHOW_INTERMEDIATE)
-
- cam_vis = CameraVisualizer(vis_output)
-
- gr.Examples(
- examples=examples_full, # NOTE: elements must match inputs list!
- fn=partial(main_run, models, device, cam_vis, 'gen'),
- inputs=[polar_slider, azimuth_slider, radius_slider,
- image_block, preprocess_chk,
- scale_slider, samples_slider, steps_slider],
- outputs=[desc_output, vis_output, preproc_output, gen_output],
- cache_examples=True,
- run_on_click=True,
- )
-
- gr.Markdown(article)
-
- # NOTE: I am forced to update vis_output for these preset buttons,
- # because otherwise the gradio plot always resets the plotly 3D viewpoint for some reason,
- # which might confuse the user into thinking that the plot has been updated too.
-
- # polar_slider.change(fn=partial(main_run, models, device, cam_vis, 'vis'),
- # inputs=[polar_slider, azimuth_slider, radius_slider,
- # image_block, preprocess_chk],
- # outputs=[desc_output, vis_output, preproc_output],
- # queue=False)
- # azimuth_slider.change(fn=partial(main_run, models, device, cam_vis, 'vis'),
- # inputs=[polar_slider, azimuth_slider, radius_slider,
- # image_block, preprocess_chk],
- # outputs=[desc_output, vis_output, preproc_output],
- # queue=False)
-
- # radius_slider.change(fn=partial(main_run, models, device, cam_vis, 'vis'),
- # inputs=[polar_slider, azimuth_slider, radius_slider,
- # image_block, preprocess_chk],
- # outputs=[desc_output, vis_output, preproc_output],
- # queue=False)
-
- vis_btn.click(fn=partial(main_run, models, device, cam_vis, 'vis'),
- inputs=[polar_slider, azimuth_slider, radius_slider,
- image_block, preprocess_chk],
- outputs=[desc_output, vis_output, preproc_output],
- queue=False)
-
- run_btn.click(fn=partial(main_run, models, device, cam_vis, 'gen'),
- inputs=[polar_slider, azimuth_slider, radius_slider,
- image_block, preprocess_chk,
- scale_slider, samples_slider, steps_slider],
- outputs=[desc_output, vis_output, preproc_output, gen_output])
-
- # NEW:
- preset_inputs = [image_block, preprocess_chk,
- scale_slider, samples_slider, steps_slider]
- preset_outputs = [polar_slider, azimuth_slider, radius_slider,
- desc_output, vis_output, preproc_output, gen_output]
- left_btn.click(fn=partial(main_run, models, device, cam_vis, 'angles_gen',
- 0.0, -90.0, 0.0),
- inputs=preset_inputs, outputs=preset_outputs)
- above_btn.click(fn=partial(main_run, models, device, cam_vis, 'angles_gen',
- -90.0, 0.0, 0.0),
- inputs=preset_inputs, outputs=preset_outputs)
- right_btn.click(fn=partial(main_run, models, device, cam_vis, 'angles_gen',
- 0.0, 90.0, 0.0),
- inputs=preset_inputs, outputs=preset_outputs)
- random_btn.click(fn=partial(main_run, models, device, cam_vis, 'rand_angles_gen',
- -1.0, -1.0, -1.0),
- inputs=preset_inputs, outputs=preset_outputs)
- below_btn.click(fn=partial(main_run, models, device, cam_vis, 'angles_gen',
- 90.0, 0.0, 0.0),
- inputs=preset_inputs, outputs=preset_outputs)
- behind_btn.click(fn=partial(main_run, models, device, cam_vis, 'angles_gen',
- 0.0, 180.0, 0.0),
- inputs=preset_inputs, outputs=preset_outputs)
-
- demo.launch(enable_queue=True)
-
-
-if __name__ == '__main__':
-
- fire.Fire(run_demo)
diff --git a/spaces/cvlab/zero123-live/ldm/modules/evaluate/ssim.py b/spaces/cvlab/zero123-live/ldm/modules/evaluate/ssim.py
deleted file mode 100644
index 4e8883ccb3b30455a76caf2e4d1e04745f75d214..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123-live/ldm/modules/evaluate/ssim.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# MIT Licence
-
-# Methods to predict the SSIM, taken from
-# https://github.com/Po-Hsun-Su/pytorch-ssim/blob/master/pytorch_ssim/__init__.py
-
-from math import exp
-
-import torch
-import torch.nn.functional as F
-from torch.autograd import Variable
-
-def gaussian(window_size, sigma):
- gauss = torch.Tensor(
- [
- exp(-((x - window_size // 2) ** 2) / float(2 * sigma ** 2))
- for x in range(window_size)
- ]
- )
- return gauss / gauss.sum()
-
-
-def create_window(window_size, channel):
- _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
- _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
- window = Variable(
- _2D_window.expand(channel, 1, window_size, window_size).contiguous()
- )
- return window
-
-
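-# Per-window SSIM, computed with a Gaussian-weighted local window:
-#   SSIM(x, y) = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
-#                ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2))
-# with C1 = 0.01^2 and C2 = 0.03^2 (K1 = 0.01, K2 = 0.03, dynamic range L = 1).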
-def _ssim(
- img1, img2, window, window_size, channel, mask=None, size_average=True
-):
- mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel)
- mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel)
-
- mu1_sq = mu1.pow(2)
- mu2_sq = mu2.pow(2)
- mu1_mu2 = mu1 * mu2
-
- sigma1_sq = (
- F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel)
- - mu1_sq
- )
- sigma2_sq = (
- F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel)
- - mu2_sq
- )
- sigma12 = (
- F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel)
- - mu1_mu2
- )
-
- C1 = (0.01) ** 2
- C2 = (0.03) ** 2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / (
- (mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)
- )
-
- if not (mask is None):
- b = mask.size(0)
- ssim_map = ssim_map.mean(dim=1, keepdim=True) * mask
- ssim_map = ssim_map.view(b, -1).sum(dim=1) / mask.view(b, -1).sum(
- dim=1
- ).clamp(min=1)
- return ssim_map
-
- if size_average:
- return ssim_map.mean()
- else:
- return ssim_map.mean(1).mean(1).mean(1)
-
-
-class SSIM(torch.nn.Module):
- def __init__(self, window_size=11, size_average=True):
- super(SSIM, self).__init__()
- self.window_size = window_size
- self.size_average = size_average
- self.channel = 1
- self.window = create_window(window_size, self.channel)
-
- def forward(self, img1, img2, mask=None):
- (_, channel, _, _) = img1.size()
-
- if (
- channel == self.channel
- and self.window.data.type() == img1.data.type()
- ):
- window = self.window
- else:
- window = create_window(self.window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- self.window = window
- self.channel = channel
-
- return _ssim(
- img1,
- img2,
- window,
- self.window_size,
- channel,
- mask,
- self.size_average,
- )
-
-
-def ssim(img1, img2, window_size=11, mask=None, size_average=True):
- (_, channel, _, _) = img1.size()
- window = create_window(window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- return _ssim(img1, img2, window, window_size, channel, mask, size_average)
diff --git a/spaces/davidpiscasio/unpaired-img2img/models/networks.py b/spaces/davidpiscasio/unpaired-img2img/models/networks.py
deleted file mode 100644
index b3a10c99c20eea0aa6ddd7797e47f16f5f92e5ff..0000000000000000000000000000000000000000
--- a/spaces/davidpiscasio/unpaired-img2img/models/networks.py
+++ /dev/null
@@ -1,615 +0,0 @@
-import torch
-import torch.nn as nn
-from torch.nn import init
-import functools
-from torch.optim import lr_scheduler
-
-
-###############################################################################
-# Helper Functions
-###############################################################################
-
-
-class Identity(nn.Module):
- def forward(self, x):
- return x
-
-
-def get_norm_layer(norm_type='instance'):
- """Return a normalization layer
-
- Parameters:
- norm_type (str) -- the name of the normalization layer: batch | instance | none
-
- For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev).
- For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics.
- """
- if norm_type == 'batch':
- norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)
- elif norm_type == 'instance':
- norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
- elif norm_type == 'none':
- def norm_layer(x): return Identity()
- else:
- raise NotImplementedError('normalization layer [%s] is not found' % norm_type)
- return norm_layer
-
-
-def get_scheduler(optimizer, opt):
- """Return a learning rate scheduler
-
- Parameters:
- optimizer -- the optimizer of the network
- opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
- opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
-
-    For 'linear', we keep the same learning rate for the first opt.n_epochs epochs
-    and linearly decay the rate to zero over the next opt.n_epochs_decay epochs.
- For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
- See https://pytorch.org/docs/stable/optim.html for more details.
- """
- if opt.lr_policy == 'linear':
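-        # Multiplier stays at 1.0 for the first opt.n_epochs epochs, then decays
-        # linearly towards zero over the following opt.n_epochs_decay epochs.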
- def lambda_rule(epoch):
- lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1)
- return lr_l
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
- elif opt.lr_policy == 'step':
- scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1)
- elif opt.lr_policy == 'plateau':
- scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
- elif opt.lr_policy == 'cosine':
- scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
- else:
-        raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
- return scheduler
-
-
-def init_weights(net, init_type='normal', init_gain=0.02):
- """Initialize network weights.
-
- Parameters:
- net (network) -- network to be initialized
- init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
- init_gain (float) -- scaling factor for normal, xavier and orthogonal.
-
- We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might
- work better for some applications. Feel free to try yourself.
- """
- def init_func(m): # define the initialization function
- classname = m.__class__.__name__
- if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
- if init_type == 'normal':
- init.normal_(m.weight.data, 0.0, init_gain)
- elif init_type == 'xavier':
- init.xavier_normal_(m.weight.data, gain=init_gain)
- elif init_type == 'kaiming':
- init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
- elif init_type == 'orthogonal':
- init.orthogonal_(m.weight.data, gain=init_gain)
- else:
- raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
- if hasattr(m, 'bias') and m.bias is not None:
- init.constant_(m.bias.data, 0.0)
- elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies.
- init.normal_(m.weight.data, 1.0, init_gain)
- init.constant_(m.bias.data, 0.0)
-
- print('initialize network with %s' % init_type)
- net.apply(init_func) # apply the initialization function
-
-
-def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
- """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
- Parameters:
- net (network) -- the network to be initialized
- init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
-        init_gain (float)  -- scaling factor for normal, xavier and orthogonal.
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
-
- Return an initialized network.
- """
- if len(gpu_ids) > 0:
- assert(torch.cuda.is_available())
- net.to(gpu_ids[0])
- net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs
- init_weights(net, init_type, init_gain=init_gain)
- return net
-
-
-def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[]):
- """Create a generator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- output_nc (int) -- the number of channels in output images
- ngf (int) -- the number of filters in the last conv layer
- netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128
- norm (str) -- the name of normalization layers used in the network: batch | instance | none
- use_dropout (bool) -- if use dropout layers.
- init_type (str) -- the name of our initialization method.
- init_gain (float) -- scaling factor for normal, xavier and orthogonal.
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
-
- Returns a generator
-
- Our current implementation provides two types of generators:
- U-Net: [unet_128] (for 128x128 input images) and [unet_256] (for 256x256 input images)
- The original U-Net paper: https://arxiv.org/abs/1505.04597
-
- Resnet-based generator: [resnet_6blocks] (with 6 Resnet blocks) and [resnet_9blocks] (with 9 Resnet blocks)
- Resnet-based generator consists of several Resnet blocks between a few downsampling/upsampling operations.
- We adapt Torch code from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style).
-
-
-    The generator has been initialized by init_net. It uses ReLU for non-linearity.
- """
- net = None
- norm_layer = get_norm_layer(norm_type=norm)
-
- if netG == 'resnet_9blocks':
- net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9)
- elif netG == 'resnet_6blocks':
- net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=6)
- elif netG == 'unet_128':
- net = UnetGenerator(input_nc, output_nc, 7, ngf, norm_layer=norm_layer, use_dropout=use_dropout)
- elif netG == 'unet_256':
- net = UnetGenerator(input_nc, output_nc, 8, ngf, norm_layer=norm_layer, use_dropout=use_dropout)
- else:
- raise NotImplementedError('Generator model name [%s] is not recognized' % netG)
- return init_net(net, init_type, init_gain, gpu_ids)
-
-
-def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', init_type='normal', init_gain=0.02, gpu_ids=[]):
- """Create a discriminator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- ndf (int) -- the number of filters in the first conv layer
- netD (str) -- the architecture's name: basic | n_layers | pixel
- n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers'
- norm (str) -- the type of normalization layers used in the network.
- init_type (str) -- the name of the initialization method.
- init_gain (float) -- scaling factor for normal, xavier and orthogonal.
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
-
- Returns a discriminator
-
- Our current implementation provides three types of discriminators:
- [basic]: 'PatchGAN' classifier described in the original pix2pix paper.
- It can classify whether 70×70 overlapping patches are real or fake.
- Such a patch-level discriminator architecture has fewer parameters
- than a full-image discriminator and can work on arbitrarily-sized images
- in a fully convolutional fashion.
-
- [n_layers]: With this mode, you can specify the number of conv layers in the discriminator
-            with the parameter n_layers_D (default=3 as used in [basic] (PatchGAN).)
-
- [pixel]: 1x1 PixelGAN discriminator can classify whether a pixel is real or not.
- It encourages greater color diversity but has no effect on spatial statistics.
-
-    The discriminator has been initialized by init_net. It uses Leaky ReLU for non-linearity.
- """
- net = None
- norm_layer = get_norm_layer(norm_type=norm)
-
- if netD == 'basic': # default PatchGAN classifier
- net = NLayerDiscriminator(input_nc, ndf, n_layers=3, norm_layer=norm_layer)
- elif netD == 'n_layers': # more options
- net = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer)
- elif netD == 'pixel': # classify if each pixel is real or fake
- net = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer)
- else:
- raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD)
- return init_net(net, init_type, init_gain, gpu_ids)
-
-
-##############################################################################
-# Classes
-##############################################################################
-class GANLoss(nn.Module):
- """Define different GAN objectives.
-
- The GANLoss class abstracts away the need to create the target label tensor
- that has the same size as the input.
- """
-
- def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0):
- """ Initialize the GANLoss class.
-
- Parameters:
- gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
-            target_real_label (float) - - label for a real image
-            target_fake_label (float) - - label of a fake image
-
- Note: Do not use sigmoid as the last layer of Discriminator.
- LSGAN needs no sigmoid. vanilla GANs will handle it with BCEWithLogitsLoss.
- """
- super(GANLoss, self).__init__()
- self.register_buffer('real_label', torch.tensor(target_real_label))
- self.register_buffer('fake_label', torch.tensor(target_fake_label))
- self.gan_mode = gan_mode
- if gan_mode == 'lsgan':
- self.loss = nn.MSELoss()
- elif gan_mode == 'vanilla':
- self.loss = nn.BCEWithLogitsLoss()
- elif gan_mode in ['wgangp']:
- self.loss = None
- else:
- raise NotImplementedError('gan mode %s not implemented' % gan_mode)
-
- def get_target_tensor(self, prediction, target_is_real):
- """Create label tensors with the same size as the input.
-
- Parameters:
-            prediction (tensor) - - typically the prediction from a discriminator
- target_is_real (bool) - - if the ground truth label is for real images or fake images
-
- Returns:
- A label tensor filled with ground truth label, and with the size of the input
- """
-
- if target_is_real:
- target_tensor = self.real_label
- else:
- target_tensor = self.fake_label
- return target_tensor.expand_as(prediction)
-
- def __call__(self, prediction, target_is_real):
-        """Calculate loss given Discriminator's output and ground truth labels.
-
- Parameters:
-            prediction (tensor) - - typically the prediction output from a discriminator
- target_is_real (bool) - - if the ground truth label is for real images or fake images
-
- Returns:
- the calculated loss.
- """
- if self.gan_mode in ['lsgan', 'vanilla']:
- target_tensor = self.get_target_tensor(prediction, target_is_real)
- loss = self.loss(prediction, target_tensor)
- elif self.gan_mode == 'wgangp':
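-            # Wasserstein objective: minimizing -mean(D(x)) pushes critic scores up
-            # for targets labelled real, while mean(D(x)) pushes them down for fakes.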
- if target_is_real:
- loss = -prediction.mean()
- else:
- loss = prediction.mean()
- return loss
-
-
-def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0):
- """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028
-
- Arguments:
- netD (network) -- discriminator network
- real_data (tensor array) -- real images
- fake_data (tensor array) -- generated images from the generator
- device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu')
- type (str) -- if we mix real and fake data or not [real | fake | mixed].
- constant (float) -- the constant used in formula ( ||gradient||_2 - constant)^2
- lambda_gp (float) -- weight for this loss
-
- Returns the gradient penalty loss
- """
- if lambda_gp > 0.0:
- if type == 'real': # either use real images, fake images, or a linear interpolation of two.
- interpolatesv = real_data
- elif type == 'fake':
- interpolatesv = fake_data
- elif type == 'mixed':
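-            # Sample one interpolation coefficient per image and broadcast it over
-            # all pixels, giving random points on the line between real and fake data.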
- alpha = torch.rand(real_data.shape[0], 1, device=device)
- alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view(*real_data.shape)
- interpolatesv = alpha * real_data + ((1 - alpha) * fake_data)
- else:
- raise NotImplementedError('{} not implemented'.format(type))
- interpolatesv.requires_grad_(True)
- disc_interpolates = netD(interpolatesv)
- gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv,
- grad_outputs=torch.ones(disc_interpolates.size()).to(device),
- create_graph=True, retain_graph=True, only_inputs=True)
- gradients = gradients[0].view(real_data.size(0), -1) # flat the data
- gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps
- return gradient_penalty, gradients
- else:
- return 0.0, None
-
-
-class ResnetGenerator(nn.Module):
- """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
-
- We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
- """
-
- def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'):
- """Construct a Resnet-based generator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- output_nc (int) -- the number of channels in output images
- ngf (int) -- the number of filters in the last conv layer
- norm_layer -- normalization layer
- use_dropout (bool) -- if use dropout layers
- n_blocks (int) -- the number of ResNet blocks
- padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
- """
- assert(n_blocks >= 0)
- super(ResnetGenerator, self).__init__()
- if type(norm_layer) == functools.partial:
- use_bias = norm_layer.func == nn.InstanceNorm2d
- else:
- use_bias = norm_layer == nn.InstanceNorm2d
-
- model = [nn.ReflectionPad2d(3),
- nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
- norm_layer(ngf),
- nn.ReLU(True)]
-
- n_downsampling = 2
- for i in range(n_downsampling): # add downsampling layers
- mult = 2 ** i
- model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
- norm_layer(ngf * mult * 2),
- nn.ReLU(True)]
-
- mult = 2 ** n_downsampling
- for i in range(n_blocks): # add ResNet blocks
-
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
-
- for i in range(n_downsampling): # add upsampling layers
- mult = 2 ** (n_downsampling - i)
- model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
- kernel_size=3, stride=2,
- padding=1, output_padding=1,
- bias=use_bias),
- norm_layer(int(ngf * mult / 2)),
- nn.ReLU(True)]
- model += [nn.ReflectionPad2d(3)]
- model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
- model += [nn.Tanh()]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, input):
- """Standard forward"""
- return self.model(input)
-
-
-class ResnetBlock(nn.Module):
- """Define a Resnet block"""
-
- def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias):
- """Initialize the Resnet block
-
- A resnet block is a conv block with skip connections
- We construct a conv block with build_conv_block function,
-        and implement skip connections in the forward function.
- Original Resnet paper: https://arxiv.org/pdf/1512.03385.pdf
- """
- super(ResnetBlock, self).__init__()
- self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias)
-
- def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias):
- """Construct a convolutional block.
-
- Parameters:
- dim (int) -- the number of channels in the conv layer.
- padding_type (str) -- the name of padding layer: reflect | replicate | zero
- norm_layer -- normalization layer
- use_dropout (bool) -- if use dropout layers.
- use_bias (bool) -- if the conv layer uses bias or not
-
- Returns a conv block (with a conv layer, a normalization layer, and a non-linearity layer (ReLU))
- """
- conv_block = []
- p = 0
- if padding_type == 'reflect':
- conv_block += [nn.ReflectionPad2d(1)]
- elif padding_type == 'replicate':
- conv_block += [nn.ReplicationPad2d(1)]
- elif padding_type == 'zero':
- p = 1
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
-
- conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim), nn.ReLU(True)]
- if use_dropout:
- conv_block += [nn.Dropout(0.5)]
-
- p = 0
- if padding_type == 'reflect':
- conv_block += [nn.ReflectionPad2d(1)]
- elif padding_type == 'replicate':
- conv_block += [nn.ReplicationPad2d(1)]
- elif padding_type == 'zero':
- p = 1
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
- conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim)]
-
- return nn.Sequential(*conv_block)
-
- def forward(self, x):
- """Forward function (with skip connections)"""
- out = x + self.conv_block(x) # add skip connections
- return out
-
-
-class UnetGenerator(nn.Module):
- """Create a Unet-based generator"""
-
- def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
- """Construct a Unet generator
- Parameters:
- input_nc (int) -- the number of channels in input images
- output_nc (int) -- the number of channels in output images
-            num_downs (int) -- the number of downsamplings in UNet. For example, if |num_downs| == 7,
-                                an image of size 128x128 will become of size 1x1 at the bottleneck
- ngf (int) -- the number of filters in the last conv layer
- norm_layer -- normalization layer
-
- We construct the U-Net from the innermost layer to the outermost layer.
- It is a recursive process.
- """
- super(UnetGenerator, self).__init__()
- # construct unet structure
- unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer
- for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
- unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)
- # gradually reduce the number of filters from ngf * 8 to ngf
- unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
- unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
- unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
- self.model = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer
-
- def forward(self, input):
- """Standard forward"""
- return self.model(input)
-
-
-class UnetSkipConnectionBlock(nn.Module):
- """Defines the Unet submodule with skip connection.
- X -------------------identity----------------------
- |-- downsampling -- |submodule| -- upsampling --|
- """
-
- def __init__(self, outer_nc, inner_nc, input_nc=None,
- submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
- """Construct a Unet submodule with skip connections.
-
- Parameters:
- outer_nc (int) -- the number of filters in the outer conv layer
- inner_nc (int) -- the number of filters in the inner conv layer
- input_nc (int) -- the number of channels in input images/features
- submodule (UnetSkipConnectionBlock) -- previously defined submodules
- outermost (bool) -- if this module is the outermost module
- innermost (bool) -- if this module is the innermost module
- norm_layer -- normalization layer
- use_dropout (bool) -- if use dropout layers.
- """
- super(UnetSkipConnectionBlock, self).__init__()
- self.outermost = outermost
- if type(norm_layer) == functools.partial:
- use_bias = norm_layer.func == nn.InstanceNorm2d
- else:
- use_bias = norm_layer == nn.InstanceNorm2d
- if input_nc is None:
- input_nc = outer_nc
- downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4,
- stride=2, padding=1, bias=use_bias)
- downrelu = nn.LeakyReLU(0.2, True)
- downnorm = norm_layer(inner_nc)
- uprelu = nn.ReLU(True)
- upnorm = norm_layer(outer_nc)
-
- if outermost:
- upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
- kernel_size=4, stride=2,
- padding=1)
- down = [downconv]
- up = [uprelu, upconv, nn.Tanh()]
- model = down + [submodule] + up
- elif innermost:
- upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
- kernel_size=4, stride=2,
- padding=1, bias=use_bias)
- down = [downrelu, downconv]
- up = [uprelu, upconv, upnorm]
- model = down + up
- else:
- upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
- kernel_size=4, stride=2,
- padding=1, bias=use_bias)
- down = [downrelu, downconv, downnorm]
- up = [uprelu, upconv, upnorm]
-
- if use_dropout:
- model = down + [submodule] + up + [nn.Dropout(0.5)]
- else:
- model = down + [submodule] + up
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- if self.outermost:
- return self.model(x)
- else: # add skip connections
- return torch.cat([x, self.model(x)], 1)
-
-
-class NLayerDiscriminator(nn.Module):
- """Defines a PatchGAN discriminator"""
-
- def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d):
- """Construct a PatchGAN discriminator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- ndf (int) -- the number of filters in the last conv layer
- n_layers (int) -- the number of conv layers in the discriminator
- norm_layer -- normalization layer
- """
- super(NLayerDiscriminator, self).__init__()
- if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
- use_bias = norm_layer.func == nn.InstanceNorm2d
- else:
- use_bias = norm_layer == nn.InstanceNorm2d
-
- kw = 4
- padw = 1
- sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
- nf_mult = 1
- nf_mult_prev = 1
- for n in range(1, n_layers): # gradually increase the number of filters
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n_layers, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
- self.model = nn.Sequential(*sequence)
-
- def forward(self, input):
- """Standard forward."""
- return self.model(input)
-
-
-class PixelDiscriminator(nn.Module):
- """Defines a 1x1 PatchGAN discriminator (pixelGAN)"""
-
- def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d):
- """Construct a 1x1 PatchGAN discriminator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- ndf (int) -- the number of filters in the last conv layer
- norm_layer -- normalization layer
- """
- super(PixelDiscriminator, self).__init__()
- if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
- use_bias = norm_layer.func == nn.InstanceNorm2d
- else:
- use_bias = norm_layer == nn.InstanceNorm2d
-
- self.net = [
- nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0),
- nn.LeakyReLU(0.2, True),
- nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias),
- norm_layer(ndf * 2),
- nn.LeakyReLU(0.2, True),
- nn.Conv2d(ndf * 2, 1, kernel_size=1, stride=1, padding=0, bias=use_bias)]
-
- self.net = nn.Sequential(*self.net)
-
- def forward(self, input):
- """Standard forward."""
- return self.net(input)
diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/weights/README.md b/spaces/dawood17/SayBot_Enchancer/CodeFormer/weights/README.md
deleted file mode 100644
index 67ad334bd672eeb9f82813cd54e8885331bbb2f2..0000000000000000000000000000000000000000
--- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/weights/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Weights
-
-Put the downloaded pre-trained models in this folder.
\ No newline at end of file
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/qu2cu/cli.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/qu2cu/cli.py
deleted file mode 100644
index a07fd6dcd0d8256b4bb8db45a8d88cdf2d381ff2..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/qu2cu/cli.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import os
-import argparse
-import logging
-from fontTools.misc.cliTools import makeOutputFileName
-from fontTools.ttLib import TTFont
-from fontTools.pens.qu2cuPen import Qu2CuPen
-from fontTools.pens.ttGlyphPen import TTGlyphPen
-import fontTools
-
-
-logger = logging.getLogger("fontTools.qu2cu")
-
-
-def _font_to_cubic(input_path, output_path=None, **kwargs):
- font = TTFont(input_path)
- logger.info("Converting curves for %s", input_path)
-
- stats = {} if kwargs["dump_stats"] else None
- qu2cu_kwargs = {
- "stats": stats,
- "max_err": kwargs["max_err_em"] * font["head"].unitsPerEm,
- "all_cubic": kwargs["all_cubic"],
- }
-
- assert "gvar" not in font, "Cannot convert variable font"
- glyphSet = font.getGlyphSet()
- glyphOrder = font.getGlyphOrder()
- glyf = font["glyf"]
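-    # Redraw every glyph through Qu2CuPen, which approximates the quadratic splines
-    # with cubic curves within the requested error, then store the converted outline
-    # back into the glyf table.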
- for glyphName in glyphOrder:
- glyph = glyphSet[glyphName]
- ttpen = TTGlyphPen(glyphSet)
- pen = Qu2CuPen(ttpen, **qu2cu_kwargs)
- glyph.draw(pen)
- glyf[glyphName] = ttpen.glyph(dropImpliedOnCurves=True)
-
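-    # glyphDataFormat 1 marks the glyf table as using the extended format that
-    # allows cubic curves.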
- font["head"].glyphDataFormat = 1
-
- if kwargs["dump_stats"]:
- logger.info("Stats: %s", stats)
-
- logger.info("Saving %s", output_path)
- font.save(output_path)
-
-
-def main(args=None):
- """Convert an OpenType font from quadratic to cubic curves"""
- parser = argparse.ArgumentParser(prog="qu2cu")
- parser.add_argument("--version", action="version", version=fontTools.__version__)
- parser.add_argument(
- "infiles",
- nargs="+",
- metavar="INPUT",
- help="one or more input TTF source file(s).",
- )
- parser.add_argument("-v", "--verbose", action="count", default=0)
- parser.add_argument(
- "-e",
- "--conversion-error",
- type=float,
- metavar="ERROR",
- default=0.001,
-        help="maximum approximation error measured in EM (default: 0.001)",
- )
- parser.add_argument(
- "-c",
- "--all-cubic",
- default=False,
- action="store_true",
- help="whether to only use cubic curves",
- )
-
- output_parser = parser.add_mutually_exclusive_group()
- output_parser.add_argument(
- "-o",
- "--output-file",
- default=None,
- metavar="OUTPUT",
- help=("output filename for the converted TTF."),
- )
- output_parser.add_argument(
- "-d",
- "--output-dir",
- default=None,
- metavar="DIRECTORY",
- help="output directory where to save converted TTFs",
- )
-
- options = parser.parse_args(args)
-
- if not options.verbose:
- level = "WARNING"
- elif options.verbose == 1:
- level = "INFO"
- else:
- level = "DEBUG"
- logging.basicConfig(level=level)
-
- if len(options.infiles) > 1 and options.output_file:
-        parser.error("-o/--output-file can't be used with multiple inputs")
-
- if options.output_dir:
- output_dir = options.output_dir
- if not os.path.exists(output_dir):
- os.mkdir(output_dir)
- elif not os.path.isdir(output_dir):
- parser.error("'%s' is not a directory" % output_dir)
- output_paths = [
- os.path.join(output_dir, os.path.basename(p)) for p in options.infiles
- ]
- elif options.output_file:
- output_paths = [options.output_file]
- else:
- output_paths = [
- makeOutputFileName(p, overWrite=True, suffix=".cubic")
- for p in options.infiles
- ]
-
- kwargs = dict(
- dump_stats=options.verbose > 0,
- max_err_em=options.conversion_error,
- all_cubic=options.all_cubic,
- )
-
- for input_path, output_path in zip(options.infiles, output_paths):
- _font_to_cubic(input_path, output_path, **kwargs)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_c_m_a_p.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_c_m_a_p.py
deleted file mode 100644
index 6c00aaf63dea48bd96e718809319f3e27c08567e..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_c_m_a_p.py
+++ /dev/null
@@ -1,1578 +0,0 @@
-from fontTools.misc.textTools import bytesjoin, safeEval, readHex
-from fontTools.misc.encodingTools import getEncoding
-from fontTools.ttLib import getSearchRange
-from fontTools.unicode import Unicode
-from . import DefaultTable
-import sys
-import struct
-import array
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-def _make_map(font, chars, gids):
- assert len(chars) == len(gids)
- glyphNames = font.getGlyphNameMany(gids)
- cmap = {}
- for char, gid, name in zip(chars, gids, glyphNames):
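-        # Glyph index 0 is .notdef; characters mapped to it are simply left out.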
- if gid == 0:
- continue
- cmap[char] = name
- return cmap
-
-
-class table__c_m_a_p(DefaultTable.DefaultTable):
- """Character to Glyph Index Mapping Table
-
-    This class represents the ``cmap``
- table, which maps between input characters (in Unicode or other system encodings)
- and glyphs within the font. The ``cmap`` table contains one or more subtables
-    which determine the mapping of characters to glyphs across different platforms
- and encoding systems.
-
- ``table__c_m_a_p`` objects expose an accessor ``.tables`` which provides access
- to the subtables, although it is normally easier to retrieve individual subtables
- through the utility methods described below. To add new subtables to a font,
- first determine the subtable format (if in doubt use format 4 for glyphs within
- the BMP, format 12 for glyphs outside the BMP, and format 14 for Unicode Variation
- Sequences) construct subtable objects with ``CmapSubtable.newSubtable(format)``,
-    Sequences), construct subtable objects with ``CmapSubtable.newSubtable(format)``,
-
- Within a subtable, the mapping of characters to glyphs is provided by the ``.cmap``
- attribute.
-
- Example::
-
- cmap4_0_3 = CmapSubtable.newSubtable(4)
- cmap4_0_3.platformID = 0
- cmap4_0_3.platEncID = 3
- cmap4_0_3.language = 0
- cmap4_0_3.cmap = { 0xC1: "Aacute" }
-
- cmap = newTable("cmap")
- cmap.tableVersion = 0
- cmap.tables = [cmap4_0_3]
- """
-
- def getcmap(self, platformID, platEncID):
- """Returns the first subtable which matches the given platform and encoding.
-
- Args:
- platformID (int): The platform ID. Use 0 for Unicode, 1 for Macintosh
- (deprecated for new fonts), 2 for ISO (deprecated) and 3 for Windows.
- encodingID (int): Encoding ID. Interpretation depends on the platform ID.
- See the OpenType specification for details.
-
- Returns:
- An object which is a subclass of :py:class:`CmapSubtable` if a matching
- subtable is found within the font, or ``None`` otherwise.
- """
-
- for subtable in self.tables:
- if subtable.platformID == platformID and subtable.platEncID == platEncID:
- return subtable
- return None # not found
-
- def getBestCmap(
- self,
- cmapPreferences=(
- (3, 10),
- (0, 6),
- (0, 4),
- (3, 1),
- (0, 3),
- (0, 2),
- (0, 1),
- (0, 0),
- ),
- ):
- """Returns the 'best' Unicode cmap dictionary available in the font
- or ``None``, if no Unicode cmap subtable is available.
-
- By default it will search for the following (platformID, platEncID)
- pairs in order::
-
- (3, 10), # Windows Unicode full repertoire
- (0, 6), # Unicode full repertoire (format 13 subtable)
- (0, 4), # Unicode 2.0 full repertoire
- (3, 1), # Windows Unicode BMP
- (0, 3), # Unicode 2.0 BMP
- (0, 2), # Unicode ISO/IEC 10646
- (0, 1), # Unicode 1.1
- (0, 0) # Unicode 1.0
-
- This particular order matches what HarfBuzz uses to choose what
- subtable to use by default. This order prefers the largest-repertoire
- subtable, and among those, prefers the Windows-platform over the
- Unicode-platform as the former has wider support.
-
- This order can be customized via the ``cmapPreferences`` argument.
- """
- for platformID, platEncID in cmapPreferences:
- cmapSubtable = self.getcmap(platformID, platEncID)
- if cmapSubtable is not None:
- return cmapSubtable.cmap
- return None # None of the requested cmap subtables were found
-
- def buildReversed(self):
- """Builds a reverse mapping dictionary
-
- Iterates over all Unicode cmap tables and returns a dictionary mapping
- glyphs to sets of codepoints, such as::
-
- {
- 'one': {0x31}
- 'A': {0x41,0x391}
- }
-
- The values are sets of Unicode codepoints because
- some fonts map different codepoints to the same glyph.
- For example, ``U+0041 LATIN CAPITAL LETTER A`` and ``U+0391
- GREEK CAPITAL LETTER ALPHA`` are sometimes the same glyph.
- """
- result = {}
- for subtable in self.tables:
- if subtable.isUnicode():
- for codepoint, name in subtable.cmap.items():
- result.setdefault(name, set()).add(codepoint)
- return result
-
- def decompile(self, data, ttFont):
- tableVersion, numSubTables = struct.unpack(">HH", data[:4])
- self.tableVersion = int(tableVersion)
- self.tables = tables = []
- seenOffsets = {}
- for i in range(numSubTables):
- platformID, platEncID, offset = struct.unpack(
- ">HHl", data[4 + i * 8 : 4 + (i + 1) * 8]
- )
- platformID, platEncID = int(platformID), int(platEncID)
- format, length = struct.unpack(">HH", data[offset : offset + 4])
- if format in [8, 10, 12, 13]:
- format, reserved, length = struct.unpack(
- ">HHL", data[offset : offset + 8]
- )
- elif format in [14]:
- format, length = struct.unpack(">HL", data[offset : offset + 6])
-
- if not length:
- log.error(
- "cmap subtable is reported as having zero length: platformID %s, "
- "platEncID %s, format %s offset %s. Skipping table.",
- platformID,
- platEncID,
- format,
- offset,
- )
- continue
- table = CmapSubtable.newSubtable(format)
- table.platformID = platformID
- table.platEncID = platEncID
- # Note that by default we decompile only the subtable header info;
- # any other data gets decompiled only when an attribute of the
- # subtable is referenced.
- table.decompileHeader(data[offset : offset + int(length)], ttFont)
- if offset in seenOffsets:
- table.data = None # Mark as decompiled
- table.cmap = tables[seenOffsets[offset]].cmap
- else:
- seenOffsets[offset] = i
- tables.append(table)
- if ttFont.lazy is False: # Be lazy for None and True
- self.ensureDecompiled()
-
- def ensureDecompiled(self, recurse=False):
- # The recurse argument is unused, but part of the signature of
- # ensureDecompiled across the library.
- for st in self.tables:
- st.ensureDecompiled()
-
- def compile(self, ttFont):
- self.tables.sort() # sort according to the spec; see CmapSubtable.__lt__()
- numSubTables = len(self.tables)
- totalOffset = 4 + 8 * numSubTables
- data = struct.pack(">HH", self.tableVersion, numSubTables)
- tableData = b""
- seen = (
- {}
- ) # Some tables are the same object reference. Don't compile them twice.
- done = (
- {}
- ) # Some tables are different objects, but compile to the same data chunk
- for table in self.tables:
- offset = seen.get(id(table.cmap))
- if offset is None:
- chunk = table.compile(ttFont)
- offset = done.get(chunk)
- if offset is None:
- offset = seen[id(table.cmap)] = done[chunk] = totalOffset + len(
- tableData
- )
- tableData = tableData + chunk
- data = data + struct.pack(">HHl", table.platformID, table.platEncID, offset)
- return data + tableData
-
- def toXML(self, writer, ttFont):
- writer.simpletag("tableVersion", version=self.tableVersion)
- writer.newline()
- for table in self.tables:
- table.toXML(writer, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "tableVersion":
- self.tableVersion = safeEval(attrs["version"])
- return
- if name[:12] != "cmap_format_":
- return
- if not hasattr(self, "tables"):
- self.tables = []
- format = safeEval(name[12:])
- table = CmapSubtable.newSubtable(format)
- table.platformID = safeEval(attrs["platformID"])
- table.platEncID = safeEval(attrs["platEncID"])
- table.fromXML(name, attrs, content, ttFont)
- self.tables.append(table)
-
-
-class CmapSubtable(object):
- """Base class for all cmap subtable formats.
-
- Subclasses which handle the individual subtable formats are named
- ``cmap_format_0``, ``cmap_format_2`` etc. Use :py:meth:`getSubtableClass`
- to retrieve the concrete subclass, or :py:meth:`newSubtable` to get a
- new subtable object for a given format.
-
- The object exposes a ``.cmap`` attribute, which contains a dictionary mapping
- character codepoints to glyph names.
- """
-
- @staticmethod
- def getSubtableClass(format):
- """Return the subtable class for a format."""
- return cmap_classes.get(format, cmap_format_unknown)
-
- @staticmethod
- def newSubtable(format):
-        """Return a new instance of a subtable for the given format."""
- subtableClass = CmapSubtable.getSubtableClass(format)
- return subtableClass(format)
-
- def __init__(self, format):
- self.format = format
- self.data = None
- self.ttFont = None
- self.platformID = None #: The platform ID of this subtable
- self.platEncID = None #: The encoding ID of this subtable (interpretation depends on ``platformID``)
- self.language = (
- None #: The language ID of this subtable (Macintosh platform only)
- )
-
- def ensureDecompiled(self, recurse=False):
- # The recurse argument is unused, but part of the signature of
- # ensureDecompiled across the library.
- if self.data is None:
- return
- self.decompile(None, None) # use saved data.
- self.data = None # Once this table has been decompiled, make sure we don't
- # just return the original data. Also avoids recursion when
- # called with an attribute that the cmap subtable doesn't have.
-
- def __getattr__(self, attr):
- # allow lazy decompilation of subtables.
- if attr[:2] == "__": # don't handle requests for member functions like '__lt__'
- raise AttributeError(attr)
- if self.data is None:
- raise AttributeError(attr)
- self.ensureDecompiled()
- return getattr(self, attr)
-
- def decompileHeader(self, data, ttFont):
- format, length, language = struct.unpack(">HHH", data[:6])
- assert (
- len(data) == length
- ), "corrupt cmap table format %d (data length: %d, header length: %d)" % (
- format,
- len(data),
- length,
- )
- self.format = int(format)
- self.length = int(length)
- self.language = int(language)
- self.data = data[6:]
- self.ttFont = ttFont
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ("language", self.language),
- ],
- )
- writer.newline()
- codes = sorted(self.cmap.items())
- self._writeCodes(codes, writer)
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def getEncoding(self, default=None):
- """Returns the Python encoding name for this cmap subtable based on its platformID,
- platEncID, and language. If encoding for these values is not known, by default
- ``None`` is returned. That can be overridden by passing a value to the ``default``
- argument.
-
- Note that if you want to choose a "preferred" cmap subtable, most of the time
- ``self.isUnicode()`` is what you want as that one only returns true for the modern,
- commonly used, Unicode-compatible triplets, not the legacy ones.
- """
- return getEncoding(self.platformID, self.platEncID, self.language, default)
-
- def isUnicode(self):
- """Returns true if the characters are interpreted as Unicode codepoints."""
- return self.platformID == 0 or (
- self.platformID == 3 and self.platEncID in [0, 1, 10]
- )
-
- def isSymbol(self):
- """Returns true if the subtable is for the Symbol encoding (3,0)"""
- return self.platformID == 3 and self.platEncID == 0
-
- def _writeCodes(self, codes, writer):
- isUnicode = self.isUnicode()
- for code, name in codes:
- writer.simpletag("map", code=hex(code), name=name)
- if isUnicode:
- writer.comment(Unicode[code])
- writer.newline()
-
- def __lt__(self, other):
- if not isinstance(other, CmapSubtable):
- return NotImplemented
-
- # implemented so that list.sort() sorts according to the spec.
- selfTuple = (
- getattr(self, "platformID", None),
- getattr(self, "platEncID", None),
- getattr(self, "language", None),
- self.__dict__,
- )
- otherTuple = (
- getattr(other, "platformID", None),
- getattr(other, "platEncID", None),
- getattr(other, "language", None),
- other.__dict__,
- )
- return selfTuple < otherTuple
-
-
-class cmap_format_0(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
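-        # Format 0 is a fixed-size table: a 6-byte header followed by 256 one-byte
-        # glyph indices, i.e. 262 bytes in total.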
- assert 262 == self.length, "Format 0 cmap subtable not 262 bytes"
- gids = array.array("B")
- gids.frombytes(self.data)
- charCodes = list(range(len(gids)))
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return struct.pack(">HHH", 0, 262, self.language) + self.data
-
- cmap = self.cmap
- assert set(cmap.keys()).issubset(range(256))
- getGlyphID = ttFont.getGlyphID
- valueList = [getGlyphID(cmap[i]) if i in cmap else 0 for i in range(256)]
-
- gids = array.array("B", valueList)
- data = struct.pack(">HHH", 0, 262, self.language) + gids.tobytes()
- assert len(data) == 262
- return data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-subHeaderFormat = ">HHhH"
-
-
-class SubHeader(object):
- def __init__(self):
- self.firstCode = None
- self.entryCount = None
- self.idDelta = None
- self.idRangeOffset = None
- self.glyphIndexArray = []
-
-
-class cmap_format_2(CmapSubtable):
- def setIDDelta(self, subHeader):
- subHeader.idDelta = 0
- # find the minGI which is not zero.
- minGI = subHeader.glyphIndexArray[0]
- for gid in subHeader.glyphIndexArray:
- if (gid != 0) and (gid < minGI):
- minGI = gid
- # The lowest gid in glyphIndexArray, after subtracting idDelta, must be 1.
- # idDelta is a short, and must be between -32K and 32K. minGI can be between 1 and 64K.
- # We would like to pick an idDelta such that the first glyphArray GID is 1,
- # so that we are more likely to be able to combine glypharray GID subranges.
- # This means that we have a problem when minGI is > 32K
- # Since the final gi is reconstructed from the glyphArray GID by:
- # (short)finalGID = (gid + idDelta) % 0x10000),
- # we can get from a glypharray GID of 1 to a final GID of 65K by subtracting 2, and casting the
- # negative number to an unsigned short.
-
- if minGI > 1:
- if minGI > 0x7FFF:
- subHeader.idDelta = -(0x10000 - minGI) - 1
- else:
- subHeader.idDelta = minGI - 1
- idDelta = subHeader.idDelta
- for i in range(subHeader.entryCount):
- gid = subHeader.glyphIndexArray[i]
- if gid > 0:
- subHeader.glyphIndexArray[i] = gid - idDelta
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- subHeaderKeys = []
- maxSubHeaderindex = 0
- # get the key array, and determine the number of subHeaders.
- allKeys = array.array("H")
- allKeys.frombytes(data[:512])
- data = data[512:]
- if sys.byteorder != "big":
- allKeys.byteswap()
- subHeaderKeys = [key // 8 for key in allKeys]
- maxSubHeaderindex = max(subHeaderKeys)
-
- # Load subHeaders
- subHeaderList = []
- pos = 0
- for i in range(maxSubHeaderindex + 1):
- subHeader = SubHeader()
- (
- subHeader.firstCode,
- subHeader.entryCount,
- subHeader.idDelta,
- subHeader.idRangeOffset,
- ) = struct.unpack(subHeaderFormat, data[pos : pos + 8])
- pos += 8
- giDataPos = pos + subHeader.idRangeOffset - 2
- giList = array.array("H")
- giList.frombytes(data[giDataPos : giDataPos + subHeader.entryCount * 2])
- if sys.byteorder != "big":
- giList.byteswap()
- subHeader.glyphIndexArray = giList
- subHeaderList.append(subHeader)
- # How this gets processed.
- # Charcodes may be one or two bytes.
- # The first byte of a charcode is mapped through the subHeaderKeys, to select
- # a subHeader. For any subheader but 0, the next byte is then mapped through the
- # selected subheader. If subheader Index 0 is selected, then the byte itself is
- # mapped through the subheader, and there is no second byte.
-        # Then assume that the subsequent byte is the first byte of the next charcode, and repeat.
- #
- # Each subheader references a range in the glyphIndexArray whose length is entryCount.
-        # The range in glyphIndexArray referenced by a subheader may overlap with the range in glyphIndexArray
- # referenced by another subheader.
- # The only subheader that will be referenced by more than one first-byte value is the subheader
-        # that maps the entire range of glyphID values to glyphIndex 0, e.g. notdef:
- # {firstChar 0, EntryCount 0,idDelta 0,idRangeOffset xx}
-        # A byte being mapped through a subheader is treated as an index into a mapping of array index to font glyphIndex.
- # A subheader specifies a subrange within (0...256) by the
- # firstChar and EntryCount values. If the byte value is outside the subrange, then the glyphIndex is zero
- # (e.g. glyph not in font).
- # If the byte index is in the subrange, then an offset index is calculated as (byteIndex - firstChar).
- # The index to glyphIndex mapping is a subrange of the glyphIndexArray. You find the start of the subrange by
- # counting idRangeOffset bytes from the idRangeOffset word. The first value in this subrange is the
- # glyphIndex for the index firstChar. The offset index should then be used in this array to get the glyphIndex.
- # Example for Logocut-Medium
- # first byte of charcode = 129; selects subheader 1.
- # subheader 1 = {firstChar 64, EntryCount 108,idDelta 42,idRangeOffset 0252}
- # second byte of charCode = 66
- # the index offset = 66-64 = 2.
- # The subrange of the glyphIndexArray starting at 0x0252 bytes from the idRangeOffset word is:
- # [glyphIndexArray index], [subrange array index] = glyphIndex
- # [256], [0]=1 from charcode [129, 64]
- # [257], [1]=2 from charcode [129, 65]
- # [258], [2]=3 from charcode [129, 66]
- # [259], [3]=4 from charcode [129, 67]
- # So, the glyphIndex = 3 from the array. Then if idDelta is not zero and the glyph ID is not zero,
- # add it to the glyphID to get the final glyphIndex
- # value. In this case the final glyph index = 3+ 42 -> 45 for the final glyphIndex. Whew!
-
- self.data = b""
- cmap = {}
- notdefGI = 0
- for firstByte in range(256):
- subHeadindex = subHeaderKeys[firstByte]
- subHeader = subHeaderList[subHeadindex]
- if subHeadindex == 0:
- if (firstByte < subHeader.firstCode) or (
- firstByte >= subHeader.firstCode + subHeader.entryCount
- ):
- continue # gi is notdef.
- else:
- charCode = firstByte
- offsetIndex = firstByte - subHeader.firstCode
- gi = subHeader.glyphIndexArray[offsetIndex]
- if gi != 0:
- gi = (gi + subHeader.idDelta) % 0x10000
- else:
- continue # gi is notdef.
- cmap[charCode] = gi
- else:
- if subHeader.entryCount:
- charCodeOffset = firstByte * 256 + subHeader.firstCode
- for offsetIndex in range(subHeader.entryCount):
- charCode = charCodeOffset + offsetIndex
- gi = subHeader.glyphIndexArray[offsetIndex]
- if gi != 0:
- gi = (gi + subHeader.idDelta) % 0x10000
- else:
- continue
- cmap[charCode] = gi
- # If not subHeader.entryCount, then all char codes with this first byte are
- # mapped to .notdef. We can skip this subtable, and leave the glyphs un-encoded, which is the
- # same as mapping it to .notdef.
-
- gids = list(cmap.values())
- charCodes = list(cmap.keys())
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
- kEmptyTwoCharCodeRange = -1
- notdefGI = 0
-
- items = sorted(self.cmap.items())
- charCodes = [item[0] for item in items]
- names = [item[1] for item in items]
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 2 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
-
- # Process the (char code to gid) item list in char code order.
- # By definition, all one byte char codes map to subheader 0.
-        # For all the two byte char codes, we assume that the first byte maps to the empty subhead (with an entry count of 0,
- # which defines all char codes in its range to map to notdef) unless proven otherwise.
- # Note that since the char code items are processed in char code order, all the char codes with the
- # same first byte are in sequential order.
-
- subHeaderKeys = [
- kEmptyTwoCharCodeRange for x in range(256)
- ] # list of indices into subHeaderList.
- subHeaderList = []
-
-        # We force this subheader entry 0 to exist in the subHeaderList in the case where someone comes up
- # with a cmap where all the one byte char codes map to notdef,
- # with the result that the subhead 0 would not get created just by processing the item list.
- charCode = charCodes[0]
- if charCode > 255:
- subHeader = SubHeader()
- subHeader.firstCode = 0
- subHeader.entryCount = 0
- subHeader.idDelta = 0
- subHeader.idRangeOffset = 0
- subHeaderList.append(subHeader)
-
- lastFirstByte = -1
- items = zip(charCodes, gids)
- for charCode, gid in items:
- if gid == 0:
- continue
- firstbyte = charCode >> 8
- secondByte = charCode & 0x00FF
-
- if (
- firstbyte != lastFirstByte
- ): # Need to update the current subhead, and start a new one.
- if lastFirstByte > -1:
- # fix GI's and iDelta of current subheader.
- self.setIDDelta(subHeader)
-
-                    # If it was subheader 0 for one-byte charCodes, then we need to set the subHeaderKeys value to zero
- # for the indices matching the char codes.
- if lastFirstByte == 0:
- for index in range(subHeader.entryCount):
- charCode = subHeader.firstCode + index
- subHeaderKeys[charCode] = 0
-
- assert subHeader.entryCount == len(
- subHeader.glyphIndexArray
- ), "Error - subhead entry count does not match len of glyphID subrange."
- # init new subheader
- subHeader = SubHeader()
- subHeader.firstCode = secondByte
- subHeader.entryCount = 1
- subHeader.glyphIndexArray.append(gid)
- subHeaderList.append(subHeader)
- subHeaderKeys[firstbyte] = len(subHeaderList) - 1
- lastFirstByte = firstbyte
- else:
- # need to fill in with notdefs all the code points between the last charCode and the current charCode.
- codeDiff = secondByte - (subHeader.firstCode + subHeader.entryCount)
- for i in range(codeDiff):
- subHeader.glyphIndexArray.append(notdefGI)
- subHeader.glyphIndexArray.append(gid)
- subHeader.entryCount = subHeader.entryCount + codeDiff + 1
-
-        # fix GI's and idDelta of last subheader that we added to the subheader array.
- self.setIDDelta(subHeader)
-
- # Now we add a final subheader for the subHeaderKeys which maps to empty two byte charcode ranges.
- subHeader = SubHeader()
- subHeader.firstCode = 0
- subHeader.entryCount = 0
- subHeader.idDelta = 0
- subHeader.idRangeOffset = 2
- subHeaderList.append(subHeader)
- emptySubheadIndex = len(subHeaderList) - 1
- for index in range(256):
- if subHeaderKeys[index] == kEmptyTwoCharCodeRange:
- subHeaderKeys[index] = emptySubheadIndex
- # Since this is the last subheader, the GlyphIndex Array starts two bytes after the start of the
- # idRangeOffset word of this subHeader. We can safely point to the first entry in the GlyphIndexArray,
- # since the first subrange of the GlyphIndexArray is for subHeader 0, which always starts with
- # charcode 0 and GID 0.
-
- idRangeOffset = (
- len(subHeaderList) - 1
- ) * 8 + 2 # offset to beginning of glyphIDArray from first subheader idRangeOffset.
- subheadRangeLen = (
- len(subHeaderList) - 1
-        )  # skip last special empty-set subheader; we've already hardcoded its idRangeOffset to 2.
- for index in range(subheadRangeLen):
- subHeader = subHeaderList[index]
- subHeader.idRangeOffset = 0
- for j in range(index):
- prevSubhead = subHeaderList[j]
- if (
- prevSubhead.glyphIndexArray == subHeader.glyphIndexArray
- ): # use the glyphIndexArray subarray
- subHeader.idRangeOffset = (
- prevSubhead.idRangeOffset - (index - j) * 8
- )
- subHeader.glyphIndexArray = []
- break
- if subHeader.idRangeOffset == 0: # didn't find one.
- subHeader.idRangeOffset = idRangeOffset
- idRangeOffset = (
- idRangeOffset - 8
- ) + subHeader.entryCount * 2 # one less subheader, one more subArray.
- else:
- idRangeOffset = idRangeOffset - 8 # one less subheader
-
- # Now we can write out the data!
- length = (
- 6 + 512 + 8 * len(subHeaderList)
- ) # header, 256 subHeaderKeys, and subheader array.
- for subhead in subHeaderList[:-1]:
- length = (
- length + len(subhead.glyphIndexArray) * 2
-            )  # We can't use subhead.entryCount, as some of the subheads may share subArrays.
- dataList = [struct.pack(">HHH", 2, length, self.language)]
- for index in subHeaderKeys:
- dataList.append(struct.pack(">H", index * 8))
- for subhead in subHeaderList:
- dataList.append(
- struct.pack(
- subHeaderFormat,
- subhead.firstCode,
- subhead.entryCount,
- subhead.idDelta,
- subhead.idRangeOffset,
- )
- )
- for subhead in subHeaderList[:-1]:
- for gi in subhead.glyphIndexArray:
- dataList.append(struct.pack(">H", gi))
- data = bytesjoin(dataList)
- assert len(data) == length, (
- "Error: cmap format 2 is not same length as calculated! actual: "
- + str(len(data))
- + " calc : "
- + str(length)
- )
- return data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-cmap_format_4_format = ">7H"
-
-# uint16 endCode[segCount] # Ending character code for each segment, last = 0xFFFF.
-# uint16 reservedPad # This value should be zero
-# uint16 startCode[segCount] # Starting character code for each segment
-# uint16 idDelta[segCount] # Delta for all character codes in segment
-# uint16 idRangeOffset[segCount] # Offset in bytes to glyph indexArray, or 0
-# uint16 glyphIndexArray[variable] # Glyph index array
-
-
-def splitRange(startCode, endCode, cmap):
- # Try to split a range of character codes into subranges with consecutive
- # glyph IDs in such a way that the cmap4 subtable can be stored "most"
- # efficiently. I can't prove I've got the optimal solution, but it seems
- # to do well with the fonts I tested: none became bigger, many became smaller.
- if startCode == endCode:
- return [], [endCode]
-
- lastID = cmap[startCode]
- lastCode = startCode
- inOrder = None
- orderedBegin = None
- subRanges = []
-
- # Gather subranges in which the glyph IDs are consecutive.
- for code in range(startCode + 1, endCode + 1):
- glyphID = cmap[code]
-
- if glyphID - 1 == lastID:
- if inOrder is None or not inOrder:
- inOrder = 1
- orderedBegin = lastCode
- else:
- if inOrder:
- inOrder = 0
- subRanges.append((orderedBegin, lastCode))
- orderedBegin = None
-
- lastID = glyphID
- lastCode = code
-
- if inOrder:
- subRanges.append((orderedBegin, lastCode))
- assert lastCode == endCode
-
- # Now filter out those new subranges that would only make the data bigger.
- # A new segment costs 8 bytes, not using a new segment costs 2 bytes per
- # character.
- newRanges = []
- for b, e in subRanges:
- if b == startCode and e == endCode:
- break # the whole range, we're fine
- if b == startCode or e == endCode:
- threshold = 4 # split costs one more segment
- else:
- threshold = 8 # split costs two more segments
- if (e - b + 1) > threshold:
- newRanges.append((b, e))
- subRanges = newRanges
-
- if not subRanges:
- return [], [endCode]
-
- if subRanges[0][0] != startCode:
- subRanges.insert(0, (startCode, subRanges[0][0] - 1))
- if subRanges[-1][1] != endCode:
- subRanges.append((subRanges[-1][1] + 1, endCode))
-
- # Fill the "holes" in the segments list -- those are the segments in which
- # the glyph IDs are _not_ consecutive.
- i = 1
- while i < len(subRanges):
- if subRanges[i - 1][1] + 1 != subRanges[i][0]:
- subRanges.insert(i, (subRanges[i - 1][1] + 1, subRanges[i][0] - 1))
- i = i + 1
- i = i + 1
-
- # Transform the ranges into startCode/endCode lists.
- start = []
- end = []
- for b, e in subRanges:
- start.append(b)
- end.append(e)
- start.pop(0)
-
- assert len(start) + 1 == len(end)
- return start, end
-
-
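A minimal, hypothetical sketch of how the splitRange() helper above behaves (toy character codes and glyph IDs, not taken from the deleted module): six consecutive glyph IDs followed by five scattered ones should yield one extra segment boundary.

```python
# Hypothetical usage sketch for splitRange(); all values are made up for the example.
toy_cmap = {c: 100 + (c - 0x20) for c in range(0x20, 0x26)}          # consecutive glyph IDs
toy_cmap.update({0x26: 7, 0x27: 300, 0x28: 12, 0x29: 99, 0x2A: 5})   # scattered glyph IDs

start, end = splitRange(0x20, 0x2A, toy_cmap)
print(start, end)  # expected: [38] [37, 42] -- i.e. segments 0x20-0x25 and 0x26-0x2A
```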
-class cmap_format_4(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- (segCountX2, searchRange, entrySelector, rangeShift) = struct.unpack(
- ">4H", data[:8]
- )
- data = data[8:]
- segCount = segCountX2 // 2
-
- allCodes = array.array("H")
- allCodes.frombytes(data)
- self.data = data = None
-
- if sys.byteorder != "big":
- allCodes.byteswap()
-
- # divide the data
- endCode = allCodes[:segCount]
- allCodes = allCodes[segCount + 1 :] # the +1 is skipping the reservedPad field
- startCode = allCodes[:segCount]
- allCodes = allCodes[segCount:]
- idDelta = allCodes[:segCount]
- allCodes = allCodes[segCount:]
- idRangeOffset = allCodes[:segCount]
- glyphIndexArray = allCodes[segCount:]
- lenGIArray = len(glyphIndexArray)
-
- # build 2-byte character mapping
- charCodes = []
- gids = []
- for i in range(len(startCode) - 1): # don't do 0xffff!
- start = startCode[i]
- delta = idDelta[i]
- rangeOffset = idRangeOffset[i]
- partial = rangeOffset // 2 - start + i - len(idRangeOffset)
-
- rangeCharCodes = list(range(startCode[i], endCode[i] + 1))
- charCodes.extend(rangeCharCodes)
- if rangeOffset == 0:
- gids.extend(
- [(charCode + delta) & 0xFFFF for charCode in rangeCharCodes]
- )
- else:
- for charCode in rangeCharCodes:
- index = charCode + partial
- assert index < lenGIArray, (
- "In format 4 cmap, range (%d), the calculated index (%d) into the glyph index array is not less than the length of the array (%d) !"
- % (i, index, lenGIArray)
- )
- if glyphIndexArray[index] != 0: # if not missing glyph
- glyphID = glyphIndexArray[index] + delta
- else:
- glyphID = 0 # missing glyph
- gids.append(glyphID & 0xFFFF)
-
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
-
- charCodes = list(self.cmap.keys())
- if not charCodes:
- startCode = [0xFFFF]
- endCode = [0xFFFF]
- else:
- charCodes.sort()
- names = [self.cmap[code] for code in charCodes]
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 4 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
- cmap = {} # code:glyphID mapping
- for code, gid in zip(charCodes, gids):
- cmap[code] = gid
-
- # Build startCode and endCode lists.
- # Split the char codes in ranges of consecutive char codes, then split
- # each range in more ranges of consecutive/not consecutive glyph IDs.
- # See splitRange().
- lastCode = charCodes[0]
- endCode = []
- startCode = [lastCode]
- for charCode in charCodes[
- 1:
- ]: # skip the first code, it's the first start code
- if charCode == lastCode + 1:
- lastCode = charCode
- continue
- start, end = splitRange(startCode[-1], lastCode, cmap)
- startCode.extend(start)
- endCode.extend(end)
- startCode.append(charCode)
- lastCode = charCode
- start, end = splitRange(startCode[-1], lastCode, cmap)
- startCode.extend(start)
- endCode.extend(end)
- startCode.append(0xFFFF)
- endCode.append(0xFFFF)
-
- # build up rest of cruft
- idDelta = []
- idRangeOffset = []
- glyphIndexArray = []
- for i in range(len(endCode) - 1): # skip the closing codes (0xffff)
- indices = []
- for charCode in range(startCode[i], endCode[i] + 1):
- indices.append(cmap[charCode])
- if indices == list(range(indices[0], indices[0] + len(indices))):
- idDelta.append((indices[0] - startCode[i]) % 0x10000)
- idRangeOffset.append(0)
- else:
- idDelta.append(0)
- idRangeOffset.append(2 * (len(endCode) + len(glyphIndexArray) - i))
- glyphIndexArray.extend(indices)
- idDelta.append(1) # 0xffff + 1 == (tadaa!) 0. So this end code maps to .notdef
- idRangeOffset.append(0)
-
- # Insane.
- segCount = len(endCode)
- segCountX2 = segCount * 2
- searchRange, entrySelector, rangeShift = getSearchRange(segCount, 2)
-
- charCodeArray = array.array("H", endCode + [0] + startCode)
- idDeltaArray = array.array("H", idDelta)
- restArray = array.array("H", idRangeOffset + glyphIndexArray)
- if sys.byteorder != "big":
- charCodeArray.byteswap()
- idDeltaArray.byteswap()
- restArray.byteswap()
- data = charCodeArray.tobytes() + idDeltaArray.tobytes() + restArray.tobytes()
-
- length = struct.calcsize(cmap_format_4_format) + len(data)
- header = struct.pack(
- cmap_format_4_format,
- self.format,
- length,
- self.language,
- segCountX2,
- searchRange,
- entrySelector,
- rangeShift,
- )
- return header + data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- nameMap, attrsMap, dummyContent = element
- if nameMap != "map":
- assert 0, "Unrecognized keyword in cmap subtable"
- cmap[safeEval(attrsMap["code"])] = attrsMap["name"]
-
-
-class cmap_format_6(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- firstCode, entryCount = struct.unpack(">HH", data[:4])
- firstCode = int(firstCode)
- data = data[4:]
- # assert len(data) == 2 * entryCount # XXX not true in Apple's Helvetica!!!
- gids = array.array("H")
- gids.frombytes(data[: 2 * int(entryCount)])
- if sys.byteorder != "big":
- gids.byteswap()
- self.data = data = None
-
- charCodes = list(range(firstCode, firstCode + len(gids)))
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
- cmap = self.cmap
- codes = sorted(cmap.keys())
- if codes: # yes, there are empty cmap tables.
- codes = list(range(codes[0], codes[-1] + 1))
- firstCode = codes[0]
- valueList = [
- ttFont.getGlyphID(cmap[code]) if code in cmap else 0 for code in codes
- ]
- gids = array.array("H", valueList)
- if sys.byteorder != "big":
- gids.byteswap()
- data = gids.tobytes()
- else:
- data = b""
- firstCode = 0
- header = struct.pack(
- ">HHHHH", 6, len(data) + 10, self.language, firstCode, len(codes)
- )
- return header + data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-class cmap_format_12_or_13(CmapSubtable):
- def __init__(self, format):
- self.format = format
- self.reserved = 0
- self.data = None
- self.ttFont = None
-
- def decompileHeader(self, data, ttFont):
- format, reserved, length, language, nGroups = struct.unpack(">HHLLL", data[:16])
- assert (
- len(data) == (16 + nGroups * 12) == (length)
- ), "corrupt cmap table format %d (data length: %d, header length: %d)" % (
- self.format,
- len(data),
- length,
- )
- self.format = format
- self.reserved = reserved
- self.length = length
- self.language = language
- self.nGroups = nGroups
- self.data = data[16:]
- self.ttFont = ttFont
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- charCodes = []
- gids = []
- pos = 0
- for i in range(self.nGroups):
- startCharCode, endCharCode, glyphID = struct.unpack(
- ">LLL", data[pos : pos + 12]
- )
- pos += 12
- lenGroup = 1 + endCharCode - startCharCode
- charCodes.extend(list(range(startCharCode, endCharCode + 1)))
- gids.extend(self._computeGIDs(glyphID, lenGroup))
- self.data = data = None
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(
- ">HHLLL",
- self.format,
- self.reserved,
- self.length,
- self.language,
- self.nGroups,
- )
- + self.data
- )
- charCodes = list(self.cmap.keys())
- names = list(self.cmap.values())
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 12 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
-
- cmap = {} # code:glyphID mapping
- for code, gid in zip(charCodes, gids):
- cmap[code] = gid
-
- charCodes.sort()
- index = 0
- startCharCode = charCodes[0]
- startGlyphID = cmap[startCharCode]
- lastGlyphID = startGlyphID - self._format_step
- lastCharCode = startCharCode - 1
- nGroups = 0
- dataList = []
- maxIndex = len(charCodes)
- for index in range(maxIndex):
- charCode = charCodes[index]
- glyphID = cmap[charCode]
- if not self._IsInSameRun(glyphID, lastGlyphID, charCode, lastCharCode):
- dataList.append(
- struct.pack(">LLL", startCharCode, lastCharCode, startGlyphID)
- )
- startCharCode = charCode
- startGlyphID = glyphID
- nGroups = nGroups + 1
- lastGlyphID = glyphID
- lastCharCode = charCode
- dataList.append(struct.pack(">LLL", startCharCode, lastCharCode, startGlyphID))
- nGroups = nGroups + 1
- data = bytesjoin(dataList)
- lengthSubtable = len(data) + 16
- assert len(data) == (nGroups * 12) == (lengthSubtable - 16)
- return (
- struct.pack(
- ">HHLLL",
- self.format,
- self.reserved,
- lengthSubtable,
- self.language,
- nGroups,
- )
- + data
- )
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ("format", self.format),
- ("reserved", self.reserved),
- ("length", self.length),
- ("language", self.language),
- ("nGroups", self.nGroups),
- ],
- )
- writer.newline()
- codes = sorted(self.cmap.items())
- self._writeCodes(codes, writer)
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.format = safeEval(attrs["format"])
- self.reserved = safeEval(attrs["reserved"])
- self.length = safeEval(attrs["length"])
- self.language = safeEval(attrs["language"])
- self.nGroups = safeEval(attrs["nGroups"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-class cmap_format_12(cmap_format_12_or_13):
-
- _format_step = 1
-
- def __init__(self, format=12):
- cmap_format_12_or_13.__init__(self, format)
-
- def _computeGIDs(self, startingGlyph, numberOfGlyphs):
- return list(range(startingGlyph, startingGlyph + numberOfGlyphs))
-
- def _IsInSameRun(self, glyphID, lastGlyphID, charCode, lastCharCode):
- return (glyphID == 1 + lastGlyphID) and (charCode == 1 + lastCharCode)
-
-
-class cmap_format_13(cmap_format_12_or_13):
-
- _format_step = 0
-
- def __init__(self, format=13):
- cmap_format_12_or_13.__init__(self, format)
-
- def _computeGIDs(self, startingGlyph, numberOfGlyphs):
- return [startingGlyph] * numberOfGlyphs
-
- def _IsInSameRun(self, glyphID, lastGlyphID, charCode, lastCharCode):
- return (glyphID == lastGlyphID) and (charCode == 1 + lastCharCode)
-
-
-def cvtToUVS(threeByteString):
- data = b"\0" + threeByteString
- (val,) = struct.unpack(">L", data)
- return val
-
-
-def cvtFromUVS(val):
- assert 0 <= val < 0x1000000
- fourByteString = struct.pack(">L", val)
- return fourByteString[1:]
-
-
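As a quick illustration of the two UVS helpers above (codepoints picked for the example, not taken from a real font): they convert between a 24-bit Unicode value and its 3-byte big-endian on-disk form.

```python
# Illustrative round trip for cvtToUVS()/cvtFromUVS(); the codepoints are examples.
assert cvtToUVS(b"\x00\xfe\x0f") == 0xFE0F          # VARIATION SELECTOR-16
assert cvtFromUVS(0xFE0F) == b"\x00\xfe\x0f"        # back to the 3-byte form
assert cvtToUVS(cvtFromUVS(0x1F600)) == 0x1F600     # supplementary-plane value survives
```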
-class cmap_format_14(CmapSubtable):
- def decompileHeader(self, data, ttFont):
- format, length, numVarSelectorRecords = struct.unpack(">HLL", data[:10])
- self.data = data[10:]
- self.length = length
- self.numVarSelectorRecords = numVarSelectorRecords
- self.ttFont = ttFont
- self.language = 0xFF # has no language.
-
- def decompile(self, data, ttFont):
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
- data = self.data
-
- self.cmap = (
- {}
- ) # so that clients that expect this to exist in a cmap table won't fail.
- uvsDict = {}
- recOffset = 0
- for n in range(self.numVarSelectorRecords):
- uvs, defOVSOffset, nonDefUVSOffset = struct.unpack(
- ">3sLL", data[recOffset : recOffset + 11]
- )
- recOffset += 11
- varUVS = cvtToUVS(uvs)
- if defOVSOffset:
- startOffset = defOVSOffset - 10
- (numValues,) = struct.unpack(">L", data[startOffset : startOffset + 4])
- startOffset += 4
- for r in range(numValues):
- uv, addtlCnt = struct.unpack(
- ">3sB", data[startOffset : startOffset + 4]
- )
- startOffset += 4
- firstBaseUV = cvtToUVS(uv)
- cnt = addtlCnt + 1
- baseUVList = list(range(firstBaseUV, firstBaseUV + cnt))
- glyphList = [None] * cnt
- localUVList = zip(baseUVList, glyphList)
- try:
- uvsDict[varUVS].extend(localUVList)
- except KeyError:
- uvsDict[varUVS] = list(localUVList)
-
- if nonDefUVSOffset:
- startOffset = nonDefUVSOffset - 10
- (numRecs,) = struct.unpack(">L", data[startOffset : startOffset + 4])
- startOffset += 4
- localUVList = []
- for r in range(numRecs):
- uv, gid = struct.unpack(">3sH", data[startOffset : startOffset + 5])
- startOffset += 5
- uv = cvtToUVS(uv)
- glyphName = self.ttFont.getGlyphName(gid)
- localUVList.append((uv, glyphName))
- try:
- uvsDict[varUVS].extend(localUVList)
- except KeyError:
- uvsDict[varUVS] = localUVList
-
- self.uvsDict = uvsDict
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ],
- )
- writer.newline()
- uvsDict = self.uvsDict
- uvsList = sorted(uvsDict.keys())
- for uvs in uvsList:
- uvList = uvsDict[uvs]
- uvList.sort(key=lambda item: (item[1] is not None, item[0], item[1]))
- for uv, gname in uvList:
- attrs = [("uv", hex(uv)), ("uvs", hex(uvs))]
- if gname is not None:
- attrs.append(("name", gname))
- writer.simpletag("map", attrs)
- writer.newline()
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = 0xFF # provide a value so that CmapSubtable.__lt__() won't fail
- if not hasattr(self, "cmap"):
- self.cmap = (
- {}
- ) # so that clients that expect this to exist in a cmap table won't fail.
- if not hasattr(self, "uvsDict"):
- self.uvsDict = {}
- uvsDict = self.uvsDict
-
- # For backwards compatibility reasons we accept "None" as an indicator
- # for "default mapping", unless the font actually has a glyph named
- # "None".
- _hasGlyphNamedNone = None
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- uvs = safeEval(attrs["uvs"])
- uv = safeEval(attrs["uv"])
- gname = attrs.get("name")
- if gname == "None":
- if _hasGlyphNamedNone is None:
- _hasGlyphNamedNone = "None" in ttFont.getGlyphOrder()
- if not _hasGlyphNamedNone:
- gname = None
- try:
- uvsDict[uvs].append((uv, gname))
- except KeyError:
- uvsDict[uvs] = [(uv, gname)]
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(
- ">HLL", self.format, self.length, self.numVarSelectorRecords
- )
- + self.data
- )
-
- uvsDict = self.uvsDict
- uvsList = sorted(uvsDict.keys())
- self.numVarSelectorRecords = len(uvsList)
- offset = (
- 10 + self.numVarSelectorRecords * 11
- ) # current value is end of VarSelectorRecords block.
- data = []
- varSelectorRecords = []
- for uvs in uvsList:
- entryList = uvsDict[uvs]
-
- defList = [entry for entry in entryList if entry[1] is None]
- if defList:
- defList = [entry[0] for entry in defList]
- defOVSOffset = offset
- defList.sort()
-
- lastUV = defList[0]
- cnt = -1
- defRecs = []
- for defEntry in defList:
- cnt += 1
- if (lastUV + cnt) != defEntry:
- rec = struct.pack(">3sB", cvtFromUVS(lastUV), cnt - 1)
- lastUV = defEntry
- defRecs.append(rec)
- cnt = 0
-
- rec = struct.pack(">3sB", cvtFromUVS(lastUV), cnt)
- defRecs.append(rec)
-
- numDefRecs = len(defRecs)
- data.append(struct.pack(">L", numDefRecs))
- data.extend(defRecs)
- offset += 4 + numDefRecs * 4
- else:
- defOVSOffset = 0
-
- ndefList = [entry for entry in entryList if entry[1] is not None]
- if ndefList:
- nonDefUVSOffset = offset
- ndefList.sort()
- numNonDefRecs = len(ndefList)
- data.append(struct.pack(">L", numNonDefRecs))
- offset += 4 + numNonDefRecs * 5
-
- for uv, gname in ndefList:
- gid = ttFont.getGlyphID(gname)
- ndrec = struct.pack(">3sH", cvtFromUVS(uv), gid)
- data.append(ndrec)
- else:
- nonDefUVSOffset = 0
-
- vrec = struct.pack(">3sLL", cvtFromUVS(uvs), defOVSOffset, nonDefUVSOffset)
- varSelectorRecords.append(vrec)
-
- data = bytesjoin(varSelectorRecords) + bytesjoin(data)
- self.length = 10 + len(data)
- headerdata = struct.pack(
- ">HLL", self.format, self.length, self.numVarSelectorRecords
- )
-
- return headerdata + data
-
-
-class cmap_format_unknown(CmapSubtable):
- def toXML(self, writer, ttFont):
- cmapName = self.__class__.__name__[:12] + str(self.format)
- writer.begintag(
- cmapName,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ],
- )
- writer.newline()
- writer.dumphex(self.data)
- writer.endtag(cmapName)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.data = readHex(content)
- self.cmap = {}
-
- def decompileHeader(self, data, ttFont):
- self.language = 0 # dummy value
- self.data = data
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- def compile(self, ttFont):
- if self.data:
- return self.data
- else:
- return None
-
-
-cmap_classes = {
- 0: cmap_format_0,
- 2: cmap_format_2,
- 4: cmap_format_4,
- 6: cmap_format_6,
- 12: cmap_format_12,
- 13: cmap_format_13,
- 14: cmap_format_14,
-}
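Not part of the deleted module, but a hedged sketch of how a registry like cmap_classes is typically consumed: look up the subtable format number and fall back to cmap_format_unknown for formats the module does not model.

```python
# Hypothetical dispatch helper; the format numbers are examples.
def pick_subtable_class(fmt):
    return cmap_classes.get(fmt, cmap_format_unknown)

assert pick_subtable_class(4) is cmap_format_4
assert pick_subtable_class(8) is cmap_format_unknown   # format 8 has no dedicated class here
```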
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicode.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicode.py
deleted file mode 100644
index a9ffeefac1c9e553c53bc12346e49e7ece8d364a..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicode.py
+++ /dev/null
@@ -1,50 +0,0 @@
-def _makeunicodes(f):
- lines = iter(f.readlines())
- unicodes = {}
- for line in lines:
- if not line:
- continue
- num, name = line.split(";")[:2]
- if name[0] == "<":
- continue # "<control>", etc.
- num = int(num, 16)
- unicodes[num] = name
- return unicodes
-
-
-class _UnicodeCustom(object):
- def __init__(self, f):
- if isinstance(f, str):
- with open(f) as fd:
- codes = _makeunicodes(fd)
- else:
- codes = _makeunicodes(f)
- self.codes = codes
-
- def __getitem__(self, charCode):
- try:
- return self.codes[charCode]
- except KeyError:
- return "????"
-
-
-class _UnicodeBuiltin(object):
- def __getitem__(self, charCode):
- try:
- # use unicodedata backport to python2, if available:
- # https://github.com/mikekap/unicodedata2
- import unicodedata2 as unicodedata
- except ImportError:
- import unicodedata
- try:
- return unicodedata.name(chr(charCode))
- except ValueError:
- return "????"
-
-
-Unicode = _UnicodeBuiltin()
-
-
-def setUnicodeData(f):
- global Unicode
- Unicode = _UnicodeCustom(f)
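A short usage sketch for the module above (standalone, assuming the module has been imported): lookups return a character name, or "????" when no name exists.

```python
# Hedged usage sketch; the codepoints are arbitrary examples.
print(Unicode[ord("A")])   # LATIN CAPITAL LETTER A
print(Unicode[0x10FFFF])   # ???? (unicodedata.name() raises ValueError for unassigned codes)
```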
diff --git a/spaces/dcq/freegpt-webui/client/css/global.css b/spaces/dcq/freegpt-webui/client/css/global.css
deleted file mode 100644
index e1a25f09c0860516bb8ceca8f63d4eb0ff0d538f..0000000000000000000000000000000000000000
--- a/spaces/dcq/freegpt-webui/client/css/global.css
+++ /dev/null
@@ -1,67 +0,0 @@
-@import url("https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap");
-* {
- --font-1: "Inter", sans-serif;
- --section-gap: 24px;
- --border-radius-1: 8px;
- margin: 0;
- padding: 0;
- box-sizing: border-box;
- position: relative;
- font-family: var(--font-1);
-}
-
-.theme-light {
- --colour-1: #f5f5f5;
- --colour-2: #222222;
- --colour-3: #333333;
- --colour-4: #444444;
- --colour-5: #fafafa;
- --colour-6: #e0e0e0;
-
- --accent: #3a3a3a;
- --blur-bg: #f9f9f9;
- --blur-border: #ebebeb;
- --user-input: #333333;
- --conversations: #555555;
-}
-
-
-.theme-dark {
- --colour-1: #181818;
- --colour-2: #ccc;
- --colour-3: #dadada;
- --colour-4: #f0f0f0;
- --colour-5: #181818;
- --colour-6: #242424;
-
- --accent: #151718;
- --blur-bg: #242627;
- --blur-border: #242627;
- --user-input: #f5f5f5;
- --conversations: #555555;
-}
-
-html,
-body {
- background: var(--colour-1);
- color: var(--colour-3);
-}
-
-ol,
-ul {
- padding-left: 20px;
-}
-
-.shown {
- display: flex !important;
-}
-
-a:-webkit-any-link {
- color: var(--accent);
-}
-
-@media screen and (max-height: 720px) {
- :root {
- --section-gap: 16px;
- }
-}
diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/H2o.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/H2o.py
deleted file mode 100644
index eabf94e2dc1e6167f746a820e34c335f2aa8578e..0000000000000000000000000000000000000000
--- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/H2o.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from requests import Session
-from uuid import uuid4
-from json import loads
-import os
-import json
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt-gm.h2o.ai'
-model = ['falcon-40b', 'falcon-7b', 'llama-13b']
-supports_stream = True
-needs_auth = False
-
-models = {
- 'falcon-7b': 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3',
- 'falcon-40b': 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1',
- 'llama-13b': 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b'
-}
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- conversation = 'instruction: this is a conversation between a user and an AI assistant, respond to the latest message, referring to the conversation if needed\n'
- for message in messages:
- conversation += '%s: %s\n' % (message['role'], message['content'])
- conversation += 'assistant:'
-
- client = Session()
- client.headers = {
- 'authority': 'gpt-gm.h2o.ai',
- 'origin': 'https://gpt-gm.h2o.ai',
- 'referer': 'https://gpt-gm.h2o.ai/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest': 'document',
- 'sec-fetch-mode': 'navigate',
- 'sec-fetch-site': 'same-origin',
- 'sec-fetch-user': '?1',
- 'upgrade-insecure-requests': '1',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- }
-
- client.get('https://gpt-gm.h2o.ai/')
- response = client.post('https://gpt-gm.h2o.ai/settings', data={
- 'ethicsModalAccepted': 'true',
- 'shareConversationsWithModelAuthors': 'true',
- 'ethicsModalAcceptedAt': '',
- 'activeModel': 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1',
- 'searchEnabled': 'true',
- })
-
- headers = {
- 'authority': 'gpt-gm.h2o.ai',
- 'accept': '*/*',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'origin': 'https://gpt-gm.h2o.ai',
- 'referer': 'https://gpt-gm.h2o.ai/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- }
-
- json_data = {
- 'model': models[model]
- }
-
- response = client.post('https://gpt-gm.h2o.ai/conversation',
- headers=headers, json=json_data)
- conversationId = response.json()['conversationId']
-
-
- completion = client.post(f'https://gpt-gm.h2o.ai/conversation/{conversationId}', stream=True, json = {
- 'inputs': conversation,
- 'parameters': {
- 'temperature': kwargs.get('temperature', 0.4),
- 'truncate': kwargs.get('truncate', 2048),
- 'max_new_tokens': kwargs.get('max_new_tokens', 1024),
- 'do_sample': kwargs.get('do_sample', True),
- 'repetition_penalty': kwargs.get('repetition_penalty', 1.2),
- 'return_full_text': kwargs.get('return_full_text', False)
- },
- 'stream': True,
- 'options': {
- 'id': kwargs.get('id', str(uuid4())),
- 'response_id': kwargs.get('response_id', str(uuid4())),
- 'is_retry': False,
- 'use_cache': False,
- 'web_search_id': ''
- }
- })
-
- for line in completion.iter_lines():
- if b'data' in line:
- line = loads(line.decode('utf-8').replace('data:', ''))
- token = line['token']['text']
-
- if token == '<|endoftext|>':
- break
- else:
- yield (token)
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
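A hedged usage sketch for the provider above: _create_completion() is a generator that yields tokens until the end-of-text marker. The message content is made up, and whether the remote h2o.ai endpoint still responds as this code expects is outside the scope of the example.

```python
# Hypothetical call; assumes the H2o provider module above is importable as-is.
messages = [{'role': 'user', 'content': 'Say hello in one word.'}]
for token in _create_completion('falcon-7b', messages, stream=True, temperature=0.2):
    print(token, end='')
```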
diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/inference_bf16.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/inference_bf16.py
deleted file mode 100644
index 96ec709f433cd13dad0b93d5368d61e169b9df28..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/inference_bf16.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import argparse
-
-import intel_extension_for_pytorch as ipex
-import torch
-
-from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline
-
-
-parser = argparse.ArgumentParser("Stable Diffusion script with intel optimization", add_help=False)
-parser.add_argument("--dpm", action="store_true", help="Enable DPMSolver or not")
-parser.add_argument("--steps", default=None, type=int, help="Num inference steps")
-args = parser.parse_args()
-
-
-device = "cpu"
- prompt = "a lovely in red dress and hat, in the snowy and bright night, with many brightly lit buildings"
-
-model_id = "path-to-your-trained-model"
-pipe = StableDiffusionPipeline.from_pretrained(model_id)
-if args.dpm:
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-pipe = pipe.to(device)
-
-# to channels last
-pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
-pipe.vae = pipe.vae.to(memory_format=torch.channels_last)
-pipe.text_encoder = pipe.text_encoder.to(memory_format=torch.channels_last)
-if pipe.requires_safety_checker:
- pipe.safety_checker = pipe.safety_checker.to(memory_format=torch.channels_last)
-
-# optimize with ipex
-sample = torch.randn(2, 4, 64, 64)
-timestep = torch.rand(1) * 999
-encoder_hidden_status = torch.randn(2, 77, 768)
-input_example = (sample, timestep, encoder_hidden_status)
-try:
- pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True, sample_input=input_example)
-except Exception:
- pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)
-pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16, inplace=True)
-pipe.text_encoder = ipex.optimize(pipe.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)
-if pipe.requires_safety_checker:
- pipe.safety_checker = ipex.optimize(pipe.safety_checker.eval(), dtype=torch.bfloat16, inplace=True)
-
-# compute
-seed = 666
-generator = torch.Generator(device).manual_seed(seed)
-generate_kwargs = {"generator": generator}
-if args.steps is not None:
- generate_kwargs["num_inference_steps"] = args.steps
-
-with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
- image = pipe(prompt, **generate_kwargs).images[0]
-
-# save image
-image.save("generated.png")
diff --git a/spaces/deepset/search-all-the-docs/README.md b/spaces/deepset/search-all-the-docs/README.md
deleted file mode 100644
index bcf046a01177b6b059c5731857cc237cc1757e0a..0000000000000000000000000000000000000000
--- a/spaces/deepset/search-all-the-docs/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: SEARCH ALL THE DOCS
-emoji: 🔎
-colorFrom: yellow
-colorTo: pink
-python_version: 3.11
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: main.py
-pinned: false
----
-
-
-
-## Getting started
-
-First create your virtual env so you don't pollute your OS environment.
-This demo has only been tested with Python 3.11, so I suggest you use that.
-
-```shell
-mkvirtualenv search-all-the-docs
-workon search-all-the-docs
-```
-
-Install the dependencies:
-
-```shell
-pip install -r requirements.txt
-```
-
-Create a `.env` file with your OpenAI key:
-
-```
-OPENAI_API_KEY=""
-```
-
-And you're good to go!
-
-```shell
-streamlit run main.py
-```
diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/provider/test_base_gpt_api.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/provider/test_base_gpt_api.py
deleted file mode 100644
index 882338a01dd19250fa919f4f5e16b83f627d4a82..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/tests/metagpt/provider/test_base_gpt_api.py
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/7 17:40
-@Author : alexanderwu
-@File : test_base_gpt_api.py
-"""
-
-from metagpt.schema import Message
-
-
-def test_message():
- message = Message(role='user', content='wtf')
- assert 'role' in message.to_dict()
- assert 'user' in str(message)
diff --git a/spaces/dfyinc/GeniusChat/README.md b/spaces/dfyinc/GeniusChat/README.md
deleted file mode 100644
index 4e495b4a2b16f7e13e3985f5ad42809e4c361117..0000000000000000000000000000000000000000
--- a/spaces/dfyinc/GeniusChat/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GeniusChat
-emoji: 📊
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/diacanFperku/AutoGPT/Igo Primo 2.4.5 Europe Torrent Download LINK.md b/spaces/diacanFperku/AutoGPT/Igo Primo 2.4.5 Europe Torrent Download LINK.md
deleted file mode 100644
index c649e1b9c20fb52538078d0ac9e2c71b340fa41e..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Igo Primo 2.4.5 Europe Torrent Download LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-igo primo 2.4.5 europe torrent download Download - https://gohhs.com/2uFT7l
-
-igo primo download — iGO 2020 World maps .torrent download free Jan 08, 2020 · If ... Igo Primo 2.4.5 Eastern Europe iPhone; 2021-02-02 Maps ... 1fdad05405
-
-
-
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/training/__init__.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/training/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/diego2554/RemBG_super/rembg/sessions/dis_anime.py b/spaces/diego2554/RemBG_super/rembg/sessions/dis_anime.py
deleted file mode 100644
index a71618f03f5655488b135aeee7caf9de50cedf60..0000000000000000000000000000000000000000
--- a/spaces/diego2554/RemBG_super/rembg/sessions/dis_anime.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-from typing import List
-
-import numpy as np
-import pooch
-from PIL import Image
-from PIL.Image import Image as PILImage
-
-from .base import BaseSession
-
-
-class DisSession(BaseSession):
- def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]:
- ort_outs = self.inner_session.run(
- None,
- self.normalize(img, (0.485, 0.456, 0.406), (1.0, 1.0, 1.0), (1024, 1024)),
- )
-
- pred = ort_outs[0][:, 0, :, :]
-
- ma = np.max(pred)
- mi = np.min(pred)
-
- pred = (pred - mi) / (ma - mi)
- pred = np.squeeze(pred)
-
- mask = Image.fromarray((pred * 255).astype("uint8"), mode="L")
- mask = mask.resize(img.size, Image.LANCZOS)
-
- return [mask]
-
- @classmethod
- def download_models(cls, *args, **kwargs):
- fname = f"{cls.name()}.onnx"
- pooch.retrieve(
- "https://github.com/danielgatis/rembg/releases/download/v0.0.0/isnet-anime.onnx",
- None
- if cls.checksum_disabled(*args, **kwargs)
- else "md5:6f184e756bb3bd901c8849220a83e38e",
- fname=fname,
- path=cls.u2net_home(*args, **kwargs),
- progressbar=True,
- )
-
- return os.path.join(cls.u2net_home(), fname)
-
- @classmethod
- def name(cls, *args, **kwargs):
- return "isnet-anime"
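A hedged usage sketch for the session above, going through rembg's public new_session()/remove() helpers rather than instantiating DisSession directly; the file paths are placeholders and the model name matches DisSession.name().

```python
# Sketch under the assumption that upstream rembg's new_session()/remove() are available.
from rembg import new_session, remove

session = new_session("isnet-anime")   # resolves to the DIS anime session defined above
with open("input.png", "rb") as f_in, open("output.png", "wb") as f_out:
    f_out.write(remove(f_in.read(), session=session))
```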
diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/text/english_bert_mock.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/text/english_bert_mock.py
deleted file mode 100644
index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Miiu-Bert-Vits2/text/english_bert_mock.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import torch
-
-
-def get_bert_feature(norm_text, word2ph):
- return torch.zeros(1024, sum(word2ph))
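A quick illustration of the mock above: it ignores the text entirely and returns a zero feature matrix whose width is the total phone count.

```python
# Example call; the word2ph counts are made up.
word2ph = [2, 3, 1]
feat = get_bert_feature("ignored text", word2ph)
print(feat.shape)   # torch.Size([1024, 6])
```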
diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/score_hlr_sampler.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/score_hlr_sampler.py
deleted file mode 100644
index 11d46b97705db60fb6a4eb5fa7da10ac78acb8bc..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/score_hlr_sampler.py
+++ /dev/null
@@ -1,264 +0,0 @@
-import torch
-from mmcv.ops import nms_match
-
-from ..builder import BBOX_SAMPLERS
-from ..transforms import bbox2roi
-from .base_sampler import BaseSampler
-from .sampling_result import SamplingResult
-
-
-@BBOX_SAMPLERS.register_module()
-class ScoreHLRSampler(BaseSampler):
- r"""Importance-based Sample Reweighting (ISR_N), described in `Prime Sample
- Attention in Object Detection `_.
-
- Score hierarchical local rank (HLR) differentiates with RandomSampler in
- negative part. It firstly computes Score-HLR in a two-step way,
- then linearly maps score hlr to the loss weights.
-
- Args:
- num (int): Total number of sampled RoIs.
- pos_fraction (float): Fraction of positive samples.
- context (:class:`BaseRoIHead`): RoI head that the sampler belongs to.
- neg_pos_ub (int): Upper bound of the ratio of num negative to num
- positive, -1 means no upper bound.
- add_gt_as_proposals (bool): Whether to add ground truth as proposals.
- k (float): Power of the non-linear mapping.
- bias (float): Shift of the non-linear mapping.
- score_thr (float): Minimum score that a negative sample is to be
- considered as valid bbox.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- context,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- k=0.5,
- bias=0,
- score_thr=0.05,
- iou_thr=0.5,
- **kwargs):
- super().__init__(num, pos_fraction, neg_pos_ub, add_gt_as_proposals)
- self.k = k
- self.bias = bias
- self.score_thr = score_thr
- self.iou_thr = iou_thr
- self.context = context
- # context of cascade detectors is a list, so distinguish them here.
- if not hasattr(context, 'num_stages'):
- self.bbox_roi_extractor = context.bbox_roi_extractor
- self.bbox_head = context.bbox_head
- self.with_shared_head = context.with_shared_head
- if self.with_shared_head:
- self.shared_head = context.shared_head
- else:
- self.bbox_roi_extractor = context.bbox_roi_extractor[
- context.current_stage]
- self.bbox_head = context.bbox_head[context.current_stage]
-
- @staticmethod
- def random_choice(gallery, num):
- """Randomly select some elements from the gallery.
-
- If `gallery` is a Tensor, the returned indices will be a Tensor;
- If `gallery` is a ndarray or list, the returned indices will be a
- ndarray.
-
- Args:
- gallery (Tensor | ndarray | list): indices pool.
- num (int): expected sample num.
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- assert len(gallery) >= num
-
- is_tensor = isinstance(gallery, torch.Tensor)
- if not is_tensor:
- if torch.cuda.is_available():
- device = torch.cuda.current_device()
- else:
- device = 'cpu'
- gallery = torch.tensor(gallery, dtype=torch.long, device=device)
- perm = torch.randperm(gallery.numel(), device=gallery.device)[:num]
- rand_inds = gallery[perm]
- if not is_tensor:
- rand_inds = rand_inds.cpu().numpy()
- return rand_inds
-
- def _sample_pos(self, assign_result, num_expected, **kwargs):
- """Randomly sample some positive samples."""
- pos_inds = torch.nonzero(assign_result.gt_inds > 0).flatten()
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- return self.random_choice(pos_inds, num_expected)
-
- def _sample_neg(self,
- assign_result,
- num_expected,
- bboxes,
- feats=None,
- img_meta=None,
- **kwargs):
- """Sample negative samples.
-
- Score-HLR sampler is done in the following steps:
- 1. Take the maximum positive score prediction of each negative samples
- as s_i.
- 2. Filter out negative samples whose s_i <= score_thr; the remaining samples
- are called valid samples.
- 3. Use NMS-Match to divide valid samples into different groups,
- samples in the same group will greatly overlap with each other
- 4. Rank the matched samples in two-steps to get Score-HLR.
- (1) In the same group, rank samples with their scores.
- (2) In the same score rank across different groups,
- rank samples with their scores again.
- 5. Linearly map Score-HLR to the final label weights.
-
- Args:
- assign_result (:obj:`AssignResult`): result of assigner.
- num_expected (int): Expected number of samples.
- bboxes (Tensor): bbox to be sampled.
- feats (Tensor): Features come from FPN.
- img_meta (dict): Meta information dictionary.
- """
- neg_inds = torch.nonzero(assign_result.gt_inds == 0).flatten()
- num_neg = neg_inds.size(0)
- if num_neg == 0:
- return neg_inds, None
- with torch.no_grad():
- neg_bboxes = bboxes[neg_inds]
- neg_rois = bbox2roi([neg_bboxes])
- bbox_result = self.context._bbox_forward(feats, neg_rois)
- cls_score, bbox_pred = bbox_result['cls_score'], bbox_result[
- 'bbox_pred']
-
- ori_loss = self.bbox_head.loss(
- cls_score=cls_score,
- bbox_pred=None,
- rois=None,
- labels=neg_inds.new_full((num_neg, ),
- self.bbox_head.num_classes),
- label_weights=cls_score.new_ones(num_neg),
- bbox_targets=None,
- bbox_weights=None,
- reduction_override='none')['loss_cls']
-
- # filter out samples with the max score lower than score_thr
- max_score, argmax_score = cls_score.softmax(-1)[:, :-1].max(-1)
- valid_inds = (max_score > self.score_thr).nonzero().view(-1)
- invalid_inds = (max_score <= self.score_thr).nonzero().view(-1)
- num_valid = valid_inds.size(0)
- num_invalid = invalid_inds.size(0)
-
- num_expected = min(num_neg, num_expected)
- num_hlr = min(num_valid, num_expected)
- num_rand = num_expected - num_hlr
- if num_valid > 0:
- valid_rois = neg_rois[valid_inds]
- valid_max_score = max_score[valid_inds]
- valid_argmax_score = argmax_score[valid_inds]
- valid_bbox_pred = bbox_pred[valid_inds]
-
- # valid_bbox_pred shape: [num_valid, #num_classes, 4]
- valid_bbox_pred = valid_bbox_pred.view(
- valid_bbox_pred.size(0), -1, 4)
- selected_bbox_pred = valid_bbox_pred[range(num_valid),
- valid_argmax_score]
- pred_bboxes = self.bbox_head.bbox_coder.decode(
- valid_rois[:, 1:], selected_bbox_pred)
- pred_bboxes_with_score = torch.cat(
- [pred_bboxes, valid_max_score[:, None]], -1)
- group = nms_match(pred_bboxes_with_score, self.iou_thr)
-
- # imp: importance
- imp = cls_score.new_zeros(num_valid)
- for g in group:
- g_score = valid_max_score[g]
- # g_score is already sorted
- rank = g_score.new_tensor(range(g_score.size(0)))
- imp[g] = num_valid - rank + g_score
- _, imp_rank_inds = imp.sort(descending=True)
- _, imp_rank = imp_rank_inds.sort()
- hlr_inds = imp_rank_inds[:num_expected]
-
- if num_rand > 0:
- rand_inds = torch.randperm(num_invalid)[:num_rand]
- select_inds = torch.cat(
- [valid_inds[hlr_inds], invalid_inds[rand_inds]])
- else:
- select_inds = valid_inds[hlr_inds]
-
- neg_label_weights = cls_score.new_ones(num_expected)
-
- up_bound = max(num_expected, num_valid)
- imp_weights = (up_bound -
- imp_rank[hlr_inds].float()) / up_bound
- neg_label_weights[:num_hlr] = imp_weights
- neg_label_weights[num_hlr:] = imp_weights.min()
- neg_label_weights = (self.bias +
- (1 - self.bias) * neg_label_weights).pow(
- self.k)
- ori_selected_loss = ori_loss[select_inds]
- new_loss = ori_selected_loss * neg_label_weights
- norm_ratio = ori_selected_loss.sum() / new_loss.sum()
- neg_label_weights *= norm_ratio
- else:
- neg_label_weights = cls_score.new_ones(num_expected)
- select_inds = torch.randperm(num_neg)[:num_expected]
-
- return neg_inds[select_inds], neg_label_weights
-
- def sample(self,
- assign_result,
- bboxes,
- gt_bboxes,
- gt_labels=None,
- img_meta=None,
- **kwargs):
- """Sample positive and negative bboxes.
-
- This is a simple implementation of bbox sampling given candidates,
- assigning results and ground truth bboxes.
-
- Args:
- assign_result (:obj:`AssignResult`): Bbox assigning results.
- bboxes (Tensor): Boxes to be sampled from.
- gt_bboxes (Tensor): Ground truth bboxes.
- gt_labels (Tensor, optional): Class labels of ground truth bboxes.
-
- Returns:
- tuple[:obj:`SamplingResult`, Tensor]: Sampling result and negative
- label weights.
- """
- bboxes = bboxes[:, :4]
-
- gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8)
- if self.add_gt_as_proposals:
- bboxes = torch.cat([gt_bboxes, bboxes], dim=0)
- assign_result.add_gt_(gt_labels)
- gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8)
- gt_flags = torch.cat([gt_ones, gt_flags])
-
- num_expected_pos = int(self.num * self.pos_fraction)
- pos_inds = self.pos_sampler._sample_pos(
- assign_result, num_expected_pos, bboxes=bboxes, **kwargs)
- num_sampled_pos = pos_inds.numel()
- num_expected_neg = self.num - num_sampled_pos
- if self.neg_pos_ub >= 0:
- _pos = max(1, num_sampled_pos)
- neg_upper_bound = int(self.neg_pos_ub * _pos)
- if num_expected_neg > neg_upper_bound:
- num_expected_neg = neg_upper_bound
- neg_inds, neg_label_weights = self.neg_sampler._sample_neg(
- assign_result,
- num_expected_neg,
- bboxes,
- img_meta=img_meta,
- **kwargs)
-
- return SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes,
- assign_result, gt_flags), neg_label_weights
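A standalone numeric sketch of the Score-HLR to label-weight mapping used in _sample_neg() above (default bias=0 and k=0.5 from __init__, toy ranks; the real sampler additionally rescales the weights so the summed loss is preserved).

```python
# Hedged arithmetic sketch only; values are illustrative.
bias, k = 0.0, 0.5
num_expected = num_valid = 4
up_bound = max(num_expected, num_valid)

for imp_rank in range(num_valid):                  # 0 = highest Score-HLR
    linear = (up_bound - imp_rank) / up_bound      # 1.0, 0.75, 0.5, 0.25
    weight = (bias + (1 - bias) * linear) ** k     # square-root mapping when k = 0.5
    print(imp_rank, round(weight, 3))              # 1.0, 0.866, 0.707, 0.5
```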
diff --git a/spaces/doevent/blip/models/nlvr_encoder.py b/spaces/doevent/blip/models/nlvr_encoder.py
deleted file mode 100644
index 1946bb4a300f75afa4848f6622839445903c34a9..0000000000000000000000000000000000000000
--- a/spaces/doevent/blip/models/nlvr_encoder.py
+++ /dev/null
@@ -1,843 +0,0 @@
-import math
-import os
-import warnings
-from dataclasses import dataclass
-from typing import Optional, Tuple
-
-import torch
-from torch import Tensor, device, dtype, nn
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import CrossEntropyLoss
-import torch.nn.functional as F
-
-from transformers.activations import ACT2FN
-from transformers.file_utils import (
- ModelOutput,
-)
-from transformers.modeling_outputs import (
- BaseModelOutputWithPastAndCrossAttentions,
- BaseModelOutputWithPoolingAndCrossAttentions,
- CausalLMOutputWithCrossAttentions,
- MaskedLMOutput,
- MultipleChoiceModelOutput,
- NextSentencePredictorOutput,
- QuestionAnsweringModelOutput,
- SequenceClassifierOutput,
- TokenClassifierOutput,
-)
-from transformers.modeling_utils import (
- PreTrainedModel,
- apply_chunking_to_forward,
- find_pruneable_heads_and_indices,
- prune_linear_layer,
-)
-from transformers.utils import logging
-from transformers.models.bert.configuration_bert import BertConfig
-
-
-logger = logging.get_logger(__name__)
-
-
-class BertEmbeddings(nn.Module):
- """Construct the embeddings from word and position embeddings."""
-
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
- self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
-
- self.config = config
-
- def forward(
- self, input_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
- ):
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- input_shape = inputs_embeds.size()[:-1]
-
- seq_length = input_shape[1]
-
- if position_ids is None:
- position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
-
- if inputs_embeds is None:
- inputs_embeds = self.word_embeddings(input_ids)
-
- embeddings = inputs_embeds
-
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings += position_embeddings
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class BertSelfAttention(nn.Module):
- def __init__(self, config, is_cross_attention):
- super().__init__()
- self.config = config
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
- raise ValueError(
- "The hidden size (%d) is not a multiple of the number of attention "
- "heads (%d)" % (config.hidden_size, config.num_attention_heads)
- )
-
- self.num_attention_heads = config.num_attention_heads
- self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
- self.all_head_size = self.num_attention_heads * self.attention_head_size
-
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
- if is_cross_attention:
- self.key = nn.Linear(config.encoder_width, self.all_head_size)
- self.value = nn.Linear(config.encoder_width, self.all_head_size)
- else:
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
-
- self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- self.max_position_embeddings = config.max_position_embeddings
- self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
- self.save_attention = False
-
- def save_attn_gradients(self, attn_gradients):
- self.attn_gradients = attn_gradients
-
- def get_attn_gradients(self):
- return self.attn_gradients
-
- def save_attention_map(self, attention_map):
- self.attention_map = attention_map
-
- def get_attention_map(self):
- return self.attention_map
-
- def transpose_for_scores(self, x):
- new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
- x = x.view(*new_x_shape)
- return x.permute(0, 2, 1, 3)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- ):
- mixed_query_layer = self.query(hidden_states)
-
- # If this is instantiated as a cross-attention module, the keys
- # and values come from an encoder; the attention mask needs to be
- # such that the encoder's padding tokens are not attended to.
- is_cross_attention = encoder_hidden_states is not None
-
- if is_cross_attention:
- key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
- value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
- attention_mask = encoder_attention_mask
- elif past_key_value is not None:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
- key_layer = torch.cat([past_key_value[0], key_layer], dim=2)
- value_layer = torch.cat([past_key_value[1], value_layer], dim=2)
- else:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
-
- query_layer = self.transpose_for_scores(mixed_query_layer)
-
- past_key_value = (key_layer, value_layer)
-
- # Take the dot product between "query" and "key" to get the raw attention scores.
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
-
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- seq_length = hidden_states.size()[1]
- position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
- position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
- distance = position_ids_l - position_ids_r
- positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
- positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility
-
- if self.position_embedding_type == "relative_key":
- relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores
- elif self.position_embedding_type == "relative_key_query":
- relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key
-
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
- if attention_mask is not None:
- # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
- attention_scores = attention_scores + attention_mask
-
- # Normalize the attention scores to probabilities.
- attention_probs = nn.Softmax(dim=-1)(attention_scores)
-
- if is_cross_attention and self.save_attention:
- self.save_attention_map(attention_probs)
- attention_probs.register_hook(self.save_attn_gradients)
-
- # This is actually dropping out entire tokens to attend to, which might
- # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs_dropped = self.dropout(attention_probs)
-
- # Mask heads if we want to
- if head_mask is not None:
- attention_probs_dropped = attention_probs_dropped * head_mask
-
- context_layer = torch.matmul(attention_probs_dropped, value_layer)
-
- context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
- new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
- context_layer = context_layer.view(*new_context_layer_shape)
-
- outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
-
- outputs = outputs + (past_key_value,)
- return outputs
-
-
-class BertSelfOutput(nn.Module):
- def __init__(self, config, twin=False, merge=False):
- super().__init__()
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
- if twin:
- self.dense0 = nn.Linear(config.hidden_size, config.hidden_size)
- self.dense1 = nn.Linear(config.hidden_size, config.hidden_size)
- else:
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- if merge:
- self.act = ACT2FN[config.hidden_act]
- self.merge_layer = nn.Linear(config.hidden_size * 2, config.hidden_size)
- self.merge = True
- else:
- self.merge = False
-
- def forward(self, hidden_states, input_tensor):
- if type(hidden_states) == list:
- hidden_states0 = self.dense0(hidden_states[0])
- hidden_states1 = self.dense1(hidden_states[1])
- if self.merge:
- #hidden_states = self.merge_layer(self.act(torch.cat([hidden_states0,hidden_states1],dim=-1)))
- hidden_states = self.merge_layer(torch.cat([hidden_states0,hidden_states1],dim=-1))
- else:
- hidden_states = (hidden_states0+hidden_states1)/2
- else:
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-class BertAttention(nn.Module):
- def __init__(self, config, is_cross_attention=False, layer_num=-1):
- super().__init__()
- if is_cross_attention:
- self.self0 = BertSelfAttention(config, is_cross_attention)
- self.self1 = BertSelfAttention(config, is_cross_attention)
- else:
- self.self = BertSelfAttention(config, is_cross_attention)
- self.output = BertSelfOutput(config, twin=is_cross_attention, merge=(is_cross_attention and layer_num>=6))
- self.pruned_heads = set()
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
- )
-
- # Prune linear layers
- self.self.query = prune_linear_layer(self.self.query, index)
- self.self.key = prune_linear_layer(self.self.key, index)
- self.self.value = prune_linear_layer(self.self.value, index)
- self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
-
- # Update hyper params and store pruned heads
- self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
- self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- ):
- if type(encoder_hidden_states)==list:
- self_outputs0 = self.self0(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states[0],
- encoder_attention_mask[0],
- past_key_value,
- output_attentions,
- )
- self_outputs1 = self.self1(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states[1],
- encoder_attention_mask[1],
- past_key_value,
- output_attentions,
- )
- attention_output = self.output([self_outputs0[0],self_outputs1[0]], hidden_states)
-
- outputs = (attention_output,) + self_outputs0[1:] # add attentions if we output them
- else:
- self_outputs = self.self(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- )
- attention_output = self.output(self_outputs[0], hidden_states)
- outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them
- return outputs
-
-
-class BertIntermediate(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
- if isinstance(config.hidden_act, str):
- self.intermediate_act_fn = ACT2FN[config.hidden_act]
- else:
- self.intermediate_act_fn = config.hidden_act
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.intermediate_act_fn(hidden_states)
- return hidden_states
-
-
-class BertOutput(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(self, hidden_states, input_tensor):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-class BertLayer(nn.Module):
- def __init__(self, config, layer_num):
- super().__init__()
- self.config = config
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
- self.attention = BertAttention(config)
- self.layer_num = layer_num
- if self.config.add_cross_attention:
- self.crossattention = BertAttention(config, is_cross_attention=self.config.add_cross_attention, layer_num=layer_num)
- self.intermediate = BertIntermediate(config)
- self.output = BertOutput(config)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- mode=None,
- ):
- # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
- self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
- self_attention_outputs = self.attention(
- hidden_states,
- attention_mask,
- head_mask,
- output_attentions=output_attentions,
- past_key_value=self_attn_past_key_value,
- )
- attention_output = self_attention_outputs[0]
-
- outputs = self_attention_outputs[1:-1]
- present_key_value = self_attention_outputs[-1]
-
- if mode=='multimodal':
- assert encoder_hidden_states is not None, "encoder_hidden_states must be given for cross-attention layers"
- cross_attention_outputs = self.crossattention(
- attention_output,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- output_attentions=output_attentions,
- )
- attention_output = cross_attention_outputs[0]
- outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights
- layer_output = apply_chunking_to_forward(
- self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
- )
- outputs = (layer_output,) + outputs
-
- outputs = outputs + (present_key_value,)
-
- return outputs
-
- def feed_forward_chunk(self, attention_output):
- intermediate_output = self.intermediate(attention_output)
- layer_output = self.output(intermediate_output, attention_output)
- return layer_output
-
-
-class BertEncoder(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.config = config
- self.layer = nn.ModuleList([BertLayer(config,i) for i in range(config.num_hidden_layers)])
- self.gradient_checkpointing = False
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=False,
- output_hidden_states=False,
- return_dict=True,
- mode='multimodal',
- ):
- all_hidden_states = () if output_hidden_states else None
- all_self_attentions = () if output_attentions else None
- all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
-
- next_decoder_cache = () if use_cache else None
-
- for i in range(self.config.num_hidden_layers):
- layer_module = self.layer[i]
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- layer_head_mask = head_mask[i] if head_mask is not None else None
- past_key_value = past_key_values[i] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
- if use_cache:
- logger.warn(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs, past_key_value, output_attentions)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(layer_module),
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- mode=mode,
- )
- else:
- layer_outputs = layer_module(
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- mode=mode,
- )
-
- hidden_states = layer_outputs[0]
- if use_cache:
- next_decoder_cache += (layer_outputs[-1],)
- if output_attentions:
- all_self_attentions = all_self_attentions + (layer_outputs[1],)
-
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [
- hidden_states,
- next_decoder_cache,
- all_hidden_states,
- all_self_attentions,
- all_cross_attentions,
- ]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=next_decoder_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-class BertPooler(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.activation = nn.Tanh()
-
- def forward(self, hidden_states):
- # We "pool" the model by simply taking the hidden state corresponding
- # to the first token.
- first_token_tensor = hidden_states[:, 0]
- pooled_output = self.dense(first_token_tensor)
- pooled_output = self.activation(pooled_output)
- return pooled_output
-
-
-class BertPredictionHeadTransform(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- if isinstance(config.hidden_act, str):
- self.transform_act_fn = ACT2FN[config.hidden_act]
- else:
- self.transform_act_fn = config.hidden_act
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.transform_act_fn(hidden_states)
- hidden_states = self.LayerNorm(hidden_states)
- return hidden_states
-
-
-class BertLMPredictionHead(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.transform = BertPredictionHeadTransform(config)
-
- # The output weights are the same as the input embeddings, but there is
- # an output-only bias for each token.
- self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
-
- # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
- self.decoder.bias = self.bias
-
- def forward(self, hidden_states):
- hidden_states = self.transform(hidden_states)
- hidden_states = self.decoder(hidden_states)
- return hidden_states
-
-
-class BertOnlyMLMHead(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.predictions = BertLMPredictionHead(config)
-
- def forward(self, sequence_output):
- prediction_scores = self.predictions(sequence_output)
- return prediction_scores
-
-
-class BertPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = BertConfig
- base_model_prefix = "bert"
- _keys_to_ignore_on_load_missing = [r"position_ids"]
-
- def _init_weights(self, module):
- """ Initialize the weights """
- if isinstance(module, (nn.Linear, nn.Embedding)):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
-
-
-class BertModel(BertPreTrainedModel):
- """
- The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
- cross-attention is added between the self-attention layers, following the architecture described in `Attention is
- all you need <https://arxiv.org/abs/1706.03762>`__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
- Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
- To behave as a decoder the model needs to be initialized with the :obj:`is_decoder` argument of the configuration
- set to :obj:`True`. To be used in a Seq2Seq model, the model needs to be initialized with both the :obj:`is_decoder`
- argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an
- input to the forward pass.
- """
-
- def __init__(self, config, add_pooling_layer=True):
- super().__init__(config)
- self.config = config
-
- self.embeddings = BertEmbeddings(config)
-
- self.encoder = BertEncoder(config)
-
- self.pooler = BertPooler(config) if add_pooling_layer else None
-
- self.init_weights()
-
-
- def get_input_embeddings(self):
- return self.embeddings.word_embeddings
-
- def set_input_embeddings(self, value):
- self.embeddings.word_embeddings = value
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.layer[layer].attention.prune_heads(heads)
-
-
- def get_extended_attention_mask(self, attention_mask: Tensor, input_shape: Tuple[int], device: device, is_decoder: bool) -> Tensor:
- """
- Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
-
- Arguments:
- attention_mask (:obj:`torch.Tensor`):
- Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
- input_shape (:obj:`Tuple[int]`):
- The shape of the input to the model.
- device: (:obj:`torch.device`):
- The device of the input to the model.
-
- Returns:
- :obj:`torch.Tensor` The extended attention mask, with the same dtype as :obj:`attention_mask.dtype`.
- """
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- if attention_mask.dim() == 3:
- extended_attention_mask = attention_mask[:, None, :, :]
- elif attention_mask.dim() == 2:
- # Provided a padding mask of dimensions [batch_size, seq_length]
- # - if the model is a decoder, apply a causal mask in addition to the padding mask
- # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if is_decoder:
- batch_size, seq_length = input_shape
-
- seq_ids = torch.arange(seq_length, device=device)
- causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]
- # in case past_key_values are used we need to add a prefix ones mask to the causal mask
- # causal and attention masks must have same type with pytorch version < 1.3
- causal_mask = causal_mask.to(attention_mask.dtype)
-
- if causal_mask.shape[1] < attention_mask.shape[1]:
- prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1]
- causal_mask = torch.cat(
- [
- torch.ones((batch_size, seq_length, prefix_seq_len), device=device, dtype=causal_mask.dtype),
- causal_mask,
- ],
- axis=-1,
- )
-
- extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
- else:
- extended_attention_mask = attention_mask[:, None, None, :]
- else:
- raise ValueError(
- "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format(
- input_shape, attention_mask.shape
- )
- )
-
- # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
- # masked positions, this operation will create a tensor which is 0.0 for
- # positions we want to attend and -10000.0 for masked positions.
- # Since we are adding it to the raw scores before the softmax, this is
- # effectively the same as removing these entirely.
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
- return extended_attention_mask
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- is_decoder=False,
- mode='multimodal',
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- batch_size, seq_length = input_shape
- device = input_ids.device
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- device = inputs_embeds.device
- elif encoder_embeds is not None:
- input_shape = encoder_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- device = encoder_embeds.device
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds or encoder_embeds")
-
- # past_key_values_length
- past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
-
- if attention_mask is None:
- attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape,
- device, is_decoder)
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if encoder_hidden_states is not None:
- if type(encoder_hidden_states) == list:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size()
- else:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
-
- if type(encoder_attention_mask) == list:
- encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask]
- elif encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- if encoder_embeds is None:
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
- else:
- embedding_output = encoder_embeds
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- mode=mode,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
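The `BertModel` above can run text-only or, in `'multimodal'` mode, attend over a second stream of encoder states (for example image features) through the cross-attention blocks built in `BertLayer`. A minimal sketch of driving it is shown below; the config values (including the assumed `encoder_width` used by the cross-attention projections), the tokenizer, and the tensor shapes are illustrative assumptions, not taken from this Space.

```python
import torch
from transformers import BertConfig, BertTokenizer

# Assumed config values for illustration; the original Space loads its own config file.
config = BertConfig(
    add_cross_attention=True,   # build the cross-attention blocks in each BertLayer
    encoder_width=768,          # assumed width of the visual features, if this variant projects K/V from it
)
model = BertModel(config, add_pooling_layer=False)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

text = tokenizer(["a photo of a cat"], return_tensors="pt")

# Text-only pass: no encoder_hidden_states, so cross-attention is skipped (mode != 'multimodal').
text_out = model(
    input_ids=text.input_ids,
    attention_mask=text.attention_mask,
    mode="text",
    return_dict=True,
)

# Multimodal pass: placeholder image features are attended to by the cross-attention layers.
image_embeds = torch.randn(1, 197, config.hidden_size)
image_atts = torch.ones(image_embeds.shape[:-1], dtype=torch.long)
mm_out = model(
    input_ids=text.input_ids,
    attention_mask=text.attention_mask,
    encoder_hidden_states=image_embeds,
    encoder_attention_mask=image_atts,
    mode="multimodal",
    return_dict=True,
)
print(mm_out.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```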
diff --git a/spaces/dongsiqie/gptnb/README.md b/spaces/dongsiqie/gptnb/README.md
deleted file mode 100644
index f55944e4187a43cff8d9d5d1141cb44f805a0234..0000000000000000000000000000000000000000
--- a/spaces/dongsiqie/gptnb/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGPT-Next-Web
-emoji: 💻
-colorFrom: blue
-colorTo: yellow
-sdk: docker
-pinned: false
-license: mit
-app_port: 3000
----
-Source of free keys: https://github.com/pengzhile/pandora/issues/837
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/dorkai/ChatUIPro/app/components/index.tsx b/spaces/dorkai/ChatUIPro/app/components/index.tsx
deleted file mode 100644
index 448f7639de15a7a0a31efe1781f0a628c8668b85..0000000000000000000000000000000000000000
--- a/spaces/dorkai/ChatUIPro/app/components/index.tsx
+++ /dev/null
@@ -1,433 +0,0 @@
-'use client'
-import type { FC } from 'react'
-import React, { useEffect, useRef, useState } from 'react'
-import { useTranslation } from 'react-i18next'
-import produce from 'immer'
-import { useBoolean, useGetState } from 'ahooks'
-import useConversation from '@/hooks/use-conversation'
-import Toast from '@/app/components/base/toast'
-import Sidebar from '@/app/components/sidebar'
-import ConfigSence from '@/app/components/config-scence'
-import Header from '@/app/components/header'
-import { fetchAppParams, fetchChatList, fetchConversations, sendChatMessage, updateFeedback } from '@/service'
-import type { ConversationItem, Feedbacktype, IChatItem, PromptConfig, AppInfo } from '@/types/app'
-import Chat from '@/app/components/chat'
-import { setLocaleOnClient } from '@/i18n/client'
-import useBreakpoints, { MediaType } from '@/hooks/use-breakpoints'
-import Loading from '@/app/components/base/loading'
-import { replaceVarWithValues } from '@/utils/prompt'
-import AppUnavailable from '@/app/components/app-unavailable'
-import { APP_ID, API_KEY, APP_INFO, isShowPrompt, promptTemplate } from '@/config'
-import { userInputsFormToPromptVariables } from '@/utils/prompt'
-
-const Main: FC = () => {
- const { t } = useTranslation()
- const media = useBreakpoints()
- const isMobile = media === MediaType.mobile
- const hasSetAppConfig = APP_ID && API_KEY
-
- /*
- * app info
- */
- const [appUnavailable, setAppUnavailable] = useState(false)
- const [isUnknwonReason, setIsUnknwonReason] = useState(false)
- const [promptConfig, setPromptConfig] = useState(null)
- const [inited, setInited] = useState(false)
- // on mobile, the sidebar is shown by clicking a button
- const [isShowSidebar, { setTrue: showSidebar, setFalse: hideSidebar }] = useBoolean(false)
-
- useEffect(() => {
- if (APP_INFO?.title) {
- document.title = `${APP_INFO.title} - Powered by Dify`
- }
- }, [APP_INFO?.title])
-
- /*
- * conversation info
- */
- const {
- conversationList,
- setConversationList,
- currConversationId,
- setCurrConversationId,
- getConversationIdFromStorage,
- isNewConversation,
- currConversationInfo,
- currInputs,
- newConversationInputs,
- resetNewConversationInputs,
- setCurrInputs,
- setNewConversationInfo,
- setExistConversationInfo,
- } = useConversation()
-
- const [conversationIdChangeBecauseOfNew, setConversationIdChangeBecauseOfNew, getConversationIdChangeBecauseOfNew] = useGetState(false)
- const [isChatStarted, { setTrue: setChatStarted, setFalse: setChatNotStarted }] = useBoolean(false)
- const handleStartChat = (inputs: Record) => {
- setCurrInputs(inputs)
- setChatStarted()
- // parse variables in introduction
- setChatList(generateNewChatListWithOpenstatement('', inputs))
- }
- const hasSetInputs = (() => {
- if (!isNewConversation)
- return true
-
- return isChatStarted
- })()
-
- const conversationName = currConversationInfo?.name || t('app.chat.newChatDefaultName') as string
- const conversationIntroduction = currConversationInfo?.introduction || ''
-
- const handleConversationSwitch = () => {
- if (!inited)
- return
-
- // update inputs of current conversation
- let notSyncToStateIntroduction = ''
- let notSyncToStateInputs: Record | undefined | null = {}
- if (!isNewConversation) {
- const item = conversationList.find(item => item.id === currConversationId)
- notSyncToStateInputs = item?.inputs || {}
- setCurrInputs(notSyncToStateInputs as any)
- notSyncToStateIntroduction = item?.introduction || ''
- setExistConversationInfo({
- name: item?.name || '',
- introduction: notSyncToStateIntroduction,
- })
- }
- else {
- notSyncToStateInputs = newConversationInputs
- setCurrInputs(notSyncToStateInputs)
- }
-
- // update chat list of current conversation
- if (!isNewConversation && !conversationIdChangeBecauseOfNew && !isResponsing) {
- fetchChatList(currConversationId).then((res: any) => {
- const { data } = res
- const newChatList: IChatItem[] = generateNewChatListWithOpenstatement(notSyncToStateIntroduction, notSyncToStateInputs)
-
- data.forEach((item: any) => {
- newChatList.push({
- id: `question-${item.id}`,
- content: item.query,
- isAnswer: false,
- })
- newChatList.push({
- id: item.id,
- content: item.answer,
- feedback: item.feedback,
- isAnswer: true,
- })
- })
- setChatList(newChatList)
- })
- }
-
- if (isNewConversation && isChatStarted)
- setChatList(generateNewChatListWithOpenstatement())
-
- setControlFocus(Date.now())
- }
- useEffect(handleConversationSwitch, [currConversationId, inited])
-
- const handleConversationIdChange = (id: string) => {
- if (id === '-1') {
- createNewChat()
- setConversationIdChangeBecauseOfNew(true)
- }
- else {
- setConversationIdChangeBecauseOfNew(false)
- }
- // trigger handleConversationSwitch
- setCurrConversationId(id, APP_ID)
- hideSidebar()
- }
-
- /*
- * chat info. chat is under conversation.
- */
- const [chatList, setChatList, getChatList] = useGetState([])
- const chatListDomRef = useRef(null)
- useEffect(() => {
- // scroll to bottom
- if (chatListDomRef.current)
- chatListDomRef.current.scrollTop = chatListDomRef.current.scrollHeight
- }, [chatList, currConversationId])
- // the user can not edit inputs after sending a message
- const canEditInpus = !chatList.some(item => item.isAnswer === false) && isNewConversation
- const createNewChat = () => {
- // if a new chat already exists, do not create another one
- if (conversationList.some(item => item.id === '-1'))
- return
-
- setConversationList(produce(conversationList, (draft) => {
- draft.unshift({
- id: '-1',
- name: t('app.chat.newChatDefaultName'),
- inputs: newConversationInputs,
- introduction: conversationIntroduction,
- })
- }))
- }
-
- // sometimes the introduction is not yet applied to state, so it can be passed in explicitly
- const generateNewChatListWithOpenstatement = (introduction?: string, inputs?: Record | null) => {
- let caculatedIntroduction = introduction || conversationIntroduction || ''
- const caculatedPromptVariables = inputs || currInputs || null
- if (caculatedIntroduction && caculatedPromptVariables)
- caculatedIntroduction = replaceVarWithValues(caculatedIntroduction, promptConfig?.prompt_variables || [], caculatedPromptVariables)
-
- const openstatement = {
- id: `${Date.now()}`,
- content: caculatedIntroduction,
- isAnswer: true,
- feedbackDisabled: true,
- isOpeningStatement: isShowPrompt,
- }
- if (caculatedIntroduction)
- return [openstatement]
-
- return []
- }
-
- // init
- useEffect(() => {
- if (!hasSetAppConfig) {
- setAppUnavailable(true)
- return
- }
- (async () => {
- try {
- const [conversationData, appParams] = await Promise.all([fetchConversations(), fetchAppParams()])
-
- // handle current conversation id
- const { data: conversations } = conversationData as { data: ConversationItem[] }
- const _conversationId = getConversationIdFromStorage(APP_ID)
- const isNotNewConversation = conversations.some(item => item.id === _conversationId)
-
- // fetch new conversation info
- const { user_input_form, opening_statement: introduction }: any = appParams
- setLocaleOnClient(APP_INFO.default_language, true)
- setNewConversationInfo({
- name: t('app.chat.newChatDefaultName'),
- introduction,
- })
- const prompt_variables = userInputsFormToPromptVariables(user_input_form)
- setPromptConfig({
- prompt_template: promptTemplate,
- prompt_variables,
- } as PromptConfig)
-
- setConversationList(conversations as ConversationItem[])
-
- if (isNotNewConversation)
- setCurrConversationId(_conversationId, APP_ID, false)
-
- setInited(true)
- }
- catch (e: any) {
- if (e.status === 404) {
- setAppUnavailable(true)
- }
- else {
- setIsUnknwonReason(true)
- setAppUnavailable(true)
- }
- }
- })()
- }, [])
-
- const [isResponsing, { setTrue: setResponsingTrue, setFalse: setResponsingFalse }] = useBoolean(false)
- const { notify } = Toast
- const logError = (message: string) => {
- notify({ type: 'error', message })
- }
-
- const checkCanSend = () => {
- if (!currInputs || !promptConfig?.prompt_variables)
- return true
-
- const inputLens = Object.values(currInputs).length
- const promptVariablesLens = promptConfig.prompt_variables.length
-
- const emptyInput = inputLens < promptVariablesLens || Object.values(currInputs).find(v => !v)
- if (emptyInput) {
- logError(t('app.errorMessage.valueOfVarRequired'))
- return false
- }
- return true
- }
-
- const [controlFocus, setControlFocus] = useState(0)
- const handleSend = async (message: string) => {
- if (isResponsing) {
- notify({ type: 'info', message: t('app.errorMessage.waitForResponse') })
- return
- }
- const data = {
- inputs: currInputs,
- query: message,
- conversation_id: isNewConversation ? null : currConversationId,
- }
-
- // question
- const questionId = `question-${Date.now()}`
- const questionItem = {
- id: questionId,
- content: message,
- isAnswer: false,
- }
-
- const placeholderAnswerId = `answer-placeholder-${Date.now()}`
- const placeholderAnswerItem = {
- id: placeholderAnswerId,
- content: '',
- isAnswer: true,
- }
-
- const newList = [...getChatList(), questionItem, placeholderAnswerItem]
- setChatList(newList)
-
- // answer
- const responseItem = {
- id: `${Date.now()}`,
- content: '',
- isAnswer: true,
- }
-
- let tempNewConversationId = ''
- setResponsingTrue()
- sendChatMessage(data, {
- onData: (message: string, isFirstMessage: boolean, { conversationId: newConversationId, messageId }: any) => {
- responseItem.content = responseItem.content + message
- responseItem.id = messageId
- if (isFirstMessage && newConversationId)
- tempNewConversationId = newConversationId
-
- // the list captured by this closure may be outdated, so rebuild it from the latest state.
- const newListWithAnswer = produce(
- getChatList().filter(item => item.id !== responseItem.id && item.id !== placeholderAnswerId),
- (draft) => {
- if (!draft.find(item => item.id === questionId))
- draft.push({ ...questionItem })
-
- draft.push({ ...responseItem })
- })
- setChatList(newListWithAnswer)
- },
- async onCompleted() {
- setResponsingFalse()
- if (!tempNewConversationId) {
- return
- }
- if (getConversationIdChangeBecauseOfNew()) {
- const { data: conversations }: any = await fetchConversations()
- setConversationList(conversations as ConversationItem[])
- }
- setConversationIdChangeBecauseOfNew(false)
- resetNewConversationInputs()
- setChatNotStarted()
- setCurrConversationId(tempNewConversationId, APP_ID, true)
- },
- onError() {
- setResponsingFalse()
- // roll back the placeholder answer
- setChatList(produce(getChatList(), (draft) => {
- draft.splice(draft.findIndex(item => item.id === placeholderAnswerId), 1)
- }))
- },
- })
- }
-
- const handleFeedback = async (messageId: string, feedback: Feedbacktype) => {
- await updateFeedback({ url: `/messages/${messageId}/feedbacks`, body: { rating: feedback.rating } })
- const newChatList = chatList.map((item) => {
- if (item.id === messageId) {
- return {
- ...item,
- feedback,
- }
- }
- return item
- })
- setChatList(newChatList)
- notify({ type: 'success', message: t('common.api.success') })
- }
-
- const renderSidebar = () => {
- if (!APP_ID || !APP_INFO || !promptConfig)
- return null
- return (
-
- )
- }
-
- if (appUnavailable)
- return
-
- if (!APP_ID || !APP_INFO || !promptConfig)
- return
-
- return (
-
- handleConversationIdChange('-1')}
- />
-
- {/* sidebar */}
- {!isMobile && renderSidebar()}
- {isMobile && isShowSidebar && (
-
- e.stopPropagation()}>
- {renderSidebar()}
-
-
- )}
- {/* main */}
-
- }
- onInputsChange={setCurrInputs}
- >
-
- {
- hasSetInputs && (
-
-
-
-
- )
- }
-
-
-
- )
-}
-
-export default React.memo(Main)
diff --git a/spaces/dreambooth-hackathon/dreambooth-hackathon-evaluator/README.md b/spaces/dreambooth-hackathon/dreambooth-hackathon-evaluator/README.md
deleted file mode 100644
index 7165935b1eeb14d1f6970bc7309a1eb25d035f2f..0000000000000000000000000000000000000000
--- a/spaces/dreambooth-hackathon/dreambooth-hackathon-evaluator/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Hackathon-Evaluator
-emoji: 😻
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: dreambooth-hackathon/leaderboard
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ds520/bingo/src/components/header.tsx b/spaces/ds520/bingo/src/components/header.tsx
deleted file mode 100644
index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/components/header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import * as React from 'react'
-import { UserMenu } from './user-menu'
-
-export async function Header() {
- return (
-
-
-
-
-
- )
-}
diff --git a/spaces/ecody726/stable-diffusion/app.py b/spaces/ecody726/stable-diffusion/app.py
deleted file mode 100644
index b6730756dd60dba2ae618391e0632d19e88c5b62..0000000000000000000000000000000000000000
--- a/spaces/ecody726/stable-diffusion/app.py
+++ /dev/null
@@ -1,349 +0,0 @@
-import gradio as gr
-import cv2
-import torch
-import os
-from imwatermark import WatermarkEncoder
-import numpy as np
-from PIL import Image
-import re
-from datasets import load_dataset
-from diffusers import DiffusionPipeline, EulerDiscreteScheduler
-
-from share_btn import community_icon_html, loading_icon_html, share_js
-
-REPO_ID = "stabilityai/stable-diffusion-2"
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-wm = "SDV2"
-wm_encoder = WatermarkEncoder()
-wm_encoder.set_watermark('bytes', wm.encode('utf-8'))
-def put_watermark(img, wm_encoder=None):
- if wm_encoder is not None:
- img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
- img = wm_encoder.encode(img, 'dwtDct')
- img = Image.fromarray(img[:, :, ::-1])
- return img
-
-repo_id = "stabilityai/stable-diffusion-2"
-scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler", prediction_type="v_prediction")
-pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16", scheduler=scheduler)
-pipe = pipe.to(device)
-pipe.enable_xformers_memory_efficient_attention()
-
-#If you have duplicated this Space or are running it locally, you can remove this snippet
-if "HUGGING_FACE_HUB_TOKEN" in os.environ:
- word_list_dataset = load_dataset("stabilityai/word-list", data_files="list.txt", use_auth_token=True)
- word_list = word_list_dataset["train"]['text']
-
-def infer(prompt, samples, steps, scale, seed):
- #If you have duplicated this Space or are running it locally, you can remove this snippet
- if "HUGGING_FACE_HUB_TOKEN" in os.environ:
- for filter in word_list:
- if re.search(rf"\b{filter}\b", prompt):
- raise gr.Error("Unsafe content found. Please try again with different prompts.")
- generator = torch.Generator(device=device).manual_seed(seed)
- images = pipe(prompt, width=768, height=768, num_inference_steps=steps, guidance_scale=scale, num_images_per_prompt=samples, generator=generator).images
- images_watermarked = []
- for image in images:
- image = put_watermark(image, wm_encoder)
- images_watermarked.append(image)
- return images_watermarked
-
-css = """
- .gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
- }
- .gr-button {
- color: white;
- border-color: black;
- background: black;
- }
- input[type='range'] {
- accent-color: black;
- }
- .dark input[type='range'] {
- accent-color: #dfdfdf;
- }
- .container {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
- }
- #gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
- }
- #gallery>div>.h-full {
- min-height: 20rem;
- }
- .details:hover {
- text-decoration: underline;
- }
- .gr-button {
- white-space: nowrap;
- }
- .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
- }
- #advanced-btn {
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 12px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
- .footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
- .acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
- }
- .animate-spin {
- animation: spin 1s linear infinite;
- }
- @keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
- }
- #share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
- margin-top: 10px;
- margin-left: auto;
- }
- #share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0;
- }
- #share-btn * {
- all: unset;
- }
- #share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
- }
- #share-btn-container .wrap {
- display: none !important;
- }
- .gr-form{
- flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0;
- }
- #prompt-container{
- gap: 0;
- }
- #component-9{margin-top: -19px}
- .image_duplication{position: absolute; width: 100px; left: 50px}
-"""
-
-block = gr.Blocks(css=css)
-
-examples = [
- [
- 'A high tech solarpunk utopia in the Amazon rainforest',
- 4,
- 25,
- 9,
- 1024,
- ],
- [
- 'A pikachu fine dining with a view to the Eiffel Tower',
- 4,
- 25,
- 9,
- 1024,
- ],
- [
- 'A mecha robot in a favela in expressionist style',
- 4,
- 25,
- 9,
- 1024,
- ],
- [
- 'an insect robot preparing a delicious meal',
- 4,
- 25,
- 9,
- 1024,
- ],
- [
- "A small cabin on top of a snowy mountain in the style of Disney, artstation",
- 4,
- 25,
- 9,
- 1024,
- ],
-]
-
-with block:
- gr.HTML(
- """
-
-
-
-
- Stable Diffusion 2 Demo
-
-
-
- Stable Diffusion 2 is the latest text-to-image model from StabilityAI. Access Stable Diffusion 1 Space here
For faster generation and API
- access you can try
- DreamStudio Beta.
-
-
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True):
- text = gr.Textbox(
- label="Enter your prompt",
- show_label=False,
- max_lines=1,
- placeholder="Enter your prompt",
- elem_id="prompt-text-input",
- ).style(
- border=(True, False, True, True),
- rounded=(True, False, False, True),
- container=False,
- )
- btn = gr.Button("Generate image").style(
- margin=False,
- rounded=(False, True, True, False),
- full_width=False,
- )
-
- gallery = gr.Gallery(
- label="Generated images", show_label=False, elem_id="gallery"
- ).style(grid=[2], height="auto")
-
-
-
- with gr.Accordion("Custom options", open=False):
- samples = gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1)
- steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=25, step=1)
- scale = gr.Slider(
- label="Guidance Scale", minimum=0, maximum=50, value=9, step=0.1
- )
- seed = gr.Slider(
- label="Seed",
- minimum=0,
- maximum=2147483647,
- step=1,
- randomize=True,
- )
-
- with gr.Group():
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html)
- loading_icon = gr.HTML(loading_icon_html)
- share_button = gr.Button("Share to community", elem_id="share-btn")
-
- ex = gr.Examples(examples=examples, fn=infer, inputs=[text, samples, steps, scale, seed], outputs=[gallery], cache_examples=False)
- ex.dataset.headers = [""]
-
- text.submit(infer, inputs=[text, samples, steps, scale, seed], outputs=[gallery])
- btn.click(infer, inputs=[text, samples, steps, scale, seed], outputs=[gallery])
-
- share_button.click(
- None,
- [],
- [],
- _js=share_js,
- )
- gr.HTML(
- """
-
-
- LICENSE
-The model is licensed under a CreativeML OpenRAIL++ license. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information that would be meant for harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read the license
- Biases and content acknowledgment
-Despite how impressive it is to turn text into images, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the LAION-5B dataset, which scraped non-curated image-text pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the model card
-
- """
- )
-
-block.queue(concurrency_count=1, max_size=50).launch(max_threads=150)
\ No newline at end of file
diff --git a/spaces/exbert-project/exbert/client/src/ts/test.ts b/spaces/exbert-project/exbert/client/src/ts/test.ts
deleted file mode 100644
index f43301fe2937063f70534dbe8d83f4affa526d4c..0000000000000000000000000000000000000000
--- a/spaces/exbert-project/exbert/client/src/ts/test.ts
+++ /dev/null
@@ -1,151 +0,0 @@
-// import { BertAPI } from './api/bertApi'
-import { DemoAPI } from './api/demoApi'
-import {API} from './api/mainApi'
-import * as d3 from 'd3'
-import * as R from 'ramda'
-import * as _ from 'lodash'
-import * as nj from 'numjs'
-import * as x_ from './etc/_Tools'
-import * as tf from '@tensorflow/tfjs'
-import {TokenDisplay, TokenWrapper, sideToLetter} from './data/TokenWrapper'
-import {AttentionWrapper} from "./data/AttentionCapsule"
-import {FaissSearchResultWrapper} from "./data/FaissSearchWrapper"
-
-const api = new API()
-
-
-/**
- * Ad-hoc checks to learn about the behavior of the functions I write, without building a professional test suite
- * (because of time constraints, and I don't know how to set up a test suite well in TypeScript).
- */
-export class Tester {
- // static testTf() {
- // const a = tf.randomUniform([3,3,4]);
- // const b = a.gather([0, 1], 0);
- // const a_out = a.arraySync();
- // console.log(a_out);
- // }
-
- // static testAttWrapperConstructor() {
- // api.getAttentions("Simple test one", "another test two").then(r => {
- // const att = new AttentionWrapper(r);
- // console.log(att.all);
- // })
- // }
-
- // static testNjAray() {
- // const a = nj.ones([1,7,12], 'int32')
- // const b = a
- // b.slice(null, 0, 11).assign(0, false)
- // console.log(b.tolist());
- // }
-
- // static testFindIdx() {
- // const bad_toks = ['[CLS]', '[SEP]']
- // const left_text = ['[CLS]', 'this', 'is', 'sentence', '[SEP]', '[CLS]']
- // // const bad_inds = _.findAllIndexes(left_text, (a) => _.includes(bad_toks, a))
- // const bad_inds = x_.findAllIndexes(left_text, (a) => _.includes(bad_toks, a))
- // console.log(bad_inds);
- // }
-
- // static testUpdateMaskedAttention(){
- // const as = 'this is a long string that has some meaning'
- // const bs = 'String part 2'
- // const a = ['[CLS]', 'this', 'is', 'a', 'long', 'string', 'that', 'has', 'some', 'meaning', '[SEP]']
- // const b = ['string', 'part', '2', '[SEP]']
- // const maskA = [1, 7, 9]
- // const maskB = [] // CAN'T BE EMPTY
-
- // const api = new BertAPI()
-
- // const val1 = new TokenDisplay(a, maskA)
- // const val2 = new TokenDisplay(b, maskB)
-
- // api.updateMaskedAttentions(val1, val2).then(
- // (r) => {
- // console.log(r.ab.left_text);
- // console.log(r.ab.right_text);
- // }
- // )
- // }
-
- // static testOrderedInsert() {
- // const a = [1, 3, 6, 8, 9]
- // const a2 = [1, 6, 8, 22, 9]
- // const a3 = []
- // const val = 4
- // x_.orderedInsert_(a, val)
- // console.log(a);
-
- // x_.orderedInsert_(a2, val, true)
- // console.log(a2);
-
- // x_.orderedInsert_(a3, val)
- // console.log(a3);
- // }
-
- // static testTokenDisplay() {
- // const toksa = ['yes', 'my', 'good', 'sir']
- // const toksb = ['hi', 'there']
- // const masksa = []
- // const masksb = []
- // const td = new TokenDisplay(toksa, masksa)
- // const td2 = new TokenDisplay(toksb, masksb)
- // const twrap = new TokenWrapper(toksa, toksb, masksa, masksb)
-
- // // console.log(twrap.a);
- // // console.log(twrap.b);
- // // console.log(twrap.all);
- // // twrap.mask("a", 3)
-
- // // console.log(twrap.a);
- // // console.log(twrap.all);
- // twrap.mask("all", 1)
- // console.log(twrap.b);
- // console.log(twrap.all);
- // }
-
- // static testFaissWrapper() {
- // const q = x_.makeRandom(768);
- // api.getNearestWozEmbeddings(q, 0, 10).then(
- // r => {
- // const fsw = new FaissSearchResultWrapper(r)
- // console.log(fsw.toStringArr());
- // }
- // )
- // }
-
- // static testSideToLetter() {
- // const side = "left"
- // console.log( sideToLetter(side, "all"));
- // console.log( sideToLetter(side, "ab"));
- // console.log( sideToLetter(side, "ba"));
- // console.log( sideToLetter(side, "bb"));
- // console.log( sideToLetter(side, "aa"));
- // console.log( sideToLetter("right", "aa"));
- // console.log( sideToLetter("abc", "aa")); // no error thrown... But linting catches an issue
- // }
-
- // static testRandomArrayCreation() {
- // console.log(x_.makeRandom(10));
- // }
-
- // static testFaissSearchResultsHist () {
- // api.getNearestWozEmbeddings(x_.makeRandom(768), 0).then(val => {
- // const fsw = new FaissSearchResultWrapper(val);
- // console.log(fsw.getHistogram());
- // })
-
- // }
-
- static testReadingJSON () {
- // console.log("RUNNING THE THING");
- let promise = new Promise(function(resolve, reject) {
- resolve(DemoAPI)
- })
-
- promise.then(x => console.log(x))
- // console.log(DemoAPI)
- // d3.json("demoAPI.json").then(d => console.log(Object.keys(d)))
- }
-}
diff --git a/spaces/f2api/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/f2api/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h
deleted file mode 100644
index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000
--- a/spaces/f2api/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h
+++ /dev/null
@@ -1,433 +0,0 @@
-#pragma once
-
-#include <atomic>
-#include <utility>
-#include <cstring>
-#include <type_traits>
-#include <cstdint>
-
-#include "libipc/def.h"
-
-#include "libipc/platform/detail.h"
-#include "libipc/circ/elem_def.h"
-#include "libipc/utility/log.h"
-#include "libipc/utility/utility.h"
-
-namespace ipc {
-
-////////////////////////////////////////////////////////////////
-/// producer-consumer implementation
-////////////////////////////////////////////////////////////////
-
-template <typename Flag>
-struct prod_cons_impl;
-
-template <>
-struct prod_cons_impl> {
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- };
-
- alignas(cache_line_size) std::atomic rd_; // read index
- alignas(cache_line_size) std::atomic wt_; // write index
-
- constexpr circ::u2_t cursor() const noexcept {
- return 0;
- }
-
- template
- bool push(W* /*wrapper*/, F&& f, E* elems) {
- auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
- if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
- return false; // full
- }
- std::forward(f)(&(elems[cur_wt].data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- /**
- * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
- * So we could just disconnect all connections of receiver, and return false.
- */
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(~static_cast(0u));
- return false;
- }
-
- template
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
- if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
- return false; // empty
- }
- std::forward(f)(&(elems[cur_rd].data_));
- std::forward(out)(true);
- rd_.fetch_add(1, std::memory_order_release);
- return true;
- }
-};
-
-template <>
-struct prod_cons_impl>
- : prod_cons_impl> {
-
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(1);
- return false;
- }
-
- template class E, std::size_t DS, std::size_t AS>
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- byte_t buff[DS];
- for (unsigned k = 0;;) {
- auto cur_rd = rd_.load(std::memory_order_relaxed);
- if (circ::index_of(cur_rd) ==
- circ::index_of(wt_.load(std::memory_order_acquire))) {
- return false; // empty
- }
- std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
- if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
- std::forward(f)(buff);
- std::forward(out)(true);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-template <>
-struct prod_cons_impl>
- : prod_cons_impl> {
-
- using flag_t = std::uint64_t;
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic f_ct_ { 0 }; // commit flag
- };
-
- alignas(cache_line_size) std::atomic ct_; // commit index
-
- template
- bool push(W* /*wrapper*/, F&& f, E* elems) {
- circ::u2_t cur_ct, nxt_ct;
- for (unsigned k = 0;;) {
- cur_ct = ct_.load(std::memory_order_relaxed);
- if (circ::index_of(nxt_ct = cur_ct + 1) ==
- circ::index_of(rd_.load(std::memory_order_acquire))) {
- return false; // full
- }
- if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
- break;
- }
- ipc::yield(k);
- }
- auto* el = elems + circ::index_of(cur_ct);
- std::forward(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release);
- while (1) {
- auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
- if (cur_ct != wt_.load(std::memory_order_relaxed)) {
- return true;
- }
- if ((~cac_ct) != cur_ct) {
- return true;
- }
- if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
- return true;
- }
- wt_.store(nxt_ct, std::memory_order_release);
- cur_ct = nxt_ct;
- nxt_ct = cur_ct + 1;
- el = elems + circ::index_of(cur_ct);
- }
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(1);
- return false;
- }
-
- template class E, std::size_t DS, std::size_t AS>
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- byte_t buff[DS];
- for (unsigned k = 0;;) {
- auto cur_rd = rd_.load(std::memory_order_relaxed);
- auto cur_wt = wt_.load(std::memory_order_acquire);
- auto id_rd = circ::index_of(cur_rd);
- auto id_wt = circ::index_of(cur_wt);
- if (id_rd == id_wt) {
- auto* el = elems + id_wt;
- auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
- if ((~cac_ct) != cur_wt) {
- return false; // empty
- }
- if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
- wt_.store(cur_wt + 1, std::memory_order_release);
- }
- k = 0;
- }
- else {
- std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
- if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
- std::forward(f)(buff);
- std::forward(out)(true);
- return true;
- }
- ipc::yield(k);
- }
- }
- }
-};
-
-template <>
-struct prod_cons_impl> {
-
- using rc_t = std::uint64_t;
-
- enum : rc_t {
- ep_mask = 0x00000000ffffffffull,
- ep_incr = 0x0000000100000000ull
- };
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic rc_ { 0 }; // read-counter
- };
-
- alignas(cache_line_size) std::atomic wt_; // write index
- alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
-
- circ::u2_t cursor() const noexcept {
- return wt_.load(std::memory_order_acquire);
- }
-
- template
- bool push(W* wrapper, F&& f, E* elems) {
- E* el;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & ep_mask;
- if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
- return false; // has not finished yet
- }
- // consider rem_cc to be 0 here
- if (el->rc_.compare_exchange_weak(
- cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) {
- break;
- }
- ipc::yield(k);
- }
- std::forward(f)(&(el->data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&& f, E* elems) {
- E* el;
- epoch_ += ep_incr;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & ep_mask;
- if (cc & rem_cc) {
- ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
- cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
- if (cc == 0) return false; // no reader
- }
- // just compare & exchange
- if (el->rc_.compare_exchange_weak(
- cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) {
- break;
- }
- ipc::yield(k);
- }
- std::forward(f)(&(el->data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- template
- bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
- if (cur == cursor()) return false; // acquire
- auto* el = elems + circ::index_of(cur++);
- std::forward(f)(&(el->data_));
- for (unsigned k = 0;;) {
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- if ((cur_rc & ep_mask) == 0) {
- std::forward(out)(true);
- return true;
- }
- auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id());
- if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
- std::forward(out)((nxt_rc & ep_mask) == 0);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-template <>
-struct prod_cons_impl> {
-
- using rc_t = std::uint64_t;
- using flag_t = std::uint64_t;
-
- enum : rc_t {
- rc_mask = 0x00000000ffffffffull,
- ep_mask = 0x00ffffffffffffffull,
- ep_incr = 0x0100000000000000ull,
- ic_mask = 0xff000000ffffffffull,
- ic_incr = 0x0000000100000000ull
- };
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic rc_ { 0 }; // read-counter
- std::atomic f_ct_ { 0 }; // commit flag
- };
-
- alignas(cache_line_size) std::atomic ct_; // commit index
- alignas(cache_line_size) std::atomic epoch_ { 0 };
-
- circ::u2_t cursor() const noexcept {
- return ct_.load(std::memory_order_acquire);
- }
-
- constexpr static rc_t inc_rc(rc_t rc) noexcept {
- return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
- }
-
- constexpr static rc_t inc_mask(rc_t rc) noexcept {
- return inc_rc(rc) & ~rc_mask;
- }
-
- template
- bool push(W* wrapper, F&& f, E* elems) {
- E* el;
- circ::u2_t cur_ct;
- rc_t epoch = epoch_.load(std::memory_order_acquire);
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_relaxed);
- circ::cc_t rem_cc = cur_rc & rc_mask;
- if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
- return false; // has not finished yet
- }
- else if (!rem_cc) {
- auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
- if ((cur_fl != cur_ct) && cur_fl) {
- return false; // full
- }
- }
- // consider rem_cc to be 0 here
- if (el->rc_.compare_exchange_weak(
- cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) &&
- epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
- break;
- }
- ipc::yield(k);
- }
- // only one thread/process would touch here at one time
- ct_.store(cur_ct + 1, std::memory_order_release);
- std::forward(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release);
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&& f, E* elems) {
- E* el;
- circ::u2_t cur_ct;
- rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & rc_mask;
- if (cc & rem_cc) {
- ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
- cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
- if (cc == 0) return false; // no reader
- }
- // just compare & exchange
- if (el->rc_.compare_exchange_weak(
- cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
- if (epoch == epoch_.load(std::memory_order_acquire)) {
- break;
- }
- else if (push(wrapper, std::forward<F>(f), elems)) {
- return true;
- }
- epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
- }
- ipc::yield(k);
- }
- // only one thread/process would touch here at one time
- ct_.store(cur_ct + 1, std::memory_order_release);
- std::forward<F>(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
- return true;
- }
-
- template <typename W, typename F, typename R, typename E, std::size_t N>
- bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
- auto* el = elems + circ::index_of(cur);
- auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
- if (cur_fl != ~static_cast<flag_t>(cur)) {
- return false; // empty
- }
- ++cur;
- std::forward<F>(f)(&(el->data_));
- for (unsigned k = 0;;) {
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- if ((cur_rc & rc_mask) == 0) {
- std::forward<R>(out)(true);
- el->f_ct_.store(cur + N - 1, std::memory_order_release);
- return true;
- }
- auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
- bool last_one = false;
- if ((last_one = (nxt_rc & rc_mask) == 0)) {
- el->f_ct_.store(cur + N - 1, std::memory_order_release);
- }
- if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
- std::forward<R>(out)(last_one);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-} // namespace ipc
diff --git a/spaces/falterWliame/Face_Mask_Detection/Callofdutyghostsenglishlanguagepack.md b/spaces/falterWliame/Face_Mask_Detection/Callofdutyghostsenglishlanguagepack.md
deleted file mode 100644
index 8d168d9bb17bf1eaee759770a28e766170dd0861..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Callofdutyghostsenglishlanguagepack.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-Call of Duty: Ghosts English Language Pack: How to Download and Use It
-
-If you are a fan of Call of Duty: Ghosts, you might want to play the game in English instead of Russian or any other language. However, finding and installing the English language pack can be tricky. In this article, we will show you how to download and use the Call of Duty: Ghosts English language pack quickly and easily.
-
-What is Call of Duty: Ghosts English Language Pack?
-
-Call of Duty: Ghosts English Language Pack is a file that contains the English audio and text files for the game. It allows you to play the game in English instead of Russian or any other language. It is compatible with the Steam version of the game, but not with other versions. It is also not compatible with other language packs, so make sure you don't have any other language packs installed before using it.
-callofdutyghostsenglishlanguagepack
DOWNLOAD ---> https://urlca.com/2uDcoh
-
-Call of Duty: Ghosts English Language Pack has many benefits, such as:
-
-
-- It lets you enjoy the game in English, the game's original and most widely used language.
-- It lets you understand the story, dialogues, instructions, and menus better.
-- It lets you communicate with other players online more easily.
-- It lets you avoid any errors or glitches that might occur due to language mismatch.
-
-
-How to download Call of Duty: Ghosts English Language Pack?
-
-To download Call of Duty: Ghosts English Language Pack, you need to follow these steps:
-
-
-- Click on this link to download Call of Duty: Ghosts English Language Pack.
-- Extract the zip file using Winrar or any other software.
-- Open the folder and copy the file named "english" (without quotes).
-
-
-How to use Call of Duty: Ghosts English Language Pack?
-
-To use Call of Duty: Ghosts English Language Pack, you need to follow these steps:
-
-
-- Open your Steam library and right-click on Call of Duty: Ghosts.
-- Select Properties and then click on Local Files tab.
-- Click on Browse Local Files button and open the folder named "zone" (without quotes).
-- Paste the file named "english" (without quotes) that you copied earlier into this folder.
-- Close all windows and launch Call of Duty: Ghosts from Steam.
-- Select Options and then click on Language tab.
-- Select English from the drop-down menu and click on Apply button.
-
-
-Congratulations! You have successfully installed and used Call of Duty: Ghosts English Language Pack. You can now play the game in English language and enjoy it fully.
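-
-If you reinstall the pack often, the copy in step 4 of the list above can also be scripted. Below is a minimal Python sketch of that single step; the Steam library path and the download location are placeholders, so adjust them to wherever Browse Local Files actually takes you, and note that the instructions above treat "english" as a single file (if your download turns out to be a folder, use shutil.copytree instead).
-
-```python
-import shutil
-from pathlib import Path
-
-# Placeholder paths -- change both to match your own machine.
-zone_dir = Path(r"C:\Program Files (x86)\Steam\steamapps\common\Call of Duty Ghosts\zone")
-english_pack = Path.home() / "Downloads" / "english"  # the file extracted from the zip
-
-if not zone_dir.is_dir():
-    raise SystemExit(f"zone folder not found: {zone_dir} (use Browse Local Files in Steam to locate it)")
-
-shutil.copy2(english_pack, zone_dir / english_pack.name)
-print(f"Copied {english_pack} into {zone_dir}")
-```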
-
-Tips and tricks for using Call of Duty: Ghosts English Language Pack
-
-To get the most out of Call of Duty: Ghosts English Language Pack, here are some tips and tricks that you can use:
-
-
-- Use the contextual help menu to learn more about the game mechanics and features. You can access it by pressing F1 key on your keyboard or clicking on the question mark icon on any window or dialog box.
-- Use the online multiplayer mode to play with other players from around the world. You can join or create a match by selecting Online Play from the main menu.
-- Use the Steam Workshop to download and install custom maps, modes, skins, and more for the game. You can access it by selecting Steam Workshop from the main menu.
-- Use the Steam Cloud to save your progress and settings online. You can enable it by selecting Steam Cloud from the main menu.
-
-
-What is Call of Duty: Ghosts?
-
-Call of Duty: Ghosts is a first-person shooter video game that was released in 2013. It is the tenth main installment in the Call of Duty series and the sixth developed by Infinity Ward. The game is set in a near future where a global event known as "The Odin Strike" has devastated the world and changed the balance of power. The game follows the story of a group of elite soldiers known as "Ghosts" who fight against a new superpower called "The Federation". The game features a single-player campaign, an online multiplayer mode, a cooperative mode called "Extinction", and a downloadable content mode called "Squads".
-
-Call of Duty: Ghosts is a game that offers a variety of gameplay modes and features, such as:
-
-
-- A single-player campaign that spans across different locations and scenarios, such as underwater missions, space missions, stealth missions, etc.
-- An online multiplayer mode that supports up to 18 players in various modes and maps, such as Team Deathmatch, Domination, Search and Rescue, etc.
-- A cooperative mode called "Extinction" that pits up to four players against waves of alien creatures in a survival mode.
-- A downloadable content mode called "Squads" that allows players to create and customize their own squad of soldiers and compete against other squads in various modes.
-- A dynamic map system that changes the environment and events during gameplay, such as earthquakes, floods, explosions, etc.
-- A character customization system that allows players to create and customize their own soldier with different outfits, weapons, perks, etc.
-- A prestige system that allows players to reset their rank and unlock new rewards after reaching the maximum level.
-
-
-Why play Call of Duty: Ghosts?
-
-Call of Duty: Ghosts is a game that can appeal to different types of players and preferences, such as:
-
-
-- Players who enjoy a cinematic and immersive single-player campaign with a variety of missions and scenarios.
-- Players who enjoy a competitive and social online multiplayer mode with different modes and maps.
-- Players who enjoy a cooperative and challenging mode with alien creatures and survival elements.
-- Players who enjoy a customizable and creative mode with their own squad of soldiers.
-- Players who enjoy a dynamic and interactive map system that changes the gameplay experience.
-- Players who enjoy a character customization system that allows them to create their own soldier with different options.
-- Players who enjoy a prestige system that allows them to reset their rank and unlock new rewards.
-
-
-Call of Duty: Ghosts is a game that can offer a fun and engaging gameplay experience for different types of players. It is a game that can keep you entertained for hours with its various modes and features. It is also a game that can help you improve your skills and knowledge in first-person shooter games.
-
-
-
-How to uninstall Call of Duty: Ghosts English Language Pack?
-
-If you want to uninstall Call of Duty: Ghosts English Language Pack for any reason, you can do so by following these steps:
-
-
-- Open your Steam library and right-click on Call of Duty: Ghosts.
-- Select Properties and then click on Local Files tab.
-- Click on Browse Local Files button and open the folder named "zone" (without quotes).
-- Delete the file named "english" (without quotes) from this folder.
-- Close all windows and launch Call of Duty: Ghosts from Steam.
-- Select Options and then click on Language tab.
-- Select Russian or any other language from the drop-down menu and click on Apply button.
-
-
-You have successfully uninstalled Call of Duty: Ghosts English Language Pack. You can now play the game in Russian or any other language that you have installed.
-
-Conclusion
-
-Call of Duty: Ghosts English Language Pack is a file that allows you to play Call of Duty: Ghosts in English instead of Russian or any other language. It is compatible with the Steam version of the game, but not with other versions. It is also not compatible with other language packs, so make sure you don't have any other language packs installed before using it.
-
-To download and use the pack, follow the steps in this article, and use the tips and tricks above to get the most out of the game.
-
-We hope you enjoyed this article and found it useful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes.md b/spaces/falterWliame/Face_Mask_Detection/Download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes.md
deleted file mode 100644
index 3101877856074a376a7502326f222e4e37779e85..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-Download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes: What You Need to Know
-
-Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes is a biology textbook used by grade X SMA/MA students following the revised 2013 curriculum. It was written by Dra. Irnaningtyas, M.Pd. and published by Penerbit Erlangga.
-
-The book covers the biology material comprehensively and promotes active student learning across three aspects of competence: attitudes (affective), knowledge (cognitive), and skills (psychomotor). It also comes with a variety of helpful features such as pictures, tables, graphs, diagrams, illustrations, worked examples, practice questions, chapter summaries, and answer keys.
-download buku biologi kelas x kurikulum 2013 erlangga pdfgolkes
Download >>> https://urlca.com/2uDdrL
-
-Why Download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes?
-
-There are several reasons to download the book as a PDF:
-
-
-- You can access it anytime and anywhere without carrying a heavy, cumbersome physical copy.
-- You can read it on any device you own, such as a laptop, tablet, or smartphone.
-- You can save money, since you don't have to buy a physical copy that may be expensive or hard to find in bookstores.
-- You can study biology more easily and effectively, because the PDF format is easy to read and print.
-- You can support go-green efforts and reduce the use of paper, which harms the environment.
-
-
-How to Download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes
-
-To download the book, follow these steps:
-
-
-- Visit a website that provides a download link for the book, for example Scribd, Academia.edu, or Erlangga.co.id.
-- Search for the book using a suitable keyword in the site's search box.
-- Choose one of the available download links and click it to download the PDF file to your device.
-- Wait for the download to finish and save the PDF file in the folder of your choice.
-- Open the PDF file with a PDF reader such as Adobe Reader, Foxit Reader, or Google PDF Viewer.
-- Enjoy reading and studying biology with the book!
-
-
-That is how to download Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes. We hope this article is useful for anyone who wants to study biology more easily and effectively. Thank you for reading, and happy studying!
-What Is Inside Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes?
-
-The book consists of 10 chapters covering a range of relevant and interesting biology topics:
-
-
-- Chapter 1: Biodiversity
-- Chapter 2: Classification of Living Things
-- Chapter 3: Structure and Function of Plant Tissues
-- Chapter 4: Structure and Function of Animal Tissues
-- Chapter 5: The Organization of Life
-- Chapter 6: The Cell as the Unit of Life
-- Chapter 7: Cell Metabolism
-- Chapter 8: Enzymes and Biocatalysts
-- Chapter 9: Photosynthesis
-- Chapter 10: Cellular Respiration
-
-
-Each chapter includes learning objectives, competence indicators, core material, learning activities, evaluation, and reflection. The book also points to other learning resources, such as reference books, scientific journals, websites, videos, and apps.
-
-What Are the Strengths of Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes?
-
-The book has several strengths that can help you learn biology more easily and enjoyably:
-
-
-- It is written for the revised 2013 curriculum, following the Content Standards and Graduate Competence Standards.
-- It follows the scientific approach, which covers observing, questioning, gathering information, associating, and communicating.
-- It engages science process skills along with critical, creative, and logical thinking skills.
-- It integrates character values and environmental conservation into biology learning.
-- It uses language that is easy to understand and follows standard Indonesian spelling rules (EYD).
-- It provides varied and engaging learning media, such as pictures, tables, graphs, diagrams, illustrations, worked examples, practice questions, summaries, and answer keys.
-
-
-In short, this is a book that can help you learn biology more effectively and enjoyably. You can download it by following the steps described above. Happy studying!
-What Are the Benefits of Studying Biology with Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes?
-
-Studying biology with this book offers many benefits:
-
-
-
-- You can broaden your knowledge and understanding of important, up-to-date biology concepts.
-- You can develop scientific, critical, creative, and logical thinking skills for solving biology problems.
-- You can build a positive, appreciative attitude toward biodiversity and the environment.
-- You can prepare yourself for the national exam and for university entrance exams related to biology.
-- You can discover your interests and talents in biology and plan your future career.
-
-
-Studying biology with this book is therefore a good choice if you want to learn biology more easily and enjoyably. You can download it by following the steps described above. Happy studying!
-
-Conclusion
-
-Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes is a biology textbook for grade X SMA/MA students following the revised 2013 curriculum, written by Dra. Irnaningtyas, M.Pd. and published by Penerbit Erlangga. It covers the material comprehensively, promotes active learning across the affective, cognitive, and psychomotor aspects of competence, and includes helpful features such as pictures, tables, graphs, diagrams, illustrations, worked examples, practice questions, summaries, and answer keys.
-
-To download it, visit a website that provides a download link, such as Scribd, Academia.edu, or Erlangga.co.id, search for the book with a suitable keyword, click one of the available download links to save the PDF to your device, and read it on your laptop, tablet, or smartphone.
-
-Studying biology from this PDF has many benefits: you can access the book anytime and anywhere without carrying a heavy physical copy, save money, study more easily and effectively, support go-green efforts by using less paper, broaden your knowledge of important biology concepts, develop scientific, critical, creative, and logical thinking skills, build an appreciative attitude toward biodiversity and the environment, prepare for the national exam and university entrance exams, and discover your interests and talents in biology for your future career.
-
-That concludes this article on downloading Buku Biologi Kelas X Kurikulum 2013 Erlangga Pdfgolkes. We hope it is useful for anyone who wants to learn biology more easily and enjoyably. Thank you for reading, and happy studying!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/APK Extreme Car Driving Simulator The Most Realistic Car Game Ever.md b/spaces/fatiXbelha/sd/APK Extreme Car Driving Simulator The Most Realistic Car Game Ever.md
deleted file mode 100644
index ebf187ecfe8aa115ae7c560817d99624b1d4fb0f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/APK Extreme Car Driving Simulator The Most Realistic Car Game Ever.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-APK Extreme Car Driving Simulator: A Review
-If you are looking for a realistic and fun car driving simulator game for your Android device, you might want to check out APK Extreme Car Driving Simulator. This game lets you drive, drift, and feel a racing sports car in a huge open world city. You can perform illegal stunts, run from the police, and explore different locations without any limits. In this article, we will review APK Extreme Car Driving Simulator and tell you why you should play it, what features it has, how to download and install it, and what are its pros and cons.
- What is APK Extreme Car Driving Simulator?
-APK Extreme Car Driving Simulator is a game developed by AxesInMotion Racing that was released in 2014. It is one of the most popular car simulator games on Google Play Store with over 500 million downloads. It is also available on Uptodown, where you can download it for free.
-apk extreme car driving simulator
Download ››››› https://urllie.com/2uNyBU
- Why should you play APK Extreme Car Driving Simulator?
-There are many reasons why you should play APK Extreme Car Driving Simulator. Here are some of them:
-
-- You can experience the thrill of driving a sports car in a realistic way.
-- You can choose from different game modes such as checkpoint mode, traffic mode, or free mode.
-- You can customize your car with different colors, wheels, vinyls, and spoilers.
-- You can enjoy stunning graphics and sound effects that make you feel like you are in a real car.
-- You can control your car with different options such as steering wheel, accelerometer, or arrows.
-- You can explore a detailed open world environment with different scenarios such as city, airport, off-road, or desert.
-- You can challenge yourself with realistic car damage and physics that make you crash your car if you are not careful.
-- You can have fun with no rules or limits. You can drive as fast as you want, drift as much as you want, and do whatever you want.
-
- Features of APK Extreme Car Driving Simulator
-APK Extreme Car Driving Simulator has many features that make it an enjoyable game to play. Here are some of them:
- Game modes
-You can choose from three different game modes in APK Extreme Car Driving Simulator:
-
-- Checkpoint mode: Race against the clock and drive through a series of checkpoints before the time runs out.
-- Traffic mode: Drive and drift through traffic to earn extra coins.
-- Free mode: Roam the open world freely with no objectives, rules, or limits.
-
- Pros
-
-- The game has stunning graphics and sound effects that make it immersive.
-- The game has different game modes that offer different challenges and objectives.
-- The game has a huge open world environment that you can explore with your car.
-- The game has realistic physics and car damage that make the game more challenging and fun.
-- The game has a lot of car customization options that let you personalize your car.
-
- Cons
-
-- The game can be repetitive and boring after a while as there is no story or progression.
-- The game can be buggy and glitchy sometimes as it may crash or freeze.
-- The game can be annoying with the ads that pop up frequently and interrupt the gameplay.
-- The game can be hard to control with some devices as the sensitivity may be too high or low.
-- The game can be unrealistic with some aspects such as the police chase or the traffic behavior.
-
- Conclusion
-APK Extreme Car Driving Simulator is a game that lets you drive, drift, and feel a racing sports car in a huge open world city. You can perform illegal stunts, run from the police, and explore different locations without any limits. The game has stunning graphics and sound effects, different game modes, realistic physics and car damage, car customization options, and a huge open world environment. However, it also has some drawbacks, such as repetitive gameplay, occasional bugs, frequent ads, touchy controls, and a few unrealistic touches. Overall, APK Extreme Car Driving Simulator is a great game for car enthusiasts who want to experience driving a sports car in a realistic way. You can download it for free from Google Play Store or Uptodown and start driving.
- FAQs
-Here are some frequently asked questions about APK Extreme Car Driving Simulator:
- Q: How many cars are there in APK Extreme Car Driving Simulator?
-A: There are over 20 cars in APK Extreme Car Driving Simulator that you can unlock by earning coins or watching ads. Some of the cars are Ferrari, Lamborghini, Bugatti, Pagani, and McLaren.
- Q: How can I get more coins in APK Extreme Car Driving Simulator?
-A: You can get more coins in APK Extreme Car Driving Simulator by completing the levels in checkpoint mode, drifting in traffic mode, or watching ads. You can also get bonus coins by performing stunts or driving fast.
- Q: How can I turn off the ads in APK Extreme Car Driving Simulator?
-A: You can turn off the ads in APK Extreme Car Driving Simulator by purchasing the premium version of the game for $1.99. This will also unlock all the cars and remove the watermark from the screen.
- Q: How can I change the weather or time of day in APK Extreme Car Driving Simulator?
-A: You can change the weather or time of day in APK Extreme Car Driving Simulator by tapping on the sun or cloud icon on the top right corner of the screen. You can choose from sunny, cloudy, rainy, snowy, day, or night.
- Q: How can I reset my car or go back to the garage in APK Extreme Car Driving Simulator?
-A: You can reset your car or go back to the garage in APK Extreme Car Driving Simulator by tapping on the reset or garage icon on the bottom left corner of the screen. This will also repair your car if it is damaged.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/BombSquad The Ultimate Guide to Unlock All Characters and More.md b/spaces/fatiXbelha/sd/BombSquad The Ultimate Guide to Unlock All Characters and More.md
deleted file mode 100644
index acb7a75e622fe3c1c68330a6154cfc47e3e8756f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/BombSquad The Ultimate Guide to Unlock All Characters and More.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-How to Download BombSquad Unlock All Characters
-BombSquad is a popular action game that lets you blow up your friends in various mini-games. But did you know that you can unlock all the characters in the game for free? In this article, we will show you how to download bombsquad unlock all characters using two different methods. But first, let's learn more about the game itself.
-What is BombSquad?
-A fun and explosive multiplayer game
-BombSquad is an action game developed by Eric Froemling. It features 8 player local or networked multiplayer, gratuitous explosions, advanced ragdoll physics, pirates, ninjas, barbarians, insane chefs, and more. You can play various mini-games such as capture-the-flag, hockey, king-of-the-hill, and bomb. You can also create your own custom games with the built-in editor. The game supports touch screens as well as a variety of controllers, including phones and tablets via the free 'BombSquad Remote' app.
-download bombsquad unlock all characters
Download →→→ https://urllie.com/2uNHTc
-How to play BombSquad on different devices
-BombSquad is available on Android, iOS, Mac, Windows, Linux, and Android TV. You can download it from the official website or from the app stores. To play with your friends, you can either join an online server or host your own local server. You can also play solo or with bots if you prefer. The game is easy to learn but hard to master. You need to use your skills and strategy to win the matches and earn tickets, which you can use to buy new characters, maps, modes, and power-ups.
-Why unlock all characters in BombSquad?
-More variety and customization
-BombSquad has a lot of characters to choose from, each with their own appearance and personality. Some of them are based on popular movies, TV shows, games, and celebrities. For example, you can play as Indiana Jones, Batman, Iron Man, Spider-Man, Hulk, Captain America, Thor, Darth Vader, Yoda, Mario, Luigi, Sonic, Pikachu, Harry Potter, Gandalf, Frodo, Homer Simpson, SpongeBob SquarePants, Mr. Bean, Chuck Norris, Bruce Lee, Jackie Chan, and many more. You can also customize your character's color and name.
-More fun and challenge
-Unlocking all the characters in BombSquad can make the game more fun and challenging. You can try different combinations of characters and see how they interact with each other. You can also use different characters for different modes and maps. For example, you can use a fast character for a racing mode or a strong character for a fighting mode. You can also challenge yourself by playing with random characters or by using the same character as your opponents.
-How to download BombSquad unlock all characters?
-Method 1: Use a plugin
-One way to download bombsquad unlock all characters is to use a plugin that will let you choose any character without purchasing them. This method works for online servers that have custom characters installed. Here are the steps to follow:
-Step 1: Download the plugin
-You can download the plugin from this link. It is called Character Chooser and it was created by Mr.Smoothy. It is a script file that you need to place in your BombSquad folder.
-Step 2: Install the plugin
-To install the plugin, you need to copy the script file to your BombSquad folder. The location of the folder depends on your device and operating system. For example, on Android, it is usually in /sdcard/BombSquad. On Windows, it is usually in C:\Users\YourName\AppData\Roaming\BombSquad. On Mac, it is usually in ~/Library/Application Support/BombSquad. On Linux, it is usually in ~/.bombsquad. You can also find the folder by going to the settings menu in the game and choosing 'Show Mods Folder'. Once you have copied the file, you need to restart the game for the plugin to take effect.
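-
-If you would rather not hunt for the folder by hand, here is a minimal Python sketch that copies the script into the desktop locations quoted above. It is only a sketch: the plugin filename is a placeholder (use whatever the downloaded script is actually called), and on Android you would still copy the file to /sdcard/BombSquad manually.
-
-```python
-import shutil
-import sys
-from pathlib import Path
-
-def bombsquad_dir() -> Path:
-    """Return the BombSquad folder for the current desktop OS (paths as quoted above)."""
-    if sys.platform.startswith("win"):
-        return Path.home() / "AppData" / "Roaming" / "BombSquad"
-    if sys.platform == "darwin":
-        return Path.home() / "Library" / "Application Support" / "BombSquad"
-    return Path.home() / ".bombsquad"  # Linux
-
-plugin = Path("character_chooser.py")  # placeholder name for the downloaded script
-target = bombsquad_dir()
-target.mkdir(parents=True, exist_ok=True)
-shutil.copy2(plugin, target / plugin.name)
-print(f"Copied {plugin.name} into {target} -- restart BombSquad to load it")
-```
-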
-Step 3: Choose your character
-Now that you have installed the plugin, you can choose any character you want without paying for them. To do this, you need to join an online server that has custom characters enabled. You can find such servers by looking for the ones that have a star icon next to their name. Once you join a server, you will see a new button on the top right corner of the screen that says 'Choose Character'. Tap on it and you will see a list of all the available characters. You can scroll through them and select the one you like. You can also change your character anytime during the game by tapping on the same button.
-Another way to download bombsquad unlock all characters is to use a modded APK that has all the characters unlocked by default. This method works for offline and online servers, but you may not be able to join some servers that have anti-cheat measures. Here are the steps to follow:
-Step 1: Download the modded APK
-You can download the modded APK from this link. It is called BombSquad Pro Mod Apk and it was created by TechyList. It is a modified version of the original game that has all the features unlocked, including characters, maps, modes, power-ups, and tickets.
-Step 2: Install the modded APK
-To install the modded APK, you need to uninstall the original game from your device first. Then, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the app store. After that, you need to locate the downloaded file and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-Step 3: Enjoy the game
-Now that you have installed the modded APK, you can enjoy the game with all the characters unlocked. You can play offline or online with your friends or with other players around the world. You can also customize your character's color and name as you wish.
-Conclusion
-BombSquad is a fun and explosive multiplayer game that lets you blow up your friends in various mini-games. You can unlock all the characters in the game for free by using either a plugin or a modded APK. Both methods are easy and safe to use, but they may have some limitations depending on your device and server. We hope this article helped you learn how to download bombsquad unlock all characters and enjoy the game more.
-FAQs
-
-- Q: Is BombSquad free to play?
-- A: Yes, BombSquad is free to play on all platforms. However, some features may require in-app purchases or tickets.
-- Q: How many characters are there in BombSquad?
-- A: There are over 100 characters in BombSquad, including custom ones made by fans.
-- Q: How do I create my own custom character in BombSquad?
-- A: You can create your own custom character in BombSquad by using a tool called BS Head Editor. It allows you to design your character's head using various shapes, colors, textures, and effects.
-- Q: How do I share my custom character with others in BombSquad?
-- A: You can share your custom character with others in BombSquad by uploading it to a website called BS Community. It is a platform where you can find and download custom characters, maps, modes, scripts, and more made by other players.
-- Q: How do I report a bug or a problem in BombSquad?
-- A: You can report a bug or a problem in BombS Squad by contacting the developer via email or social media. You can also post your issue on the official forum or the subreddit of the game.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Dummy Resume Samples for Free Choose from 500 Designs.md b/spaces/fatiXbelha/sd/Download Dummy Resume Samples for Free Choose from 500 Designs.md
deleted file mode 100644
index 44379652c64ddcbd67dd7bf6059f01431d394492..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Dummy Resume Samples for Free Choose from 500 Designs.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-Dummy Resume Download: How to Create a Professional Resume in Minutes
-A resume is one of the most important documents you need to prepare when applying for a job. It summarizes your qualifications, skills, and achievements in a concise and compelling way. However, writing a resume from scratch can be challenging and time-consuming, especially if you are not sure what to include and how to format it.
-That's where a dummy resume comes in handy. A dummy resume is a template that you can use to create your own resume in minutes. You don't have to worry about the layout, design, or content of your resume, as the template provides you with everything you need. All you have to do is fill in your information and customize it to fit your needs and preferences.
-dummy resume download
Download File 🆗 https://urllie.com/2uNEme
-What is a dummy resume and why do you need one?
-A dummy resume is a template that you can use to create your own resume
-A dummy resume is not a fake or misleading resume. It is simply a pre-made document that contains the essential elements of a professional resume, such as:
-
-- Your name and contact information
-- A summary or objective statement
-- Your work experience
-- Your education
-- Your skills and interests
-- Any additional information relevant to the job
-
-A dummy resume template gives you a clear structure and format for your resume, as well as some examples of what to write in each section. You can use it as a guide or inspiration for creating your own resume.
-Benefits of using a dummy resume template
-Save time and effort
-Writing a resume from scratch can take hours or even days of research, brainstorming, writing, editing, and proofreading. With a dummy resume template, you can save yourself a lot of time and effort by simply filling in the blanks with your own information. You don't have to worry about the length, order, or style of your resume, as the template takes care of that for you.
-Follow the best practices and standards
-A dummy resume template is designed by experts who know what employers are looking for in a resume. They follow the best practices and standards of resume writing, such as using clear and concise language, highlighting relevant keywords, using bullet points and white space, and avoiding common errors and mistakes. By using a dummy resume template, you can ensure that your resume meets the expectations of hiring managers and recruiters.
-Customize it to suit your needs and preferences
-A dummy resume template is not a one-size-fits-all solution. You can customize it to suit your needs and preferences by changing the font, color, layout, or content of the template. You can also add or delete sections as needed, depending on the requirements of the job you are applying for. A dummy resume template gives you the flexibility to create a unique and personalized resume that showcases your strengths and skills.
-How to choose the right dummy resume template for your job application
-Consider your industry and job role
-Not all resumes are created equal. Different industries and job roles may have different expectations and preferences for resumes. For example, a creative industry may prefer a more colorful and artistic resume, while a corporate industry may prefer a more formal and professional resume. Therefore, you should choose a dummy resume template that matches your industry and job role, as well as the company culture and values. You can browse through different categories and samples of resume templates online to find the one that suits you best.
-There are three main types of resume formats: chronological, functional, and hybrid. Each one has its own advantages and disadvantages, depending on your work history, skills, and achievements. Here is a brief overview of each format:
-
-- Chronological: This format lists your work experience in reverse chronological order, starting with your most recent job. It is the most common and preferred format by employers, as it shows your career progression and stability. It is ideal for candidates who have a consistent and relevant work history.
-- Functional: This format focuses on your skills and abilities, rather than your work experience. It groups your skills into categories and provides examples of how you used them in different situations. It is ideal for candidates who have gaps in their work history, are changing careers, or have limited work experience.
-- Hybrid: This format combines the best of both chronological and functional formats. It highlights your skills and achievements at the top of your resume, followed by your work experience in reverse chronological order. It is ideal for candidates who want to showcase both their skills and their work history.
-
-You should pick the format that highlights your strengths and skills, as well as the requirements of the job you are applying for. You can use a dummy resume template that follows the format you choose, or you can mix and match different elements from different templates to create your own format.
-Look for a design that matches your personality and brand
-The design of your resume is not just about aesthetics. It is also about creating a positive impression and conveying your personality and brand. Your resume design should reflect who you are, what you do, and how you do it. You should look for a design that matches your personality and brand, as well as the tone and style of the job you are applying for. Here are some tips to help you choose the right design for your resume:
-
-- Use a simple and clean layout that is easy to read and scan
-- Choose a font that is professional and legible
-- Use colors that are appropriate and consistent with your industry and job role
-- Add some visual elements, such as icons, graphs, or charts, to make your resume more attractive and informative
-- Avoid using too many graphics, images, or effects that may distract or confuse the reader
-
-You can use a dummy resume template that has a design that matches your personality and brand, or you can customize it to fit your preferences. You can also use online tools or software to create your own design from scratch.
-How to download and use a dummy resume template
-Find a reliable and reputable source of free resume templates
-There are many websites that offer free resume templates that you can download and use. However, not all of them are reliable and reputable. Some of them may have low-quality templates, outdated formats, or hidden fees. You should be careful when choosing a source of free resume templates, and look for the following features:
-
-- A large collection of templates for different industries, job roles, and formats
-- A user-friendly interface that allows you to preview, select, and download the templates easily
-- A secure and trustworthy website that protects your privacy and data
-- A positive feedback and rating from other users who have used the templates
-- A customer support service that can help you with any issues or questions you may have
-
-One example of a reliable and reputable source of free resume templates is [Resume Genius]. Resume Genius offers over 50 professional resume templates that you can download in PDF or Word format. You can also use their online resume builder to create your resume in minutes.
-Select and download the template that suits you best
-Once you have found a source of free resume templates, you can browse through their collection and select the template that suits you best. You should consider the following factors when choosing a template:
-
-- The industry and job role you are applying for
-- The format that highlights your strengths and skills
-- The design that matches your personality and brand
-- The compatibility with the software or device you are using
-- The ease of editing and customization
-
-You can preview the template before downloading it to see how it looks like. You can also compare different templates to see which one fits your needs and preferences better. Once you have decided on a template, you can download it in the format that you prefer, such as PDF or Word. You can also save it to your computer or cloud storage for future use.
-Fill in your information and edit the template as needed
-After downloading the template, you can open it with the software or device that you are using, such as Microsoft Word, Google Docs, or Adobe Acrobat. You can then fill in your information and edit the template as needed. You should follow these steps when filling in and editing your resume:
-
-- Start with your name and contact information at the top of your resume. Make sure to include your phone number, email address, and LinkedIn profile.
-- Write a summary or objective statement that summarizes your qualifications, skills, and goals in one or two sentences. This should capture the attention of the reader and make them want to read more.
-- List your work experience in reverse chronological order, starting with your most recent job. For each job, include the company name, location, dates of employment, job title, and a few bullet points that describe your responsibilities and achievements. Use action verbs and quantifiable results to showcase your impact.
-- List your education in reverse chronological order, starting with your highest degree. For each degree, include the school name, location, dates of attendance, degree name, and major. You can also include your GPA, honors, or awards if they are relevant and impressive.
-- List your skills and interests that are relevant to the job you are applying for. You can use a table or a bullet list to organize your skills and interests into categories, such as technical skills, soft skills, languages, hobbies, etc. You can also include your proficiency level or certifications if applicable.
-- Add any additional information that is relevant to the job you are applying for, such as volunteer work, publications, projects, awards, etc. You can use a separate section or a table to highlight this information.
-- Edit and proofread your resume for any errors or mistakes. You can use online tools or software to check your spelling, grammar, punctuation, and formatting. You can also ask someone else to review your resume and give you feedback.
-
-You can also customize your resume by changing the font, color, layout, or content of the template as needed. You can also add some visual elements, such as icons, graphs, or charts, to make your resume more attractive and informative. However, you should avoid making too many changes that may distract or confuse the reader.
-Conclusion
-A dummy resume is a template that you can use to create a professional resume in minutes. It can help you save time and effort, follow the best practices and standards, and customize it to suit your needs and preferences. However, you should also choose the right dummy resume template for your job application, download and use it from a reliable and reputable source, and fill in your information and edit it as needed. By doing so, you can create a unique and personalized resume that showcases your strengths and skills and impresses potential employers.
-FAQs
-What is the difference between a dummy resume and a sample resume?
-A dummy resume is a template that you can use to create your own resume by filling in your information and editing it as needed. A sample resume is an example of a completed resume that you can use as a reference or inspiration for creating your own resume.
-Where can I find free dummy resume templates?
-There are many websites that offer free dummy resume templates that you can download and use. However, not all of them are reliable and reputable. One example of a reliable and reputable source of free resume templates is [Resume Genius]. Resume Genius offers over 50 professional resume templates that you can download in PDF or Word format. You can also use their online resume builder to create your resume in minutes.
-How do I know which format to use for my dummy resume?
-The format of your dummy resume depends on your work history, skills, and achievements, as well as the requirements of the job you are applying for. There are three main types of resume formats: chronological, functional, and hybrid. You should pick the format that highlights your strengths and skills, as well as the expectations of the employer. You can use a dummy resume template that follows the format you choose, or you can mix and match different elements from different templates to create your own format.
-How do I make my dummy resume stand out from the crowd?
-To make your dummy resume stand out from the crowd, you should customize it to fit your needs and preferences, as well as the tone and style of the job you are applying for. You should also use clear and concise language, highlight relevant keywords, use bullet points and white space, and avoid common errors and mistakes. You can also add some visual elements, such as icons, graphs, or charts, to make your resume more attractive and informative. However, you should avoid using too many graphics, images, or effects that may distract or confuse the reader.
-How do I update my dummy resume for different jobs?
-To update your dummy resume for different jobs, you should tailor it to fit the specific requirements and preferences of each job. You should research the company and the job role, and use the keywords and phrases that match their expectations. You should also emphasize your skills and achievements that are relevant and valuable to the job. You can also change the format, design, or content of your resume as needed, depending on the industry and job role.
-
-
\ No newline at end of file
diff --git a/spaces/fclong/summary/fengshen/models/tagging_models/layers/crf.py b/spaces/fclong/summary/fengshen/models/tagging_models/layers/crf.py
deleted file mode 100644
index d8b3adcc988898a74426bda2412ad101aa804bda..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/models/tagging_models/layers/crf.py
+++ /dev/null
@@ -1,411 +0,0 @@
-import torch
-import torch.nn as nn
-from typing import List, Optional
-
-class CRF(nn.Module):
- """Conditional random field.
- This module implements a conditional random field [LMP01]_. The forward computation
- of this class computes the log likelihood of the given sequence of tags and
- emission score tensor. This class also has `~CRF.decode` method which finds
- the best tag sequence given an emission score tensor using `Viterbi algorithm`_.
- Args:
- num_tags: Number of tags.
- batch_first: Whether the first dimension corresponds to the size of a minibatch.
- Attributes:
- start_transitions (`~torch.nn.Parameter`): Start transition score tensor of size
- ``(num_tags,)``.
- end_transitions (`~torch.nn.Parameter`): End transition score tensor of size
- ``(num_tags,)``.
- transitions (`~torch.nn.Parameter`): Transition score tensor of size
- ``(num_tags, num_tags)``.
- .. [LMP01] Lafferty, J., McCallum, A., Pereira, F. (2001).
- "Conditional random fields: Probabilistic models for segmenting and
- labeling sequence data". *Proc. 18th International Conf. on Machine
- Learning*. Morgan Kaufmann. pp. 282–289.
- .. _Viterbi algorithm: https://en.wikipedia.org/wiki/Viterbi_algorithm
- """
-
- def __init__(self, num_tags: int, batch_first: bool = False) -> None:
- if num_tags <= 0:
- raise ValueError(f'invalid number of tags: {num_tags}')
- super().__init__()
- self.num_tags = num_tags
- self.batch_first = batch_first
- self.start_transitions = nn.Parameter(torch.empty(num_tags))
- self.end_transitions = nn.Parameter(torch.empty(num_tags))
- self.transitions = nn.Parameter(torch.empty(num_tags, num_tags))
-
- self.reset_parameters()
-
- def reset_parameters(self) -> None:
- """Initialize the transition parameters.
- The parameters will be initialized randomly from a uniform distribution
- between -0.1 and 0.1.
- """
- nn.init.uniform_(self.start_transitions, -0.1, 0.1)
- nn.init.uniform_(self.end_transitions, -0.1, 0.1)
- nn.init.uniform_(self.transitions, -0.1, 0.1)
-
- def __repr__(self) -> str:
- return f'{self.__class__.__name__}(num_tags={self.num_tags})'
-
- def forward(self, emissions: torch.Tensor,
- tags: torch.LongTensor,
- mask: Optional[torch.ByteTensor] = None,
- reduction: str = 'mean') -> torch.Tensor:
- """Compute the conditional log likelihood of a sequence of tags given emission scores.
- Args:
- emissions (`~torch.Tensor`): Emission score tensor of size
- ``(seq_length, batch_size, num_tags)`` if ``batch_first`` is ``False``,
- ``(batch_size, seq_length, num_tags)`` otherwise.
- tags (`~torch.LongTensor`): Sequence of tags tensor of size
- ``(seq_length, batch_size)`` if ``batch_first`` is ``False``,
- ``(batch_size, seq_length)`` otherwise.
- mask (`~torch.ByteTensor`): Mask tensor of size ``(seq_length, batch_size)``
- if ``batch_first`` is ``False``, ``(batch_size, seq_length)`` otherwise.
- reduction: Specifies the reduction to apply to the output:
- ``none|sum|mean|token_mean``. ``none``: no reduction will be applied.
- ``sum``: the output will be summed over batches. ``mean``: the output will be
- averaged over batches. ``token_mean``: the output will be averaged over tokens.
- Returns:
- `~torch.Tensor`: The log likelihood. This will have size ``(batch_size,)`` if
- reduction is ``none``, ``()`` otherwise.
- """
- if reduction not in ('none', 'sum', 'mean', 'token_mean'):
- raise ValueError(f'invalid reduction: {reduction}')
- if mask is None:
- mask = torch.ones_like(tags, dtype=torch.uint8, device=tags.device)
- if mask.dtype != torch.uint8:
- mask = mask.byte()
- self._validate(emissions, tags=tags, mask=mask)
-
- if self.batch_first:
- emissions = emissions.transpose(0, 1)
- tags = tags.transpose(0, 1)
- mask = mask.transpose(0, 1)
-
- # shape: (batch_size,)
- numerator = self._compute_score(emissions, tags, mask)
- # shape: (batch_size,)
- denominator = self._compute_normalizer(emissions, mask)
- # shape: (batch_size,)
- llh = numerator - denominator
-
- if reduction == 'none':
- return llh
- if reduction == 'sum':
- return llh.sum()
- if reduction == 'mean':
- return llh.mean()
- return llh.sum() / mask.float().sum()
-
- def decode(self, emissions: torch.Tensor,
- mask: Optional[torch.ByteTensor] = None,
- nbest: Optional[int] = None,
- pad_tag: Optional[int] = None) -> List[List[List[int]]]:
- """Find the most likely tag sequence using Viterbi algorithm.
- Args:
- emissions (`~torch.Tensor`): Emission score tensor of size
- ``(seq_length, batch_size, num_tags)`` if ``batch_first`` is ``False``,
- ``(batch_size, seq_length, num_tags)`` otherwise.
- mask (`~torch.ByteTensor`): Mask tensor of size ``(seq_length, batch_size)``
- if ``batch_first`` is ``False``, ``(batch_size, seq_length)`` otherwise.
- nbest (`int`): Number of most probable paths for each sequence
- pad_tag (`int`): Tag at padded positions. Often input varies in length and
- the length will be padded to the maximum length in the batch. Tags at
- the padded positions will be assigned with a padding tag, i.e. `pad_tag`
- Returns:
- A PyTorch tensor of the best tag sequence for each batch of shape
- (nbest, batch_size, seq_length)
- """
- if nbest is None:
- nbest = 1
- if mask is None:
- mask = torch.ones(emissions.shape[:2], dtype=torch.uint8,
- device=emissions.device)
- if mask.dtype != torch.uint8:
- mask = mask.byte()
- self._validate(emissions, mask=mask)
-
- if self.batch_first:
- emissions = emissions.transpose(0, 1)
- mask = mask.transpose(0, 1)
-
- if nbest == 1:
- return self._viterbi_decode(emissions, mask, pad_tag).unsqueeze(0)
- return self._viterbi_decode_nbest(emissions, mask, nbest, pad_tag)
-
- def _validate(self, emissions: torch.Tensor,
- tags: Optional[torch.LongTensor] = None,
- mask: Optional[torch.ByteTensor] = None) -> None:
- if emissions.dim() != 3:
- raise ValueError(f'emissions must have dimension of 3, got {emissions.dim()}')
- if emissions.size(2) != self.num_tags:
- raise ValueError(
- f'expected last dimension of emissions is {self.num_tags}, '
- f'got {emissions.size(2)}')
-
- if tags is not None:
- if emissions.shape[:2] != tags.shape:
- raise ValueError(
- 'the first two dimensions of emissions and tags must match, '
- f'got {tuple(emissions.shape[:2])} and {tuple(tags.shape)}')
-
- if mask is not None:
- if emissions.shape[:2] != mask.shape:
- raise ValueError(
- 'the first two dimensions of emissions and mask must match, '
- f'got {tuple(emissions.shape[:2])} and {tuple(mask.shape)}')
- no_empty_seq = not self.batch_first and mask[0].all()
- no_empty_seq_bf = self.batch_first and mask[:, 0].all()
- if not no_empty_seq and not no_empty_seq_bf:
- raise ValueError('mask of the first timestep must all be on')
-
- def _compute_score(self, emissions: torch.Tensor,
- tags: torch.LongTensor,
- mask: torch.ByteTensor) -> torch.Tensor:
- # emissions: (seq_length, batch_size, num_tags)
- # tags: (seq_length, batch_size)
- # mask: (seq_length, batch_size)
- seq_length, batch_size = tags.shape
- mask = mask.float()
-
- # Start transition score and first emission
- # shape: (batch_size,)
- score = self.start_transitions[tags[0]]
- score += emissions[0, torch.arange(batch_size), tags[0]]
-
- for i in range(1, seq_length):
- # Transition score to next tag, only added if next timestep is valid (mask == 1)
- # shape: (batch_size,)
- score += self.transitions[tags[i - 1], tags[i]] * mask[i]
-
- # Emission score for next tag, only added if next timestep is valid (mask == 1)
- # shape: (batch_size,)
- score += emissions[i, torch.arange(batch_size), tags[i]] * mask[i]
-
- # End transition score
- # shape: (batch_size,)
- seq_ends = mask.long().sum(dim=0) - 1
- # shape: (batch_size,)
- last_tags = tags[seq_ends, torch.arange(batch_size)]
- # shape: (batch_size,)
- score += self.end_transitions[last_tags]
-
- return score
-
- def _compute_normalizer(self, emissions: torch.Tensor,
- mask: torch.ByteTensor) -> torch.Tensor:
- # emissions: (seq_length, batch_size, num_tags)
- # mask: (seq_length, batch_size)
- seq_length = emissions.size(0)
-
- # Start transition score and first emission; score has size of
- # (batch_size, num_tags) where for each batch, the j-th column stores
- # the score that the first timestep has tag j
- # shape: (batch_size, num_tags)
- score = self.start_transitions + emissions[0]
-
- for i in range(1, seq_length):
- # Broadcast score for every possible next tag
- # shape: (batch_size, num_tags, 1)
- broadcast_score = score.unsqueeze(2)
-
- # Broadcast emission score for every possible current tag
- # shape: (batch_size, 1, num_tags)
- broadcast_emissions = emissions[i].unsqueeze(1)
-
- # Compute the score tensor of size (batch_size, num_tags, num_tags) where
- # for each sample, entry at row i and column j stores the sum of scores of all
- # possible tag sequences so far that end with transitioning from tag i to tag j
- # and emitting
- # shape: (batch_size, num_tags, num_tags)
- next_score = broadcast_score + self.transitions + broadcast_emissions
-
- # Sum over all possible current tags, but we're in score space, so a sum
- # becomes a log-sum-exp: for each sample, entry i stores the sum of scores of
- # all possible tag sequences so far, that end in tag i
- # shape: (batch_size, num_tags)
- next_score = torch.logsumexp(next_score, dim=1)
-
- # Set score to the next score if this timestep is valid (mask == 1)
- # shape: (batch_size, num_tags)
- score = torch.where(mask[i].unsqueeze(1), next_score, score)
-
- # End transition score
- # shape: (batch_size, num_tags)
- score += self.end_transitions
-
- # Sum (log-sum-exp) over all possible tags
- # shape: (batch_size,)
- return torch.logsumexp(score, dim=1)
-
- def _viterbi_decode(self, emissions: torch.FloatTensor,
- mask: torch.ByteTensor,
- pad_tag: Optional[int] = None) -> List[List[int]]:
- # emissions: (seq_length, batch_size, num_tags)
- # mask: (seq_length, batch_size)
- # return: (batch_size, seq_length)
- if pad_tag is None:
- pad_tag = 0
-
- device = emissions.device
- seq_length, batch_size = mask.shape
-
- # Start transition and first emission
- # shape: (batch_size, num_tags)
- score = self.start_transitions + emissions[0]
- history_idx = torch.zeros((seq_length, batch_size, self.num_tags),
- dtype=torch.long, device=device)
- oor_idx = torch.zeros((batch_size, self.num_tags),
- dtype=torch.long, device=device)
- oor_tag = torch.full((seq_length, batch_size), pad_tag,
- dtype=torch.long, device=device)
-
- # - score is a tensor of size (batch_size, num_tags) where for every batch,
- # value at column j stores the score of the best tag sequence so far that ends
- # with tag j
- # - history_idx saves where the best tags candidate transitioned from; this is used
- # when we trace back the best tag sequence
- # - oor_idx saves the best tags candidate transitioned from at the positions
- # where mask is 0, i.e. out of range (oor)
-
- # Viterbi algorithm recursive case: we compute the score of the best tag sequence
- # for every possible next tag
- for i in range(1, seq_length):
- # Broadcast viterbi score for every possible next tag
- # shape: (batch_size, num_tags, 1)
- broadcast_score = score.unsqueeze(2)
-
- # Broadcast emission score for every possible current tag
- # shape: (batch_size, 1, num_tags)
- broadcast_emission = emissions[i].unsqueeze(1)
-
- # Compute the score tensor of size (batch_size, num_tags, num_tags) where
- # for each sample, entry at row i and column j stores the score of the best
- # tag sequence so far that ends with transitioning from tag i to tag j and emitting
- # shape: (batch_size, num_tags, num_tags)
- next_score = broadcast_score + self.transitions + broadcast_emission
-
- # Find the maximum score over all possible current tag
- # shape: (batch_size, num_tags)
- next_score, indices = next_score.max(dim=1)
-
- # Set score to the next score if this timestep is valid (mask == 1)
- # and save the index that produces the next score
- # shape: (batch_size, num_tags)
- score = torch.where(mask[i].unsqueeze(-1), next_score, score)
- indices = torch.where(mask[i].unsqueeze(-1), indices, oor_idx)
- history_idx[i - 1] = indices
-
- # End transition score
- # shape: (batch_size, num_tags)
- end_score = score + self.end_transitions
- _, end_tag = end_score.max(dim=1)
-
- # shape: (batch_size,)
- seq_ends = mask.long().sum(dim=0) - 1
-
- # insert the best tag at each sequence end (last position with mask == 1)
- history_idx = history_idx.transpose(1, 0).contiguous()
- history_idx.scatter_(1, seq_ends.view(-1, 1, 1).expand(-1, 1, self.num_tags),
- end_tag.view(-1, 1, 1).expand(-1, 1, self.num_tags))
- history_idx = history_idx.transpose(1, 0).contiguous()
-
- # The most probable path for each sequence
- best_tags_arr = torch.zeros((seq_length, batch_size),
- dtype=torch.long, device=device)
- best_tags = torch.zeros(batch_size, 1, dtype=torch.long, device=device)
- for idx in range(seq_length - 1, -1, -1):
- best_tags = torch.gather(history_idx[idx], 1, best_tags)
- best_tags_arr[idx] = best_tags.data.view(batch_size)
-
- return torch.where(mask, best_tags_arr, oor_tag).transpose(0, 1)
-
- def _viterbi_decode_nbest(self, emissions: torch.FloatTensor,
- mask: torch.ByteTensor,
- nbest: int,
- pad_tag: Optional[int] = None) -> List[List[List[int]]]:
- # emissions: (seq_length, batch_size, num_tags)
- # mask: (seq_length, batch_size)
- # return: (nbest, batch_size, seq_length)
- if pad_tag is None:
- pad_tag = 0
-
- device = emissions.device
- seq_length, batch_size = mask.shape
-
- # Start transition and first emission
- # shape: (batch_size, num_tags)
- score = self.start_transitions + emissions[0]
- history_idx = torch.zeros((seq_length, batch_size, self.num_tags, nbest),
- dtype=torch.long, device=device)
- oor_idx = torch.zeros((batch_size, self.num_tags, nbest),
- dtype=torch.long, device=device)
- oor_tag = torch.full((seq_length, batch_size, nbest), pad_tag,
- dtype=torch.long, device=device)
-
- # + score is a tensor of size (batch_size, num_tags) where for every batch,
- # value at column j stores the score of the best tag sequence so far that ends
- # with tag j
- # + history_idx saves where the best tags candidate transitioned from; this is used
- # when we trace back the best tag sequence
- # - oor_idx saves the best tags candidate transitioned from at the positions
- # where mask is 0, i.e. out of range (oor)
-
- # Viterbi algorithm recursive case: we compute the score of the best tag sequence
- # for every possible next tag
- for i in range(1, seq_length):
- if i == 1:
- broadcast_score = score.unsqueeze(-1)
- broadcast_emission = emissions[i].unsqueeze(1)
- # shape: (batch_size, num_tags, num_tags)
- next_score = broadcast_score + self.transitions + broadcast_emission
- else:
- broadcast_score = score.unsqueeze(-1)
- broadcast_emission = emissions[i].unsqueeze(1).unsqueeze(2)
- # shape: (batch_size, num_tags, nbest, num_tags)
- next_score = broadcast_score + self.transitions.unsqueeze(1) + broadcast_emission
-
- # Find the top `nbest` maximum score over all possible current tag
- # shape: (batch_size, nbest, num_tags)
- next_score, indices = next_score.view(batch_size, -1, self.num_tags).topk(nbest, dim=1)
-
- if i == 1:
- score = score.unsqueeze(-1).expand(-1, -1, nbest)
- indices = indices * nbest
-
- # convert to shape: (batch_size, num_tags, nbest)
- next_score = next_score.transpose(2, 1)
- indices = indices.transpose(2, 1)
-
- # Set score to the next score if this timestep is valid (mask == 1)
- # and save the index that produces the next score
- # shape: (batch_size, num_tags, nbest)
- score = torch.where(mask[i].unsqueeze(-1).unsqueeze(-1), next_score, score)
- indices = torch.where(mask[i].unsqueeze(-1).unsqueeze(-1), indices, oor_idx)
- history_idx[i - 1] = indices
-
- # End transition score shape: (batch_size, num_tags, nbest)
- end_score = score + self.end_transitions.unsqueeze(-1)
- _, end_tag = end_score.view(batch_size, -1).topk(nbest, dim=1)
-
- # shape: (batch_size,)
- seq_ends = mask.long().sum(dim=0) - 1
-
- # insert the best tag at each sequence end (last position with mask == 1)
- history_idx = history_idx.transpose(1, 0).contiguous()
- history_idx.scatter_(1, seq_ends.view(-1, 1, 1, 1).expand(-1, 1, self.num_tags, nbest),
- end_tag.view(-1, 1, 1, nbest).expand(-1, 1, self.num_tags, nbest))
- history_idx = history_idx.transpose(1, 0).contiguous()
-
- # The most probable path for each sequence
- best_tags_arr = torch.zeros((seq_length, batch_size, nbest),
- dtype=torch.long, device=device)
- best_tags = torch.arange(nbest, dtype=torch.long, device=device) \
- .view(1, -1).expand(batch_size, -1)
- for idx in range(seq_length - 1, -1, -1):
- best_tags = torch.gather(history_idx[idx].view(batch_size, -1), 1, best_tags)
- best_tags_arr[idx] = best_tags.data.view(batch_size, -1) // nbest
-
- return torch.where(mask.unsqueeze(-1), best_tags_arr, oor_tag).permute(2, 1, 0)
\ No newline at end of file
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/options/__init__.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/options/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Xbox Game Bar for Windows 10 and Enhance Your Gaming Experience.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Xbox Game Bar for Windows 10 and Enhance Your Gaming Experience.md
deleted file mode 100644
index 9619d1e78584827b489f0bb34b76b1cec75ee455..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Xbox Game Bar for Windows 10 and Enhance Your Gaming Experience.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-Game Bar Windows 10 Download: A Guide for Gamers
-If you are a gamer who wants to record, stream, or share your gameplay on Windows 10, you might be interested in downloading game bar. Game bar is a built-in tool that lets you access various widgets for gaming activities without leaving your game. You can also enable game mode to optimize your system performance and reduce interruptions. In this article, we will show you how to download game bar from the Microsoft Store, how to enable and configure its settings, how to use its features, and how to find alternatives if you are not happy with it.
-game bar windows 10 download
DOWNLOAD ↔ https://gohhs.com/2uPvC3
- How to Download Game Bar from the Microsoft Store
-Downloading game bar is easy and free. You just need to follow these steps:
-
-- Open the Microsoft Store app on your Windows 10 PC.
-- Search for "Xbox Game Bar" and select it from the results.
-- Click on "Get" or "Install" and wait for the download to complete.
-- Once installed, you can launch game bar by pressing Windows + G on your keyboard.
-
-You can also check for updates and manage your game bar settings from the Microsoft Store app.
- How to Enable and Configure Game Bar Settings
-Before you can use game bar, you need to enable it for the game or app you want to record or stream. You can do this by pressing Windows + G while playing the game or using the app. If you see a prompt to enable game bar, click on it. Otherwise, you can access the game bar settings by clicking on the gear icon on the top panel.
-The game bar settings have three tabs: General, Capturing, and Audio. Here are some of the options you can customize:
-
-- General: You can enable or disable game mode, background recording, Xbox social features, keyboard shortcuts, and more.
-- Capturing: You can choose the quality, resolution, frame rate, and duration of your recordings. You can also enable or disable your microphone or camera while capturing.
-- Audio: You can adjust the volume of your system, game, microphone, and other apps. You can also choose the audio quality and format of your recordings.
-
- How to Use Game Bar Features
-Game bar has many features that can enhance your gaming experience. Here are some of them:
-Screen Capture
-You can use game bar to take screenshots or record videos of your gameplay. To do this, press Windows + G and click on the camera icon for screenshots or the red circle icon for videos. You can also use keyboard shortcuts such as Windows + Alt + Print Screen for screenshots or Windows + Alt + R for videos. You can find your captures in the Captures folder under Videos in File Explorer or by clicking on "Show all captures" in game bar.
-Performance Monitor
-You can use game bar to monitor your system performance while playing games. To do this, press Windows + G and click on the performance icon. You will see a panel that shows your CPU, GPU, RAM, and disk usage. You can also see a graph of the usage over time. You can pin this panel to make it always visible on your screen.
-Spotify Integration
-You can use game bar to play music from Spotify while gaming. To do this, press Windows + G and click on the menu button. Select Spotify from the list and sign in with your Spotify account. You can then use the Spotify widget to play songs, control playback, and adjust volume.
-Other Features
-Game bar also has other features such as broadcasting, finding new teammates with LFG (looking for group), chatting with Xbox friends across devices, adjusting application volume, and more. You can access these features by clicking on their respective icons or buttons in game bar.
- How to Find Alternatives to Game Bar
-
While game bar is a convenient and useful tool for Windows 10 gamers, it might not suit everyone's needs or preferences. If you are looking for alternatives to game bar, you have plenty of options to choose from. Here are some of them:
-
-
-Name | Description | Pros | Cons
-OBS Studio | A free and open source software for video recording and live streaming. | Supports multiple sources and scenes; offers advanced settings and features; compatible with various platforms and services | Has a steep learning curve; requires more system resources; may cause performance issues
-NVIDIA GeForce Experience | A software that optimizes your PC for gaming and enables you to capture and share your gameplay with NVIDIA ShadowPlay. | Easy to use and configure; supports high-quality recording and streaming; has a minimal impact on performance | Only works with NVIDIA graphics cards; may have compatibility issues with some games; has limited customization options
-Fraps | A software that can capture screenshots, videos, and audio of your gameplay. | Simple and lightweight; supports high-resolution recording; shows FPS (frames per second) counter | Not free for full version; produces large file sizes; does not support streaming
-Bandicam | A software that can record your screen, game, or webcam. | Supports various formats and codecs; allows you to draw, add text, or use a chroma key while recording; has a built-in video editor | Not free for full version; has a watermark on the output; may cause lag or stuttering
-XSplit Gamecaster | A software that lets you record, stream, or edit your gameplay. | Has a user-friendly interface; supports multiple streaming platforms and chat integrations; offers a lot of customization options | Not free for full version; requires an account to use; may affect performance or quality
-
-
- Conclusion
- Game bar is a handy tool that comes with Windows 10 and allows you to access various widgets for gaming activities without leaving your game. You can download it from the Microsoft Store, enable it for the game or app you want to record or stream, configure its settings, and use its features such as screen capture, performance monitor, Spotify integration, and more. Game bar can also improve your system performance and reduce interruptions by enabling game mode. However, if you are not satisfied with game bar, you can also try other alternatives such as OBS Studio, NVIDIA GeForce Experience, Fraps, Bandicam, or XSplit Gamecaster. Whatever you choose, we hope you enjoy your gaming experience on Windows 10.
- FAQs
- Here are some frequently asked questions about game bar and their answers:
- Q: How do I turn off game bar?
- A: If you want to turn off game bar completely, you can do so by following these steps:
-
- - Press Windows + G to open game bar.
- - Click on the gear icon to open the settings.
- - Under the General tab, uncheck the box that says "Enable Xbox Game Bar for things like recording game clips, chatting with friends, and receiving game invites".
- - Click on "Done" to save your changes.
- - You can also disable specific keyboard shortcuts or widgets from the settings.
-
- Q: How do I edit my game bar captures?
- A: If you want to edit your game bar captures, you can do so by using the built-in video editor in the Photos app. Here's how:
-
- - Open the Photos app on your Windows 10 PC.
- - Click on the "Video Editor" button at the top right corner.
- - Select "New video project" and give it a name.
- - Click on "Add" and choose "From this PC".
- - Browse to the Captures folder under Videos in File Explorer and select the capture you want to edit.
- - Drag and drop the capture to the storyboard at the bottom.
- - You can then trim, split, rotate, add text, filters, music, and more to your capture.
-- When you are done, click on "Finish video" and choose the quality and location to save your edited video.
-
- Q: How do I share my game bar captures?
- A: If you want to share your game bar captures, you can do so by using the Share button in game bar or the Photos app. Here's how:
-
- - Press Windows + G to open game bar.
- - Click on the "Show all captures" button to see your recent captures.
- - Select the capture you want to share and click on the Share button at the bottom right corner.
- - Choose the app or service you want to share your capture with, such as Mail, Twitter, Facebook, etc.
- - Alternatively, you can also open the Photos app and select the capture you want to share. Then click on the Share button at the top right corner and follow the same steps.
-
- Q: How do I delete my game bar captures?
- A: If you want to delete your game bar captures, you can do so by using the Delete button in game bar or the Photos app. Here's how:
-
- - Press Windows + G to open game bar.
- - Click on the "Show all captures" button to see your recent captures.
- - Select the capture you want to delete and click on the Delete button at the bottom left corner.
- - Confirm that you want to delete the capture by clicking on "Delete" again.
- - Alternatively, you can also open the Photos app and select the capture you want to delete. Then click on the Delete button at the top right corner and confirm your action.
-
- Q: How do I fix game bar not working?
- A: If you encounter any problems with game bar not working, such as not opening, not recording, not showing widgets, etc., you can try some of these solutions:
-
- - Make sure that game bar is enabled for the game or app you are using. Press Windows + G and click on the prompt to enable game bar if it appears.
- - Make sure that your Windows 10 is updated to the latest version. Go to Settings > Update & Security > Windows Update and check for updates.
- - Make sure that your drivers are updated, especially your graphics card driver. Go to Device Manager > Display adapters and right-click on your graphics card. Then select Update driver and follow the instructions.
- - Make sure that your antivirus or firewall is not blocking game bar. Add game bar as an exception or disable your antivirus or firewall temporarily.
- - Reset game bar settings to default. Go to Settings > Gaming > Xbox Game Bar and click on "Reset" under "Reset Game Bar".
-
-If none of these solutions work, you can also contact Microsoft support or visit their forums for more help.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h
deleted file mode 100644
index ad1311a78f61303616504eb991aaa9c4a93d9948..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h
+++ /dev/null
@@ -1,33 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-namespace groundingdino {
-
-at::Tensor ms_deform_attn_cuda_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step);
-
-std::vector<at::Tensor> ms_deform_attn_cuda_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step);
-
-} // namespace groundingdino
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/dns.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/dns.d.ts
deleted file mode 100644
index 305367b81d17a30d1a914cda62fdaf25acf3567e..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/dns.d.ts
+++ /dev/null
@@ -1,659 +0,0 @@
-/**
- * The `dns` module enables name resolution. For example, use it to look up IP
- * addresses of host names.
- *
- * Although named for the [Domain Name System (DNS)](https://en.wikipedia.org/wiki/Domain_Name_System), it does not always use the
- * DNS protocol for lookups. {@link lookup} uses the operating system
- * facilities to perform name resolution. It may not need to perform any network
- * communication. To perform name resolution the way other applications on the same
- * system do, use {@link lookup}.
- *
- * ```js
- * const dns = require('dns');
- *
- * dns.lookup('example.org', (err, address, family) => {
- * console.log('address: %j family: IPv%s', address, family);
- * });
- * // address: "93.184.216.34" family: IPv4
- * ```
- *
- * All other functions in the `dns` module connect to an actual DNS server to
- * perform name resolution. They will always use the network to perform DNS
- * queries. These functions do not use the same set of configuration files used by {@link lookup} (e.g. `/etc/hosts`). Use these functions to always perform
- * DNS queries, bypassing other name-resolution facilities.
- *
- * ```js
- * const dns = require('dns');
- *
- * dns.resolve4('archive.org', (err, addresses) => {
- * if (err) throw err;
- *
- * console.log(`addresses: ${JSON.stringify(addresses)}`);
- *
- * addresses.forEach((a) => {
- * dns.reverse(a, (err, hostnames) => {
- * if (err) {
- * throw err;
- * }
- * console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
- * });
- * });
- * });
- * ```
- *
- * See the `Implementation considerations section` for more information.
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/dns.js)
- */
-declare module 'dns' {
- import * as dnsPromises from 'node:dns/promises';
- // Supported getaddrinfo flags.
- export const ADDRCONFIG: number;
- export const V4MAPPED: number;
- /**
- * If `dns.V4MAPPED` is specified, return resolved IPv6 addresses as
- * well as IPv4 mapped IPv6 addresses.
- */
- export const ALL: number;
- export interface LookupOptions {
- family?: number | undefined;
- hints?: number | undefined;
- all?: boolean | undefined;
- /**
- * @default true
- */
- verbatim?: boolean | undefined;
- }
- export interface LookupOneOptions extends LookupOptions {
- all?: false | undefined;
- }
- export interface LookupAllOptions extends LookupOptions {
- all: true;
- }
- export interface LookupAddress {
- address: string;
- family: number;
- }
- /**
- * Resolves a host name (e.g. `'nodejs.org'`) into the first found A (IPv4) or
- * AAAA (IPv6) record. All `option` properties are optional. If `options` is an
- * integer, then it must be `4` or `6` – if `options` is not provided, then IPv4
- * and IPv6 addresses are both returned if found.
- *
- * With the `all` option set to `true`, the arguments for `callback` change to`(err, addresses)`, with `addresses` being an array of objects with the
- * properties `address` and `family`.
- *
- * On error, `err` is an `Error` object, where `err.code` is the error code.
- * Keep in mind that `err.code` will be set to `'ENOTFOUND'` not only when
- * the host name does not exist but also when the lookup fails in other ways
- * such as no available file descriptors.
- *
- * `dns.lookup()` does not necessarily have anything to do with the DNS protocol.
- * The implementation uses an operating system facility that can associate names
- * with addresses, and vice versa. This implementation can have subtle but
- * important consequences on the behavior of any Node.js program. Please take some
- * time to consult the `Implementation considerations section` before using`dns.lookup()`.
- *
- * Example usage:
- *
- * ```js
- * const dns = require('dns');
- * const options = {
- * family: 6,
- * hints: dns.ADDRCONFIG | dns.V4MAPPED,
- * };
- * dns.lookup('example.com', options, (err, address, family) =>
- * console.log('address: %j family: IPv%s', address, family));
- * // address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6
- *
- * // When options.all is true, the result will be an Array.
- * options.all = true;
- * dns.lookup('example.com', options, (err, addresses) =>
- * console.log('addresses: %j', addresses));
- * // addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}]
- * ```
- *
- * If this method is invoked as its `util.promisify()` ed version, and `all`is not set to `true`, it returns a `Promise` for an `Object` with `address` and`family` properties.
- * @since v0.1.90
- */
- export function lookup(hostname: string, family: number, callback: (err: NodeJS.ErrnoException | null, address: string, family: number) => void): void;
- export function lookup(hostname: string, options: LookupOneOptions, callback: (err: NodeJS.ErrnoException | null, address: string, family: number) => void): void;
- export function lookup(hostname: string, options: LookupAllOptions, callback: (err: NodeJS.ErrnoException | null, addresses: LookupAddress[]) => void): void;
- export function lookup(hostname: string, options: LookupOptions, callback: (err: NodeJS.ErrnoException | null, address: string | LookupAddress[], family: number) => void): void;
- export function lookup(hostname: string, callback: (err: NodeJS.ErrnoException | null, address: string, family: number) => void): void;
- export namespace lookup {
-        function __promisify__(hostname: string, options: LookupAllOptions): Promise<LookupAddress[]>;
-        function __promisify__(hostname: string, options?: LookupOneOptions | number): Promise<LookupAddress>;
-        function __promisify__(hostname: string, options: LookupOptions): Promise<LookupAddress | LookupAddress[]>;
- }
- /**
- * Resolves the given `address` and `port` into a host name and service using
- * the operating system's underlying `getnameinfo` implementation.
- *
- * If `address` is not a valid IP address, a `TypeError` will be thrown.
- * The `port` will be coerced to a number. If it is not a legal port, a `TypeError`will be thrown.
- *
- * On an error, `err` is an `Error` object, where `err.code` is the error code.
- *
- * ```js
- * const dns = require('dns');
- * dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
- * console.log(hostname, service);
- * // Prints: localhost ssh
- * });
- * ```
- *
- * If this method is invoked as its `util.promisify()` ed version, it returns a`Promise` for an `Object` with `hostname` and `service` properties.
- * @since v0.11.14
- */
- export function lookupService(address: string, port: number, callback: (err: NodeJS.ErrnoException | null, hostname: string, service: string) => void): void;
- export namespace lookupService {
- function __promisify__(
- address: string,
- port: number
- ): Promise<{
- hostname: string;
- service: string;
- }>;
- }
- export interface ResolveOptions {
- ttl: boolean;
- }
- export interface ResolveWithTtlOptions extends ResolveOptions {
- ttl: true;
- }
- export interface RecordWithTtl {
- address: string;
- ttl: number;
- }
- /** @deprecated Use `AnyARecord` or `AnyAaaaRecord` instead. */
- export type AnyRecordWithTtl = AnyARecord | AnyAaaaRecord;
- export interface AnyARecord extends RecordWithTtl {
- type: 'A';
- }
- export interface AnyAaaaRecord extends RecordWithTtl {
- type: 'AAAA';
- }
- export interface CaaRecord {
- critial: number;
- issue?: string | undefined;
- issuewild?: string | undefined;
- iodef?: string | undefined;
- contactemail?: string | undefined;
- contactphone?: string | undefined;
- }
- export interface MxRecord {
- priority: number;
- exchange: string;
- }
- export interface AnyMxRecord extends MxRecord {
- type: 'MX';
- }
- export interface NaptrRecord {
- flags: string;
- service: string;
- regexp: string;
- replacement: string;
- order: number;
- preference: number;
- }
- export interface AnyNaptrRecord extends NaptrRecord {
- type: 'NAPTR';
- }
- export interface SoaRecord {
- nsname: string;
- hostmaster: string;
- serial: number;
- refresh: number;
- retry: number;
- expire: number;
- minttl: number;
- }
- export interface AnySoaRecord extends SoaRecord {
- type: 'SOA';
- }
- export interface SrvRecord {
- priority: number;
- weight: number;
- port: number;
- name: string;
- }
- export interface AnySrvRecord extends SrvRecord {
- type: 'SRV';
- }
- export interface AnyTxtRecord {
- type: 'TXT';
- entries: string[];
- }
- export interface AnyNsRecord {
- type: 'NS';
- value: string;
- }
- export interface AnyPtrRecord {
- type: 'PTR';
- value: string;
- }
- export interface AnyCnameRecord {
- type: 'CNAME';
- value: string;
- }
- export type AnyRecord = AnyARecord | AnyAaaaRecord | AnyCnameRecord | AnyMxRecord | AnyNaptrRecord | AnyNsRecord | AnyPtrRecord | AnySoaRecord | AnySrvRecord | AnyTxtRecord;
- /**
- * Uses the DNS protocol to resolve a host name (e.g. `'nodejs.org'`) into an array
- * of the resource records. The `callback` function has arguments`(err, records)`. When successful, `records` will be an array of resource
- * records. The type and structure of individual results varies based on `rrtype`:
- *
- *
- *
- * On error, `err` is an `Error` object, where `err.code` is one of the `DNS error codes`.
- * @since v0.1.27
- * @param hostname Host name to resolve.
- * @param [rrtype='A'] Resource record type.
- */
- export function resolve(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve(hostname: string, rrtype: 'A', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve(hostname: string, rrtype: 'AAAA', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve(hostname: string, rrtype: 'ANY', callback: (err: NodeJS.ErrnoException | null, addresses: AnyRecord[]) => void): void;
- export function resolve(hostname: string, rrtype: 'CNAME', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve(hostname: string, rrtype: 'MX', callback: (err: NodeJS.ErrnoException | null, addresses: MxRecord[]) => void): void;
- export function resolve(hostname: string, rrtype: 'NAPTR', callback: (err: NodeJS.ErrnoException | null, addresses: NaptrRecord[]) => void): void;
- export function resolve(hostname: string, rrtype: 'NS', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve(hostname: string, rrtype: 'PTR', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve(hostname: string, rrtype: 'SOA', callback: (err: NodeJS.ErrnoException | null, addresses: SoaRecord) => void): void;
- export function resolve(hostname: string, rrtype: 'SRV', callback: (err: NodeJS.ErrnoException | null, addresses: SrvRecord[]) => void): void;
- export function resolve(hostname: string, rrtype: 'TXT', callback: (err: NodeJS.ErrnoException | null, addresses: string[][]) => void): void;
- export function resolve(
- hostname: string,
- rrtype: string,
- callback: (err: NodeJS.ErrnoException | null, addresses: string[] | MxRecord[] | NaptrRecord[] | SoaRecord | SrvRecord[] | string[][] | AnyRecord[]) => void
- ): void;
- export namespace resolve {
-        function __promisify__(hostname: string, rrtype?: 'A' | 'AAAA' | 'CNAME' | 'NS' | 'PTR'): Promise<string[]>;
-        function __promisify__(hostname: string, rrtype: 'ANY'): Promise<AnyRecord[]>;
-        function __promisify__(hostname: string, rrtype: 'MX'): Promise<MxRecord[]>;
-        function __promisify__(hostname: string, rrtype: 'NAPTR'): Promise<NaptrRecord[]>;
-        function __promisify__(hostname: string, rrtype: 'SOA'): Promise<SoaRecord>;
-        function __promisify__(hostname: string, rrtype: 'SRV'): Promise<SrvRecord[]>;
-        function __promisify__(hostname: string, rrtype: 'TXT'): Promise<string[][]>;
-        function __promisify__(hostname: string, rrtype: string): Promise<string[] | MxRecord[] | NaptrRecord[] | SoaRecord | SrvRecord[] | string[][] | AnyRecord[]>;
- }
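-    // Example usage (a minimal sketch, not part of the upstream typings; the host
-    // name and record values are illustrative):
-    //
-    //   import { resolve } from 'dns';
-    //   resolve('example.org', 'MX', (err, records) => {
-    //       if (err) throw err;
-    //       // `records` is MxRecord[], e.g. [{ priority: 10, exchange: 'mx.example.org' }]
-    //       records.forEach((r) => console.log(r.priority, r.exchange));
-    //   });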
- /**
- * Uses the DNS protocol to resolve a IPv4 addresses (`A` records) for the`hostname`. The `addresses` argument passed to the `callback` function
- * will contain an array of IPv4 addresses (e.g.`['74.125.79.104', '74.125.79.105', '74.125.79.106']`).
- * @since v0.1.16
- * @param hostname Host name to resolve.
- */
- export function resolve4(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void;
- export function resolve4(hostname: string, options: ResolveWithTtlOptions, callback: (err: NodeJS.ErrnoException | null, addresses: RecordWithTtl[]) => void): void;
- export function resolve4(hostname: string, options: ResolveOptions, callback: (err: NodeJS.ErrnoException | null, addresses: string[] | RecordWithTtl[]) => void): void;
- export namespace resolve4 {
-        function __promisify__(hostname: string): Promise<string[]>;
- function __promisify__(hostname: string, options: ResolveWithTtlOptions): Promise