Dataset Viewer (auto-converted to Parquet)

Columns:
- dataset: string class label (45 distinct values)
- id: string (length 17 to 63 characters)
- messages: list (always exactly 2 items: a user prompt followed by an assistant response)
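Since the data is exported as Parquet, each record can be inspected directly with standard tooling. The sketch below is a minimal example, assuming the pandas library and a local file named `train.parquet`; the actual filename depends on the export, so substitute the Parquet file produced by the conversion.

```python
# Minimal sketch: load the auto-converted Parquet split and inspect one row.
# The path "train.parquet" is an assumption, not the guaranteed export name.
import pandas as pd

df = pd.read_parquet("train.parquet")  # hypothetical path

# Each row holds a source dataset name, a record id, and a two-turn
# user/assistant message list, matching the schema above.
row = df.iloc[0]
print(row["dataset"])            # e.g. "science.pico_ner"
print(row["id"])                 # e.g. "science.pico_ner.3244"
for message in row["messages"]:  # list of {"content": ..., "role": ...} dicts
    print(message["role"], ":", message["content"][:80])
```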
science.pico_ner
science.pico_ner.3244
[ { "content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nIntramedullary repair device against volar plating in the reconstruction of extra-articular and simple articular distal radius fractures ; a randomized pilot study . PURPOSE This prospective randomized pilot study reports our institutional experience and early results using Sonoma Wrx ( Sonoma Orthopedic Products , Santa Rosa , CA ) in the treatment of extra-articular and simple intra-articular distal radius fractures . MATERIAL AND METHODS A total of 64 patients , were enrolled in the study . They were considered eligible if they had ; unstable extra-articular distal radius fractures and simple intra-articular distal radius fractures suitable for closed reduction ( AO types ; A2.2 , A2.3 , A3.1 C2.1 , C2.2 ) . Patients in group I received intramedullary fixation using the Sonoma Wrx device and patients in group II received standard volar locking plate fixation . Radiographic criteria of acceptable healing were used for evaluation . RESULTS Two groups were similar in terms of baseline characteristics . Mean time of operation was significantly shorter in Group 1 vs. in group 2 ( 36.81 ± 7.11 vs. 48.97 ± 5.9 minutes , p = 0.001 ) . Time to healing of the fracture was not different between two groups ( 5.45 ± 1.09 vs. 5.70 ± 1.04 weeks for Group 1 vs. 2 , respectively p = 0.36 ) . Overall complications occurred in 9 patients in group 1 and in 15 patients in group 2 ( p = 0.17 ) . Follow-up was completed in all patients with a median time of 12 months and 13 months in group 1 and 2 , respectively . On radiographic evaluation radial inclination , radial height and volar tilt were not significantly different between group 1 and 2 , respectively . There were no significant differences between two groups in regard to wrist rotational degrees measured in last follow-up visit . CONCLUSION Sonoma Wrx Device is reliable and effective in terms of achieving satisfactory outcomes in treatment of distal radius fractures . It may be reasonable to use this device to prevent complications that are related to extensive soft tissue dissection .", "role": "user" }, { "content": "{\"Intervention\": [\"intramedullary fixation using the Sonoma Wrx device\", \"standard volar locking plate fixation\", \".\", \"Intramedullary repair\"], \"Outcome\": [], \"Participant\": [\"64\", \"fractures suitable\"]}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.2459
[ { "content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nAs severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continues to disperse globally with worrisome speed, identifying amino acid variations in the virus could help to understand the characteristics of it. Here, we studied 489 SARS-CoV-2 genomes obtained from 32 countries from the Nextstrain database and performed phylogenetic tree analysis by clade, country, and genotype of the surface spike glycoprotein (S protein) at site 614. We found that virus strains from mainland China were mostly distributed in Clade B and Clade undefined in the phylogenetic tree, with very few found in Clade A. In contrast, Clades A2 (one case) and A2a (112 cases) predominantly contained strains from European regions. Moreover, Clades A2 and A2a differed significantly from those of mainland China in age of infected population (P = 0.0071, mean age 40.24 to 46.66), although such differences did not exist between the US and mainland China. Further analysis demonstrated that the variation of the S protein at site 614 (QHD43416.1: p.614D>G) was a characteristic of stains in Clades A2 and A2a. Importantly, this variation was predicted to have neutral or benign effects on the function of the S protein. In addition, global quality estimates and 3D protein structures tended to be different between the two S proteins. In summary, we identified different genomic epidemiology among SARS-CoV-2 strains in different clades, especially in an amino acid variation of the S protein at 614, revealing potential viral genome divergence in SARS-CoV-2 strains.", "role": "user" }, { "content": "[\"Mechanism\"]", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.2015
[ { "content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nLevels of PAH-DNA adducts in placental tissue and the risk of fetal neural tube defects in a Chinese population. We examined the relationship between PAH-DNA adduct levels in the placental tissue, measured by a highly sensitive (32)P-postlabeling assay, and the risk of fetal neural tube defects (NTDs). We further explored the interaction between PAH-DNA adducts and placental PAHs with respect to NTD risk. Placental tissues from 80 NTD-affected pregnancies and 50 uncomplicated normal pregnancies were included in this case-control study. Levels of PAH-DNA adducts were lower in the NTD group (8.12 per 10(8) nucleotides) compared to controls (9.92 per 10(8) nucleotides). PAH-DNA adduct concentrations below the median was associated with a 3-fold increased NTD risk. Women with a low PAH-DNA adduct level in concert with a high placental PAH level resulted in a 10-fold elevated risk of having an NTD-complicated pregnancy. A low level of placental PAH-DNA adducts was associated with an increased risk of NTDs; this risk increased dramatically when a low adduct level was coupled with a high placental PAH concentration.", "role": "user" }, { "content": "{\"Chemical\": [\"PAHs\", \"PAH\", \"(32)P\", \"nucleotides\"]}", "role": "assistant" } ]
science.pubmedqa_qa
science.pubmedqa_qa.426
[ { "content": "Analyze the given text and respond to the inquiry. Choose from: \"YES\", \"NO\", or \"MAYBE\".\n\nText:\n(OBJECTIVE) To evaluate the accuracy of ultrasonographic examination in boys with an undescended testis.\n(MATERIAL AND METHODS) All patients who were referred to the paediatric surgeon after detection of an undescended testis were evaluated prospectively between November 2001 and November 2004. Among these 377 patients, 87 were referred with an ultrasonogram previously prescribed by the referring primary physician. The results of the ultrasonogram were compared to the results of the clinical examination of the paediatric surgeon and, in cases of no palpable testis, to the surgical findings.\n(RESULTS) Ultrasonography did not detect the retractile testes. Ultrasonography detected 67% of the palpable undescended testes. In cases of no palpable testis, the ultrasonographic examination missed the abdominal testes and sometimes other structures were falsely interpreted as a testis.\n\nInquiry:\nIs there any interest to perform ultrasonography in boys with undescended testis?\n\nResponse:", "role": "user" }, { "content": "NO", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.1387
[ { "content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a linear classifier with the Softmax function.\n\nKeywords: classification image\n\nDataset Information:\nMNIST-1D: A minimalist, low-memory, and low-compute alternative to classic deep learning benchmarks. The training examples are 20 times smaller than MNIST examples yet they differentiate more clearly between linear, nonlinear, and convolutional models which attain 32, 68, and 94% accuracy respectively (these models obtain 94, 99+, and 99+% on MNIST).\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\nI-HAZE: The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) training. It has 5 hazy images for validation along with their corresponding ground truth images.\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\n\nAvailable Datasets: A: MNIST-1D B: ConvAI2 C: I-HAZE D: CIFAR-10", "role": "user" }, { "content": "CIFAR-10", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.1702
[ { "content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nEffects of reducing insemination time in human in vitro fertilization and embryo development by using sibling oocytes.\nReport Abstract 1:\nRecent studies showed a beneficial effect of reducing the time of sperm-oocyte interaction on fertilization, division, and implantation rates of the oocytes obtained from randomized patients. In the present study, the effects of reduced insemination time on fertilization and embryo development were evaluated by using sibling oocytes from the same patient.\n A total of 464 oocytes from 36 patients was randomly allocated to be inseminated for either 1 hr (reduced) or 18 hr (regular).\n Fertilization rates were not significantly different between reduced (135/229; 59%) and regular (150/235; 64%) groups. Cleavage rates and embryo quality were similar in both groups. A total of 135 embryos (73 from the reduced and 62 from the regular group) was transferred to 36 patients. Thirty-four embryos implanted in 18 patients (25.2% implantation and 50.0% pregnancy rates).\n Fertilization, cleavage, and embryo development from 1-hr insemination is comparable, not superior, to those from an 18-hr insemination time, which is commonly used in in vitro fertilization programs. These data suggest that reduced insemination time can be used during in vitro fertilization to avoid unnecessarily longer exposure to spermatozoa.\n\n\nReport Title 2:\nProlonged sperm-oocyte exposure and high sperm concentration affect human embryo viability and pregnancy rate.\nReport Abstract 2:\nA reduced time interval of oocyte exposure to spermatozoa was investigated to assess whether it could enhance oocyte development and improve embryo viability, especially in cases of male factor infertility. A total of 167 patients were included in a prospective randomized study. They were randomly allocated to two major study groups, A (n = 85) and B (control group; n = 82). The oocytes from group A patients were exposed to spermatozoa for only 1 h; those from group B were exposed for 16 h. The two study groups were then subdivided according to semen quality for further analysis of the results. Significantly higher percentages were obtained in group A than in group B in terms of the fertilization rate (74 versus 68%, P < 0.025), cleavage rate (53 versus 41%, P < 0.005), pregnancy rate (27 versus 12%, P < 0.05) and implantation rate (11 versus 6%, P < 0.05). In addition, an increased fertilization rate was achieved in oocytes exposed to male factor spermatozoa for only 1 h compared with the conventional incubation period (78 versus 65%, P < 0.01). Advanced cellular stages (55 versus 41%, P < 0.02) and higher implantation rates (13 versus 4%, P < 0.05) were attained in the subgroup whose oocytes were exposed to normal spermatozoa for 1 h compared with the male factor spermatozoa with the standard culture interval. 
The higher fertilization rates, enhanced embryo development and viability achieved in group A indicate that prolonged exposure of oocytes to high concentrations of spermatozoa is detrimental, decreasing sperm-oocyte interaction and subsequent embryo implantation, particularly in male factor patients.\n\n\nReport Title 3:\nShortened exposure of oocytes to spermatozoa improves in-vitro fertilization outcome: a prospective, randomized, controlled study.\nReport Abstract 3:\nA prospective, randomized study of 158 patients undergoing in-vitro fertilization (IVF) and embryo transfer was conducted to evaluate whether a shortened exposure of oocytes to spermatozoa enhances oocyte development, and subsequently influences the IVF outcome. A comparison was made between conventional treatment time and shorter exposure of retrieved oocytes to spermatozoa. Fertilization and cleavage rates, embryo quality, implantation and pregnancy rates in the study group (short exposure) versus controls (standard IVF procedure) were evaluated. Fertilization (56 versus 61%) and cleavage rates (96 versus 92%) were similar in the two groups respectively. However, embryo quality was significantly higher in the study group (P < 0.05). Moreover, the pregnancy and implantation rates were significantly increased (42.4 versus 26% per embryo transfer, and 16 versus 10% respectively; P < 0.05). Our results demonstrated that shorter exposure of oocytes to spermatozoa is superior to the standard time in IVF and may have a favourable effect on implantation rates by improving embryo quality.\n\n\nReport Title 4:\nA prospective randomized controlled study of the effect of short coincubation of gametes during insemination on zona pellucida thickness.\nReport Abstract 4:\nA prospective, randomized study was conducted to evaluate the thickness, of zona pellucida (ZP) after brief or standard exposure of human oocytes to spermatozoa, and to determine the correlation between ZP thickness, fertilization rate and embryo quality. The mean ZP thickness 48 h after insemination was found to be significantly less in fertilized oocytes than in non-fertilized oocytes in all treated groups (13.72 +/- 3.0 microns and 15.08 +/- 2.5 microns, respectively; p < 0.007). Zona pellucida thickness correlated positively with embryo quality. Brief exposure of gametes was found to influence ZP thickness. The ZP was significantly thinner after brief and intracytoplasmic sperm injection (ICSI) exposure of oocytes to spermatozoa than after standard in vitro fertilization (IVF). The mean ZP thickness 24 and 48 h after fertilization was significantly greater in standard IVF (16.43 +/- 2.8 microns and 15.22 +/- 2.7 microns, respectively) than in either the brief exposure or ICSI groups (12.78 +/- 2.4 microns and 13.01 +/- 3.5 microns vs. 13.46 +/- 2.2 microns and 13.16 +/- 2.4 microns; p < 0.0001).\n\n\nReport Title 5:\nThe effects of brief gamete co-incubation in human in vitro fertilization.\nReport Abstract 5:\nTo investigate whether very short exposure of mature oocytes to sperm in vitro may affect the fertilization rates, embryo cleavage rates, and embryo quality between sibling oocytes in the same patient.\n Sibling oocytes of the same patient from 23 oocyte collection cycles were randomly allocated to the study group, with a 1-hr or 3-hr sperm-oocyte incubation, or the control group with the standard overnight gamete co-incubation. 
The fertilization rates, cleavage rates, and subsequent embryo quality were evaluated.\n Our results showed no statistically significant differences in fertilization rates, embryo cleavage rates, and quality of the embryos between the study group and the control group.\n Since the present study showed that long exposure of the oocyte to sperm has no advantage over short exposure, we prefer shortening the oocyte-sperm incubation period for reducing the negative effect induced by nonphysiologically high concentrations of spermatozoa.\n\n\nReport Title 6:\nShort coincubation of gametes in in vitro fertilization improves implantation and pregnancy rates: a prospective, randomized, controlled study.\nReport Abstract 6:\nTo examine the benefits of short coincubation of gametes compared to prolonged incubation in in vitro fertilization (IVF).\n Prospective randomized controlled study.\n Hospital-based fertility center.\n One hundred thirty patients (group 1) and 129 patients (group 2).\n Oocytes from group 1 were exposed to spermatozoa for 2 hours, and oocytes from group 2 were exposed to spermatozoa for 20 hours.\n Fertilization and cleavage rates, embryo quality, and pregnancy and implantation rates were evaluated. Estradiol (E(2)) and progesterone (P(4)) levels were measured in the wells of culture dishes after 2-hour exposure of oocytes/zygotes to spermatozoa in group 1 and after 20-hour exposure in both the groups.\n There was no difference between the two groups in the fertilization rate and the number of embryos obtained. However, the proportion of grade 1 embryos was significantly higher among group 1 compared to group 2 patients. Clinical pregnancy and implantation rates were significantly higher among group 1 compared to group 2. The significantly higher levels of E(2) and P(4) in the 20-hour cultures compared with the 2-hour cultures may have been detrimental to embryo quality, pregnancy, and implantation rates.\n Coincubation of gametes for 2 hours with standard or high insemination concentrations of spermatozoa significantly improved embryo quality and the pregnancy and implantation rates compared with overnight incubation of gametes.\n\n\nReport Title 7:\nExtending the coincubation time of gametes improves in vitro fertilization.\nReport Abstract 7:\nnan", "role": "user" }, { "content": "This review has provided evidence that brief co-incubation of sperm and oocytes may improve the ongoing pregnancy and clinical pregnancy rates for infertile women undergoing IVF cycles. More RCTs are required to assess whether brief co-incubation would contribute to a higher live birth rate and a lower miscarriage rate compared to the standard overnight insemination protocol.", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.4494
[ { "content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nAs for rs846910, a haplotype containing the A allele has been previously shown to be associated with higher HSD11B1 mRNA levels and activity in the adipose tissue of southern European whites (Gambineri et al., 2011).", "role": "user" }, { "content": "Result", "role": "assistant" } ]
science.pico_ner
science.pico_ner.1660
[ { "content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nInfluence of infusion time on the acute toxicity of amphotericin B : results of a randomized double-blind study .", "role": "user" }, { "content": "{\"Intervention\": [\"amphotericin B\"], \"Outcome\": [\"acute toxicity of amphotericin B :\"], \"Participant\": [\"acute toxicity of amphotericin B\", \"infusion time\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.526
[ { "content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nText-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.\n\nCited Works Summary:\n[1]: Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design.\n\n[2]: Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. 
Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.\n\n[3]: Text entry rates are explored for several variations of soft keyboards. We present a model to predict novice and expert entry rates and present an empirical test with 24 subjects. Six keyboards were examined: the Qwerty, ABC, Dvorak, Fitaly, JustType, and telephone. At 8-10 wpm, novice predictions are low for all layouts because the dominant factor is the visual scan time, rather than the movement time. Expert predictions are in the range of 22-56 wpm, although these were not tested empirically. In a quick, novice test with a representative phrase of text, subjects achieved rates of 20.2 wpm (Qwerty), 10.7 wpm (ABC), 8.5 wpm (Dvorak), 8.0 wpm (Fitaly), 7.0 wpm (JustType), and 8.0 wpm (telephone). The Qwerty rate of 20.2 wpm is consistent with observations in other studies. The relatively high rate for Qwerty suggests that there is skill transfer from users' familiarity with desktop computers to the stylus tapping task.\n\n[4]: On a touchscreen keyboard, it can be difficult to continuously type without frequently looking at the keys. One factor contributing to this difficulty is called hand drift, where a user's hands gradually misalign with the touchscreen keyboard due to limited tactile feedback. Although intuitive, there remains a lack of empirical data to describe the effect of hand drift. A formal understanding of it can provide insights for improving soft keyboards. To formally quantify the degree (magnitude and direction) of hand drift, we conducted a 3-session study with 13 participants. We measured hand drift with two typing interfaces: a visible conventional keyboard and an invisible adaptive keyboard. To expose drift patterns, both keyboards used relaxed letter disambiguation to allow for unconstrained movement. Findings show that hand drift occurred in both interfaces, at an average rate of 0.25mm min on the conventional keyboard and 1.32mm min on the adaptive keyboard. Participants were also more likely to drift up and or left instead of down or right.\n\n[5]: Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen. Our hypothesis is that users can transfer their typing ability obtained from visible keyboards to eyes-free use. We propose two statistical decoding algorithms to infer users’ eyes-free input: the absolute algorithm and the relative algorithm. 
The absolute algorithm infers user input based on the absolute position of touch endpoints, while the relative algorithm infers based on the vectors between successive touch endpoints. Evaluation results showed users could achieve satisfying performance with both algorithms. Text entry rate was 17-23 WPM (words per minute) depending on the algorithm used. In comparison, a baseline cursor-based text entry method yielded only 7.66 WPM. In conclusion, our research demonstrates for the first time the feasibility of thumb-based eyes-free typing, which provides a new possibility for text entry on ubiquitous computing platforms such as smart TVs and HMDs.", "role": "user" }, { "content": "A number of works have attempted to understand the user behavior with ten finger typing with soft keyboards. The major findings are as follows: : The typing speed with soft keyboards drops dramatically compared to the speed with physical keyboards [3] . : The distribution of touch points resembles the mechanical keyboard layout [1] , though the distribution varies over time in shape and size [2] . : Hand-drift occurs over time and becomes stronger for invisible keyboards [4] . : Various factors cause tap variability among users [5] . The factors include finger volume, hand posture and mobility.", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.7725
[ { "content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nchoriocarcinoma is a highly malignant gestational trophoblastic disease associated with high urine and serum -hcg levels .\nto date , few studies have reported pulmonary embolism ( pe ) and pulmonary hypertension caused by choriocarcinoma of the pulmonary artery . here , we report a young female patient with choriocarcinoma of the pulmonary artery who was clinically misdiagnosed with pulmonary thromboembolism ( pte ) and died of acute cardiopulmonary failure very soon after embolectomy .\na 25 year - old female patient first experienced an episode of sudden dyspnea and frequent coughing after routine physical activity .\nshe was initially diagnosed at a local hospital with acute pneumonia based on a chest x - ray , which showed several small , scattered radio - opaque shadows on both lungs .\nsix months later , the patient was referred to our hospital because of severe orthopnea . upon arrival\n, arterial blood gas analysis showed that the values of ph , paco2 , and pao2 were 7.502 , 26.3 mmhg , and 50 mmhg , respectively , under 2 l / minute oxygen inhalation .\nan emergent contrast - enhanced ct scan showed occlusion of the left pulmonary artery and right inferior pulmonary artery ( fig .\nthe scan further revealed multiple patchy , cord - like , nodular fuzzy shadows in bilateral lungs ( fig .\nmeanwhile , a diagnosis of pe was made based on echocardiographs , which revealed severe pulmonary hypertension ( pulmonary systolic pressure , 105 mmhg ) and mild tricuspid regurgitation .\nalthough her d - dimer test value was only 0.27 g / ml , a pulmonary angiograph was still performed via the right femoral vein , which showed complete occlusion of the left pulmonary artery and right inferior pulmonary artery .\na : ct scan demonstrates occlusion of the left pulmonary artery and right inferior pulmonary artery .\nb : ct scan shows multiple patchy , cordlike , nodular fuzzy shadows in bilateral lungs .\nthe patient claimed that she was a non - smoker , did not use oral contraception , and had experienced three previous spontaneous abortions ( the most recent occurring one year previously ) .\nthe patient had spent more than 10 hours playing card games every day during the past four years .\nan angiographic procedure for direct injection of 400,000 iu urokinase failed to dissolve the emboli . since her pe was believed to have derived from deep venous thrombi ( sedentary clinical history ) , a filter was also placed in the inferior vena cava .\nthe condition of the patient continued to deteriorate with progressive tachypnea , cyanosis , non - productive cough , and hemoptysis ; therefore , a pulmonary embolectomy was carried out using a cardiopulmonary bypass procedure .\ncomplete obstruction of the openings of the left pulmonary artery and right inferior pulmonary artery by the yellow - pink soft emboli was observed .\nthe emboli were completely removed to re - establish blood flow from the bronchial artery . at the time of re - warming\nprior to decannulation , the pulmonary artery systolic pressure ranged from 90 - 110 mmhg and surpassed the aortic systolic pressure .\nthe hemodynamic situation was so unstable that a 5 mm hole was made in the atrial septum to mitigate the condition .\nthe patient was then transferred to the cardio - thoracic intensive care unit . 
in spite of intensive efforts ,\nthe patient ultimately succumbed to cardiopulmonary failure and acute renal failure approximately 14 hours after embolectomy .\npathologic evaluation of pulmonary emboli showed macroscopic fragments of yellow - pink tissue measuring 4 cm at the greatest dimension ( fig .\npost - mortem histological evaluation showed that the viable tumor constituted only a thin peripheral rim with a central area of necrosis and focal hemorrhage ( fig .\nthe tumor cells consisted of an intimate amalgamation of multinucleate syncytiotrophoblasts , mononucleate cytotrophoblasts , and intermediate trophoblasts .\nthere was considerable cytological atypia in the trophoblasts with hyperchromatic nuclei and abnormal mitotic figures ( fig .\nlack of villi and new blood vessel formation comprised the unique morphological features consistent with choriocarcinoma .\nimmunohistochemical staining demonstrated strong reactivity for hcg in syncytiotrophoblasts ( fig . 2c ) and strong cytokeratin positivity was observed in all the tumor cells ( fig .\nthe diagnosis of choriocarcinoma embolism was confirmed . a : macroscopic examination of the yellow - pink soft embolus obtained during pulmonary embolectomy .\nb : tumor characterized by atypical trophoblasts with polymorphic and hyperchromatic nuclei as well as abnormal mitotic figures .\nchoriocarcinoma of the pulmonary artery has been described as a rare cause of pe and pulmonary artery hypertension ( pah ) .\nsuch cases represent a potentially fatal emergency due to cardiopulmonary failure . however , the diagnosis is difficult to establish because of the tendency of these tumors to present characteristics of more common diseases such as pte .\nalmost all choriocarcinomas of the pulmonary artery present with pe , and pah occurs after nonmolar abortions ( mostly spontaneous abortions ) and rarely after normal pregnancies .\nthe low incidence of choriocarcinoma of the pulmonary artery and the long latency after a live birth or nonmolar abortion make early diagnosis of this condition very challenging .\nbagshawe first reported choriocarcinoma of the pulmonary arteries , identified at necropsy in patients who died of pah in 1959 .\nhe suggested that choriocarcinoma emboli could be the result of a progressive neoplastic obstruction of pulmonary arteries leading to pah and respiratory distress .\nin addition to the case reported here , we reviewed 10 cases of pulmonary choriocarcinoma presenting with pe and pah reported between 1991 and 2008 ; the findings of these 11 cases are summarized in table 1 .\nalmost all of the patients ( 10/11 ) had a history of abortion with latency periods ranging from two months to six years and no primary uterine tumor identified . among these cases ,\nseven ( 7/10 ) were spontaneous abortion and , only one patient ( 1/11 ) a normal delivery three years previously .\nthe most important risk factor , a history of hydatidiform mole , was absent from the cases included in our review ; almost all cases occur after a spontaneous abortion and rarely after normal full term pregnancy . owing to regular check - ups for potential malignant sequelae after hydatidiform molar pregnancies , few of these patients present with advanced disease , such as pulmonary hypertension\n. 
therefore , almost all trophoblastic tumors that occur after a normal pregnancy are choriocarcinomas .\ne : etoposide ; m : methotrexate ; a : actinomycin d / dactinomycin ; v : vincristine ; cyc : cyclophosphamide ; car : carboplatin ; cis : cisplatin ; aned : alive with no evidence of disease ; dod : dead of disease ; na : not available ; pe : pulmonary embolism ; pte : pulmonary thromboembolism ; mcc : metastatic choriocarcinoma ; el : embolectomy .\n. almost all results of pelvic ultrasound and gynecologic examination of the uteri and ovaries were negative , even by autopsy .\nrare cases of pe due to choriocarcinoma in women occurring many years after hysterectomy have been reported .\nmoreover , tanimura and colleagues revealed trophoblasts in the pulmonary arteries in autopsies of nine of ten patients who died after delivery or abortion .\ntherefore , it has been speculated that primary choriocarcinoma of the pulmonary arteries in women may be due to malignant transformation of normal villous trophoblasts which entered the pulmonary artery at the time of abortion or delivery ; however , the others consider that pulmonary choriocarcinoma derives metastatically from a regressed tumor previously present in the uterus .\nsimilar to our case , patients commonly present with non - specific symptoms such as chest pain , cough and dyspnea , as well as abnormal findings of chest x - rays ( diffuse radiopaque shadows in the bilateral lungs ) .\nthese manifestations are commonly misdiagnosed as pneumonia due to the absence of molar pregnancy history and gynecologic symptoms .\nas the quantity and volume of the pulmonary artery embolus increases , symptoms of pulmonary hypertension are present .\nchest ct imaging may show intraluminal contrast filling defects ( emboli ) in the pulmonary artery down to the segmental and subsegmental levels , as well as multiple peripheral opacities in both lungs . at this stage ,\nthoracic surgeons frequently misdiagnosed the condition as pulmonary thromboembolism and as a result , the duration from onset to treatment ranged from one to 12 months ( average 6 months ) . typically , serum -hcg levels in patients with choriocarcinoma of the pulmonary artery are high , ranging from 1,000 miu / ml to over 1,000,000 miu / ml , and urine pregnancy tests are always positive . 
in most developed countries , urine and/or\nserum -hcg evaluation is a routine screening test in female patients regardless of the disorders that bring them to clinics .\nhowever , there are still many hospitals around the world ( especially in developing countries ) in which urine and/or serum -hcg is still not on the list of routine tests .\nsimilarly , the clinician responsible for the case described here noted the long sedentary history of the patient but disregarded the three spontaneous abortions ; therefore , urine and/or serum -hcg tests were consistently omitted .\nthus , urine and/or serum -hcg test should always be a frontline test ordered when female patients present with the symptoms of pe and pah to rule out the possibility of choriocarcinoma .\nseckl and coworkers reported three cases of young women with complete remission following chemotherapy of choriocarcinoma embolism .\nthe diagnosis was based mainly on high -hcg levels in non - pregnant women , indicating that a definitive histological diagnosis is not essential before initiation of chemotherapy , particularly since surgery is associated with considerable morbidity .\nrecently , fdg - pet / ct has been utilized to assist diagnosis for increased metabolism of the embolus .\nmultiagent chemotherapy includes etoposide , methotrexate , actinomycin d , cyclophosphamide , cisplatin , vincristine , and dactinomycin , leading to a cure rate in excess of 89% .\nthus , chemotherapy coupled with biochemical and radiological monitoring constitutes an attractive management approach for pulmonary choriocarcinoma embolism therapy . in conclusion ,\npulmonary choriocarcinoma embolism is now a curable disease which is commonly associated with spontaneous abortion , a long latency and empty uteri . in order to avoid unnecessary deaths , it is very important for the clinician to recognize the non - specific clinical characteristics in the early stages and to use urine and/or serum -hcg as the frontline test in the fertile female patients who are clinically suspected of pe ; the earlier the diagnosis and initiation of chemotherapy , the better the prognosis .", "role": "user" }, { "content": "abstractcases of pulmonary embolism and pulmonary artery hypertension caused by choriocarcinoma represent a rare clinical emergency . \n we report a case of a 25-year - old woman who presented with pulmonary embolism and hypertension and died soon after complete pulmonary embolectomy . \n a related literature review revealed that almost all of these patients had previously experienced a spontaneous abortion ( average , 6 months ) and were not pregnant .", "role": "assistant" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1455
[ { "content": "Review the following segment from a research document in AI. Answer the subsequent query based solely on the provided information. Limit your response to the answer itself.\n\nResearch Segment: Besides a straightforward integration, we have also studied whetherInception itself can be made more efficient by making it deeper and wider.For that purpose, we designed a new version named Inception-v4which has a more uniform simplified architecture and more inception modulesthan Inception-v3. Historically, Inception-v3 had inherited a lot of thebaggage of the earlier incarnations. The technical constraints chiefly came fromthe need for partitioning the model for distributed training usingDistBelief [2].Now, after migrating our training setup to TensorFlow [1]these constraints have been lifted, which allowed us to simplify the architecturesignificantly. The details of that simplified architecture are described in Section 3.\n\nInquiry: Why does inception-v4’s inception module use average pooling instead of max pooling?", "role": "user" }, { "content": "One possible reason to use average pooling instead of max pooling is that max pooling introduces some kind of technical constraints which are reduced by average pooling. But these constraint have not been explained in the paper. So this question cannot be fully answered in this paper.", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.928
[ { "content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nState-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.\nThe compression of DNN models has therefore become an active area of research recently, with \\emph{connection pruning} emerging as one of the most successful strategies.\nA very natural approach is to prune connections of DNNs via $\\ell_1$ regularization, but recent empirical investigations have suggested that this does not work as well in the context of DNN compression.\nIn this work, we revisit this simple strategy and analyze it rigorously, to show that:\n(a) any \\emph{stationary point} of an $\\ell_1$-regularized layerwise-pruning objective has its number of non-zero elements bounded by the number of penalized prediction logits, regardless of the strength of the regularization;\n(b) successful pruning highly relies on an accurate optimization solver, and there is a trade-off between compression speed and distortion of prediction accuracy, controlled by the strength of regularization.\nOur theoretical results thus suggest that $\\ell_1$ pruning could be successful provided we use an accurate optimization solver.\nWe corroborate this in our experiments, where we show that simple $\\ell_1$ regularization with an Adamax-L1(cumulative) solver gives pruning ratio competitive to the state-of-the-art.\nState-of-the-art Deep Neural Networks (DNNs) typically have millions of parameters.\nFor example, the VGG-16 network BID0 ), from the winning team of ILSVRC-2014, contains more than one hundred million parameters; inference with this network on a single image takes tens of billions of operations, prohibiting its use on edge devices such as mobile phones or in real-time applications.\nIn addition, the huge size of DNNs often precludes them from being placed at the upper level of the memory hierarchy, with resulting slow access times and expensive energy consumption.A recent thread of research has thus focused on the question of how to compress DNNs.\nOne successful approach that has emerged is to trim the connections between neurons, which reduces the number of non-zero parameters and thus the model size BID1\nb) ; BID3 ; BID4 ; BID5 ; BID6 ; BID7 ).\nHowever, there has been a gap between the theory and practice: the trimming algorithms that have been practically successful BID1\nb) ; BID3 ) do not have theoretical guarantees, while theoretically-motivated approaches have been less competitive compared to the heuristics-based approaches BID5 , and often relies on stringent distributional assumption such as Gaussian-distributed matrices which might not hold in practice.\nWith a better theoretical understanding, we might be able to answer how much pruning one can achieve via different approaches on different tasks, and moreover when a given pruning approach might or might not work.\nIndeed, as we discuss in our experiments, even the generally practically successful approaches are subject to certain failure cases.\nBeyond simple connection pruning, there have been other works on structured pruning that prune a whole filter, whole row, or whole column at a time BID8 ; BID9 ; BID10 ; ; BID12 ).\nThe structured pruning 
strategy can often speed up inference speed at prediction time more than simple connection pruning, but the pruning ratios are typically not as high as non-structured connection pruning; so that the storage complexity is still too high, so that the caveats we noted earlier largely remain.A very natural strategy is to use 1 regularized training to prune DNNs, due to their considerable practical success in general sparse estimation in shallow model settings.\nHowever, many recent investigations seemed to suggest that such 1 regularization does not work as well with non-shallow DNNs, especially compared to other proposed methods.\nDoes 1 regularization not work as well in non-shallow models?\nIn this work, we theoretically analyze this question, revisit the trimming of DNNs through 1 regularization.\nOur analysis provides two interesting findings:\n(a) for any stationary point under 1 regularization, the number of non-zero parameters in each layer of a DNN is bounded by the number of penalized prediction logits-an upper bound typically several orders of magnitude smaller than the total number of DNN parameters, and\n(b) it is critical to employ an 1 -friendly optimization solver with a high precision in order to find the stationary point of sparse support.Our theoretical findings thus suggest that one could achieve high pruning ratios even via 1 regularization provided one uses high-precision solvers (which we emphasize are typically not required if we only care about prediction error rather than sparsity).\nWe corroborate these findings in our experiments, where we show that solving the 1 -regularized objective by the combination of SGD pretraining and Adamax-L1(cumulative) yields competitive pruning results compared to the state-ofthe-art.\nIn this work, we revisit the simple idea of pruning connections of DNNs through 1 regularization.\nWhile recent empirical investigations suggested that this might not necessarily achieve high sparsity levels in the context of DNNs, we provide a rigorous theoretical analysis that does provide small upper bounds on the number of non-zero elements, but with the caveat that one needs to use a high-precision optimization solver (which is typically not needed if we care only about prediction error rather than sparsity).\nWhen using such an accurate optimization solver, we can converge closer to stationary points than traditional SGD, and achieve much better pruning ratios than SGD, which might explain the poorer performance of 1 regularization in recent investigations.\nWe perform experiments across different datasets and networks and demonstrate state-of-the-art result with such simple 1 regularization.\nTable 5 : Per-layer Resnet-32 architecture.\nThere are 3 main convolutional blocks with downsampling through stride=2 for the first layer of each block.\nAfter the convloutional layers, global pooling is applied on the spatial axes and a fully-connected layer is appended for the output.\nEach set of rows is a residual block.", "role": "user" }, { "content": "We revisit the simple idea of pruning connections of DNNs through $\\ell_1$ regularization achieving state-of-the-art results on multiple datasets with theoretic guarantees.", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.4243
[ { "content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nthe disrupted in schizophrenia 1 ( disc1 ) gene is a 414.3 kb gene located on chromosomal region 1q42.2 , and consists of 13 exons .\ndisc1 was originally identified as a candidate gene for schizophrenia in a large scottish family , in which a balanced translocation involving chromosomes 1 and 11 was strongly linked to schizophrenia , schizoaffective disorder , bipolar affective disorder , and recurrent , major depression . in this family\n, carriers of the translocation were found to have reduced p300 amplitude , which is observed in some patients with schizophrenia .\nsubsequent association studies identified numerous polymorphisms in the disc1 gene associated with schizophrenia and affective disorders , although different , polymorphisms / haplotypes in various regions of the gene were implicated in these studies . \n in the adult mouse brain ,\ndisc1 is expressed widely , including in the olfactory bulb , cortex , hippocampus , hypothalamus , cerebellum , and brain stem . during development ,\ndisc1 protein is detected at all stages , from embryonic day 10 ( elo ) to 6 months old , with two significant peaks of protein expression of one of the disc1 isoforms at e13.5 and postnatal day 35 .\ninterestingly , these time points correspond to periods of active neurogenesis and puberty in the mouse .\nthese results suggest , that disc1 may play a critical role in brain development , lending support to the neurodevelopmental hypothesis of schizophrenia . \n\ndisc1 encodes an 854-amino acid ( aa ) protein , which shows no homology to other known proteins and little homology between species .\nthis amino - acid sequence predicts that the protein disc1 may act as a scaffolding protein with multiple binding motifs , facilitating formation of protein complexes .\nthe n -terminus ( aa 1 - 347 ) contains nuclear localization signals , whereas the c - terminus ( aa 348 - 854 ) appears to be important , for microtubule and centrosomal targeting,17 - 19 although no centrosomal localization has been detected so far for the native protein .\nalthough the precise function of disc1 in the brain is unknown , a number of disc1 -interacting partners have been identified , including fasciculation and elongation protein zeta-1 ( fez1 ) , nuclear distribution element - like ( nudel ) , and lissencephaly 1 ( lis1 ) , which are known to play a role in neuronal development and functioning .\naltered interactions between disc1 and its binding partners are currently being investigated in order to understand more accurately the biology of disc1 as a schizophrenia susceptibility gene .\nin an effort to understand the cellular function of disc1 , yeast - two hybrid studies have been used to identify molecular interactors of disc1 .\nit , was found that , disc1 has numerous binding partners , including nudel , fez1 , activating transcription factor ( att ' ) 4/5 , and microtubule - associated protein 1 a ( mapi a ) .\nnudel is a component of a pathway involved in cytoplasmic dynein movement , and is involved in neurofilament assembly , neuronal migration , and development of neurite morphology .\noverexpression of truncated disc1 protein inhibits neurite outgrowth in pci 2 cells , suggesting that the disc1-nudel complex may be involved in neuronal outgrowth .\nthe hypothetical peptide product , resulting from the scottish translocation removes the 
interaction domain for nudel .\nrecently , it has been shown that nudel oligopeptidase activity is under tight , regulation through binding to disc1 , since a mutation very close to the disc1-binding site of nudel abolishes this activity .\ninterestingly , nudel cleaves a number of neuropeptides in vitro , some of which have previously been implicated in the pathophysiology of schizophrenia , including neurotensin ( nt ) .\nnt receptor agonists may be potential antipsychotics ; thus , inhibition of nudel could lead to increase in local concentration of nt , which may have an antipsychotic effect .\naltered subcellular distribution of disc1 has been reported in patients with psychosis and alcohol / substance abuse , with increased ratios of nuclear to cytoplasmic disc1 protein levels in patients .\ncell culture studies in cortical neurons have found evidence that disc1 may colocalize with mitochondrial markers , and that its subcellular targeting is independent of the nudel - binding site .\nhayashi et al have also demonstrated that disc1 and nudel bind in a neurodevelopmentally regulated manner and form a trimolecular complex with another protein , lis1 .\nlis1 is involved in neuronal migration and corticogenesis . although the function of this complex is currently unknown , it .\nanother interacting partner of disc1 is fez1 , which is a mammalian homologue of the caenorhabditis elegans unc-76 protein , involved in axonal outgrowth and fasciculation .\nmiyoshi et al demonstrated that disc1 participates in neurite extension through its c - terminal interaction with fez1 .\nthe chromosomal location for fez1 was previously implicated in a schizophrenia linkage analysis , although results from different , populations vary in significance .\na modest association between schizophrenia and fez1 polymorphisms has been detected in a subset of japanese patients . \n\nin our laboratory , we have tested the hypothesis that , altered expression of disc1 , and/or its molecular partners nudel , fez1 , and lis1 may underlie its pathogenic role in schizophrenia and explain its genetic association .\nwe examined the expression of disc1 and these selected binding partners in postmortem human brain .\nwe found no difference in the expression of disc1 mrna in schizophrenia , and no association with previously identified risk snps ( all f values < 1.5 , all p values > 0.2 ) .\ndisc1 immunoreactivity was significantly , albeit modestly ( by approximately 20% ) , increased in the hippocampus of patients with schizophrenia : f(1,73)=3.6 , p=0.05 .\nhowever , the expression of nudel , fez1 , and lis1 mrna was each significantly reduced in schizophrenic tissue in both the dorsolateral prefrontal cortex and hippocampus and the expression of each gene showed association with a high risk disc1 polymorphism ( all p values < 0.05 ) .\nthese data implicate genetically linked abnormalities in the disc1 molecular pathway in the pathophysiology of schizophrenia . given its role in brain development and plasticity via .\nits interaction with a. number of different proteins , disc1 remains a. candidate gene for schizophrenia , and an understanding of its exact mechanistic role in neuronal pathways may shed more light on the disease .\nschizophrenia is a devastating neuropsychiatrie disorder , the genetics of which has been under extensive investigation for several decades . 
despite being an exceedingly complex disease in terms of both etiology and pathogenesis , recent research is finally shedding light on schizophrenia susceptibility genes .\nprominent , among them are the genes comt , dtnbp1 , grm3 , disc1 , nrg1 , akt1 , gad1 , rgs4 , and drd2 .\ndisc1 and its binding partners fez1 , nudel , and lis1 are involved in cytoplasmic dynein movement , neurofilament assembly , neuronal migration , and neurite morphology , and may play a role in the neurodevelopmental deficits observed in schizophrenia . although the precise neurobiological cause of schizophrenia continues to be unknown , the abundance of evidence regarding susceptibility genes for schizophrenia can not be dismissed .\nidentification of the molecular and cellular mechanisms that link susceptibility genes to the neurobiological functioning of the brain continues to be a major focus of research . as evidence for\nmay be determined that these genes operate in a convergent molecular pathway affecting neural development and synaptic plasticity .\nthe disruption of multiple genes within this pathway may lead to the development of schizophrenia .", "role": "user" }, { "content": "the disrupted in schizophrenia 1 ( disc1 ) gene has been identified as a schizophrenia susceptibility gene based on linkage and single nucleotide polymorphism ( snp ) association studies and clinical data , suggesting that risk snps impact on hippocampal structure and function . \n we hypothesized that altered expression of disc1 andlor its molecular partners ( nuclear distribution element - like [ nudel ] , fasciculation and elongation protein zeta-1 [ fez1 ] , and lissencephaly 1 [ l1s1 ] ) may underlie its pathogenic role in schizophrenia and explain its genetic association . \n we examined the expression of disc1 and its binding partners in the hippocampus and dorsolateral prefrontal cortex of postmortem human brains of schizophrenic patients and controls . \n we found no difference in the expression of disc1 mrna in schizophrenia , and no association with previously identified risk snps , however , the expression of nudel , fez1 , and lis1 vas significantly reduced in tissue from schizophrenic subjects , and the expression of each showed association with high - risk disc1 polymorphisms . \n these data suggest involvement of genetically linked abnormalities in the disc1 molecular pathway in the pathophysiology of schizophrenia .", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.5581
[ { "content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: A survey of single and multi-hop link schedulers for mmWave wireless systems \nChapter Title: HYBRID APPROACHES \n1. Abstract of Cited Paper (BIB001): In this paper, we propose a spacial time slot scheduling algorithm for relay operation to improve the throughput performance of millimeter-wave wireless personal area network (mmWave WPAN) systems which employ directional antenna. The upcoming mmWave WPAN is designed for high definition TV (HDTV) transmission, high speed wireless docking and gaming, etc. Based on the fact that the significant path loss of millimeter-wave environments provides good space isolation, we have proposed a coexistence mechanism by sharing time slots for relay with direct transmission to guarantee throughput for the above data-rate-greedy applications. This paper is an extension that addresses spacial time slot scheduling for relay operation taking the effect of directional antenna into consideration. We model the throughput maximization with scheduling as an integer optimization and solve it by transforming the problem to a max-weight matching problem of a bipartite graph. We propose a scheduling algorithm based on the Kuhn-Munkres algorithm which can be used to solve the max weight matching problem. Simulation results show that there is up to 25% throughput improvement achieved compared with random scheduling method. \n2. Abstract of Cited Paper (BIB002): Millimeter-wave (mmWave) communications is a promising enabling technology for high rate (Giga-bit) multimedia applications. However, because oxygen absorption peaks at 60 GHz, mmWave signal power degrades significantly over distance. Therefore, a traffic flow transmitting over multiple short hops is preferred to improve flow throughput. In this paper, we first design a hop selection metric for the piconet controller (PNC) to select appropriate relay hops for a traffic flow, aiming to improve the flow throughput and balance the traffic load across the network. We then propose a multi-hop concurrent transmission (MHCT) scheme to exploit the spatial capacity of the mmWave WPAN. Extensive simulations show that the proposed MHCT scheme can significantly improve the traffic flow throughput and network throughput.\n\nReview Chapter:", "role": "user" }, { "content": "To date, only three works have considered both spatial re-use and selecting the best relay paths concurrently. In BIB001 , Zhou et al. propose two algorithms, random fit deflect routing (RFDR) and best fit deflect routing (BFDR) to schedule shared time slots to enable concurrent transmissions between direct links and multihop paths. The authors argue that although multi-hop relaying is effective in solving link degradation and blockage problems, the actual throughput of the network is decreased due to the extra airtime introduced by relaying. In Figure 12b , we see that Link(1, 2) is replaced with a multi-hop path Link(1, 3) and Link(3, 2) via relay node 3. This replacement adds an extra airtime if links are scheduled in a TDMA-based manner. However, if two links can be scheduled in the same time slot, the extra airtime in Figure 12b can be removed. In Figure 12c , Link(3, 2) is scheduled to share the same time slots with Link(4, 5). 
To schedule multiple links in the same time slot, the CCI on each link needs to be measured before scheduling. In particular, the authors derive the concurrent transmission condition as follows: a link is scheduled to a time slot only if the CCI on every link in the same time slot is lower than a given threshold. The authors consider a network consisting of n pairs of DEVs. There are m pairs of DEVs requesting multi-hop operation from the AP, where 0 < m < n. The problem is to find relays for these m pairs of DEVs and to schedule their transmission in a manner that maximizes system throughput. RFDR randomly selects a relay and a shared time slot satisfying the concurrent transmission condition for each requesting pair, and repeats the procedure until all multi-hop requests are processed. On the other hand, BFDR checks every DEV to find the best relay. In particular, BFDR checks every time slot in order to locate one that causes the least CCI to links scheduled in a time slot. BFDR terminates when all m pairs are scheduled with a relay. In a subsequent work, Zhou et al. BIB002 present a Binary Integer Program (BIP) with an objective to maximize data rate for the same problem. As the problem is NP-hard, they reformulated the BIP as a maximum weight bipartite matching problem. The resulting problem is then solved using the well-known Kuhn-Munkres (Hungarian) algorithm. The third work, see BIB003 , presents a multi-hop concurrent transmission scheme that consists of a hop selection metric and a concurrent transmission scheme. The aim is to improve flow and network throughput. The problem is to find an optimal route such that the aggregated throughput is maximized and one that also load balances nodes. In addition, to explore spatial reuse efficiency, given a set of multi-hop routes, the problem is to schedule non-conflicting links on the routes to transmit simultaneously in a fair manner. To solve the multi-hop route selection problem, the authors use a weighted graph. The weight of an edge reflects both distance and traffic load: it combines d(i, j), the link distance between node i and node j, with F(j), the traffic load on node j, each normalized by D and F, the average link length and traffic load among all nodes, respectively. The hop selection metric always selects the multi-hop route with the minimum aggregated link weights as computed by the Dijkstra algorithm. The hop selection method re-computes routes whenever the network topology changes or new flows are added. The authors then propose a concurrent transmission scheme that preferentially schedules links on a path in descending order of transmission loads. It also groups non-interfering links together, and assigns the group a time slot that is equal to the maximum scheduled time slot. It is possible that some links cannot be scheduled in the current superframe because there are insufficient time slots remaining in the transmission period. These links are removed from consideration and the corresponding flows are removed from the current schedule and are re-scheduled in the next superframe.", "role": "assistant" } ]
science.scierc_re
science.scierc_re.287
[ { "content": "Examine the provided abstract from a computer science paper. Your objective is to identify and extract relationships between entities. Present these relationships in a structured JSON format.\n\nInput Text:\n\nMultimodal interfaces require effective parsing and understanding of utterances whose content is distributed across multiple input modes. Johnston 1998 presents an approach in which strategies for multimodal integration are stated declaratively using a unification-based grammar that is used by a multidimensional chart parser to compose inputs. This approach is highly expressive and supports a broad class of interfaces, but offers only limited potential for mutual compensation among the input modes, is subject to significant concerns in terms of computational complexity, and complicates selection among alternative multimodal interpretations of the input. In this paper, we present an alternative approach in which multimodal parsing and understanding are achieved using a weighted finite-state device which takes speech and gesture streams as inputs and outputs their joint interpretation. This approach is significantly more efficient, enables tight-coupling of multimodal understanding with speech recognition, and provides a general probabilistic framework for multimodal ambiguity resolution.", "role": "user" }, { "content": "[[\"weighted finite-state device\", \"multimodal parsing and understanding\", \"USED-FOR\"], [\"approach\", \"multimodal parsing and understanding\", \"USED-FOR\"], [\"approach\", \"multimodal integration\", \"USED-FOR\"], [\"approach\", \"multimodal ambiguity resolution\", \"USED-FOR\"], [\"parsing\", \"Multimodal interfaces\", \"USED-FOR\"], [\"unification-based grammar\", \"multimodal integration\", \"USED-FOR\"], [\"speech and gesture streams\", \"weighted finite-state device\", \"USED-FOR\"], [\"speech recognition\", \"multimodal understanding\", \"CONJUNCTION\"], [\"unification-based grammar\", \"multidimensional chart parser\", \"USED-FOR\"]]", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.1040
[ { "content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nThe role of post-operative radiotherapy in non-small-cell lung cancer: a multicentre randomised trial in patients with pathologically staged T1-2, N1-2, M0 disease. Medical Research Council Lung Cancer Working Party.\nReport Abstract 1:\nThe role of post-operative radiotherapy for patients with non-small-cell lung cancer (NSCLC) is unclear despite five previous randomised trials. One deficiency with these trials was that they did not include adequate TNM staging, and so the present randomised trial was designed to compare surgery alone (S) with surgery plus post-operative radiotherapy (SR) in patients with pathologically staged T1-2, N1-2. M0 NSCLC. Between July 1986 and October 1993, 308 patients (154 S, 154 SR) were entered from 16 centres in the UK. The median age of the patients was 62 years, 74% were male, > 85% had normal or near normal levels of general condition, activity and breathlessness, 68% had squamous carcinoma, 52% had had a pneumonectomy, 63% had N1 disease and 37% N2 disease. SR patients received 40 Gy in 15 fractions starting 4-6 weeks post-operatively. Overall there was no advantage to either group in terms of survival, although definite local recurrence and bony metastases appeared less frequently and later in the SR group. In a subgroup analysis, in the N1 group no differences between the treatment groups were seen, but in the N2 group SR patients appeared to gain a one month survival advantage, delayed time to local recurrence and time to appearance of the bone metastases. There is, therefore, no clear indication for post-operative radiotherapy in N1 disease, but the question remains unresolved in N2 disease.\n\n\nReport Title 2:\nPostresection irradiation for T2 N0 M0 non-small cell carcinoma: a prospective, randomized study.\nReport Abstract 2:\nStage I nonirradiated T2 N0 non-small cell lung carcinoma has a postoperative prognosis not very different from stage II irradiated T1 N1 carcinoma. The hypothesis was that more locoregional malignant sites are overlooked in T2 N0 M0 than in T1 N0 M0 tumors, considering the better prognosis of this last group, and that T2 N0 cancer might benefit from postresection irradiation.\n From 1985 to 1991, 163 non-small cell lung carcinomas were classified T2 N0 M0 and randomized for irradiation or nonirradiation after operation. After revision of all the cases, 132 were included in this study: 60 were irradiated and 72 were not irradiated. All were followed up. The study was closed in October 1995. Statistical analysis was then performed considering volume, location, cell type, survival, and recurrence in the two groups.\n One hundred thirteen patients were followed up during a minimum of 5 years: the survival was 44.2%. There was no significant difference considering cell type or irradiation. There was no recurrence-free survivor beyond 5 years with a tumor invading the visceral pleura. At the close of the study (follow up, 4 years 3 months to 10 years 1 month), 49 of 132 patients were alive. The median survival was 3 years 11 months. Fifty-nine patients had died of local (21) or distant (40) recurrences (2 patients had both local and distant recurrence). 
There was again no significant difference considering cell type or irradiation, either in the survival or in the mode of recurrence.\n Stage I T2 N0 M0 non-small cell lung carcinoma tends to manifest distant metastasis. Prospective studies of stratified systemic adjuvant therapy should improve the present moderate result of radical resection in this group of tumors.\n\n\nReport Title 3:\nA study of postoperative radiotherapy in patients with non-small-cell lung cancer: a randomized trial.\nReport Abstract 3:\nTo study the value of postoperative radiotherapy for non-small-cell lung cancer (NSCLC) with positive regional lymph metastases (N1 or N2) after radical surgery.\n From February 1982 to October 1995, 366 patients with NSCLC and N1 or N2 disease were randomized into postoperative radiotherapy (S + R) (183 patients) and no further treatment (S alone) (182 patients). Postoperative radiotherapy (RT) was administered 3-4 weeks after radical operation. Irradiated fields covered the bronchial stump, ipsilateral hilum, and most of the mediastinum. The midplane dose was 6000 cGy/30 fractions/6 weeks, with the spinal cord limited to 4000 cGy/20 fractions/4 weeks or less. One hundred thirty-four patients in S + R group and 162 patients in S alone group were evaluated. Clinical data were comparable in both arms, except for the numbers of N2 patients.\n The 3-year and 5-year overall survival rates were 51.9% and 42.9% in the S + R group and 50.2% and 40.5% in the S alone Group (p = 0.56). The 3-year and 5-year disease-free survival rates were 50.7% +/- 4.7% and 42.9% +/- 5.2% in the S + R group vs. 44.4% +/- 4.3% and 38.2% +/- 4.5% in the S alone group (p = 0.28), respectively. In the patients with N1 or T3-4 tumors, there was a trend toward improved survival in the S + R group, especially in the patients with T3-4N1M0. These patients demonstrated 20% improvement in overall survival (p = 0.092) and greater than 20% better disease-free survival (p = 0.057). Postoperative RT reduced local recurrence but had no impact on distant metastases.\n Postoperative RT significantly reduced local relapses, but did not improve overall survival, due to a high frequency of distant metastases in this patient group.\n\n\nReport Title 4:\nAdjuvant radiotherapy in non-small cell lung cancer with pathological stage I: definitive results of a phase III randomized trial.\nReport Abstract 4:\nTo evaluate the benefits and the drawbacks of post-operative radiotherapy in completely resected Stage I (a and b) non-small cell lung cancer (NSCLC).\n Patients with pathological Stages Ia and Ib NSCLC have been randomized into two groups: Group 1 (G1) received adjuvant radiotherapy, Group 0 (G0) the control group did not receive any adjuvant therapy. Local control, toxicity and survival have been evaluated.\n Between July 1989 and June 1997, 104 patients with pathological stage I NSCLC have been enrolled in this study. Fifty-one patients were randomized to G1 and 53 to G0. Six patients have been excluded from the study due to incomplete follow-up data. Regarding local control, one patient in the G1 group had a local recurrence (2.2%) while in the G0 12 local recurrences have been observed (23%). Seventy-one percent of patients are disease-free at 5 years in G1 and 60% in G0 (P=0.039). Overall 5-year survival (Kaplan-Meier) showed a positive trend in the treated group: 67 versus 58% (P=0.048). Regarding toxicity in G1, six patients experienced a grade 1 acute toxicity. 
Radiological evidence of long-term lung toxicity, with no significant impairment of the respiratory function, has been detected in 18 of the 19 patients who have been diagnosed as having a post-radiation lung fibrosis.\n Adjuvant radiotherapy gave good results in terms of local control in patients with completely resected NSCLC with pathological Stage I. Overall 5-year survival and disease-free survival showed a promising trend. Treatment-related toxicity is acceptable.\n\n\nReport Title 5:\nPostoperative radiotherapy for radically resected N2 non-small-cell lung cancer (NSCLC): randomised clinical study 1988-1992.\nReport Abstract 5:\nIn the period 1988-1992, 74 consecutive radically resected patients with NSCLC were randomised to postoperative radiotherapy or surgery alone in order to evaluate the influence of postoperative radiotherapy on survival. There were 61 males and 13 females, aged 35-80 years, median 59 years. Their distribution by stage was as follows: pT1N2 = 19, pT2N2 = 54, pT3N2 = one patient; histology: 32 squamous, 32 adeno and 10 large cell carcinomas; surgery: atypical resection in six, lobectomy in 27, bilobectomy in ten, and pneumonectomy in 31 patients. In 27 patients, only one lymph node in a single mediastinal lymph node site was affected; in 31 patients more than one lymph node in one site; in 16 patients more sites were affected. In 35/74 patients radiotherapy of hilar and mediastinal sites with 3000 cGy in 2 weeks was performed. On December 31, 1994, 19 patients (26%) were still alive; 39/55 patients died of the following causes: locoregional failure-10(26%), distant metastases- 25 (64%), other tumor-unrelated causes-four patients (10%). Five-year survival rates did not show statistically significant differences between the irradiated and surgically treated patients only with respect to sex, pTNM stage, histology and frequency of locoregional failure. The number of metastatic mediastinal lymph nodes was the only significant prognostic factor (P < 0.005) in both randomised groups.", "role": "user" }, { "content": "PORT is detrimental to patients with early stage completely resected non-small cell lung cancer and should not be used in the routine treatment of such patients. The role of PORT in the treatment of N2 tumours is not clear and may justify further research.", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.51169
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\ndispersion forces such as the casimir \npolder interaction between microscopic particles ( atoms and molecules ) with macroscopic bodies are prototypical forces that result from quantum vacuum fluctuations of a physical quantity , in this case the electromagnetic field @xcite .\ntheir magnitude depends sensitively on the geometry of the macroscopic body and the optical response of both particle and medium . in the simplest case of flat surfaces and isotropic optical response the casimir \npolder force is usually attractive and perpendicular to the surface .\nhowever , in more complex geometries this force can be modified to a large extent .\nfor example , surfaces with gratings can induce force components that are parallel to the ( spatially averaged ) interface .\nthis has been shown for casimir forces between corrugated surfaces @xcite as well as for casimir polder forces @xcite .\nmoreover , the sign of the dispersion force can be reversed .\na repulsive force with associated equilibrium , at least in the form of a saddle point , has been predicted for objects above a metal plate with a circular hole @xcite and near wedges @xcite .\napart from geometry , non - equilibrium situations such as resonant casimir polder interactions of excited atoms can provide transient repulsion @xcite . in this article\n, we will show how a strong anisotropic optical response of an atom can lead to a repulsive casimir \npolder force normal to a one - dimensional grating structure . from a theoretical point of view , to compute an electromagnetic dispersion force means to determine the scattering properties of a surface , typically in terms of reflection matrices that enter a mode expansion @xcite or , equivalently , a green s tensor construction @xcite .\nhere we use the dyadic green s tensor expansion in terms of rayleigh reflection coefficients to investigate the interaction of an anisotropic atom with a one - dimensional periodic surface .\nwe require the scattering green s tensor for source point @xmath0 and field point @xmath1 being situated in the vacuum half space above a nano - grating ( see fig .\n[ ris1 ] ) . above a one - dimensional rectangular grating.,width=377 ] the grating displays a periodic surface profile in the @xmath2-direction with period @xmath3 , and height @xmath4 while being translationally invariant in the @xmath5-direction . to construct the green s tensor\n, we employ a plane - wave basis which is adapted to the symmetry of the system .\nwave vectors for upward ( + ) and downward ( - ) moving waves are parametrised as .\ntheir @xmath2-components are decomposed according to into a continuous part that lies within the first brillouin zone and a discrete part comprised of integer multiples of the reciprocal lattice vector .\nthe @xmath5-component takes arbitrary continuous values while the @xmath6-component then follows from the dispersion relation : @xmath7 for each such wave vector , we choose two perpendicular polarisation vectors @xmath8 and @xmath9 such that only the respective electric and magnetic fields exhibit non - vanishing @xmath6-components , respectively : @xmath10 these vectors are obviously normalised , and orthogonal to each other , , as well as to the unit wave vector , .\nthe three orthonormal vectors form a right - handed triad , . 
with these preparations at hand ,\nthe required scattering green s tensor can be given as @xmath11}}{k_y^n}\\ , { \\bm{e}}_{m+}^\\sigma r_{mn}^{\\sigma\\sigma'}{\\bm{e}}_{n-}^{\\sigma'}.\\end{gathered}\\ ] ] the rayleigh reflection coefficients conserve @xmath12 and @xmath13 of the incident wave , but they mix polarisations @xmath14 as well as diffraction orders @xmath15 @xcite . in general , they have to be calculated numerically by integrating the maxwell equations within the grating ( ) and imposing conditions of continuity at its upper and lower boundaries @xcite . for a rectangular grating ( fig . [ ris1 ] ) , the formalism for computing the rayleigh coefficients is presented in [ appa ] . before we proceed ,\nlet us derive some symmetry properties of the reflection coefficients as well as consider the asymptotes of small and large grating periods .\n_ schwarz reflection principle . _ like every causal response function , the scattering green s tensor obeys the schwarz reflection principle @xcite . applying this to eq . ( [ greentensor ] )\nleads to @xmath16 _ onsager reciprocity . _\nprovided that the grating consists of a medium which obeys time - reversal symmetry @xcite , the scattering green s tensor fulfils onsager reciprocity .\nthis implies @xmath17 _ grating symmetries .\n_ finally , we consider symmetries imposed by the geometry of the grating .\nthe invariance of the grating with respect to an inversion of the @xmath5 coordinate implies that the diagonal / off - diagonal rayleigh coefficients are even / odd functions of @xmath13 , respectively : @xmath18 for a symmetric grating with , we further have @xmath19 by virtue of the @xmath2-inversion symmetry .\nfurther symmetries can be obtained by combining the above .\nfor instance , eqs .\n( [ s1 ] ) and ( [ s2 ] ) imply for at purely imaginary frequencies .\nthe exponential factors @xmath20 and @xmath21 become either exponentially damped or rapidly oscillating whenever or , respectively . in the limit where the grating period is small with respect to the distances of source and field points from the grating , , this implies that the rayleigh sums in eq .\n( [ greentensor ] ) are effectively limited to small values . to leading order\n, we have @xmath22 in the coincidence limit .\nthis asymptote of the scattering green s tensor is thus translationally invariant along the @xmath2-direction and it is described by the effective properties of the grating averaged over one period .\nthe leading correction to this @xmath2-independent green s tensor is a harmonic variation in @xmath23 which is due to terms and . in particular for a symmetric grating ,\nthis correction is proportional to @xmath24 in the coincidence limit . 
in the opposite extreme of the grating period being very large with respect to the distances of source and field points from the grating , , very large rayleigh reflection orders contribute .\nwe thus have , , so that the polarisation unit vectors and reflection coefficients can be approximated by @xmath25 and .\ncarrying out the remaining @xmath12-integral , the scattering green s tensor assumes the asymptotic form @xmath26}{4\\pi^2(x - x ' ) } \\sum_{m , n=-\\infty}^\\infty{\\mathrm{e}}^{{\\mathrm{i}}q(mx - nx')}\\\\ \\times\\int_{-\\infty}^\\infty{\\mathrm{d}}k_z \\frac{{\\mathrm{e}}^{{\\mathrm{i}}(k_y^my+k_y^ny')+{\\mathrm{i}}k_z(z - z')}}{k_y^{n- } } \\sum_{\\sigma,\\sigma'=e , h}r_{mn}^{\\sigma\\sigma ' } { \\bm{e}}_{m+}^\\sigma { \\bm{e}}_{n-}^{\\sigma'}.\\end{gathered}\\ ] ] in the coincidence limit , the scattering green s tensor is then well described by a proximity force approximation @xcite where the grating is replaced by the local surface at .\nwithin leading - order perturbation theory , the cp potential of a ground - state atom within an arbitrary structure is given as @xcite @xmath27 , \\ ] ] where @xmath28 is the ground - state polarisability of the atom as given in terms of its transition frequencies and dipole matrix elements @xmath29 .\nusing the green s tensor ( [ greentensor ] ) from the previous section , we find the cp potential of an atom above a one - dimensional grating as @xmath30 with and @xmath31 in the limit of small grating period , we use the asymptote ( [ greentensorasymp1 ] ) of the green s tensor to find the @xmath2-independent potential @xmath32 for large grating period , eq .\n( [ greentensorasymp2 ] ) leads to @xmath33 with @xmath34 taken in all expressions inside the integrand .\nwe study a ground - state rb atom above a rectangular au grating as shown in fig .\nthe grating consists of a periodic array of rectangular bars of height and width with separation , so that the grating period is .\nthe atom grating potential is derived by numerically integrating eq .\n( [ cpgrating ] ) with the rayleigh coefficients as given in [ appa ] .\n.,width=529 ] to study the effect of possible anisotropies on the potential , we separate the latter into contributions from where @xmath35 is the ground - state polarisability of the rb atom : @xmath36 .\nthe resulting potentials at are shown in fig .\nwe find that the component @xmath37 leads to a repulsive casimir polder potential in normal direction at separations .\nthis is the first demonstration of a repulsive casimir polder potential for a grating geometry .\nnm above an au grating as a function of lateral position .\nthick lines : exact numerical results from eq .\n( [ cpgrating ] ) .\nthin lines : asymptotes from eq .\n( [ cpasymp2]).,width=529 ] in many cases of interest , a full numerical calculation of the potential is neither desirable nor needed . for short atom - grating separations , the asymptotic expression ( [ cpasymp2 ] )\ngives reliable results . in fig .\n[ ris3 ] , we show a comparison of the full numerical calculation according to eq .\n( [ cpgrating ] ) ( thick lines ) with the short - distance asymptote ( [ cpasymp2 ] ) ( thin lines ) at a fixed atom - grating distance of\n. 
the obvious appearance of a non - sinusoidal lateral force across the grating implies in particular that the repulsive @xmath37 component at leads to an unstable , saddle - point equilibrium in accordance with the generalised earnshaw theorem @xcite .\none further notices that the short - distance ( or large grating ) approximation already provides very accurate results already at . at smaller separations of ,\nthe difference between exact and approximate potentials are within the thickness of the lines .\nwe have shown that it is possible to generate repulsive casimir polder forces above material gratings using anisotropic atoms .\nour calculation was based on the green s tensor expansion of the electromagnetic field which , using a rayleigh expansion , yields both a numerically exact result as well as analytically tractable asymptotes for small and large gratings .\nsurprisingly , the short - distance ( or large grating ) asymptote becomes already accurate at a distance of above a grating with a period .\nsyb was supported by the dfg ( grant bu 1803/3 - 1 ) and the freiburg institute for advanced studies .\nvnm was partially supported by grants of saint petersburg state university 11.38.237.2015 and 11.38.660.2013 .\nvnm thanks the participants of the 9th friedmann seminar for interesting discussions ; it is a pleasure to thank the organizers for an excellent seminar .\nin the following , we determine the rayleigh reflection coefficients for the rectangular grating depicted in fig .\n[ ris1 ] . to this end\n, we express the electromagnetic fields above and below the grating for incident waves of polarisation @xmath38 according to eqs .\n( [ polarisationvectore ] ) and ( [ polarisationvectorh ] ) as ( the factor @xmath39 is omitted for brevity ) @xmath40 for and @xmath41 for , respectively . for simplicity , we take throughout this appendix . in the region @xmath42 one\ncan write @xmath43 , analogous decompositions hold for the other components of the electromagnetic field .\nit is convenient to denote by @xmath44 $ ] the @xmath45-component vector with components .\nwe further introduce a diagonal @xmath46-matrix @xmath47 the maxwell equations in the region can then be written as @xcite @xmath48}{\\partial y}-{\\mathrm{i}}k_z[e_y ] = { \\mathrm{i}}\\omega[h_x],\\qquad & & \\frac{\\partial[h_z]}{\\partial y}-{\\mathrm{i}}k_z [ h_y ] = -{\\mathrm{i}}\\omega[d_x ] , \\label{1}\\\\ { \\mathrm{i}}k_z[e_x]-i\\lambda[e_z]={\\mathrm{i}}\\omega[h_y],\\qquad & & { \\mathrm{i}}k_z[h_x]-i\\lambda[h_z]=-{\\mathrm{i}}\\omega[d_y ] , \\label{2}\\\\ { \\mathrm{i}}\\lambda[e_y]-\\frac{\\partial[e_x]}{\\partial y } = { \\mathrm{i}}\\omega[h_z],\\qquad & & { \\mathrm{i}}\\lambda[h_y]-\\frac{[h_x]}{\\partial y } = -{\\mathrm{i}}\\omega [ d_z ] .\n\\label{3}\\end{aligned}\\ ] ] writing and using the results of ref .\n@xcite , one has for a rectangular grating : @xmath49 here , @xmath50 is a toeplitz matrix defined as @xmath51 where @xmath52 is a fourier coefficient of the periodic function @xmath53 : @xmath54 in order to solve the above system of equations , we proceed as follows\n. equation ( [ 2 ] ) implies , which can be substituted into equations ( [ 1 ] ) and ( [ 3 ] ) to eliminate @xmath55 $ ] . similarly , eq . ( [ 2 ] ) leads to , which can be substituted into equations ( [ 1 ] ) and ( [ 3 ] ) to eliminate @xmath56 $ ] . 
as a result ,\nthe maxwell equations in the region can be rewritten as @xmath57 \\\\ [ e_z ] \\\\ [ h_x ] \\\\ [ h_z ] \\end{pmatrix } = m \\begin{pmatrix } [ e_x ] \\\\ [ e_z ] \\\\ [ h_x ] \\\\ [ h_z ] \\end{pmatrix } , \\label{eq1}\\ ] ] where the matrix @xmath58 is defined as @xmath59 knowing the transmitted fields ( [ ezt ] ) and ( [ hzt ] ) at , one can determine the fields at by integrating eq .\n( [ eq1 ] ) from @xmath60 to @xmath61 . imposing the continuity conditions on all fourier components of @xmath62 , [ e_z ] , [ h_x ] , [ h_z]$ ] at @xmath63 ,\none then determines the rayleigh reflection and transmission coefficients .\nd. maystre ( ed . ) _ selected papers on diffraction gratings _ ( spie , bellingham , 1993 ) ; r. petit ( ed . ) , _ electromagnetic theory of gratings _ ( springer - verlag , berlin , 1980 ) ; m. c. hutley , _ diffraction gratings _ ( academic press , london , 1982 ) ; e. g.loewen and e. popov , _ diffraction gratings and applications _ ( marcel dekker , new york , 1997 ) .", "role": "user" }, { "content": "we study the interaction of an atom with a one - dimensional nano - grating within the framework of macroscopic qed , with special emphasis on possible anisotropic contributions . to this end \n , we first derive the scattering green s tensor of the grating by means of a rayleigh expansion and discuss its symmetry properties and asymptotes . \n we then determine the casimir polder potential of an atom with the grating . \n in particular , we find that strong anisotropy can lead to a repulsive casimir polder potential in the normal direction . _ \n _ _ keywords _ : casimir polder potential ; material grating ; rayleigh scattering . \n pacs numbers : 34.35.+a , 37.10.vz , 42.50.nn , 42.25.gy", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.796
[ { "content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: A triplet-center loss based part-aware model for vehicle re-identification.\n\nKey Terms: vehicle re-identification video\n\nDataset Summaries:\nPartial-iLIDS: Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras.\nVRAI: VRAI is a large-scale vehicle ReID dataset for UAV-based intelligent applications. The dataset consists of 137, 613 images of 13, 022 vehicle instances.\nVeRi-776: VeRi-776 is a vehicle re-identification dataset which contains 49,357 images of 776 vehicles from 20 cameras. The dataset is collected in the real traffic scenario, which is close to the setting of CityFlow.\nVehicleID: The “VehicleID” dataset contains CARS captured during the daytime by multiple real-world surveillance cameras distributed in a small city in China. There are 26,267 vehicles (221,763 images in total) in the entire dataset.\nPRID2011: PRID 2011 is a person reidentification dataset that provides multiple person trajectories recorded from two different static surveillance cameras, monitoring crosswalks and sidewalks. The dataset shows a clean background, and the people in the dataset are rarely occluded.\nMARS: MARS (Motion Analysis and Re-identification Set) is a large scale video based person reidentification dataset, an extension of the Market-1501 dataset. It has been collected from six near-synchronized cameras.\n\nDataset Options: A: Partial-iLIDS B: VRAI C: VeRi-776 D: VehicleID E: PRID2011 F: MARS", "role": "user" }, { "content": "[\"VeRi-776\", \"VehicleID\"]", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.1639
[ { "content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: We propose a novel approach for learning novel visual concepts and their interactions with other concepts from a\n\nKeywords: novel visual concept learning images\n\nDataset Information:\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nCommonsenseQA: The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nFollowUp: 1000 query triples on 120 tables.\nWebVision: The WebVision dataset is designed to facilitate the research on learning visual representation from noisy web data. It is a large scale web images dataset that contains more than 2.4 million of images crawled from the Flickr website and Google Images search.\n\nAvailable Datasets: A: COCO B: CommonsenseQA C: FollowUp D: WebVision", "role": "user" }, { "content": "COCO", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.3172
[ { "content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nExploring the Energy Landscapes of Cyclic Tetrapeptides with Discrete Path Sampling. Cyclic tetrapeptides are an important class of biologically active molecules that exhibit interesting conformational dynamics, with slow interconversion of several different structures. We present calculations on their energy landscapes using discrete path sampling. In acyclic peptides and large cyclic peptides, isomers containing cis-peptide groups are much less stable than the all-trans isomers and separated from them by large barriers. Strain in small cyclic peptides causes the cis and trans isomers to be closer in energy and separated by much lower barriers. If d-amino acids or proline residues are introduced, isomers containing cis-peptides become more stable than the all-trans structures. We also show that changing the polarity of the solvent has a significant effect on the energy landscapes of cyclic tetrapeptides, causing changes in the orientations of the peptide groups and in the degree of intramolecular hydrogen bonding.", "role": "user" }, { "content": "{\"Chemical\": [\"Cyclic tetrapeptides\", \"cyclic tetrapeptides\", \"proline\", \"d-amino acids\", \"Cyclic Tetrapeptides\", \"hydrogen\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.4102
[ { "content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nWeakly supervised temporal action localization, which aims at temporally locating action instances in untrimmed videos using only video-level class labels during training, is an important yet challenging problem in video analysis. Many current methods adopt the \"localization by classification\" framework: first do video classification, then locate temporal area contributing to the results most. However, this framework fails to locate the entire action instances and gives little consideration to the local context. In this paper, we present a novel architecture called Cascaded Pyramid Mining Network (CPMN) to address these issues using two effective modules. First, to discover the entire temporal interval of specific action, we design a two-stage cascaded module with proposed Online Adversarial Erasing (OAE) mechanism, where new and complementary regions are mined through feeding the erased feature maps of discovered regions back to the system. Second, to exploit hierarchical contextual information in videos and reduce missing detections, we design a pyramid module which produces a scale-invariant attention map through combining the feature maps from different levels. Final, we aggregate the results of two modules to perform action localization via locating high score areas in temporal Class Activation Sequence (CAS). Extensive experiments conducted on THUMOS14 and ActivityNet-1.3 datasets demonstrate the effectiveness of our method.\n\nCited Works Summary:\n[1]: \n\n[2]: We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.\n\n[3]: \n\n[4]: Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. 
We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.\n\n[5]: Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https://github.com/yjxiong/temporal-segment-networks).\n\n[6]: Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.\n\nReview Chapter:", "role": "user" }, { "content": "Action recognition. Action recognition has been widely studied in recent years, which aims to identify one or multiple action categories of a video. Earlier works mainly focus on hand-crafted feature engineering, such as improved Dense Trajectory (iDT) [1] [4] . With the development of convolutional neural networks, many deep learning based methods [3] [6] [5] [2] have been applied to the action recognition task and achieve convincing performance. Two-stream network [3] [6] [5] typically consists of two branches which learn the appearance and motion information using RGB image and optical flow respectively. C3D network [2] simultaneously captures appearance and motion features using a series of 3D convolutional layers. These action recognition models are usually adopted to extract frame or unit level visual representation in long and untrimmed videos.", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.4757
[ { "content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nQingfei Paidu decoction (QFPD), a multi-component herbal formula, has been widely used to treat COVID-19 in China. However, its active compounds and mechanisms of action are still unknown. Firstly, we divided QFPD into five functional units (FUs) according to the compatibility theory of traditional Chinese medicine. The corresponding common targets of the five FUs were all significantly enriched in Go Ontology (oxidoreductase activity, lipid metabolic process, homeostatic process, etc.), KEGG pathways (steroid biosynthesis, PPAR signaling pathway, adipocytokine signaling pathway, etc.), TTD diseases (chronic inflammatory diseases, asthma, chronic obstructive pulmonary Disease, etc.), miRNA (MIR183), kinase (CDK7) and TF (LXR). QFPD contained 257 specific targets in addition to HCoV, pneumonia and ACE2 co-expression proteins. Then, network topology analysis of the five components-target-pathway-disease networks yielded 67 active ingredients. In addition, ADMET estimations showed that 20 compounds passed the stringent lead-like criteria and in silico drug-likeness test with high gastrointestinal absorption and the median lethal dose (LD50 > 1600 mg/kg). Moreover, 4 specific ingredients (M3, S1, X2 and O2) and 5 common ingredients (MS1, MX16, SX1, WO1 and XO1) of QFPD presented good molecular docking score for 2019-nCov structure and non-structure proteins. Finally, drug perturbation of COVID-19 network robustness showed that all five FUs may protect COVID-19 independently, and target 8 specifically expressed drug-attacked nodes which were related to the bacterial and viral responses, immune system, signaling transduction, etc. In conclusion, our new FUNP analysis showed that QFPD had a protection effect on COVID-19 by regulating a complex molecular network with safety and efficacy. Part of the mechanism was associated with the regulation of anti-viral, anti-inflammatory activity and metabolic programming.", "role": "user" }, { "content": "[\"Mechanism\", \"Treatment\"]", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.4373
[ { "content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Dataset search: a survey \nChapter Title: Data handling: extra knowledge \n1. Abstract of Cited Paper (BIB001): There are thousands of data repositories on the Web, providing access to millions of datasets. National and regional governments, scientific publishers and consortia, commercial data providers, and others publish data for fields ranging from social science to life science to high-energy physics to climate science and more. Access to this data is critical to facilitating reproducibility of research results, enabling scientists to build on others' work, and providing data journalists easier access to information and its provenance. In this paper, we discuss Google Dataset Search, a dataset-discovery tool that provides search capabilities over potentially all datasets published on the Web. The approach relies on an open ecosystem, where dataset owners and providers publish semantically enhanced metadata on their own sites. We then aggregate, normalize, and reconcile this metadata, providing a search engine that lets users find datasets in the “long tail” of the Web. In this paper, we discuss both social and technical challenges in building this type of tool, and the lessons that we learned from this experience.\n\nReview Chapter:", "role": "user" }, { "content": "In order to support the differentiated access and advanced exploratory interfaces articulated above, dataset search engines will need to become more advanced in their ingestion, indexing and cataloging procedures. This problem divides into two areas: incorporation of external knowledge in the data handling process and better management and usage of dataset-intrinsic information. As described in BIB002 , links between datasets are still rare, making identification and usage of extra knowledge difficult. Incorporating external knowledge, whether through the use of domain ontologies, external quality indicators or even unstructured information (i.e., papers) that describe the datasets, is a critical problem. A concrete example of this problem: many datasets are described through code books that are written in natural language. These datasets are nearly useless without integration of external information about the codebooks themselves. Utilizing dataset-intrinsic information, is necessary to more fully capture the richness of each dataset, and allow users to express a richer set of criteria during search. Within this space, there are open problems related to data preprocessing. How to do quality assessment on the fly? What kinds of indexes around quality need to be created? Moving beyond quality, in general, the automatic creation and maintenance of metadata that describes datasets is difficult. Users rely up on metadata to chose appropriate datasets. Open problems for metadata include: 1. identifying the metadata that is of highest value to users w.r.t. datasets; 2. tools to automatically create and maintain that metadata; 3. automatic annotation of dataset with metadata-linking them automatically to global ontologies. In addition to pre-processing, current dataset search systems primarily rely on information retrieval architectures (e.g., indexing into ElasticSearch) to index and perform queries. Here, lessons learned from database architectures could be applied. 
This is particularly promising given how effectively lessons from relational query engines have been applied to distributed data environments BIB001 . Thus, we think an important open problem is what the most effective architectures are for dataset search systems.", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.2609
[ { "content": "Analyze the provided paragraph from a scientific document. Your goal is to identify and extract relevant entities.\n\nFormat your findings as a JSON object, where each key represents a category of entities, and the corresponding value is a list of entities identified in that category. If no entities are found for a category, return an empty list.\n\nFocus on precision and ensure that the output is strictly in JSON format.\n\nDocument:\n\nStatistical algorithms improve accuracy of gene fusion detection Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"OVCAR3\", \"EWSR1-FLI1\", \"cell line\"], \"Bacterium\": [], \"BiologicFunction\": [\"ovarian cancer\", \"cancer\", \"pathogenesis\", \"primary tumors\", \"tumor\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"detecting\", \"negative\", \"detection\", \"detect\"], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"algorithms\", \"MACHETE\", \"Mismatched Alignment CHimEra Tracking Engine)\", \"RNA-Seq data\", \"statistical models\", \"published\", \"statistical algorithm\", \"Statistical algorithms\"], \"MedicalDevice\": [], \"Organization\": [\"public\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"fusion\", \"gene fusion\", \"Gene fusions\", \"PCR\", \"fusions\", \"gene fusions\", \"validate\"], \"SpatialConcept\": [\"in vivo\"], \"Virus\": []}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.4131
[ { "content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffect of postmenopausal hormonal replacement therapy on mammographic density and parenchymal pattern . PURPOSE To measure changes and predictors of change in mammograms obtained in postmenopausal women undergoing continuous combined hormonal replacement therapy ( HRT ) . MATERIALS AND METHODS Mammograms of 41 postmenopausal women obtained before and 1 year after the initiation of HRT were evaluated blindly according to the quantitative density percentage method and the Wolfe classification system . RESULTS Mammographic densities increased compared with baseline values in 73 % of subjects ( mean increase , 6.7 % ; 95 % confidence interval , 2.5 % , 11.0 % ; P = .003 ) . A shift in Wolfe classification from lower to greater parenchymal density was noted in 24 % of subjects ( P = .016 ) . Multivariate analysis results indicated that the lower the tissue density percentage before treatment , the greater the increase in density percentage after treatment . CONCLUSION An increase in mammographic density was demonstrated in most subjects undergoing continuous combined HRT and was most pronounced in subjects with a lower baseline density percentage .", "role": "user" }, { "content": "{\"Intervention\": [\"hormonal replacement therapy ( HRT )\", \"HRT\", \"Mammograms\", \"postmenopausal hormonal replacement therapy\"], \"Outcome\": [\"tissue density percentage\", \"increase in density percentage\", \"Mammographic densities\", \"parenchymal pattern\", \"lower to greater parenchymal density\", \"mammographic density\"], \"Participant\": [\"41 postmenopausal women obtained before and 1 year after the initiation of HRT\", \"postmenopausal women undergoing continuous combined hormonal replacement therapy ( HRT ) .\"]}", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1293
[ { "content": "Consider the following research inquiry and associated key terms. Review the dataset summaries provided and determine which datasets are suitable for addressing the research question. Base your decision on your understanding of machine learning datasets.\nProvide your answer as a JSON list. For example, for the inquiry \"Semi supervised image classification\", a possible response could be: [\"CIFAR-10\", \"CIFAR-100\"]. Ensure the response contains only the dataset names in JSON format.\n\nResearch Inquiry: I want to train a generative model that can manipulate images based on poses.\n\nKey Terms: image synthesis images pose\n\nDataset Summaries:\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\nDeepFashion: DeepFashion is a dataset containing around 800K diverse fashion images with their rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes and landmark information) ranging from well-posed product images to real-world-like consumer photos.\nMulti-Modal CelebA-HQ: Multi-Modal-CelebA-HQ is a large-scale face image dataset that has 30,000 high-resolution face images selected from the CelebA dataset by following CelebA-HQ. Each image has high-quality segmentation mask, sketch, descriptive text, and image with transparent background.\nRoboNet: An open database for sharing robotic experience, which provides an initial pool of 15 million video frames, from 7 different robot platforms, and study how it can be used to learn generalizable models for vision-based robotic manipulation.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nMarket-1501: Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector.\n\nDataset Options: A: ConvAI2 B: DeepFashion C: Multi-Modal CelebA-HQ D: RoboNet E: SNIPS F: Market-1501", "role": "user" }, { "content": "[\"DeepFashion\", \"Market-1501\"]", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.3430
[ { "content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nWith the COVID-19 pandemic and the ensuing barriers to the collection and transport of donor cells, it is often necessary to collect and cryopreserve grafts before initiation of transplantation conditioning. The effect on transplantation outcomes in nonmalignant disease is unknown. This analysis examined the effect of cryopreservation of related and unrelated donor grafts for transplantation for severe aplastic anemia in the United States during 2013 to 2019. Included are 52 recipients of cryopreserved grafts who were matched for age, donor type, and graft type to 194 recipients who received noncryopreserved grafts. Marginal Cox regression models were built to study the effect of cryopreservation and other risk factors associated with outcomes. We recorded higher 1-year rates of graft failure (hazard ratio [HR], 2.26; 95% confidence interval, 1.17 to 4.35; P = .01) and of 1-year overall mortality (HR, 3.13; 95% CI, 1.60 to 6.11; P = .0008) after transplantation of cryopreserved compared with noncryopreserved grafts, with adjustment for sex, performance score, comorbidity, cytomegalovirus serostatus, and ABO blood group match. The incidence of acute and chronic graft-versus-host disease did not differ between the 2 groups. Adjusted probabilities of 1-year survival were 73% (95% CI, 60% to 84%) in the cryopreserved graft group and 91% (95% CI, 86% to 94%) in the noncryopreserved graft group. These data support the use of noncryopreserved grafts whenever possible in patients with severe aplastic anemia.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.6134
[ { "content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: Low 25-OH vitamin D is associated with benign prostatic hyperplasia.\nSummary: PURPOSE We tested the hypothesis that low vitamin D is associated with benign prostatic hyperplasia . We also studied whether body composition , sex hormones , serum sex hormone-binding globulin , albumin corrected serum calcium , adiponectin and lipid status are associated with benign prostatic hyperplasia . MATERIAL S AND METHODS We investigated 184 representative , r and omly selected men 72 to 76 years old enrolled in the Gothenburg arm of the Osteoporotic Fractures in Men Study ( MrOS ) . Men with a history of prostate cancer , prostate operation or medication for benign prostatic hyperplasia were excluded from study , leaving 155 available for analysis . A cross-sectional study was performed in which benign prostatic hyperplasia measured by total prostate volume was related to clinical , anthropometric , endocrine and metabolic factors on univariate and multivariate analyses with regression models . RESULTS Median prostate volume was 40 ml . In multivariate models only 25-OH vitamin D , albumin corrected serum calcium , serum sex hormone-binding globulin and high density lipoprotein cholesterol were significantly and inversely associated with large prostate gl and s. CONCLUSIONS The current report adds 4 independent factors associated with benign prostatic hyperplasia , including low 25-OH vitamin D , serum calcium , sex hormone-binding globulin and high density lipoprotein cholesterol\n\n\nReport 2:\nTitle: Impact of obesity on urinary storage symptoms: results from the FINNO study.\nSummary: PURPOSE Urinary storage symptoms are the most common and most bothersome urinary symptoms . Many studies on the relation between body weight and urinary symptoms have focused on urinary incontinence in women . We evaluated the association of obesity with urinary storage symptoms in a population based study of men and women age 18 to 79 years old . MATERIAL S AND METHODS Question naires were mailed to 6,000 adults r and omly identified from the Finnish Population Register . Self-reported height and weight were used to calculate body mass index . Urinary frequency , nocturia , urgency , stress urinary incontinence and urgency urinary incontinence were assessed using vali date d instruments . Multivariate logistic regression analyses ( adjusted for age , comorbidity and medications , and sociodemographic , lifestyle and reproductive factors ) were performed to evaluate associations between body mass index and each symptom . RESULTS Of the 6,000 individuals approached 3,727 participated ( 62.4 % response , 53.7 % women ) . In men and women obesity was associated with nocturia ( adjusted OR 2.0 , 95 % CI 1.2 - 3.3 for men ; OR 2.4 , 95 % CI 1.5 - 3.8 for women ) but not with urgency ( adjusted OR 1.2 , 95 % CI 0.7 - 2.3 for men ; OR 1.2 , 95 % CI 0.7 - 2.1 for women ) . In men obesity was also associated with urinary frequency ( OR 2.0 , 95 % CI 1.0 - 3.9 ) , and in women it was associated with stress urinary incontinence ( OR 1.9 , 95 % CI 1.2 - 3.0 ) and urgency urinary incontinence ( OR 3.0 , 95 % CI 1.2 - 7.4 ) . 
However , the number of men with stress urinary incontinence or urgency urinary incontinence was insufficient for precise analyses . CONCLUSIONS This study extends previous research by providing symptom specific associations between obesity and urinary storage symptoms in a population based sample of men and women . Obesity impacts individual urinary storage symptoms differently and these associations may be influenced by gender\n\n\nReport 3:\nTitle: Obesity increases and physical activity decreases lower urinary tract symptom risk in older men: the Osteoporotic Fractures in Men study.\nSummary: BACKGROUND Two potential targets for preventing chronic lower urinary tract symptoms ( LUTS ) in older men are obesity and physical activity . OBJECTIVE To examine associations of adiposity and physical activity with incident LUTS in community-dwelling older men . DESIGN , SETTING , AND PARTICIPANTS The Osteoporotic Fractures in Men Study ( MrOS ) is a prospect i ve cohort of men ≥65 yr of age . MrOS participants without LUTS and a history of LUTS treatment at baseline were included in this analysis . MEASUREMENTS Adiposity was measured with body mass index ( BMI ) , physical activity with the Physical Activity Scale for the Elderly ( PASE ) and self-report of daily walking , and LUTS with the American Urological Association Symptom Index . RESULTS AND LIMITATIONS The mean age ( st and ard deviation [ SD ] ) of the 1695 participants was 72 ( 5 ) yr at baseline . At a mean ( SD ) follow-up of 4.6 ( 0.5 ) yr , 524 ( 31 % ) of men reported incident LUTS . In multivariate analyses , compared with men of normal weight at baseline ( BMI < 25 kg/m² ) , overweight ( BMI : 25.0 - 29.9 kg/m² ) and obese ( ≥30 kg/m² ) men were 29 % ( adjusted odds ratio [ OR(adj ) ] : 1.29 ; 95 % confidence interval [ CI ] , 1.00 - 1.68 ) and 41 % ( OR(adj ) : 1.41 ; 95 % CI , 1.03 - 1.93 ) more likely to develop LUTS , respectively . Men in the highest quartile of physical activity were 29 % ( OR(adj ) : 0.71 ; 95 % CI , 0.53 - 0.97 ) and those who walked daily 20 % ( OR(adj ) : 0.80 ; 95 % CI , 0.65 - 0.98 ) less likely than their sedentary peers to develop LUTS , adjusting for BMI . The homogeneous composition of MrOS potentially diminishes the external validity of these results . CONCLUSIONS In older men , obesity and higher physical activity are associated with increased and decreased risks of incident LUTS , respectively . Prevention of chronic urinary symptoms represents another potential health benefit of exercise in elderly men\n\n\nReport 4:\nTitle: BXL628, a novel vitamin D3 analog arrests prostate growth in patients with benign prostatic hyperplasia: a randomized clinical trial.\nSummary: OBJECTIVE To evaluate the effect of BXL628 , a vitamin D3 analog , on prostate volume in patients with benign prostatic hyperplasia ( BPH ) . METHODS We conducted a phase II , double blind , r and omized , placebo controlled , clinical study . Patients eligible were aged > or=50 years , had a diagnosis of BPH and a prostate volume > or=40 ml . Eligible patients were r and omized and given either BXL628 150 mcg daily or placebo for 12 weeks . All r and omized patients underwent at baseline and at the end of study pelvic MRI to measure prostatic volume , uroflowmetry ( Qmax ) , American Urological Association Symptom Index ( AUASI ) , serum PSA , testosterone , dihydrotestosterone and luteizing hormone . RESULTS A total of 119 patients were r and omized : 57 patients to BXL628 and 62 to placebo . 
The percentage change of prostate volume at 12 week was -2.90 in the BXL628 group vs. + 4.32 in the placebo group ( p-value<0.0001 ) . The estimated difference between treatments ( BXL628 minus placebo ) was -7.22 % ( 95 % confidence limit -9.27 to -5.18 ) . Considering Qmax , mean change vs. baseline was -0.30 in BXL628 vs. + 1.50 in the placebo group : this finding was not statistically significant . The mean change of the AUASI total score at final visit vs. baseline was -1.77 in the BXL628 group vs. -3.45 in the placebo group ( p = not significant ) . CONCLUSION BXL628 was able to arrest prostate growth within 12 weeks in men aged > or=50 years with prostatic volume > or=40 ml . Its unprecedented mechanism of action may offer a new opportunity for the treatment of BPH\n\n\nReport 5:\nTitle: Association between family history of benign prostatic hyperplasia and urinary symptoms: results of a population-based study.\nSummary: Baseline measurements for a population -based prospect i ve cohort study were used to assess the association between family history of enlarged prostate and urinary symptoms . Between December 1989 and March 1991 , a group of r and omly selected men aged 40 - 79 years from Olmsted County , Minnesota , was administered a previously vali date d question naire that included questions with wording close to that of the American Urological Association 's Symptom Index . A detailed family history of an enlarged prostate was obtained by personal interview , and peak urinary flow rates were measured for each participant . Of the 2,119 men , 440 ( 21 percent ) reported a family history of an enlarged prostate . The age-adjusted odds of having moderate or severe urinary symptoms were elevated among those with a family history relative to those without ( odds ratio = 1.3 , 95 percent confidence interval 1.1 - 1.7 ) . With simultaneous control for effects of age and worry about urologic function , the odds ratio remained at 1.3 ( 95 percent confidence interval 1.0 - 1.6 ) . Furthermore , this risk was greater for men with relatives diagnosed at a younger age ( odds ratio = 2.5 , 95 percent confidence interval 1.5 - 4.3 ) . Men with a family history were also 1.3 times as likely to have an impaired peak urinary flow rate . These findings suggest that men with a family history of an enlarged prostate may be at increased risk for development of symptoms and signs suggestive of benign prostatic hyperplasia and that this risk is greater in men with relatives diagnosed at a younger age . Recognition of this association may help to target early interventions and may lead to further clues about the causes of benign prostatic hyperplasia\n\n\nReport 6:\nTitle: The effect of benign lower urinary tract symptoms on subsequent prostate cancer testing and diagnosis.\nSummary: BACKGROUND Lower urinary tract symptoms ( LUTS ) are common and have been associated with the subsequent diagnosis of prostate cancer ( PCa ) in population cohorts . OBJECTIVE To determine whether the association between LUTS and PCa is due to the intensity of PCa testing after LUTS diagnosis . DESIGN , SETTING , AND PARTICIPANTS We prospect ively followed a representative , population -based cohort of 1922 men , aged 40 - 79 yr , from 1990 until 2010 with interviews , question naires , and abstract ing of medical records for prostate outcomes . Men were excluded if they had a previous prostate biopsy or PCa diagnosis . 
Self-reported LUTS was defined as an American Urological Association symptom index score > 7 ( n=621 ) . Men treated for LUTS ( n=168 ) were identified from review of medical records and /or self report . Median follow-up was 11.8 yr ( interquartile range : 10.7 - 12.3 ) . OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Associations between self-reported LUTS , or treatment for LUTS , and risk of subsequent prostate biopsy and PCa were estimated using Cox proportional hazard models . RESULTS AND LIMITATIONS Fifty-five percent of eligible men enrolled in the study . Men treated for LUTS were more likely to undergo a prostate biopsy ( hazard ratio [ HR ] : 2.4 ; 95 % confidence interval [ CI ] , 1.7 - 3.3 ) . Men younger than 65 yr who were treated for LUTS were more likely to be diagnosed with PCa ( HR : 2.3 , 95 % CI , 1.5 - 3.5 ) , while men aged > 65 yr were not ( HR : 0.89 , 95 % CI , 0.35 - 1.9 ) . Men with self-reported LUTS were not more likely to be biopsied or diagnosed with PCa . Neither definition of LUTS was associated with subsequent intermediate- to high-risk cancer . The study is limited by lack of histologic or prostate-specific antigen level data for the cohort . CONCLUSIONS These results indicate that a possible cause of the association between LUTS and PCa is increased diagnostic intensity among men whose LUTS come to the attention of physicians . Increased symptoms themselves were not associated with intensity of testing or diagnosis\n\n\nReport 7:\nTitle: Genetic sequence variants are associated with severity of lower urinary tract symptoms and prostate cancer susceptibility.\nSummary: PURPOSE While a clear heritable component underlies lower urinary tract symptoms and benign prostatic hyperplasia , few studies have identified specific genetic factors . In contrast , recent genome-wide association studies identified single nucleotide polymorphisms that increase prostate cancer risk . Some of these single nucleotide polymorphisms may also predispose to surgical intervention for benign prostatic hyperplasia . We determined whether these single nucleotide polymorphisms are also associated with lower urinary tract symptom severity and benign prostatic hyperplasia medication use . MATERIAL S AND METHODS The genotypes of 38 single nucleotide polymorphisms previously associated with prostate cancer risk were determined for 1,168 healthy white male volunteers . American Urological Association symptom index score and medication for benign prostatic hyperplasia were documented prospect ively . Statistical analyses were done to compare the frequency of the single nucleotide polymorphisms with American Urological Association symptom index and benign prostatic hyperplasia medication use . RESULTS Several single nucleotide polymorphisms , including rs2736098 on chromosome 5p15 , showed a significant relationship with benign prostatic hyperplasia medication . After adjusting for the other genetic variants , patient age and medication use , rs1571801 on chromosome 9q33.2 ( OR 1.31 , 95 % CI 1.0 - 1.74 ) and rs5945572 on chromosome Xp11 ( OR 1.28 , 95 % CI 1.04 - 1.59 ) were significantly associated with increased urinary symptoms . In contrast , rs445114 on chromosome 8q24 was marginally associated with decreased urinary symptoms ( OR 0.83 , 95 % CI 0.66 - 1.01 ) . CONCLUSIONS Of 38 single nucleotide polymorphisms that predispose to prostate cancer we identified 3 that are also associated with a well characterized lower urinary tract symptom phenotype . 
These single nucleotide polymorphisms may aid in the improved characterization of men with lower urinary tract symptoms/benign prostatic hyperplasia", "role": "user" }, { "content": "Few putative genetic risk variants have been reliably replicated across population s. We found consistent evidence of a reduced risk of LUTS associated with the common rs731236 variant of the vitamin D receptor gene in our meta-analyses .\nOnly one , a variant of the vitamin D receptor , was consistently protective across different population", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.1479
[ { "content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nThesim4algorithmusessplice-siteconsensussequencestorefinealignments.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.1360
[ { "content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nAchieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks.\nHowever, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods.\nThese methods suffer from frequent costly hardware measurements rendering them not only too time consuming but also suboptimal.\nAs such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance.\nThis solution dubbed CHAMELEON leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses on the costly samples (real hardware measurements) on representative points but also uses a domain knowledge inspired logic to improve the samples itself.\nExperimentation with real hardware shows that CHAMELEON provides 4.45×speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%.\nThe enormous computational intensity of DNNs have resulted in developing either hand-optimized kernels, such as NVIDIA cuDNN or Intel MKL that serve as backend for a variety of programming environment such as TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2017) .\nHowever, the complexity of the tensor operations in DNNs and the volatility of algorithms, which has led to unprecedented rate of innovation (LeCun, 2019) , calls for developing automated compilation frameworks.\nTo imitate or even surpass the success of hand-optimized libraries, recent research has developed stochastic optimization passes for general code: STOKE (Schkufza et al., 2013) , and neural network code, TVM (Chen et al., 2018a) and TensorComprehensions (Vasilache et al., 2018) .\nTVM and TensorComprehensions are based on random or genetic algorithms to search the space of optimized code for neural networks.\nAutoTVM (Chen et al., 2018b ) builds on top of TVM and leverage boosted trees (Chen & Guestrin, 2016) as part of the search cost model to avoid measuring the fitness of each solution (optimized candidate neural network code), and instead predict its fitness.\nHowever, even with these innovations the optimizing compilation time can be around 10 hours for ResNet-18 (He et al., 2016) , and even more for deeper or wider networks.\nSince the general objective is to unleash new possibilities by developing automatic optimization passes, long compilation time hinders innovation and could put the current solutions in a position of questionable utility.\nTo solve this problem, we first question the very statistical guarantees which the aforementioned optimization passes rely on.\nThe current approaches are oblivious to the patterns in the design space of schedules that are available for exploitation, and causes inefficient search or even converges to solutions that may even be suboptimal.\nAlso, we notice that current approaches rely on greedy sampling that neglects the distribution of the candidate solutions (configurations).\nWhile greedy sampling that passively filter samples based on the fitness estimations from the cost models work, many of their hardware measurements (required for 
optimization) tend to be redundant and wasteful.\nMoreover, we found that current solutions that rely on greedy sampling lead to significant fractions of the candidate configurations being redundant over iterations, and that any optimizing compiler are prone to invalid configurations which significantly prolongs the optimization time.\nAs such, this work sets out to present an Adaptive approach to significantly reduce the compilation time and offer automation while avoiding dependence to hand-optimization, enabling far more diverse tensor operations in the next generation DNNs.\nWe tackle this challenge from two fronts with the following contributions:\n(1) Devising an Adaptive Exploration module that utilizes reinforcement learning to adapt to unseen design space of new networks to reduce search time yet achieve better performance.\n(2) Proposing an Adaptive Sampling algorithm that utilizes clustering to adaptively reduce the number of costly hardware measurements, and devising a domain-knowledge inspired Sample Synthesis to find configurations that would potentially yield better performance.\nReal hardware experimentation with modern DNNs (AlexNet, VGG-16, and ResNet-18) on a highend GPU (Titan Xp), shows that the combination of these two innovations, dubbed CHAMELEON, yields 4.45×speedup over the leading framework, AutoTVM.\nCHAMELEON is anonymously available in https://github.com/anony-sub/chameleon, which will be made public.\nWe present CHAMELEON to allow optimizing compilers to adapt to unseen design spaces of code schedules to reduce the optimization time.\nThis paper is also an initial effort to bring Reinforcement Learning to the realm of optimizing compilers for neural networks, and we also develop an Adaptive Sampling with domain-knowledge inspired Sample Synthesis to not only reduce the number of samples required to navigate the design space but also augment its quality in terms of fitness.\nExperimentation with real-world deep models shows that CHAMELEON not only reduces the time for compilation significantly, but also improves the quality of the code.\nThis encouraging result suggests a significant potential for various learning techniques to optimizing deep learning models.", "role": "user" }, { "content": "Reinforcement learning and Adaptive Sampling for Optimized Compilation of Deep Neural Networks.", "role": "assistant" } ]
science.pico_ner
science.pico_ner.3294
[ { "content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nIntramuscular compared to intravenous midazolam for paediatric sedation : A study on cardiopulmonary safety and effectiveness . BACKGROUND Sedation in children remains a controversial issue in emergency departments ( ED ) . Midazolam , as a benzodiazepine is widely used for procedural sedation among paediatrics . We compared the effectiveness and safety of two forms of midazolam prescription ; intramuscular ( IM ) and intravenous ( IV ) . PATIENTS AND METHODS A cohort study was conducted on two matched groups of 30 children referred to our ED between 2010 and 2011 . The first group received IM midazolam ( 0.3 mg/kg ) and the second group received IV midazolam ( 0.15 mg/kg ) for sedation . For evaluating effectiveness , sedation , irritation and cooperation score were followed every 15 min for 60 min and for safety assessment , vital signs and O 2 saturation were observed . RESULTS Mean age was 6.18 ± 2.88 years and 31 patients ( 51.7 % ) were male . All patients were sedated completely after using first dose . There was an overall complication rate of 68.3 % . 35 ( 58.3 % ) patients presented euphoria as the most common complication , but there was no statistical difference between the two groups ( P = 0.396 ) . Cases who received IV midazolam became sedated faster than those received IM midazolam ( P > 0.001 ) . The vital signs including heart rate , respiratory rate , systolic blood pressure and O 2 saturation changed significantly between and within groups during the sedation ( P < 0.05 ) . CONCLUSION Both forms of midazolam , IM and IV , are effective and safe for paediatric sedation in ED . Although the sedative with IV form might appear sooner , IM form of midazolam can be effectively used in patient with limited IV access . Patients are better to observe closely for psychological side-effects .", "role": "user" }, { "content": "{\"Intervention\": [\"IV midazolam\", \"IM midazolam\", \"Intramuscular compared to intravenous midazolam\", \"benzodiazepine\", \"midazolam prescription ; intramuscular ( IM ) and intravenous ( IV ) .\", \"Midazolam\"], \"Outcome\": [\"were sedated\", \"sedation\", \"irritation and cooperation score\", \"including heart rate , respiratory rate , systolic blood pressure and O 2 saturation\", \"are effective and safe\", \"an overall complication rate\", \"presented euphoria\", \"became sedated\", \"safety assessment , vital signs and O 2 saturation\", \"effectiveness\"], \"Participant\": [\"ED between 2010 and 2011\", \"31\", \"children\", \"Mean age was 6.18 \\u00b1 2.88 years\", \"male\", \"30\"]}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.1911
[ { "content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Word Embeddings: A Survey \nChapter Title: Log-linear Model, Hierarchical Softmax \n1. Abstract of Cited Paper (BIB001): The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models. \n2. Abstract of Cited Paper (BIB002): Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models. \n3. Abstract of Cited Paper (BIB003): We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance. \n4. Abstract of Cited Paper (BIB004): We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. 
models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field. \n5. Abstract of Cited Paper (BIB005): The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. \n6. Abstract of Cited Paper (BIB006): Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. For example, the male/female relationship is automatically learned, and with the induced vector representations, “King Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems. \n7. Abstract of Cited Paper (BIB007): We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. \n8. 
Abstract of Cited Paper (BIB008): This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore~CPU, and classify half a million sentences among~312K classes in less than a minute.\n\nReview Chapter:", "role": "user" }, { "content": "Trained on DistBelief, which is the precursor to TensorFlow ). Reports better results than SGNS. Embeddings are also reported to be good for composition (into sentence, document embeddings). partition function or normalization factor required by softmax output layers 5 , such as those in neural network language models (NNLMs). Using a concept called importance sampling ), they managed to bypass calculation of the costly normalization factor, estimating instead gradients in the neural net using an auxiliary distribution (e.g. old n-gram language models) and sampling random examples from the vocabulary. They report gains of a factor of 19 in training time, with respect to the previous model, with similar scores (as measured by perplexity). A little bit later, Morin and Bengio 6 (2005) have 5 Softmax output layers are used when you train neural networks that need to predict multiple outputs, in this case the probability of each word in the vocabulary being the next word, given the context. 6 To our knowledge, this is the first time the term word suggested yet another approach for speeding up training and testing times, using a Hierarchical Softmax layer. They realized that, if one arranged the output words in a hierarchical binary tree structure, one could use, as a proxy for calculating the full distribution for each word, the probability that, at each node leading to the word, the correct path is chosen. Since the height of a binary tree over a set V of words is |V |/ log(|V |), this could yield exponential speedup. In practice, gains were less pronounced, but they still managed gains of a factor of 3 for training times and 100 for testing times, w.r.t. the model using importance sampling. BIB002 were probably the first authors to suggest the Log-bilinear Model 7 (LBL), embedding was used in this context.which has been very influential in later works as well. Another article by BIB003 can be seen as an extension of the LBL BIB002 ) model, using a slightly modified version of the hierarchical softmax scheme proposed by BIB001 , yielding a socalled Hierarchical Log-bilinear Model (HLBL). Whereas BIB001 used a prebuilt word tree from WordNet, BIB003 learned such a tree specifically for the task at hand. In addition to other minor optimizations, they reports large gains over previous LBL models (200 times as fast) and conclude that using purpose-built word trees was key to such results. Somewhat parallel to the works just mentioned, BIB004 approached the problem from a slightly different angle; they were the first to design model with the specific intent of learning embeddings only. In previous models, embeddings were just treated as an interesting by product of the main task (usually language models). In addition to this, they also introduced two improvements worth mentioning: they used words' full contexts (before and after) to predict the centre word 8 . 
Perhaps most importantly, they introduced a more clever way of leveraging unlabelled data for producing good embeddings: instead of training a language model (which is not the objective here), they expanded the dataset with false or negative examples 9 and simply trained a model that could tell positive (actually occurring) from false examples. 10 Here we should mention two specific contributions by Mikolov et al. (2009; , which have been used in later models. In the first work, (Mikolov et al. (2009) ) a two-step method for bootstraping a NNLM was suggested, whereby a first model was trained using a single word as context. Then, the full model (with larger context) was trained, using as initial embeddings those found by the first step. In (Mikolov et al. (2010) ), the idea of using Recurrent Neural Networks (RNNs) to train lanpendix A for more information. 8 Previous models focused on building language models, so they just used the left context. 9 I.e. sequences of words with the actual centre word replaced by a random word from the vocabulary. 10 This has been BIB006 ) called negative sampling and speeds up training because one can avoid costly operations such as calculating cross-entropies and softmax terms. guage models is first suggested; the argument is that RNNs keep state in the hidden layers, helping the model remember arbitrarily long contexts, and one would not need to decide, beforehand, how many words to use as context in either side. In 2012 Mnih and Teh have suggested further efficiency gains to the training of NNLMs. By leveraging Noise-contrastive Estimation (NCE). 11 NCE BIB005 ) is a way of estimating probability distributions by means of binary decisions over true/false examples. 12 . This has enabled the authors to further reduce training times for NNLMs. In addition to faster training times, they also report better perplexity score w.r.t. previous neural language models. It could be said that, in 2013, with BIB007 BIB008 BIB006 ) the NLP community have again (the main other example being BIB004 ) had its attention drawn to word embeddings as a topic worthy of research in and of itself. These authors analyzed the embeddings obtained with the training of a recurrent neural network model (Mikolov et al. (2010) ) with an eye to finding possible syntactic regularities possibly encoded in the vectors. Perhaps surprisingly, event for the authors themselves, they did find not only syntactic but also semantic regularities in the data. Many common relationships such as male-female, singularplural, etc actually correspond to arithmetical operations one can perform on word vectors (see Figure 1 for an example). A little later, in 2013b and 2013c, Mikolov et al. have introduced two models for learning embeddings, namely the continuous bag-of-words (CBOW) and skip-gram (SG) models. Both of these models are log-linear models (as seen in previous works) and use the two-step procedure (Mikolov et al. (2009) ) for training. The main difference between CBOW and SG lies in the loss function used to update the model; while CBOW trains a model that aims to predict the centre word based upon its context, in SG the roles are reversed, and the centre word is, instead, used to predict each word appearing in its context. The first versions of CBOW and SG BIB008 ) use hierarchical softmax layers, while the variants 13 suggested in BIB006 use negative sampling instead. 
Furthermore, the variants introduced subsampling of frequent words, to reduce the amount of noise due to overly frequent words and accelerate training. These variants were shown to perform better, with faster training times. Among the most recent contributions to prediction-based models for word embeddings one can cite the two articles BIB009 and BIB009 ) usually cited as the sources of the FastText 14 toolkit, made available by Facebook, Inc. They have suggested an improvement over the skip-gram model from BIB006 , whereby one learns not word embeddings, but n-gram embeddings (which can be composed to form words). The rationale behind this decision lies in the fact that languages that rely heavily on morphology and compositional word-building (such as Turkish, Finnish and other highly inflexional languages) have some information encoded in the word parts themselves, which can be used to help generalize to unseen words. They report better results w.r.t. SGNS (skip-gram variant with negative sampling) BIB006 ), particularly in languages such as German, French and Spanish. A structured comparison of prediction-based models for building word embeddings can be seen on Table 1 .", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.1402
[ { "content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nStructure-activity relationships of glucosamine-derived glycerolipids: the role of the anomeric linkage, the cationic charge and the glycero moiety on the antitumor activity. The potent antitumor activity of 1-O-hexadecyl-2-O-methyl-3-O-(2'-amino-2'-deoxy-β-D-glucopyranosyl)-sn-glycerol (1) was previously shown to arise through an apoptosis-independent pathway. Here, a systematic structure-activity study in which the effects of the anomeric linkage, the cationic charge and the glycero moiety on the antitumor activity is described. Eight analogues of 1 were synthesized, and their antitumor activity against breast (JIMT1 and BT549), pancreas (MiaPaCa2) and prostate (DU145, PC3) cancer was determined. 1-O-Hexadecyl-2-O-methyl-3-O-(2'-amino-2'-deoxy-α-D-glucopyranosyl)-sn-glycerol (2) consistently displayed the most potent activity against all five cell lines with CC(50) values in the range of 6-10 μM. However, replacement of the O-glycosidic linkage by a thioglycosidic linkage or replacement of the amino group by an azide or guanidino group leads to a threefold or greater decrease in potency. The glycero moiety also contributes to the overall activity of 1 and 2 but its effects are of lesser importance. Investigation into the mode of action of this class of compounds revealed that, in agreement with previous findings, the cytotoxic effects arise through induction of large acid vacuoles.", "role": "user" }, { "content": "{\"Chemical\": [\"1-O-Hexadecyl-2-O-methyl-3-O-(2\\u0027-amino-2\\u0027-deoxy-\\u03b1-D-glucopyranosyl)-sn-glycerol\", \"O\", \"thioglycosidic\", \"guanidino\", \"amino\", \"azide\", \"glucosamine\", \"1-O-hexadecyl-2-O-methyl-3-O-(2\\u0027-amino-2\\u0027-deoxy-\\u03b2-D-glucopyranosyl)-sn-glycerol\", \"glycero\"]}", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.720
[ { "content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: COVID-19: Novel Coronavirus May Lead to A Cardiac Arrest or heart attack\n\nResearch Abstract:\nBACKGROUND: A novel coronavirus disease (COVID-19) in Wuhan has caused an outbreak and become a major public health issue in China and great concern from international community.\nMyocarditis and myocardial injury were suspected and may even be considered as one of the leading causes for death of COVID-19 patients.\nTherefore, we focused on the condition of the heart, and sought to provide firsthand evidence for whether myocarditis and myocardial injury were caused by COVID-19.\nMETHODS: We enrolled patients with confirmed diagnosis of COVID-19 retrospectively and collected heart-related clinical data, mainly including cardiac imaging findings, laboratory results and clinical outcomes.\nSerial tests of cardiac markers were traced for the analysis of potential myocardial injury/myocarditis.\nRESULTS: 112 COVID-19 patients were enrolled in our study.\nThere was evidence of myocardial injury in COVID-19 patients and 14 (12.5%) patients had presented abnormalities similar to myocarditis.\nMost of patients had normal levels of troponin at admission, that in 42 (37.5%) patients increased during hospitalization, especially in those that died.\nTroponin levels were significantly increased in the week preceding the death.\n15 (13.4%) patients have presented signs of pulmonary hypertension.\nTypical signs of myocarditis were absent on echocardiography and electrocardiogram.\nCONCLUSIONS: The clinical evidence in our study suggested that myocardial injury is more likely related to systemic consequences rather than direct damage by the 2019 novel coronavirus.\nThe elevation in cardiac markers was probably due to secondary and systemic consequences and can be considered as the warning sign for recent adverse clinical outcomes of the patients.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"CONCLUSIONS: The clinical evidence in our study suggested that myocardial injury is more likely related to systemic consequences rather than direct damage by the 2019 novel coronavirus.\", \"The elevation in cardiac markers was probably due to secondary and systemic consequences and can be considered as the warning sign for recent adverse clinical outcomes of the patients.\"]}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.3597
[ { "content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nIn December 2019, the novel coronavirus pneumonia (COVID-19) occurred in Wuhan, Hubei Province, China. The epidemic quickly broke out and spread throughout the country. Now it becomes a pandemic that affects the whole world. In this study, three models were used to fit and predict the epidemic situation in China: a modified SEIRD (Susceptible-Exposed-Infected-Recovered-Dead) dynamic model, a neural network method LSTM (Long Short-Term Memory), and a GWR (Geographically Weighted Regression) model reflecting spatial heterogeneity. Overall, all the three models performed well with great accuracy. The dynamic SEIRD prediction APE (absolute percent error) of China had been </= 1.0% since Mid-February. The LSTM model showed comparable accuracy. The GWR model took into account the influence of geographical differences, with R2 = 99.98% in fitting and 97.95% in prediction. Wilcoxon test showed that none of the three models outperformed the other two at the significance level of 0.05. The parametric analysis of the infectious rate and recovery rate demonstrated that China's national policies had effectively slowed down the spread of the epidemic. Furthermore, the models in this study provided a wide range of implications for other countries to predict the short-term and long-term trend of COVID-19, and to evaluate the intensity and effect of their interventions.", "role": "user" }, { "content": "[\"Epidemic Forecasting\", \"Prevention\"]", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.2905
[ { "content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: Taking a hot bath will not prevent you from catching COVID-19.\n\nResearch Abstract:\nAirborne transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) via air-conditioning systems poses a significant threat for the continued escalation of the current coronavirus disease (COVID-19) pandemic.\nConsidering that SARS-CoV-2 cannot tolerate temperatures above 70 °C, here we designed and fabricated efficient air disinfection systems based on heated nickel (Ni) foam to catch and kill SARS-CoV-2.\nVirus test results revealed that 99.8% of the aerosolized SARS-CoV-2 was caught and killed by a single pass through a Ni-foam-based filter when heated up to 200 °C.\nAdditionally, the same filter was also used to catch and kill 99.9% of Bacillus anthracis, an airborne spore.\nThis study paves the way for preventing transmission of SARS-CoV-2 and other highly infectious airborne agents in closed environments.\nOne Sentence Summary Heated Ni-foam filters are capable of effectively catching and killing airborne SARS-CoV-2 and Bacillus anthracis spores.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.8658
[ { "content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: Exercise Stress Test Amplifies Genotype-Phenotype Correlation in the LQT1 and LQT2 Forms of the Long-QT Syndrome\nSummary: Background —Experimental studies suggest that the interval between peak and end of T wave ( Tpe ) in transmural ECGs reflects transmural dispersion of repolarization ( TDR ) , which is amplified by & bgr;-adrenergic stimulation in the LQT1 model . In 82 patients with genetically identified long-QT syndrome ( LQTS ) and 33 control subjects , we examined T-wave morphology and various parameters for repolarization in 12-lead ECGs including corrected QT ( QTc ; QT/R-R1/2 ) and corrected Tpe ( Tpec ; Tpe/R-R1/2 ) before and during exercise stress tests . Methods and Results —Under baseline conditions , LQT1 ( n=51 ) showed 3 cardinal T-wave patterns ( broad-based , normal-appearing , late-onset ) and LQT2 ( n=31 ) 3 patterns ( broad-based , bifid with a small or large notch ) . The QTc and Tpec were 510±68 ms and 143±53 ms in LQT1 and 520±61 ms and 195±69 ms in LQT2 , respectively , which were both significantly larger than those in control subjects ( 402±36 ms and 99±36 ms ) . Both QTc and Tpec were significantly prolonged during exercise in LQT1 ( 599±54 ms and 215±46 ms ) with morphological change into a broad-based T-wave pattern . In contrast , exercise produced a prominent notch on the descending limb of the T wave , with no significant changes in the QTc and Tpec ( 502±82 ms and 163±86 ms : n=19 ) in LQT2 . Conclusions —Tpe interval increases during exercise in LQT1 but not in LQT2 , which may partially account for the finding that fatal cardiac events in LQT1 are more often associated with exercise\n\n\nReport 2:\nTitle: The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.\nSummary: AIMS To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality , as well as in cardiac morbidity , in a general population . METHODS AND RESULTS The QT interval was measured in all leads from a st and ard 12-lead ECG in a r and om sample of 1658 women and 1797 men aged 30 - 60 years . QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads . All cause mortality over 13 years , and cardiovascular mortality as well as cardiac morbidity over 11 years , were the main outcome parameters . Subjects with a prolonged QT interval ( 430 ms or more ) or prolonged QT interval dispersion ( 80 ms or more ) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms , or whose QT interval dispersion was less than 30 ms . Cardiovascular death relative risk ratios , adjusted for age , gender , myocardial infa rct , angina pectoris , diabetes mellitus , arterial hypertension , smoking habits , serum cholesterol level , and heart rate were 2.9 for the QT interval ( 95 % confidence interval 1.1 - 7.8 ) and 4.4 for QT interval dispersion ( 95 % confidence interval 1.0 - 19 - 1 ) . 
Fatal and non-fatal cardiac morbidity relative risk ratios were similar , at 2.7 ( 95 % confidence interval 1.4 - 5.5 ) for the QT interval and 2.2 ( 95 % confidence interval 1.1 - 4.0 ) for QT interval dispersion . CONCLUSION Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years", "role": "user" }, { "content": "This meta- analysis showed that Tpeak-Tend interval is significant higher in individuals who are at elevated risk of adverse events in congenital LQTS , offering incremental value for risk stratification", "role": "assistant" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.756
[ { "content": "Review the following segment from a research document in AI. Answer the subsequent query based solely on the provided information. Limit your response to the answer itself.\n\nResearch Segment: To evaluate Self-Instruct empirically, we run this framework on GPT3 Brown et al. (2020), which is a vanilla LM (§4). The iterative Self-Instruct process on this model leads to about 52K instructions, paired with about 82K instance inputs and target outputs. We observe that the resulting data provides a diverse range of creative tasks and over 50% of them have less than 0.3 ROUGE-L overlaps with the seed instructions (§4.2). On this resulting data, we build GPT3_Self-Inst by fine-tuning GPT3 (i.e., the same model used for generating the instructional data). We evaluate GPT3_Self-Inst in comparison to various other models on both typical NLP tasks included in Super-NaturalInstructions Wang et al. (2022), and a set of new instructions that are created for novel usage of instruction-following models (§5). The SuperNI results indicate that GPT3_Self-Inst outperforms GPT3 (the original model) by a large margin (+33.1%) and nearly matches the performance of InstructGPT_001. Moreover, our human evaluation on the newly-created instruction set shows that GPT3_Self-Inst demonstrates a broad range of instruction-following ability, outperforming models trained on other publicly available instruction datasets and leaving only a 5% gap behind InstructGPT_001. Additionally, to compare Self-Instruct training with other publicly available instruction tuning data, we further finetune the GPT3 model with data from PromptSource and SuperNI, which are used to train the T0 and Tk-Instruct models. We call them T0 training and SuperNI training for short, respectively. To save the training budget, we sampled 50K instances (but covering all their instructions) for each dataset, which has a comparable size to the instruction data we generated. Based on the findings from Wang et al. (2022) and our early experiments, reducing the number of instances per task does not degrade the model's generalization performance to unseen tasks. We make the following observations from the results in Table 3. Self-Instruct boosts the instruction-following ability of GPT3 by a large margin. The vanilla GPT3 model basically cannot follow human instructions at all. Upon manual analysis, we find that it usually generates irrelevant and repetitive text, and does not know when to stop generation. Compared with other models that are not specifically trained for SuperNI, GPT3_Self-Inst achieves better performance than T0 or the GPT3 finetuned on the T0 training set, which takes tremendous human labeling efforts. Notably, GPT3_Self-Inst also nearly matches the performance of InstructGPT_001, which is trained with private user data and human-annotated labels. Models trained on the SuperNI training set still achieve better performance on its evaluation set, which we attribute to the similar instruction style and formatting. However, we show that Self-Instruct still brings in additional gains when combined with the SuperNI training set, proving its value as complementary data. Figure 5 provides the performance of the GPT3 model and its instruction-tuned counterparts on this newly written instruction set. 
As anticipated, the vanilla GPT3 language model is largely unable to respond to instructions, and all instruction-tuned models demonstrate comparatively higher performance. Nonetheless, GPT3_Self-Inst (i.e., the GPT3 model fine-tuned with Self-Instruct) outperforms those counterparts trained on T0 or SuperNI by a large margin, demonstrating the value of the generated data despite the noise. Compared with InstructGPT_001 (cf. footnote 1), GPT3_Self-Inst is quite close in performance: if we count an acceptable response with minor imperfections (Rating-3) as valid, GPT3_Self-Inst is only 5% behind InstructGPT_001. Lastly, our evaluation confirms the impressive instruction-following ability of the InstructGPT_002 and InstructGPT_003 models. Although there are many factors behind this success, we conjecture that future work can largely benefit from improving the quality of our generated data by using human annotators or training a reward model to select better generations, similar to the algorithm used in Ouyang et al. (2022). We introduce Self-Instruct, a task-agnostic method to improve the instruction-following capabilities of language models via its own generation of instruction data (instruction, input, and output samples) and bootstrapping with it. Our method conducts instruction-tuning of the original model on the pruned subset of generated samples. On experimenting with vanilla GPT3, we observe a 33% absolute improvement over the original model on Super-NaturalInstructions. This performance is on par with InstructGPT_001, which is trained with private user data and expensive human annotations. Furthermore, we curate a set of expert-written instructions for novel tasks. Human evaluation on this set shows that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT_001. We hope Self-Instruct can serve as the first step to align pretrained language models to follow human instructions, and future work can build on top of this data to improve instruction-following models.\n\nInquiry: Was the performance difference between Self-Instruct training and SuperNI training significant?", "role": "user" }, { "content": "While it does appear as though there is a measurable performance improvement from SuperNI to Self-Instruct, quantifying the impact and magnitude of that improvement is not straightforward. Evaluations with ROUGE-L scores find that the absolute difference between both these methods is not very high, though additional information and context may be needed to judge the meaning of the absolute difference between these numbers. The authors do claim that they outperform T0 or SuperNI by a large margin, which is strong evidence to suggest that the difference was indeed significant, but such claims must be taken with a grain of salt since authors are usually incentivized to show their models are the best.", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.700
[ { "content": "Examine the provided text and determine the interactions between chemical entities and genes. Focus on identifying any connections present.\n\nPresent your findings in a structured JSON format. Each relationship should be represented as an array entry. If no interactions are detected, return an empty array.\n\nText Content:\n\nSulindac sulfide inhibits epidermal growth factor-induced phosphorylation of extracellular-regulated kinase 1/2 and Bad in human colon cancer cells. Colorectal cancer is the second leading cause of cancer death in the United States. Nonsteroidal anti-inflammatory drugs including sulindac are promising chemopreventive agents for colorectal cancer. Sulindac and selective cyclooxygenase (COX) -2 inhibitors cause regression of colonic polyps in familial polyposis patients. Sulindac induces apoptotic cell death in cancer cells in vitro and in vivo. In tumor cells, activation of extracellular-regulated kinase (ERK) 1/2 results in phosphorylation of several ERK1/2 effectors, including the proapoptotic protein Bad. Phosphorylation of Ser112 by ERK1/2 inactivates Bad and protects the tumor cell from apoptosis. Sulindac metabolites and other nonsteroidal anti-inflammatory drugs selectively inhibit ERK1/2 phosphorylation in human colon cancer cells. In this study we show that epidermal growth factor (EGF) strongly induces phosphorylation of ERK1/2 and Bad in HT29 colon cancer cells. EGF-stimulated phosphorylation of ERK and Bad is blocked by pretreatment with U0126, a selective MAP kinase kinase (MKK) 1/2 inhibitor. Similarly, pretreatment with sulindac sulfide blocks the ability of EGF to induce ERK1/2 and Bad phosphorylation, but also down-regulates total Bad but not ERK1/2 protein levels. The ability of sulindac to block ERK1/2 signaling by the EGF receptor may account for at least part of its potent growth-inhibitory effects against cancer cells.", "role": "user" }, { "content": "[[\"Sulindac\", \"INHIBITOR\", \"cyclooxygenase (COX) -2\"], [\"Sulindac sulfide\", \"INHIBITOR\", \"Bad\"], [\"U0126\", \"INHIBITOR\", \"ERK\"], [\"sulindac sulfide\", \"REGULATOR\", \"ERK1/2\"], [\"sulindac sulfide\", \"INDIRECT-DOWNREGULATOR\", \"Bad\"], [\"U0126\", \"INHIBITOR\", \"MAP kinase kinase (MKK) 1/2\"], [\"sulindac sulfide\", \"INHIBITOR\", \"EGF\"], [\"sulindac sulfide\", \"INHIBITOR\", \"Bad\"], [\"sulindac\", \"INDIRECT-DOWNREGULATOR\", \"ERK1/2\"], [\"sulindac\", \"INHIBITOR\", \"EGF receptor\"], [\"Sulindac\", \"DOWNREGULATOR\", \"ERK1/2\"], [\"U0126\", \"INHIBITOR\", \"Bad\"], [\"Sulindac sulfide\", \"INDIRECT-REGULATOR\", \"extracellular-regulated kinase 1/2\"]]", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.1491
[ { "content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: Ginger can treat a coronavirus infection\n\nResearch Abstract:\nThe severity of coronavirus disease 2019 (COVID-19) infection is quite variable and the manifestations varies from asymptomatic disease to severe acute respiratory infection.\nFever, dry cough, dyspnea, myalgia, fatigue, loss of appetite, olfactory and gustatory dysfunctions are the most prevalent general symptoms.\nDecreased immune system cells such as suppressed regulatory T cells, cytotoxic and helper T cells, natural killer cells, monocytes/macrophages and increased proinflammatory cytokines are the characteristic features.\nCompounds derived from Allium sativum (garlic) have the potential to decrease the expression of proinflammatory cytokines and to reverse the immunological abnormalities to more acceptable levels.\nAllium sativum is suggested as a beneficial preventive measure before being infected with SARS-CoV-2 virus.\nAllium sativum is a functional food well-known for its immunomodulatory, antimicrobial, antiinflammatory, antimutagenic, antitumor properties.\nIts antiviral efficiency was also demonstrated.\nSome constituents of this plant were found to be active against protozoan parasites.\nWithin this context, it appears to reverse most immune system dysfunctions observed in patients with COVID-19 infection.\nThe relations among immune system parameters, leptin, leptin receptor, adenosin mono phosphate-activated protein kinase, peroxisome proliferator activated receptor-gamma have also been interpreted.\nLeptin's role in boosting proinflammatory cytokines and in appetite decreasing suggest the possible beneficial effect of decreasing the concentration of this proinflammatory adipose tissue hormone in relieving some symptoms detected during COVID-19 infection.\nIn conclusion, Allium sativum may be an acceptable preventive measure against COVID-19 infection to boost immune system cells and to repress the production and secretion of proinflammatory cytokines as well as an adipose tissue derived hormone leptin having the proinflammatory nature.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
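The sample rows above all follow the same three-column schema (dataset, id, messages), and the page header notes the data is auto-converted to Parquet, so the rows can be pulled down programmatically. The snippet below is a minimal loading sketch, not an official usage example: the repo id org/mixed-science-instructions is a placeholder for the dataset's real Hub identifier (which this page does not show), and the train split name is likewise an assumption.

```python
# Minimal loading sketch. Assumptions, not confirmed by this page:
# "org/mixed-science-instructions" is a placeholder repo id, and a
# "train" split exists in the Parquet export.
from datasets import load_dataset

ds = load_dataset("org/mixed-science-instructions", split="train")

# Each row carries the three columns shown in the viewer:
#   dataset  -> source task name, e.g. "science.chemprot_re"
#   id       -> task-qualified example id, e.g. "science.chemprot_re.700"
#   messages -> a two-turn chat: one "user" prompt, one "assistant" answer
for row in ds.select(range(3)):
    user = next(m["content"] for m in row["messages"] if m["role"] == "user")
    gold = next(m["content"] for m in row["messages"] if m["role"] == "assistant")
    print(row["dataset"], row["id"], user[:80].replace("\n", " "), sep=" | ")
```

Because the answer format varies by source task, filtering on the dataset column first (e.g. ds.filter(lambda r: r["dataset"] == "science.chemprot_re")) keeps downstream parsing simple.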
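Two of the source tasks sampled above expect structured answers rather than free text: science.chemprot_re returns a JSON array of [entity, relation, entity] triples, and science.healthver_entailment returns a JSON object. Note that in the healthver row the prompt asks for "decision" and "justification" keys while the gold answer uses "verdict" and "evidence", so a validator should follow the gold format rather than the prompt wording. The sketch below infers these shapes from this page's sample rows only; they are assumptions, and other rows in the full dataset may deviate.

```python
# Parse the JSON-style gold answers seen in the sample rows.
# Shapes are inferred from this page's examples only and may not
# hold for every row in the full dataset.
import json

def parse_gold(task: str, answer: str):
    if task == "science.chemprot_re":
        # e.g. [["Sulindac", "INHIBITOR", "cyclooxygenase (COX) -2"], ...]
        triples = json.loads(answer)
        if not all(isinstance(t, list) and len(t) == 3 for t in triples):
            raise ValueError(f"unexpected triple shape in {task}")
        return triples
    if task == "science.healthver_entailment":
        # e.g. {"verdict": "NEI", "evidence": []}
        obj = json.loads(answer)
        if not {"verdict", "evidence"} <= set(obj):
            raise ValueError(f"missing keys in {task} answer")
        return obj
    return answer  # free-text tasks such as science.qasa_abstractive_qa
```

Free-text tasks such as science.qasa_abstractive_qa pass through unchanged and are better scored with overlap metrics like ROUGE-L than with exact matching.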