Dataset Viewer
Auto-converted to Parquet

Each row in the preview has three columns:

dataset: stringclasses, 45 distinct values (the source task, e.g. science.pico_ner)
id: stringlengths, 17 to 64 characters (a unique record identifier)
messages: listlengths, always 2 (one user turn followed by one assistant turn)
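Since the preview is backed by Parquet, the records can be pulled down with the Hugging Face datasets library. A minimal sketch follows; the repository id and split name are placeholders, since this excerpt does not name the dataset:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual "owner/name" of this dataset.
ds = load_dataset("owner/dataset-name", split="train")

row = ds[0]
print(row["dataset"])   # source task, e.g. "science.pico_ner"
print(row["id"])        # record identifier, 17 to 64 characters long
print([m["role"] for m in row["messages"]])  # always ["user", "assistant"]
```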
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.29693
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nApplications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.\n\nCited Abstract(s):\n[1]: We describe Charm++, an object oriented portable parallel programming language based on C++. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of C++ with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latency-tolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects, all of which are more difficult problems in a parallel context. Charm++ provides specific modes for sharing information between parallel objects. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm.\n\n[2]: Malleable jobs are parallel programs that can change the number of processors on which they are executing at run time in response to an external command. One of the advantages of such jobs is that a job scheduler for malleable jobs can provide improved system utilization and average response time over a scheduler for traditional jobs. In this paper, we present a programming system for creating malleable jobs that is more general than other current malleable systems. In particular, it is not limited to the master-worker paradigm or the Fortran SPMD programming model, but can also support general purpose parallel programs including those written in MPI and Charm++, and has built-in migration and load-balancing, among other features.\n\n[3]: Efficient management of distributed resources, under conditions of unpredictable and varying workload, requires enforcement of dynamic resource management policies. 
Execution of such policies requires a relatively fine-grain control over the resources allocated to jobs in the system. Although this is a difficult task using conventional job management and program execution models, reconfigurable applications can be used to make it viable. With reconfigurable applications, it is possible to dynamically change, during the course of program execution, the number of concurrently executing tasks of an application as well as the resources allocated. Thus, reconfigurable applications can adapt to internal changes in resource requirements and to external changes affecting available resources. In this paper, we discuss dynamic management of resources on distributed systems with the help of reconfigurable applications. We first characterize reconfigurable parallel applications. We then present a new programming model for reconfigurable applications and the Distributed Resource Management System (DRMS), an integrated environment for the design, development, execution, and resource scheduling of reconfigurable applications. Experiments were conducted to verify the functionality and performance of application reconfiguration under DRMS. A detailed breakdown of the costs in reconfiguration is presented with respect to several different applications. Our results indicate that application reconfiguration is effective under DRMS and can be beneficial in improving individual application performance as well as overall system performance. We observe a significant reduction in average job response time and an improvement in overall system utilization.", "role": "user" }, { "content": "Moreira and Naik [3] propose a technique for dynamic resource management on distributed systems using a checkpointing framework called Distributed Resource Management Systems (DRMS). The framework supports jobs that can change their active number of tasks during program execution, map the new set of tasks to execution units, and redistribute data among the new set of tasks. DRMS does not make reconfiguration decisions based on application performance however, and it uses file-based checkpointing for data redistribution. A more recent work by Kale [2] achieves reconfiguration of MPI-based message passing programs. However, the reconfiguration is achieved using Adaptive MPI (AMPI), which in turn relies on Charm++ [1] for the processor virtualization layer, and requires that the application be run with many more threads than processors.", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.888
[ { "content": "Extract drug combinations from this sentence and classify each combination into one of these categories:\n- POS: Positive interaction between drugs\n- NEG: Negative interaction between drugs\n- COMB: Drugs used together with unknown interaction type\n\nOutput only a JSON array where each entry is [drug_1, drug_2, ..., drug_n, category].\n\nparagraph: A rapid stability-indicating, fused-core HPLC method for simultaneous determination of β-artemether and lumefantrine in anti-malarial fixed dose combination products. artemisinin-based fixed dose combination (FDC) products are recommended by World Health Organization (WHO) as a first-line treatment. However , the current artemisinin FDC products , such as β-artemether and lumefantrine , are inherently unstable and require controlled distribution and storage conditions , which are not always available in resource-limited settings . Moreover, quality control is hampered by lack of suitable analytical methods. Thus, there is a need for a rapid and simple, but stability-indicating method for the simultaneous assay of β-artemether and lumefantrine FDC products. ### methods Three reversed-phase fused-core HPLC columns (Halo RP-Amide, Halo C18 and Halo Phenyl-hexyl), all thermostated at 30°C, were evaluated. β-artemether and lumefantrine (unstressed and stressed), and reference-related impurities were injected and chromatographic parameters were assessed. Optimal chromatographic parameters were obtained using Halo RP-Amide column and an isocratic mobile phase composed of acetonitrile and 1 mM phosphate buffer pH 3.0 (52:48; V/V) at a flow of 1.0 ml/min and 3 μl injection volume. Quantification was performed at 210 nm and 335 nm for β-artemether and for lumefantrine, respectively. In-silico toxicological evaluation of the related impurities was made using Derek Nexus v2.0®. ### results Both β-artemether and lumefantrine were separated from each other as well as from the specified and unspecified related impurities including degradants. A complete chromatographic run only took four minutes. Evaluation of the method, including a Plackett-Burman robustness verification within analytical QbD-principles, and real-life samples showed the method is suitable for quantitative assay purposes of both active pharmaceutical ingredients, with a mean recovery relative standard deviation (± RSD) of 99.7 % (± 0.7%) for β-artemether and 99.7 % (± 0.6%) for lumefantrine. All identified β-artemether-related impurities were predicted in Derek Nexus v2.0® to have toxicity risks similar to β-artemether active pharmaceutical ingredient (API) itself. ### conclusions A rapid, robust, precise and accurate stability-indicating, quantitative fused-core isocratic HPLC method was developed for simultaneous assay of β-artemether and lumefantrine. This method can be applied in the routine regulatory quality control of FDC products. The in-silico toxicological investigation using Derek Nexus® indicated that the overall toxicity risk for β-artemether-related impurities is comparable to that of β-artemether API.\n\nsentence: However , the current artemisinin FDC products , such as β-artemether and lumefantrine , are inherently unstable and require controlled distribution and storage conditions , which are not always available in resource-limited settings .", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.192659
[ { "content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nthere is much interest at present in the possible use of the new generation of high - power laser facilities ( in particular the national ignition facility at livermore and the laser megajoule in bordeaux ) to simulate astrophysical phenomena such as supernovae . at first sight\nthis programme appears to suffer from one obvious drawback .\nthe phenomena one wishes to simulate generally involve _ explosions _ while the laser facilities are designed to produce _\nremarkably , as we will show , this is not a problem . under certain ,\nnot too restrictive , conditions there exists an exact mathematical duality which allows one to transform an explosion problem to an implosion problem and vice versa .\nthus it is possible , in a precise sense , to use _ implosion _ experiments to simulate _ exploding _ systems .\nthe euler equations of perfect gas dynamics can be conveniently written in the form @xmath0 where @xmath1 denotes the lagrangian , material or convective derivative defined by = + * u * and @xmath2 is the mass density , @xmath3 the thermal energy density , @xmath4 the pressure and @xmath5 the velocity .\nin addition to these differential equations one algebraic relation is needed , an equation of state relating the pressure to the mass and energy densities the simplest being a polytropic equation of state , p = ( -1 ) with @xmath6 a constant .\nthese equations are mathematically equivalent to the mass , momentum and energy conservation equations in smooth regions of the flow . only at shocks\nis it necessary to revert to the fundamental conservation forms to recover the correct shock jump conditions . now consider the following transformation of the dependent and independent variables , @xmath7 where for the moment @xmath8 is an arbitrary function of time .\napart from the , at first sight rather strange , time - dependent scaling factors this is essentially a transformation to a coordinate system which is expanding or contracting with a scale factor @xmath8 . if we define @xmath9 and note that = a^2 ddt(1a ) = - d adt it is easy to see that it is an involutionary transformation with inverse obtained by simply interchanging the starred and unstarred quantities @xmath10 let us now consider how the dynamical equations transform under this change of variables .\nit is easy to see that @xmath11 and thus , after some elementary algebra , @xmath12 remarkably , we see that if the scale factor @xmath13 is such that ^ * = d^2 a^*d t^*2 = - a^2 a = 0 and the gas is a polytrope of exponent @xmath14 with @xmath15 then the euler equations are _ invariant _ under this transformation .\nnote that , because the euler equations in conservation form are algebraically equivalent to the simplified forms , the conservation forms are also invariant and thus the whole structure of ideal gas dynamics , including the rankine - hugoniot shock relations , is preserved .\nthe condition that the acceleration of the scale factor be zero , @xmath16 , requires that @xmath8 be a linear function of @xmath17 and , without loss of generality , we can take @xmath18 where @xmath19 is a constant characteristic expansion time .\nthe time transformation is then t^ * = = t_0 ^ 2 = - t_0 ^ 2 t and it is convenient to set the constant to zero and choose t^ * = -t_0 ^ 2 t , t = -t_0 ^ 2t^*. 
the initial singularity of the expansion in physical space occurs at @xmath20 and is mapped to @xmath21 , the long term behaviour as @xmath22 is mapped to @xmath23 .\nit is important to note that in the dual representation the time variable is bounded from _ above _ , @xmath24 , whereas in physical space it is bounded from _ below _ , @xmath25 .\nthe remarkable result is that for an ideal gas of point particles with no internal structure ( which is what the 5/3 polytrope is ) hydrodynamics in a uniformly expanding system is exactly equivalent to hydrodynamics in a static system .\nthis result , or special forms of it , appears to have been discovered a number of times by cosmologists ( where the idea of factoring out the general expansion of the universe is very natural ) ; a recent discussion is that of martel and shapiro ( 1998 ) where they propose the felicitous name of `` supercomoving variables '' to describe this transformation .\nwhat does not seem to have been generally noted is that this transformation can be used outside the cosmological context ( however poyet and spiegel , 1979 , did use a variant in an analysis of stellar pulsations ) .\nthe fact that the transformation is exact for the gas of ideal point particles strongly hints that it is derived from a similar result for the free particle motion .\nin fact there is such a duality , although it is almost trivial .\nthe freely moving point particle moves along a straight line trajectory , = * x*_0 + * v*_0 t , with starting point @xmath26 and velocity @xmath27 .\nif we write this as = * v*_0 + * x*_0 1 t we see that there is a dual representation of the trajectory in which @xmath17 is replaced by @xmath28 , lengths are scaled by a factor proportional to time , initial points and final velocities are interchanged , but the trajectory remains a straight line .\nif collisions are instantaneous , localised and elastic they look the same in either system , and thus in both systems one can write down a boltzmann equation and then derive the hydrodynamic equations as limits of moments of the boltzmann equation .\nthis approach also shows that higher order effects , such as viscosity and heat conduction , can formally be treated in the same way ; however the resulting transformed transport coefficients will in general have unphysical time dependencies ( for an application see drury and stewart , 1976 ) .\nthis analysis also shows that similar results will hold in different numbers of spatial dimensions , but the equation of state will have to correspond to the ideal gas in that number of dimensions . in @xmath29 spatial dimensions it is easy to verify that the `` super - comoving '' transformation takes the form @xmath30 and that the euler equations are invariant if the gas has a polytropic equation of state such that d p = 2 corresponding to an adiabatic exponent = 1 + 2d .\nan interesting way of looking at this transformation ( for which we are indebted to our colleague etienne parizot ) is that it provides an analogue in spherical geometry to the freedom that galilei transformations allow in planar geometry .\nif we are looking at a planar shock , it is often convenient to transform to a reference frame where the upstream medium , or the downstream medium , or the shock itself , appears stationary . 
in\nspherical systems one can not apply galilei boosts because the origin is fixed , however this transformation , by allowing one to take out an arbitrary uniform expansion , gives one much the same freedom .\ncomputational studies of the evolution of a supernova remnant commonly start with initial conditions of dense pressure - free ejecta expanding ballistically away from the site of the explosion , which it is convenient to locate at the coordinate origin , and interacting with a stationary , or slowly moving , ambient medium of much lower density and negligible pressure . to illustrate the application of the duality transformation\nlet us consider the simple , if somewhat artificial , case of uniform density ejecta interacting with a uniform and stationary ambient medium in perfect spherical symmetry .\nthen the initial conditions correspond to @xmath31 in the region @xmath32 occupied by the ejecta ( @xmath33 is the maximum expansion speed of the ejecta ) and @xmath34 in the external ( @xmath35 ) medium of constant density @xmath36 .\nthe sweep - up time @xmath37 corresponds to the point where the ejecta , if expanding unimpeded , would have a density equal to the ambient medium .\nthis defines the physical problem of expanding ejecta interacting with a stationary environment .\nlet us now consider the dual problem obtained by applying the transformation with scale factor a ( t ) = t. then the dependent variables transform as @xmath38 so that the explosion , which occurs at @xmath20 in physical problem , occurs at @xmath39 in the dual problem . conversely the asymptotic evolution as @xmath40 in the physical problem is mapped to the behaviour at @xmath23 in the dual problem .\nthe ejecta density in the dual problem is constant , ^*(r^ * , t^ * ) = a^3 ( r , t ) = _ 0 and the velocity is zero , @xmath41 , in @xmath42 .\nhowever the ambient medium is now time - dependent with density , in the region @xmath43 ^*(r^ * , t^ * ) = ( t)^3 _ 0 = ( -t^*)^-3 _ 0 and velocity u^*(r^ * , t^ * ) = r^*t^*. thus in the dual problem we have _ stationary _ ejecta interacting with an _ imploding _ ambient medium whereas in the physical problem we have _ exploding _ ejecta interacting with a _\nstationary _ ambient medium . instead of the initial explosion at @xmath20 in the physical problem\nwe have the final crunch at @xmath44 in the dual problem .\nthe evolution in physical space of the supernova remnant structure has been often discussed and is well - known ( eg truelove and mckee , 1999 ; dwarkadas and chevalier , 1998 ) . at early times , @xmath45 , the bulk of the ejecta\nexpand ballistically except for a thin interaction region on the outside consisting of a forward shock running into the ambient medium , a zone of hot shocked ambient medium , a contact discontinuity , a zone of shocked ejecta and a reverse shock propagating slowly into the ejecta . at later times , when the mass of swept up ambient material becomes comparable to the ejecta mass , the reverse shock detaches itself from the contact discontinuity and implodes on the centre and the outer forward shock approximates the self - similar sedov solution for a strong point explosion in a cold gas . 
in the dual system\nthe interaction looks a little different , and in some ways is simpler .\ninitially we have the stationary sphere of high density material ( which for convenience we continue to call the ejecta , although in the dual representation it has not been ejected but is simply sitting there ) surrounded by a very low density converging flow .\nthe inflowing gas has to decelerate at a shock which stands about 10% further out in radius than the edge of the ejecta .\nwriting for convenience @xmath46 there is an exact similarity solution in which @xmath47 , @xmath48 and @xmath49 in the region external to the sphere of ejecta .\nthis steeply rising pressure ( @xmath50 ) drives the reverse shock into the ejecta and starts the implosion of the ejecta .\n= 0.9 at later times , as the ejecta collapse , the shock in the imploding ambient medium also moves inwards thereby reducing the rate of increase of the pressure . transforming the sedov solution to the dual system\nwe see that the shock radius scales as r^ * ^3/5 and the postshock pressure as p^*^-19/5 .\nfigure 1 attempts to show schematically the relation between the two representations .\nwe note in passing that the dual representation is also useful for analytic and numerical studies ; this aspect will be explored in a companion paper ( dwarkadas and drury , in preparation ) .\nthe perfectly symmetric explosion is neither realistic nor especially interesting ; it is the easiest case to analyse numerically and there is no reason to suppose that a laboratory simulation would yield any additional information\n. however reality is more complicated .\nit is clear that the ejecta emerging from real supernova explosions are highly nonuniform on a wide range of scales and that to calculate the resulting remnant evolution in three dimensions is likely to remain a computationally challenging problem for some considerable time ( cf arnett , 1999 ) .\nthe interesting implication of this work is that it should be possible with the new generation of implosion facilities to simulate precisely this problem , the interaction of highly structured ejecta with their surroundings including all the effects of spherical geometry .\none can easily imagine constructing a solid target whose density distribution models the density distribution of the expanding ejecta . if this target is then used in an implosion experiment , and if the momentum loading on the surface is tailored to rise in the same manner as the pressure behind the forward shock in the dual system , a steep initial rise as @xmath51 decreasing to @xmath52 , the evolution of the internal structures including all the turbulent mixing , instabilities and shock formation , should be exactly replicated .\nwe emphasise finally that the transformation discussed in this paper is additional to and complements the well - known linear scaling relations as excellently discussed by ryutov et al ( 1999 ) in the astrophysical context , or connor and taylor ( 1977 ) in the plasma physics context .\ndimensional similarity and scaling arguments are obviously central to any attempt at simulation on a laboratory scale of astrophysical systems , however precisely because they are very general and linear they can not turn an explosion into an implosion . 
the remarkable nonlinear symmetry discussed in this paper is specific to the ideal gas equation of state , but subject to this constraint gives a powerful new degree of freedom in simulation studies by allowing an arbitrary uniform expansion or contraction to be factored out thereby transforming an explosion problem to an implosion one or vice versa .\nthis work was in part supported by the eu under the tmr programme , contract fmrx - ct98 - 0168 .\nsome of it was carried out while ld was a visitor at the research centre for theoretical astrophysics of the university of sydney .\n= 0 pt = 5 pt plus 5 pt", "role": "user" }, { "content": "the euler equations of ideal gas dynamics posess a remarkable nonlinear involutional symmetry which allows one to factor out an arbitrary uniform expansion or contraction of the system . \n the nature of this symmetry ( called by cosmologists the transformation to supercomoving variables ) is discussed and its origin clarified . \n it is pointed out that this symmetry allows one to map an explosion problem to a dual implosion problem and vice versa . the application to laboratory simulations of supernova remnants \n is considered ; in principle this duality allows the complete three - dimensional evolution of highly structured explosion ejecta to be modelled using a static target in an implosion facility . # 1#1 t # 1#1x # 1d#1d t", "role": "assistant" } ]
science.pico_ner
science.pico_ner.2624
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nThe Coping Cat program for children with anxiety and autism spectrum disorder : a pilot randomized controlled trial . The purpose of this pilot study was to evaluate whether a modified version of the Coping Cat program could be effective in reducing anxiety in children with autism spectrum disorder ( ASD ) . Twenty-two children ( ages 8-14 ; IQ ≥ 70 ) with ASD and clinically significant anxiety were randomly assigned to 16 sessions of the Coping Cat program ( cognitive-behavioral therapy ; CBT ) or a 16-week waitlist . Children in the CBT condition evidenced significantly larger reductions in anxiety than those in the waitlist . Treatment gains were largely maintained at two-month follow-up . Results provide preliminary evidence that a modified version of the Coping Cat program may be a feasible and effective program for reducing clinically significant levels of anxiety in children with high-functioning ASD .", "role": "user" }, { "content": "{\"Intervention\": [\"modified version of the Coping Cat program\"], \"Outcome\": [\"anxiety\"], \"Participant\": [\"anxiety\", \"children\", \"autism spectrum disorder ( ASD\", \"children ( ages 8-14 ;\", \"Twenty-two\", \"anxiety and autism spectrum disorder\", \"ASD and clinically significant anxiety\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.1984
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nVideo streaming currently accounts for the majority of Internet traffic. One factor that enables video streaming is HTTP Adaptive Streaming (HAS), that allows the users to stream video using a bit rate that closely matches the available bandwidth from the server to the client. MPEG Dynamic Adaptive Streaming over HTTP (DASH) is a widely used standard, that allows the clients to select the resolution to download based on their own estimations. The algorithm for determining the next segment in a DASH stream is not partof the standard, but it is an important factor in the resulting playback quality. Nowadays vehicles are increasingly equipped with mobile communication devices, and in-vehicle multimedia entertainment systems. In this paper, we evaluate the performance of various DASH adaptation algorithms over a vehicular network. We present detailed simulation results highlighting the advantages and disadvantages of various adaptation algorithms in delivering video content to vehicular users, and we show how the different adaptation algorithms perform in terms of throughput, playback interruption time, and number of interruptions.\n\nCited Abstract(s):\n[1]: With companies such as Netflix and YouTube accounting for more than 50 of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.\n\n[2]: In this survey, we present state-of-the-art bitrate adaptation algorithms for HTTP adaptive streaming (HAS). As a key distinction from other streaming approaches, the bitrate adaptation algorithms in HAS are chiefly executed at each client, i.e. , in a distributed manner. The objective of these algorithms is to ensure a high quality of experience (QoE) for viewers in the presence of bandwidth fluctuations due to factors like signal strength, network congestion, network reconvergence events, etc. While such fluctuations are common in public Internet, they can also occur in home networksor even managed networks where there is often admission control and QoS tools. Bitrate adaptation algorithms may take factors like bandwidth estimations, playback buffer fullness, device features, viewer preferences, and content features into account, albeit with different weights. 
Since the viewer’s QoE needs to be determined in real-time during playback, objective metrics are generally used including number of buffer stalls, duration of startup delay, frequency and amount of quality oscillations, and video instability. By design, the standards for HAS do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method. This survey provides an overview of the different methods proposed over the last several years.", "role": "user" }, { "content": "Recent surveys [1] [2] give a good overview of the bit rate adaptation algorithms for DASH based content delivery.", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.15271
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Widespread Gene Conversion in Centromere Cores\nArticle:\nAbstract:\nCentromeres are the most dynamic regions of the genome, yet they are typified by little or no crossing over, making it difficult to explain the origin of this diversity. To address this question, we developed a novel CENH3 ChIP display method that maps kinetochore footprints over transposon-rich areas of centromere cores. A high level of polymorphism made it possible to map a total of 238 within-centromere markers using maize recombinant inbred lines. Over half of the markers were shown to interact directly with kinetochores( CENH3) by chromatin immunoprecipitation. Although classical crossing over is fully suppressed across CENH3 domains, two gene conversion events( i. e., non-crossover marker exchanges) were identified in a mapping population. A population genetic analysis of 53 diverse inbreds suggests that historical gene conversion is widespread in maize centromeres, occurring at a rate >1×10−5/marker/generation. We conclude that gene conversion accelerates centromere evolution by facilitating sequence exchange among chromosomes.\nIntroduction:\nIn spite of their highly conserved function as the site of kinetochore assembly and spindle attachment, centromeres are the most dynamic regions of complex genomes. The components, copy number, and structural organization of centromeric DNA are highly divergent even among closely related species[1],[2],[3]. This apparent conflict between essentiality and sequence dispensability remains one of the major unresolved paradoxes in genetics. It has been hypothesized that the rapid evolution of centromeric DNA is primarily the result of an arms race in which meiotic drive sweeps novel centromeric repeats to fixation while centromeric proteins adapt to suppress this behavior[4]. Alternatively, some authors have argued that the role of selection is minimal and that observed variation can be explained by stochastic events such as mutation and genetic exchange[5],[6],[7]. Both proposals lack strong empirical support, as centromere drive has only rarely been documented[8], and mutational events are difficult to document in complex repetitive areas. Centromeres are specified epigenetically by the presence of a centromere-specific histone H3 variant, CENH3, which organizes the overlying kinetochores[4]. Kinetochores affect the function and behavior of centromeric DNA in pronounced ways. Perhaps most notable is their effect on crossing over. Cytogeneticists have long known that centromeres severely repress meiotic crossing over[9], and this result has since been confirmed in all species studied[10],[11],[12]. As a consequence, centromeres are often defined as regions where the frequency of crossovers approaches zero[12],[13],[14]. Nevertheless it is not accurate to presume that centromeres never experience genetic exchange. Empirical studies have revealed evidence for recombination between sister centromeres[15],[16], gene conversion events have been inferred from sequence analysis of mammalian centromeres[17],[18],[19], and large intrachromosomal rearrangements have been documented in rice centromeres[20],[21]. However, despite the extensive circumstantial evidence for genetic exchange among centromeres, the frequency and nature of the recombination has been difficult to measure. Maize centromeres contain a 156 bp tandem repeat known as CentC and an abundant class of Ty3/Gypsy-like transposons[22]. 
Several subfamilies of these so-called Centromeric Retroelements( CR elements, known as CRM in maize;[23]) exist, with CRM2 being the most abundant in the maize genome[24]. Over time, CR elements insert in and around each other resulting in a nested arrangement[25],[26]. Such insertion sites have a high probability of being unique and are generally polymorphic among lines, thereby providing an excellent tool for the genetic analysis of centromeres[27],[28]. Here we used transposon display[29] of CRM2 to generate centromere-specific markers in maize. Analysis of segregation in a mapping population, combined with CENH3 ChIP, allowed us to map the functional region of each maize centromere and provide direct evidence for conversion-type genetic exchanges within centromere cores. An analysis of haplotype variation and linkage disequilibrium in a broad panel of maize lines revealed further evidence for a high rate of gene conversion across all centromeres studied, consistent with an important role for stochastic processes in centromere evolution.\nDiscussion:\nOur data indicate that gene conversion is common within centromeres and may play a fundamental role in determining the dynamics and distribution of centromere repeats. This conclusion is based on three primary lines of evidence. First, our mapping data provide what is to our knowledge the only experimental evidence for centromeric gene conversion. Indeed, two independent conversion events were identified in 93 recombinant inbred lines using a set of 238 CRM2 markers, corresponding to a rate of 1. 86×10−4 exchanges per marker per generation. The second line of evidence comes from LD analysis of 75 markers typed in a set of 53 diverse inbred lines. These data show patterns consistent with genetic exchange, including unusually low LD and the clear presence of recombinant haplotypes( nonzero Rmin), but show no decay of LD with distance as would be expected in the presence of crossing over. Finally, two independent population genetic methods were used to directly estimate centromeric gene conversion, resulting in remarkably similar rates of ∼1×10−5 conversions per marker per generation. It is too early to tell how rates of gene conversion in centromeres compare to other regions of the maize genome, but one estimate of gene conversion at the maize anthocyaninless1 locus( ∼3×10−5/marker/generation[40]) suggests they may be of a similar order of magnitude. It has been hypothesized that centromere evolution in eukaryotes with asymmetric meiosis has been primarily governed by an arms race in which meiotic drive occasionally sweeps novel centromeric repeats to fixation[4]. While the extreme LD observed around a short tract of CentC on centromere 2 may hint at an evolutionary history consistent with these ideas( Figure 5B), our finding of widespread gene conversion explains how high levels of diversity may be observed even in yeast where meiotic drive is a less likely explanation[7]. Sequence data from mammalian centromeres are further consistent with this view, suggesting in several studies that gene conversion has contributed to extant centromere variation and the production of novel higher order repeat arrays[17],[18],[19]. 
If centromeric gene conversion is indeed common in maize, yeast, and humans, it seems reasonable to hypothesize that gene conversion is an important process within the centromere cores of all eukaryotes.", "role": "user" }, { "content": "Centromeres , which harbor the attachment points for microtubules during cell division , are characterized by repetitive DNA , paucity of genes , and almost complete suppression of crossing over . The repetitive DNA within centromeres appears to evolve much faster than would be expected for genetically inert regions , however . Current explanations for this rapid evolution tend to be theoretical . On the one hand there are arguments that subtle forms of selection on selfish repeat sequences can explain the rapid rate of change , while on the other hand it seems plausible that some form of accelerated neutral evolution is occurring . Here , we address this question in maize , which is known for its excellent genetic mapping resources . We first developed a method for identifying hundreds of single copy markers in centromeres and confirmed that they lie within functional domains by using a chromatin immunoprecipitation assay for kinetochore protein CENH3 . All markers were mapped in relation to each other . The data show that , whereas classical crossing over is suppressed , there is extensive genetic exchange in the form of gene conversion ( by which short segments of one chromosome are copied onto the other ) . These results were confirmed by demonstrating that similar short exchange tracts are common among the centromeres from multiple diverse inbred lines of maize . Our study suggests that centromere diversity can be at least partially attributed to a high rate of previously “hidden” genetic exchange within the core kinetochore domains .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.73473
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nautophagy , literally ' self - eating , ' is a catabolic process characterized by the lysosomal degradation of a cell 's own material or organelles for the maintenance of cellular energy balance and organelle function . among the diverse types of autophagy ,\nmacroautophagy ( hereafter referred to as autophagy ) involves the rearrangement of subcellular membranes to form autophagosomes , which are then delivered to lysosomes and form autophagolysosomes where the sequestered material is degraded and recycled .\nwhile autophagy plays a critical role in the clearance of degenerated proteins and senescent organelles as well as in the maintenance of cellular homeostasis during energy starvation or stress , dysregulated autophagy has been implicated in the pathogenesis of neurodegenerative diseases , cancer , aging , and infection .\nhere , we summarize current understandings of the molecular mechanism of autophagy , focusing on recent results that have examined the role of autophagy in diverse tissues involved in the control of body metabolism and energy homeostasis .\nthe unc51-like kinase 1 ( ulk1 ) complex is crucial in autophagy . in nutrient - rich conditions , mammalian target of rapamycin complex 1 ( mtorc1 ) kinase\nis incorporated into the ulk1-autophagy - related gene ( atg ) 13-fip200 complex and phosphorylates ulk1 .\ninhibition of mtorc1 by nutrient deprivation or rapamycin induces its dissociation from the ulk complex .\ndephosphorylated ulk1 is enzymatically active and phosphorylates matg13 and fip200 to initiate the autophagic process ( fig .\n1 ) . bcl-2-interacting myosin - like coiled - coil protein ( beclin 1 ) , which was initially identified as a bcl-2 binding protein , is also critically involved in the initiation of autophagy .\nbeclin 1 forms complexes with vps34 , vps15 , and atg14l to induce autophagosome formation or with vps34 , vps15 , and ultraviolet radiation resistance - associated gene ( uvrag ) to induce autophagosome maturation . after the dissociation of the beclin-1 complex from bcl-2 in autophagy - inducing conditions such as nutrient deprivation , vps34 , a class iii phosphatidylinositol 3-kinase within the complex ,\npi3p then recruits double fyve - containing protein 1 ( dfcp1 ) and atg proteins , which play crucial roles in the formation of the autophagosome cradle ( fig .\nthe atg system is critical for autophagosome completion and is similar to the ubiquitination system .\natg12 as a ubiquitin - like protein that is conjugated to atg5 and then to atg16l1 through the concerted action of atg7 , an e1-like enzyme , and atg10 , an e2-like enzyme .\natg8 , also called microtubule - associated protein 1 light chain 3 ( lc3 ) , is another ubiquitin - like protein that is converted to lc3-i immediately after synthesis by atg4 .\nlc3-i is conjugated to its lipid target , phosphatidylethanolamine ( pe ) , through atg7 , atg3 acting as another e2-like enzyme and atg12-atg5-atg16l1 complex . 
after processing ,\nautophagolysosomes are then conjugated to lysosomes to form autophagolysosomes , where the proteolysis of enclosed cytoplasmic contents or organelles occurs .\nhence , unwanted ubiquitinated proteins can be selectively eliminated by binding to p62 and becoming conjugated to lc3-ii of autophagosomes .\nin an attempt to study the role of autophagy in endocrine tissues , mice with -cell - specific deletion of autophagy - related 7 ( atg7 ) have been produced ( atg7 ) .\nthe insulinogenic index was significantly lower in atg7 mice compared to control mice , suggesting impaired -cell function in vivo .\nmorphologically , -cell mass was decreased , which was attributed to increased -cell death and reduced -cell proliferation .\ninsulin release and transient glucose - induced cytosolic ca were significantly attenuated in autophagy - deficient -cells compared to wild - type -cells , suggesting functional defects in addition to compromised viability .\nconfocal microscopy showed inclusion bodies in autophagy - deficient -cells which contained ubiquitin material and p62 , a polyubiquitin - binding adaptor protein , attesting to the importance of autophagy in the removal of insoluble or large long - lived ubiquitinated proteins .\nelectron microscopy showed vacuolar degeneration along with mitochondrial swelling and endoplasmic reticulum ( eprefacer ) distension .\nthese results suggest that autophagy is crucial in the maintenance of -cell structure , mass and function , although the role of autophagy in the development of diabetes is not clear from this study .\nto study the role of -cell autophagy in diabetes , we studied the er because er distention was observed in autophagy - deficient -cells and er stress is important in the development of diabetes .\nwhen we examined the expression of genes involved in the er stress response or the unfolded protein response ( upr ) , we found that their expression was significantly reduced in autophagy - deficient -cells despite er distention suggesting the presence of er stress ( fig .\n2a ) . while these results were contrary to expectations , a deficient upr in the face of er stress could be a sign of cellular decompensation or maladaptation\nindeed , autophagy - deficient -cells were more susceptible to treatment with er stressors such as thapsigargin or lipids in vitro ( fig .\nautophagy - deficient -cells were also more susceptible to the in vivo er stress imposed by obesity compared to autophagy - competent -cells .\nthus , when atg7 mice were bred to ob / w mice , -cell apoptosis was more pronounced in atg7-ob / ob mice compared to control atg7-ob / w or atg7-ob / ob mice . in addition , there was evidence of the increased accumulation of reactive oxygen species and decreased -cell mass in the pancreatic islets of atg7-ob / ob mice ( fig .\naccordingly , atg7-ob / ob mice developed severe diabetes and markedly deteriorated glucose intolerance , while littermate atg7-ob / w mice or atg7-ob / ob mice showed only mild hyperglycemia ( fig .\nthese results suggest that compromised -cell autophagy due to genetic causes , environmental insults , or aging could be a factor in the transition from obesity to diabetes . 
while closely related , obesity and diabetes are not the same disease process and -cell autophagy status may help determine the progression from obesity to diabetes .\nthe role of hypothalamic autophagy in the control of whole body energy balance has also been studied .\nmice with deleted atg7 specifically in their orexigenic agouti - related peptide ( agrp ) neurons ( atg7 mice ) were lean , which could be attributed to an increase of proopiomelanocortin ( pomc ) expression and anorexigenic -melanocyte stimulating hormone production , or to impaired agrp augmentation in response to fatty acids during starvation in autophagy - deficient agrp neurons .\nin contrast , mice with anorexigenic pomc neuron - specific atg7 deletions ( atg7 mice ) were obese [ 18 - 20 ] , which was attributed to increased food intake and reduced energy expenditure .\ninterestingly , the number of pomc neurons was not diminished in the hypothalamus of atg7 mice compared to control mice , which is in contrast to the diminished pancreatic -cell mass in atg7 mice . instead ,\nsignal transducer and activator of transcription 3 ( stat3 ) activation in the pomc neurons by leptin was diminished in atg7 mice , although the mechanism of deficient stat3 activation is not clear .\nthus , the role of hypothalamic autophagy on whole body metabolism , appetite , and energy expenditure depends on the types of neurons affected .\nmice with autophagy deficiencies in insulin target tissues such as skeletal muscle , liver , or adipose tissues have been generated and studied .\nanimals with targeted disruption of atg7 in the liver have been reported to show accumulation of lipids due to defects in the lipid - disposal process known as ' lipophagy ' .\nin addition , the conditional knockout of autophagy genes in adipose tissues results in defects in adipocyte differentiation . 
we have demonstrated that autophagy deficiencies in the skeletal muscle or liver lead to leanness and resistance to diet - induced obesity and diabetes .\nsuch phenotypes were attributed to mitochondrial dysfunction due to autophagy deficiencies in insulin target tissues and the subsequent mitochondrial stress response inducing the release of fgf21 as a ' mitokine ' .\nthese results are in contrast to a previous paper which suggested that autophagy deficiency is an element in the pathogenesis of insulin resistance and diabetes .\nsuch discrepancies could be due to differences in the methods , severity , or duration of gene targeting , the age of the experimental animals , and the mode of autophagy affected .\nthe concept of ' mitokines ' has been suggested by a caenorhabditis elegans model that shows increased longevity and has a disruption in the mitochondrial electron transport chain .\nin addition , a recent paper reported a significant prolongation of lifespan in transgenic mice overexpressing fgf21 .\nthe relationship between autophagy , mitochondria , and ' mitokines ' could be a hot topic not only in the field of metabolism and diabetes , but also in aging and longevity studies .\nthe roles of autophagy in whole body metabolism and the development of metabolic syndrome or diabetes have been explored by many investigators .\nthe majority of previous studies investigating the role of autophagy in body metabolism and diabetes employed site - specific knockout mouse models .\nwhile such models have provided important data regarding the impact of dysregulated autophagy in metabolic disorders , they may not reflect natural physiological or disease conditions .\nfurther studies employing more physiological models are likely to provide valuable information on the role of autophagy in the control of body metabolism and the pathogenesis of metabolic disorders . because autophagy plays a crucial role in other body systems besides the control of metabolism , the information obtained by studying the relationship between autophagy and body metabolism could lead to the development of a novel class of drugs that can be employed not only against diabetes or metabolic syndrome , but also cancer , neurodegeneration , and aging .", "role": "user" }, { "content": "autophagy plays a crucial role in the maintenance of cellular nutrient balance and the function of organelles such as mitochondria or the endoplasmic reticulum , which are important in intracellular metabolism , insulin release , and insulin sensitivity . in the insulin - producing pancreatic -cells , \n autophagy is important in the maintenance of -cell mass , structure , and function . \n mice with deficiencies in -cell - specific autophagy show reduced -cell mass and defects in insulin secretion that lead to hypoinsulinemia and hyperglycemia but not diabetes . however , these mice developed diabetes when bred with ob / ob mice , suggesting that autophagy - deficient -cells have defects in dealing with the increased metabolic stress imposed by obesity . \n these results also imply that autophagy deficiency in -cells could be a factor in the progression from obesity to diabetes . \n another important function of autophagy is in hypothalamic neurons for the central control of energy expenditure , appetite , and body weight . \n in addition , mice with autophagy deficiencies in the target tissues of insulin have yielded diverse phenotypes . 
\n taken together , these results suggest that autophagy is important in the control of whole body energy and nutrient homeostasis , and its dysregulation could play a role in the development of metabolic disorders and diabetes .", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.3401
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe COVID-19 pandemic has challenged existing healthcare systems and has made prevention of healthcare personnel exposure a high priority. Essential healthcare services, including multi-disciplinary team (MDT) meetings that make medical decisions, are expected to continue uninterrupted in this time of social distancing. There are a multitude of virtual platforms available to enable remote MDT meetings, and the pandemic has accelerated their arrival into daily healthcare practice. While we deal with a pandemic crisis, we have comprehensively reviewed and reported on the popular platforms and services available for this purpose. While each platform has its own unique features and drawbacks, it is essential to liaise with information technology departments and data governance teams to understand the optimal platforms for use within each healthcare setting. Level of evidence: Not ratable.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.621
[ { "content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: A Whole-Body Approach to Point of Care Ultrasound Ultrasonography is an essential imaging modality in the ICU used to diagnose and guide the treatment of cardiopulmonary failure. Critical care ultrasonography requires that all image acquisition, image interpretation, and clinical applications of ultrasonography are personally performed by the critical care clinician at the point of care and that the information obtained is combined with the history, physical, and laboratory information. Point-of-care ultrasonography is often compartmentalized such that the clinician will focus on one body system while performing the critical care ultrasonography examination. We suggest a change from this compartmentalized approach to a systematic whole-body ultrasonography approach. The standard whole-body ultrasonography examination includes thoracic, cardiac, limited abdominal, and an evaluation for DVT. Other elements of ultrasonography are used when clinically indicated. Each of these elements is reviewed in this article and are accompanied by a link to pertinent cases from the Ultrasound Corner section of CHEST.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"cardiac\", \"whole-body\", \"Whole-Body\"], \"Bacterium\": [], \"BiologicFunction\": [\"DVT\", \"cardiopulmonary failure\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"body system\"], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"compartmentalized approach\", \"diagnose\", \"compartmentalized\", \"history\"], \"Food\": [], \"HealthCareActivity\": [\"critical care\", \"examination\", \"evaluation\", \"Ultrasonography\", \"treatment\", \"ultrasonography\", \"ultrasonography approach\", \"Ultrasound\", \"image interpretation\", \"Critical care\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Ultrasound Corner section of CHEST\"], \"MedicalDevice\": [], \"Organization\": [\"ICU\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"clinician\"], \"ResearchActivity\": [], \"SpatialConcept\": [\"thoracic\", \"abdominal\"], \"Virus\": []}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.4038
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nCarbenicillin-trimethoprim/sulfamethoxazole versus carbenicillin-gentamicin as empiric therapy of infection in granulocytopenic patients . A prospective , randomized , double-blind study . The results of therapy with carbenicillin plus trimethoprim-sulfamethoxazole ( C-T/S ) were compared to those obtained with carbenicillin plus gentamicin ( C-G ) in a prospective double-blind study of empiric antibiotic therapy in granulocytopenic patients . Patients were stratified into two groups : favorable-prognosis , group 1 ( carcinoma , lymphoma , multiple myeloma ) , or unfavorable-prognosis , group 2 ( acute leukemia , bone marrow transplantation ) , based on anticipated duration of granulocytopenia . Over-all , empiric antibiotic trials were more often successful ( P = 0.004 ) in group 1 ( 55 of 62 patients or 89 per cent ) than in group 2 ( 42 of 64 patients , 66 per cent ) mwithin group 1 , there was a favorable outcome in 30 of 32 ( 94 per cent ) C-T/S trials and in 25 of 30 ( 83 per cent ) C-G trials ( P = 0.25 ) ; within group 2 , there was a favorable outcome in 23 of 30 ( 77 per cent ) C-T/S trials and in 19 of 34 ( 56 per cent ) C-G trials ( P = 0.14 ) , Combined results in both groups indicated a higher proportion of favorable outcome in C-T/S trials ( 53 of 62 , 85 per cent ) than in C-G trials ( 44 of 64 , 69 per cent ) . Further analysis ( Manetl-Naenszel test ) showed the over-all difference in outcome to be significant ( P = 0.049 ) , but the general applicability of this result may be limited by the rather low incidence of gram-negative bacterial infections in this study . There was no difference between the treatment regimens in antibiotic toxicity , and serious superinfection occurred only in group 2 patients ( 21 per cent of trials ) , equally divided between treatment arms . Initial protocol dosing achieved target plasma levels of trimethoprim ( 3 to 8 micrograms/ml ) or gentamicin ( 4 to 10 micrograms/ml ) in 57 of 68 ( 84 per cent ) C-T/S trials compared to 21 of 60 ( 35 per cent ) C-G trials .", "role": "user" }, { "content": "{\"Intervention\": [\"Carbenicillin-trimethoprim/sulfamethoxazole versus carbenicillin-gentamicin\", \"carbenicillin plus trimethoprim-sulfamethoxazole ( C-T/S ) were compared to those obtained with carbenicillin plus gentamicin ( C-G )\"], \"Outcome\": [\"favorable outcome\", \"antibiotic toxicity\", \"serious superinfection\", \"successful\", \"analysis ( Manetl-Naenszel test\", \"no difference\"], \"Participant\": [\"lymphoma\", \"two\", \"group 1\", \"bone marrow transplantation\", \"acute leukemia\", \"granulocytopenic\", \"granulocytopenia\", \"carcinoma\", \"group 2\", \"multiple myeloma\"]}", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.13652
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Human Breast Milk and Antiretrovirals Dramatically Reduce Oral HIV-1 Transmission in BLT Humanized Mice\nArticle:\nAbstract:\nCurrently, over 15% of new HIV infections occur in children. Breastfeeding is a major contributor to HIV infections in infants. This represents a major paradox in the field because in vitro, breast milk has been shown to have a strong inhibitory effect on HIV infectivity. However, this inhibitory effect has never been demonstrated in vivo. Here, we address this important paradox using the first humanized mouse model of oral HIV transmission. We established that reconstitution of the oral cavity and upper gastrointestinal( GI) tract of humanized bone marrow/liver/thymus( BLT) mice with human leukocytes, including the human cell types important for mucosal HIV transmission( i. e. dendritic cells, macrophages and CD4+ T cells), renders them susceptible to oral transmission of cell-free and cell-associated HIV. Oral transmission of HIV resulted in systemic infection of lymphoid and non-lymphoid tissues that is characterized by the presence of HIV RNA in plasma and a gradual decline of CD4+ T cells in peripheral blood. Consistent with infection of the oral cavity, we observed virus shedding into saliva. We then evaluated the role of human breast milk on oral HIV transmission. Our in vivo results demonstrate that breast milk has a strong inhibitory effect on oral transmission of both cell-free and cell-associated HIV. Finally, we evaluated the effect of antiretrovirals on oral transmission of HIV. Our results show that systemic antiretrovirals administered prior to exposure can efficiently prevent oral HIV transmission in BLT mice.\nIntroduction:\nPediatric HIV infection is associated with an accelerated course of disease and high mortality rate. In the absence of antiretroviral therapy, only 65% of HIV-infected children survive until their first birthday and less than half will reach two years of age[1]. Most children acquire HIV from their mother in utero, intrapartum or orally during breastfeeding[2]. In developed countries the incidence of mother-to-child transmission of HIV is extremely low; HIV-infected women receive antiretroviral therapy during pregnancy and delivery and abstain from breastfeeding. Furthermore, their children receive antiretroviral prophylaxis at birth and for several weeks thereafter. The majority of HIV-infected children live in sub-Saharan Africa where HIV-positive women have limited access to antiretroviral drugs and the health benefits of breastfeeding outweigh the risk of HIV transmission[3]. Despite the presence of innate factors in human breast milk that display strong HIV inhibitory activity in vitro[4]–[12], up to 44% of HIV infections in children can be attributed to breastfeeding. The risk of acquiring HIV after a single day of breastfeeding is extremely low( 0. 00028 per day of breastfeeding)[13], however, after ingesting liters of breast milk over a span of several months to years( ∼250 liters per year), 5–20% of infants born to HIV-infected women will eventually become infected with HIV in the absence of any preventative measures[14]. Exclusive breastfeeding( not allowing any water, juice or solid foods) has been associated with a drastic decrease in the HIV transmission rate through breastfeeding, indicating that breast milk acts as a vehicle of protection[14]. 
However, elevated levels of HIV particles( cell-free virus) and HIV-infected cells( cell-associated virus) in breast milk of HIV-positive women are associated with an increased risk for HIV transmission during breastfeeding[15]–[17]. Although it has been reported that a 10-fold increase in cell-free or cell-associated HIV in breast milk is associated with a 3-fold increase in transmission[15], it is still unclear whether cell-free and/or cell-associated virus are transmitted during breastfeeding. Furthermore, it is not known if the frequency of cell-free and cell-associated HIV transmission varies at different stages of lactation( i. e. colostrum, early breast milk and mature breast milk). Therefore, successful interventions may need to prevent transmission of both cell-free and cell-associated HIV during breastfeeding by reducing the viral load and number of infected cells in breast milk and/or by directly inhibiting infection of the infant oral and GI mucosa. The development of effective strategies to prevent HIV acquisition during breastfeeding would be significantly enhanced by a small animal model of oral HIV infection that could be utilized to study the relative contribution of cell-free and cell-associated virus in transmission, the mechanism for oral transmission of cell-free and cell-associated HIV and the innate HIV inhibitory activity of human breast milk. In addition, animal models are needed to test the efficacy of novel approaches to prevent transmission of cell-free and cell-associated HIV in breast milk. Thus, we developed an oral HIV transmission model based on BLT humanized mice[18]. We then used this model to demonstrate 1) efficient cell-free and cell-associated oral HIV transmission, 2) transmission can occur in the oral cavity or the upper GI tract, 3) transmission can be prevented with antivirals and 4) that human breast milk dramatically reduces oral HIV transmission.\nDiscussion:\nAvoidance of breastfeeding by HIV seropositive mothers in resource limited settings where prophylaxis is not available reduces the risk of HIV transmission but does not increase the overall survival of their children; breastfeeding protects these children from infections that result in diarrhea, pneumonia and sepsis[14]. In this study, we first established that human hematopoietic cells generated in situ are capable of repopulating the oral cavity and upper GI tract of humanized BLT mice. Specifically, these important mucosal tissues of BLT mice are repopulated with the types of human cells that have been identified to be important for mucosal HIV transmission( i. e. dendritic cells, macrophages and CD4+ T cells). Our results demonstrate that the presence of these human cells renders BLT mice susceptible to oral transmission of cell-free and cell-associated HIV. Oral transmission of HIV results in systemic infection of lymphoid and non-lymphoid tissues that is characterized by a gradual decline of CD4+ T cells in peripheral blood. In addition, infection of the oral cavity results in virus shedding into saliva, recapitulating the human condition[27],[28]. Our data also offers the first in vivo demonstration that human breast milk can inhibit oral transmission of cell-free and cell-associated HIV. Furthermore, oral transmission of HIV can be prevented with systemic FTC/TDF PrEP. 
Previous studies utilizing NOD/SCID and NOD/SCID/β2m−/− mice reconstituted with human peripheral blood leukocytes( hu-PBL mice) failed to demonstrate oral transmission of cell-free HIV following an oral exposure to CXCR4 and CCR5 tropic strains, including HIV-1JR-CSF[46]. Therefore, our work represents a significant advance since we demonstrated, for the first time, highly reproducible oral transmission of multiple HIV strains in BLT mice. Specifically, in this study we demonstrate oral transmission of the well characterized CCR5-tropic HIV-1 isolate JR-CSF and of several T/F viruses. Our data revealed that the efficiency of oral transmission varied among T/F viruses, suggesting that intrinsic properties of these viruses may contribute to their transmission in vivo. Recent studies of mother-infant transmission pairs indicate that the replicative fitness as well as the length of the variable loop and number of n-linked glycosylation sites in the envelope protein may influence which maternal variant( s) are preferentially transmitted[47]–[49]. Our results indicate that BLT mice could be used to study the molecular and biological properties of HIV strains that provide a selective advantage for oral transmission during breastfeeding. A comparison of viruses transmitted at early, mid and late stages of lactation would indicate if the composition of breast milk, which changes during lactation, influences which viruses are preferentially transmitted. During breastfeeding, HIV is transmitted from mother-to-child in the context of human breast milk. In the absence of antiretroviral therapy, the breast milk of most HIV-infected women possesses cell-free and/or cell-associated HIV[15]–[17]. Although elevated levels of both cell-free and cell-associated HIV in the breast milk of HIV-infected mothers have been associated with an increased risk for HIV transmission during breastfeeding[15]–[17], it is currently not known if both cell-free and cell-associated HIV are transmitted to infants. Studies illustrating that HAART administered to HIV-infected mothers during pregnancy or post-partum significantly decreases the amount of cell-free but not cell-associated HIV in breast milk[50],[51], combined with reports demonstrating that maternal HAART significantly decreases but does not eliminate HIV transmission during breastfeeding, suggest that both cell-free and cell-associated HIV may be transmitted[52],[53]. As the importance of cell-associated HIV in breast milk transmission becomes increasingly more appreciated, antiretroviral drugs and preventative strategies may be needed that reduce the burden of HIV-infected cells in the breast milk of HIV-infected women and/or directly inhibit transmission of cell-associated HIV in infants during breastfeeding. Our data demonstrating oral transmission of both cell-free and cell-associated HIV is of high relevance since it will make possible the future evaluation of novel prophylactic strategies aimed at preventing oral transmission of both cell-free and cell-associated HIV. However, even though infected cells were used for the exposures, we cannot rule out the possibility that actual transmission across the mucosal surface could occur with cell-free virions released from infected cells. Paradoxically, although breastfeeding can be attributed to a significant number of HIV infections in children, breast milk has been shown to potently inhibit HIV infectivity and to possess several innate factors with in vitro anti-HIV inhibitory activity[4]–[12]. 
Our results offer the first in vivo evidence that human breast milk can strongly inhibit oral transmission of both cell-free and cell-associated HIV. The ability of human breast milk to inhibit cell-associated HIV transmission in BLT mice is in contrast to in vitro studies suggesting that milk does not inhibit cell-associated infection[54]. This apparent discrepancy may be explained by the use of whole human breast milk for our experiments. Specifically, the in vitro experiments comparing breast milk inhibition of cell-free and cell-associated HIV infection utilized the skim milk fraction of breast milk. Further in vitro analysis comparing inhibition of cell-free HIV infection in the presence of whole breast milk or the skim milk fraction will be needed to address this issue. However, since most children at risk of HIV infection via breast milk do not receive skim milk, the potential relevance of this in vitro observation may be questionable. Nevertheless, inhibitory factors present in breast milk may differ in their ability to inhibit cell-free versus cell-associated HIV infection. While the skim milk fraction of human breast milk possesses proteins with HIV inhibitory activity( i. e. mucin, lactoferrin, bile salt-stimulated lipase and secretory leukocyte protease inhibitor[SLPI])[4]–[12], the lipid fraction may contain additional factors that can inhibit transmission of cell-free and/or cell-associated HIV. For example, increased concentrations of certain long-chain polyunsaturated fatty acids( LCPUFAs) in breast milk are associated with a decreased risk of HIV breastfeeding transmission[55]. Although their ability to inhibit cell-free versus cell-associated HIV infection has not been experimentally tested to our knowledge, it has been hypothesized that LCPUFAs may inhibit HIV infection by inactivating the virus' envelope, suppressing the release of HIV virions from the host cell membrane and/or enhancing the viability of infected CD4+ T cells[56]. All together, our results highlight the protective role of human breast milk against HIV transmission and suggest that components in both the skim milk and lipid fractions may contribute to its HIV inhibitory activity. Despite ingesting liters of breast milk over a span of several months to years, the majority of infants born to HIV-infected HAART naïve women( ∼85%) do not acquire HIV during breastfeeding[14]. This observation is in agreement with our data demonstrating the potent in vivo inhibitory activity of human breast milk on oral transmission of both cell-free and cell-associated HIV following a single oral exposure. Oral transmission of HIV in the presence of human breast milk may require multiple exposures over time. Furthermore, although increased levels of HIV in breast milk have been associated with an increased risk for HIV transmission during breastfeeding, several other maternal and infant factors have been associated with breastfeeding transmission. Additional maternal factors include seroconversion during lactation, CD4+ T cells counts below 500 cells per mm3, poor breast health( mastitis, nipple bleeding, etc), and decreased levels of alpha-defensins in breast milk. Infants that receive both breast milk and other food( mixed-feeding) are also more susceptible to HIV transmission during breastfeeding as are infants with oral thrush and decreased levels of salivary SLPI[reviewed in[14]]. 
One remaining question is, therefore, whether or not there are differences in the babies or mothers or in the breast milk in the cases where mother-to-child transmission does occur. In the future, it will be important to compare the inhibitory activity of breast milk obtained from HIV-infected mothers who transmit HIV to that of HIV-infected mothers that do not transmit HIV in vivo. In addition, in vivo experiments evaluating transmission in the presence of saliva obtained from infants with low and high levels of salivary SLPI will help assess the contribution of the inhibitory activity of infant salivary SLPI on oral transmission of cell-free and cell-associated HIV. Collectively, our results demonstrate that BLT mice are an attractive small animal model that can be utilized to study key aspects of oral HIV transmission and to test the efficacy of HIV vaccines, antiretroviral therapies and other preventative measures aimed at reducing mother-to-child transmission of HIV during breastfeeding. In addition, our data demonstrating the presence of human immune cells in the oral cavity and GI tract of BLT mice indicate that BLT mice may be utilized to study other human pathogens that are transmitted orally and/or infect these tissues( i. e. HCMV and EBV) and to answer fundamental questions about human oral and gastrointestinal immunity.", "role": "user" }, { "content": "Infected children acquire HIV from their mother in utero , intrapartum or by ingesting their mother's breast milk which can contain both HIV particles ( cell-free ) and HIV-infected cells ( cell-associated ) . Although breastfeeding is attributed to a significant number of HIV infections in children , most breastfed infants remain uninfected despite prolonged and repeated exposure to HIV . This limited transmission has led to two apparently contradictory roles for milk in HIV infection: vector of transmission or vehicle of protection ? Milk has a strong inhibitory effect on HIV infection in vitro . However , this has never been demonstrated in an in vivo system . In the present study , we address this paradox in a bone marrow/liver/thymus ( or BLT ) humanized mouse model of oral transmission of cell-free and cell-associated HIV . We demonstrate that human breast milk has potent HIV inhibitory activity that can prevent oral transmission of cell-free and cell-associated HIV in vivo . Our results provide key insight into oral HIV transmission and the protective role of milk . However , since transmission can and does occur in some instances after continued exposure to HIV in milk , we demonstrate that oral HIV transmission can be efficiently prevented in BLT humanized mice by the systemic administration of antiretrovirals .", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.4259
[ { "content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: dexamethasone, a steroid commonly used to treat inflammation, can drastically reduce death in patients who are severely sick with COVID-19.\n\nAbstract:\nOBJECTIVES: The infection caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV2) spreads worldwide and is considered a pandemic.\nThe most common manifestation of SARS-CoV2 infection (Coronavirus disease 2019 - COVID-19) is viral pneumonia with varying degrees of respiratory compromise and up to 40% of hospitalized patients might develop Acute Respiratory Distress Syndrome (ARDS).\nSeveral clinical trials evaluated the role of corticosteroids in non-COVID-19 ARDS with conflicting results.\nWe designed a trial to evaluate the effectiveness of early intravenous dexamethasone administration on the number of days alive and free of mechanical ventilation within 28 days after randomization in adult patients with moderate or severe ARDS due to confirmed or probable COVID-19.\nMETHODS: This is a pragmatic, prospective, randomized, stratified, multicenter, open-label, controlled trial including 350 patients with early-onset (less than 48h before randomization) moderate or severe ARDS, defined by the Berlin criteria, due to COVID-19.\nEligible patients will be randomly allocated to either standard treatment plus dexamethasone (intervention group) or standard treatment without dexamethasone (control group).\nPatients in the intervention group will receive dexamethasone 20mg IV once daily for 5 days, followed by dexamethasone 10mg IV once daily for additional 5 days or until Intensive Care Unit (ICU) discharge, whichever occurs first.\nThe primary outcome is ventilator-free days within 28 days after randomization, defined as days alive and free from invasive mechanical ventilation.\nSecondary outcomes are all-cause mortality rates at day 28, evaluation of the clinical status at day 15 assessed with a 6-level ordinal scale, mechanical ventilation duration from randomization to day 28, Sequential Organ Failure Assessment (SOFA) Score evaluation at 48h, 72h and 7 days and ICU-free days within 28.\nETHICS AND DISSEMINATION: This trial was approved by the Brazilian National Committee of Ethics in Research (Comissao Nacional de Etica em Pesquisa - CONEP) and National Health Surveillance Agency (ANVISA).\nAn independent data monitoring committee will perform interim analyses and evaluate adverse events throughout the trial.\nResults will be submitted for publication after enrolment and follow-up are complete.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.gnormplus_ner
science.gnormplus_ner.218
[ { "content": "Extract all unique entities of these types from the text: [\"Gene\"].\n\nOutput a JSON object of the format: {\"Gene\": [\"SSX\", ...]}.\n\nText:\nReceptor-associated Mad homologues synergize as effectors of the TGF-beta response. Transforming growth factor-beta TGF-beta is the prototype for a family of extracellular proteins that affect cell proliferation and tissue differentiation. TGF-beta-related factors, including BMP-2/4, Dpp and activin, act through two types of serine/threonine kinase receptors which can form a heteromeric complex. However, the mechanism of signal transduction by these receptors is largely unknown. In Drosophila, Mad is required for signalling by Dpp. We have isolated complementary DNAs for four human Mad homologues, one of which, hMAD-4, is identical to DPC-4, a candidate tumour suppressor. hMAD-3 and -4 synergized to induce strong ligand-independent TGF-beta-like responses. When truncated at their carboxy termini, hMAD-3 and -4 act as dominant-negative inhibitors of the normal TGF-beta response. The activity of hMAD-3 and -4 was regulated by the TGF-beta receptors, and hMAD-3 but not hMAD-4 was phosphorylated and associated with the ligand-bound receptor complex. These results define hMAD-3 and -4 as effectors of the TGF-beta response and demonstrate a function for DPCA-4/hMAD-4 as a tumour suppressor.", "role": "user" }, { "content": "{\"Gene\": [\"Mad\", \"ligand-bound receptor\", \"Transforming growth factor-beta\", \"Dpp\", \"DPCA-4\", \"BMP-2/4\", \"hMAD-3 and -4\", \"DPC-4\", \"TGF-beta-related factors\", \"hMAD-4\", \"hMAD-3\", \"serine/threonine kinase receptors\", \"tumour suppressor\", \"dominant-negative inhibitors\", \"TGF-beta\"]}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.3120
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nNo gains in efficacy observed by adding gemcitabine to adjuvant therapy for lymph node-positive breast cancer .", "role": "user" }, { "content": "{\"Intervention\": [\"gemcitabine to adjuvant therapy\"], \"Outcome\": [\"efficacy\"], \"Participant\": [\"lymph node-positive breast cancer\"]}", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.591
[ { "content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nTemporal profile of brain and pituitary GnRHs, GnRH-R and gonadotropin mRNA expression and content during early development in European sea bass (Dicentrarchus labrax L.). A likely endocrine control mechanism for sexual differentiation in size-graded populations of European sea bass (Dicentrarchus labrax) is proposed by evaluating the brain expression and pituitary content of two forms of gonadotropin-releasing hormone (GnRH), namely sea bream (sbGnRH) and salmon (sGnRH), the pituitary expression of one subtype of GnRH receptor (dlGnRH-R-2A) and the three gonadotropin (GtH) subunits, namely glycoprotein alpha (GPalpha), follicle-stimulating hormone beta (FSHbeta) and luteinizing hormone beta (LHbeta), as well as the pituitary and plasma LH levels between 50 and 300 days post-hatching (dph). Four gradings were conducted between 2 and 8 months after hatching, resulting in a population of large and small individuals, having 96.5% females (female-dominant population) and 69.2% males (male-dominant population), respectively, after the last grading. The onset of gonadal differentiation was different in the two sexes, and coincided with a peak of expression of sbGnRH or sGnRH. Furthermore, the expression of these GnRHs was correlated with the expression of dlGnRH-R-2A. Sex-related differences in the brain and pituitary content of sbGnRH were also found at the time of sexual differentiation. Moreover, the observed sexual dimorphism at the transcriptional or synthesis level of these GnRH forms suggests that a different neuro-hormonal regulation is operating according to sex. At the onset of sex differentiation, FSHbeta transcriptional activity reached maximal values, which were maintained until the completion of the process. The present study suggests a role for sbGnRH, sGnRH and the dlGnRH-R-2A during gonadal differentiation, possibly through enhancement of FSHbeta gene expression. In males, a different endocrine regulation seems to exist also during spermiogenesis and spermiation, when gene transcription, peptide synthesis and release of LH are of greater importance.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.6519
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nthe clinical evaluation of left ventricular ( lv ) diastolic function has been a difficult challenge .\nlv filling and pulmonary venous doppler patterns have been utilized to estimate lv diastolic function , comprising variables as such stiffness , relaxation and even lv filling pressures .\ndoppler echocardiography has been utilized not only as a diagnostic tool but also as a monitoring tool , permitting follow - up of the effects of therapeutic interventions . in the previous issue of critical care , vignon and\ncolleagues demonstrated the value of tissue doppler imaging ( tdi ) to estimate lv diastolic function in a setting where acute alterations of preloading conditions may interfere .\nthey also clearly show some weak points with respect to the correct use of these function variables .\na typical transmitral doppler pattern consists of a larger early filling velocity wave ( e wave ) followed by an atrial contraction flow velocity wave .\nreduced lv relaxation ( present in patients with advanced age , ischaemic heart disease or arterial hypertension ) will induce a reduction of the e wave in comparison with the atrial contraction flow velocity wave . increasing filling\npressures will increase the e wave velocity and will shorten the deceleration time of the e wave , with a transition from pseudonormalization to a restrictive pattern .\nthe phase of diastolic dysfunction for an individual patient remains difficult because of the interplay between relaxation and preload , which makes the routine pulsed wave doppler indices useless particularly in a setting where preloading conditions may change constantly and abruptly , as in many critically ill patients .\nthis doppler mode facilitates the assessment of the movement of the myocardial tissue , which typically reflects low velocity with very high amplitude doppler signals .\ntdi can therefore be utilized to estimate myocardial velocities at the mitral annulus to obtain an impression of both systolic and diastolic myocardial motion .\nthe spectral doppler pattern is characterized by a systolic wave , an early diastolic wave ( e ' wave ) and an atrial velocity wave .\nthe technique is hampered by shortcomings related to the doppler technology ( angle misalignment , translation and rotation of the myocardial tissue ) and by intrinsic characteristics of myocardial function ( for example , the presence of regional wall motion abnormalities ) .\nnevertheless , regional lv systolic function and lv diastolic function assessment is possible provided the sample volume is placed at the level of the mitral annulus and no ischaemia / infarction in the annulus region is present .\nwhereas the systolic component of this doppler pattern has been shown to be clearly preload dependent , less transparency exists on the load dependency of the diastolic myocardial velocities . 
with the initial description of this variable\n, the index appeared to be load independent in settings of rapid infusion or preload alteration .\njacques and colleagues , however , reported both afterload independency and preload dependency of the e ' wave in situations where a normal lv function was present .\nin contrast to the present study of vignon and colleagues , jacques and colleagues did not report the sample volume location .\ncomparison between the septal and lateral mitral annulus signals suggests that the septal signals are apparently more sensitive to preload alterations .\nthe reader , however , should bear in mind that the stability of the signals obtained at the septal side can be questioned , particularly in mechanically ventilated patients with ventilation - induced , right - sided , afterload shifts .\nnevertheless , several authors have already reported differences in e ' wave velocities between the septal and lateral walls , without the ability to indicate a clear cause for this difference .\nvignon and colleagues show no changes of the e ' wave in a rather small subset of intensive care unit patients , and again several issues have to be kept in mind .\nthe effects of drugs such as epinephrine , however , have not been described ; one should expect an increase of the e ' wave , at least when considering a similar effect to that with dobutamine .\nit can thus be questioned whether an ultrafiltration ( and thus load diminishing ) effect is not concurring , keeping the e ' wave constant .\nsecond , the influence of ultrafiltration itself on lv systolic function and lv diastolic function during septic shock is largely unknown .\nthe combination of lower preload ( by ultrafiltration ) and decreased lv function would render a less preload - dependent e ' wave , as suggested by jacques and colleagues the number of intensive care unit patients included in this subset is again far too small to conclude in a proper manner . from all these studies , it is unclear whether the discussed concepts apply to all haemodynamic states . the fact that the e wave / e ' wave relates to pulmonary capillary wedge pressure suggests that increased intrathoracic pressures could impede this relationship strongly .\napart from the fact that the e ' wave ( in conjunction with traditional doppler parameters ) is useful in determining lv diastolic function more accurately , tdi in the critically ill patient keeps more questions open than are answered .\ne wave = early filling velocity wave ; e ' = early diastolic wave ; lv = left ventricular ; tdi = tissue doppler imaging .\nthis work was supported by an unrestricted grant from the international research centre of ghent university .", "role": "user" }, { "content": "assessment of left ventricular diastolic function in the critically ill patient remains a difficult issue in clinical practice . combined use of routine transmitral and pulmonary venous doppler patterns in conjunction with tissue doppler imaging \n have been claimed to allow bedside diagnosis of diastolic dysfunction . \n although in the previous issue of critical care it was clearly demonstrated there might be a difference in load dependency of the early myocardial tissue doppler velocity between lateral and septal placed sample volume , there remain still several unanswered questions , particularly with respect to the preload dependency of these indices .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.102114
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nhe stated he was pressed between reversing 10-ton truck and wall at a construction site .\nplain radiographs and three - dimensional ct revealed an apc type ii pelvic ring injury with 3.0 cm sized diastasis of pubic symphysis ( fig .\nwe scheduled an operation for open reduction and internal fixation with plate on diastasis of pubic symphysis at 4 days after injury .\nsurgery was performed under image intensifler guidance , in the supine position through a pfannenstiel approach .\nafter reduction of symphysis pubis was tried but failed , using a weber pointed reduction clamp placed anteriorly at same level on pubic body , satisfactory reduction was obtained using a farabeau clamp after temporary two screw insertion on superior border of both pubis , and confirmed by fluoroscopy . due to a difficulty of reduction , double plating was done at anterior and superior border of pubis ( fig .\non postoperative radiograph checked on one day after operation , right si joint widening was observed ( fig .\nwe planned a supplementary operation using percutaneous si screws at the second day after initial operation .\nthe patient was positioned supine with applying radiolucent padding under his sacrum not to compress buttock muscle . under an image intensifier , we inserted half - threaded cannulated screw with a washer into s1 vertebral body placed from the lateral surface of the iliac wing for achieving the reduction of widen si joint\n. then second fully - threaded screw was inserted into s2 body as a positional screw .\nthe patient recovered well and was discharged from the hospital on the seventh day after second operation .\n3d - f ) , satisfactory reduction of right si diastasis was confirmed . partially weight bearing on the injured side and fully weight bearing on the contralateral side were permitted after removal of drainage .\none year after injury he had no anterior pain and his follow - up radiograph showed no screw breakage , fixation failure or heterotopic ossification ( fig .\nthe patient was asked if data concerning the case could be submitted for publication , and he consented .\npubic symphysis is known as a non - synovial amphiarthrodial joint that is formed at the junction of the two innominate bones anteriorly .\nthe joint comprises a fibro - cartilaginous disc that is stabilized by the anterior capsular and ligamentous structures .\nthe typical mechanism of injury for an open - book pelvic ring injury is due to a force directed from anterior to posterior , or an external rotation or abduction force applied to either or both legs8 ) . in apc type ii patterns ,\nwidening of the symphysis of more than 2.5 cm has been correlated with rupture of anterior sacrospinous ligament and thought to be required operative fixation like a symphyseal plating with or without si joint fixation2345 ) .\nthe apc type iii patterns is characterized by vertically unstable injury associated with a posterior disruption of si complex , which requires both anterior and posterior fixation146 ) . in general , diastasis of pubic symphysis without any evidence of vertical instability\nthus only anterior plating is enough to reduce disrupted pelvic ring due to anatomical properties of posterior pelvic ring similar to suspension bridge . 
until now , retrospective studies suggest that posterior ring fixations may not be necessary in apc type ii injury pattern injury8910 ) .\nstatic radiographs may lead to underestimation of the actual injury , even if no pelvic manipulation has occurred .\nstress x - rays were performed on a series of 22 patients with a symphyseal diastasis between 1 and 2.5 cm to determine the extent of pelvic ring instability7 ) despite an injury , radiographic measurement of an average diastasis of 1.8 cm , the average diastasis with applied stress was 2.5 cm .\nsuch findings of intraoperative stress examination have led to a change of the treatment plan in more than 25% of patients7 ) . in the present a case , though initial radiographs revealed 3.0-cm pubic symphyseal diastasis , ct revealed 1.2-cm diastasis and no widening of si joint .\nwe assumed that unintended significant variable recoil may occur even if any pelvic wrapping was not tried .\nsuch unintended recoil could happen by the patient transportation process using bed sheet for checking ct or the convex table setting during checking ct , which is different from flat table setting during checking plain radiograph .\neven if initial radiograph showed si diastasis , following preoperative ct showed minimal si diastasis .\nthus we think there may be injured anterior si complex and intact posterior si complex which could be treated by only anterior plating . however , though anterior plating was done , postoperative radiograph and ct revealed widening of si joint .\nwe presumed that there would be concealed injury of posterior si ligamentous complex which should act as a hinge if injury type was apc ii and it could make consequent seesaw effect by forced reduction of widened pubic symphysis with double anterior plating . when ct was checked for scrutinizing associated posterior ring lesions that have occurred after diastasis of pubic symphysis\nwas measured more than 2.5 cm by initial radiograph , it should be scrutinized not only associated si joint widening but also whether diastasis of pubic symphysis on ct was equal to that of initial radiograph . making a comparison between initial radiographs and following ct , if a smaller measurement of pubic symphyseal widening was observed on following ct , it should be considered a possibility of underestimation of a si joint injury or apc type iii injury . even when pelvic open - book injury was thought that only anterior plating is required without si joint fixation , intraoperative stress test for excluding concealed posterior ring disruption should be considered .", "role": "user" }, { "content": "the classification of anteroposterior compression ( apc ) injury type is based on using static radiographs , stress radiographs are known as a useful adjunct in classifying type of apc pelvic injuries . according to a recent article , the intraoperative stress examination has led to a change in the treatment plan in more than 25% of patients on 22 patients presumed apc type i ( symphyseal diastasis < 2.5 cm ) injuries . here \n authors present a case demonstrating a necessity of intraoperative stress test for excluding concealed posterior ring disruption .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.120144
[ { "content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nblack holes which are formed in the early universe are known as primordial black holes ( pbhs ) .\nthese black holes are formed as a result of initial inhomogeneities [ 1,2 ] , inflation [ 3,4 ] , phase transitions [ 5 ] , bubble collisions [ 6,7 ] or the decay of cosmic loops [ 8 ] . in 1974 hawking discovered that the black holes emit thermal radiation due to quantum effects [ 9 ] . so the black holes get evaporated depending upon their masses .\nsmaller the masses of the pbhs , quicker they evaporate .\nbut the density of a black hole varies as inversely with it s mass .\nso high density is needed for forming lighter black holes . and such high densities is available only in the early universe .\nthus primordial black holes are the only black holes whose masses could be small enough to have evaporated by present time .\nfurther , pbhs could act as seeds for structure formation[10 ] and could also form a significant component of dark matter[11,12,13 ] . using standard cosmology barrow and carr\n[ 14 ] have studied the evaporation of pbhs .\nthey have , however , not included the effect of accretion of radiation which seems to play an important role .\nmajumdar , das gupta and saxena [ 15 ] have solved the baryon asymmetry problem including accretion . in the present work ,\nwe include accretion of radiation while studying the evaporation of pbhs and have shown that how evaporation times of pbhs change wih accretion efficiency .\nfor a spatially flat(k=0 ) frw universe with scale factor @xmath0 , the einstein equation is [ 16 ] @xmath1 where @xmath2 is the density of the universe .\n+ the energy conservation equation is @xmath3 on assuming that the universe is filled with perfect fluid describrd by equation of state @xmath4 .the parameter @xmath5 is @xmath6 for radiation dominated era@xmath7 and is @xmath8 for matter dominated era@xmath9 , where time @xmath10 marks the end of the radiation dominated era @xmath11 sec .\n+ now equation(2 ) gives @xmath12 using this solution in equation(1 ) , one gets the wellknown temporal behaviour of the scale factor @xmath13 as @xmath14 due to hawking evaporation , the rate at which the pbh mass ( m ) decreases is given by @xmath15 where @xmath16 black hole radius=@xmath17 with g as newton s gravitational constant . + @xmath18 black body constant + @xmath19 and@xmath20 hawking temperature=@xmath21 .\n+ now equation ( 4 ) becomes @xmath22 integrating the above equation , we get @xmath23^{\\frac{1}{3}}\\end{aligned}\\ ] ] where @xmath24 .\nwhen a pbh passes through radiation dominated era , the accretion of radiation leads to increase of its mass with the rate given by @xmath25 where @xmath26 is the radiation energy density of the sorrounding of the black hole=@xmath27 and @xmath28 is the accretion efficiency .\nthe value of the accretion efficiency @xmath28 depends upon complex physical processes such as the mean free paths of the particles comprising the radiation sorrounding the pbhs .\nany peculiar velocity of the pbh with respect to the cosmic frame could increase the value of @xmath28 [ 15,17 ] .\nsince the precise value of @xmath28 is unknown , it is customary [ 18 ] to take the accretion rate to be proportional to the product of the surface area of the pbh and the energy density of radiation with @xmath29 . 
+ after substituting the expressions for @xmath30 and\n@xmath31 equation(7 ) becomes @xmath32 using equation(3 ) , we get @xmath33 on integration , the above eqution gives @xmath34^{-1}\\end{aligned}\\ ] ] the variation of accreting mass with time for different @xmath28 is shown in figure-1 .\n+ from figure-1 , it is cleared that the mass of the pbh increases with accretion efficiency .\n+ now equation(10 ) gives @xmath35 but horizon mass varies with @xmath36 as @xmath37 .\n+ so @xmath38 but for accretion to become effective , horizon mass @xmath39 must grow faster than black hole mass @xmath40 which implies ( @xmath41 ) is a positive quantity .\n+ hence equation(12 ) demands ( @xmath42 ) is a positive quantity which gives @xmath43 the behaviour of horizon mass @xmath44 and primordial black hole mass is shown in figure-2 .\nhaving @xmath45 and @xmath44 with @xmath36 ]\nprimordial black holes , as discussed before , are only formed in radiation dominated era .\nso depending on their evaporation , we can divide pbhs into 2 categories .\n+ ( i ) pbhs evaporated in radiation dominated era @xmath7 + ( ii ) pbhs evaporated in matter dominated era @xmath9 . +\n* case - i * @xmath7 + black hole evaporation equation ( 6 ) implies @xmath47^\\frac{1}{3}\\end{aligned}\\ ] ] if we consider both evaporation and accretion simultaneously , then the rate at which primordial black hole mass changes is given by @xmath48 this equation can nt be solved analytically .\nso we have solved it by using numerical methods .\n+ ( for our calculation purpose , we have used @xmath49 and @xmath50 . )\n+ for a given @xmath51 the equation(14 ) and the solution of the equation(15 ) are shown in figure-3 .\n+ ] figure-3 clearly shows that the evaporation time of pbh increases with accretion efficiency . + * case - ii * @xmath9 + since there is no accretion in matter dominated era , so the first term in the combined equation ( 15 ) for variation of @xmath52 with time needs to be integrated only upto @xmath10 .\n+ based on numerical solution with above provision , we construct the following table-1 for the pbhs which are evaporated today i.e. @xmath53 . + @xmath54table-1 + @xmath28 & @xmath55 & @xmath51 + @xmath8 & @xmath56s & @xmath57 g + @xmath58 & @xmath59s & @xmath60 g + @xmath61 & @xmath62s & @xmath63 g + @xmath64 & @xmath65s & @xmath66 g + @xmath67 & @xmath68s & @xmath69 g + @xmath70 & @xmath71s & @xmath72 g + @xmath73 & @xmath74s & @xmath75 g +\nthe fraction of the universe s mass going into pbhs at time @xmath36 is [ 2 ] @xmath76(1+z)^{-1}\\end{aligned}\\ ] ] where @xmath77 is the density parameter associated with pbhs formed at time @xmath36 , @xmath78 is the redshift associated with time @xmath36 and @xmath79 is the microwave background density having value @xmath80 . + again for @xmath81 , @xmath82 .\n+ so @xmath83 again using @xmath84 , we can write the fraction of the universe going into pbhs of mass m is @xmath85 observations of the cosmolgical deceleration parameter imply @xmath86 over all mass ranges for which pbhs have not evaporated .\nbut presently evaporating pbhs(@xmath87 ) generate a @xmath5-ray background whose most of the energy is appearing at around 100 mev[19 ] .\nif the fraction of the emitted energy which goes into photons is @xmath88 , then the density of the radiation at this energy is expected to be @xmath89 . 
since @xmath90 and the observed @xmath5-ray background density around @xmath91 mev is @xmath92\n, we infer @xmath93 .\n+ now equation ( 18 ) becomes @xmath94 the variation of @xmath95 with @xmath28 drawn from variation of @xmath87 with @xmath28 is shown in the following table-2 .\n@xmath96table-2 + @xmath28 & @xmath87 & @xmath97 + @xmath8 & @xmath98 g & @xmath99 + @xmath61 & @xmath100 g & @xmath101 + @xmath67 & @xmath102 g & @xmath103 + @xmath73 & @xmath104 g & @xmath105 + @xmath106 & @xmath107 g & @xmath108 + @xmath109 & @xmath110 g & @xmath111 +\nif we consider only evaporation then the primordial black holes which are created on or before @xmath112 sec are completely evaporated by present time .\nhowever , we found that if we include accretion , then the primotdial black holes which are created on @xmath56sec will live longer depending on accretion efficiency .\nwe also concluded that accretion efficiency can not exceed @xmath113 .\nfurther , our analysis imposes constraint on the fraction of the universe s mass going into pbhs which is consistent with previous results [ 20,21,22 ] that @xmath114 .\nthus accretion increases the number of existing pbhs depending on accretion efficiency , which helps in considering pbh as the proper candidate for dark matter .\nwe , thus , provide within standard cosmology a possible realisation of the speculation advanced earlier [ 11,12,13 ] .\nwe are thankful to institute of physics , bhubaneswar , india , for providing the library and computational facility .\nb.nayak would like to thank the council of scientific and industrial research , government of india , for the award of jrf , f.no .\n09/173(0125)/2007-emr - i .\n@xmath115 $ ] ya . b. zeldovich and i. novikov , sov .\nastron . astrophys .\nj. @xmath116 , @xmath117 @xmath118 .\n@xmath119 $ ] b. j. carr , astrophys .\nj. @xmath120 , @xmath121 @xmath122 .\n+ @xmath123 $ ] m. y. kholpov , b. a. malomed and ya .\nb. zeldovich , mon . not .\nr. astron .\n@xmath124 , @xmath125 @xmath126 .\n+ @xmath127 $ ] b. j. carr , j. gilbert and j. lidsey , phys .\nrev . d @xmath128 @xmath129 @xmath130 .\n+ @xmath131 $ ] m. y. kholpov and a. polnarev , phys .\n@xmath132 , @xmath133 @xmath134 .\n+ @xmath135 $ ] h. kodma , m. sasaki and k. sato , prog .\n@xmath136 , @xmath137 @xmath138 .\n+ @xmath139 $ ] d. la and p. j. steinhardt , phys .\nlett @xmath140 , @xmath141 @xmath142 .\n+ @xmath143 $ ] a. polnarev and r. zemboricz , phys .\nd @xmath144 , @xmath145 @xmath146 .\n+ @xmath147 $ ] s. w. hawking , commun . math .\n@xmath144 , @xmath148 @xmath122 .\n+ @xmath149 $ ] k. j. mack , j. p. ostriker and m. ricotti , astrophys . j. @xmath150 , @xmath151 @xmath152 .\n+ @xmath153 $ ] d. blais , c. kiefer , d. polarski , phys .\nb @xmath154 , @xmath155 @xmath156 .\n+ @xmath157 $ ] d. blais , t. bringmann , c. kiefer , d. polarski , phys . rev .\nd @xmath158 , @xmath159 @xmath160 .\n+ @xmath161 $ ] a. barrau , d. blais , g. boudoul , d. polarski , annalen phys .\n@xmath162 , @xmath163 @xmath164 .\n+ @xmath165 $ ] j. d. barrow and b. j. carr , phys .\nd @xmath166 , @xmath167 @xmath168 . + @xmath169\n$ ] a. s. majumdar , p. das gupta and r. p. saxena , int .\nd @xmath170 , @xmath171 @xmath172 .\n+ @xmath173 $ ] s. wienberg , ` gravitation and cosmology ' , wiley , new york , @xmath174 .\n+ @xmath175 $ ] n. upadhyay , p. das gupta and r. p. saxena , phys .\nrev . d @xmath176\n, @xmath177 @xmath178 .\n+ @xmath179 $ ] r. guedens , d. clancy and a. r. liddle , phys .\nd @xmath180 , @xmath181 @xmath156 .\n$ ] d. page and s. w. 
hawking , astrophys . j. @xmath183 , @xmath121 @xmath184 .\n+ @xmath185 $ ] b. j. carr , astron .\n@xmath186 , @xmath187 @xmath130 .\n+ @xmath188 $ ] i. d. novikov et al . ,\nj. @xmath189 , @xmath190 @xmath191 .\n+ @xmath192 $ ] j. macgibbon and b. j. carr , astrophys . j. @xmath193 , @xmath194 @xmath195 .", "role": "user" }, { "content": "primordial black holes evaporate due to hawking radiation . \n we find that the evaporation time of primordial black holes increase when accretion of radiation is included . thus depending on accretion efficiency more and more number of primordial black holes \n are existing today , which strengthens the idea that the primordial black holes are the proper candidate for dark matter . \n pacs numbers : 98.80.-k , 97.60.lf key words : primordial black hole , accretionn , accretion efficiency .", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.485
[ { "content": "Summarize the paper:\n\nPaper text:\n\nLike humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum.\nWhile conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of data while introducing pre-computation overheads.\nIn this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning.\nLILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples.\nIt works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn.\nIn the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution.\nWe evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10.\nWe show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks.\nWe further extend LILAC to state-of-the-art performance across CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties.\nDeep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples.\nHowever, successfully training deep networks to solve problems under such conditions is mystifyingly hard (Erhan et al. (2009) ; Larochelle et al. (2007) ).\nThe go-to solution in most cases is Stochastic Gradient Descent with mini-batches (simple batch learning) and its derivatives.\nWhile offering a standardized solution, simple batch learning often fails to find solutions that are simultaneously stable, highly generalizable and scalable to large systems (Das et al. (2016) ; Keskar et al. (2016) ; Goyal et al. (2017) ; You et al. (2017) ).\nThis is a by-product of how mini-batches are constructed.\nFor example, the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution; small batch sizes help achieve more generalizable solutions, but do not scale as well to vast computational resources as large mini-batches.\nIt is hard to construct a solution that is a perfect compromise between all cases.\nTwo lines of work, curriculum learning and label smoothing, offer alternative strategies to improve learning in deep networks.\nCurriculum learning, inspired by strategies used for humans (Skinner (1958) ; Avrahami et al. (1997) ), works by gradually increasing the conceptual difficulty of samples used to train deep networks ; Florensa et al. (2017) ; Graves et al. (2017) ).\nThis has been shown to improve performance on corrupted (Jiang et al. (2017) ) and small datasets (Fan et al. (2018) ).\nMore recently, deep networks have been used to categorize samples (Weinshall et al. 
(2018) ) and variations on the pace with which these samples were shown to deep networks were analyzed in-depth (Hacohen & Weinshall (2019) ).\nTo the best of our knowledge, previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order.\nThis introduces computational overheads e.g. pre-computing the relative difficulty of samples, and also reduces the effective amount of data from which a model can learn in early epochs.\nFurther, curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance in image benchmarks.\nA complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima.\nIn this regard, label smoothing offers an important solution that is invariant to the underlying architecture.\nEarly works like Xie et al. (2016) replace ground-truth labels with noise while Reed et al. (2014) uses other models' outputs to prevent over-fitting.\nThis idea was extended in Bagherinezhad et al. (2018) to an iterative method which uses logits obtained from previously trained versions of the same deep network.\nWhile Miyato et al. (2015) use local distributional smoothness, based on the robustness of a model's distribution around a data point, to regularize outcomes, Pereyra et al. (2017) penalized highly confident outputs directly.\nClosest in spirit to our work is the label smoothing method defined in Szegedy et al. (2016) , which offers an alternative target distribution for all training samples with no extra data augmentation.\nIn general, label smoothing is applied to all examples regardless of how it affects the network's understanding of them.\nFurther, in methods which use other models to provide logits/labels, often the parent network used to provide those labels is trained using an alternate objective function or needs to be fully re-trained on the current dataset, both of which introduce additional computation.\nIn this work, we propose LILAC, Learning with Incremental Labels and Adaptive Compensation, which emphasizes a label-based curriculum and adaptive compensation, to improve upon previous methods and obtain highly accurate and stable solutions.\nLILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples.\nIt works in two key phases,\n1) incremental label introduction and\n2) adaptive compensation.\nIn the first phase, we incrementally introduce groups of labels in the training process.\nData, corresponding to labels not yet introduced to the model, use a single fake label selected from within the dataset.\nOnce a network has been trained for a fixed number of epochs with this setup, an additional set of ground-truth labels is introduced to the network and the training process continues.\nIn recursively revealing labels, LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples.\nOnce all ground-truth labels are revealed the adaptive compensation phase of training is initiated.\nThis phase mirrors conventional batch learning, except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution.\nThus, we avoid adjusting labels across the entire dataset, like previous methods, while elevating the stability and 
average performance of the model.\nFurther, instead of being pre-computed by an alternative model, these softer distributions are generated on-the-fly from the outputs of the model being trained.\nWe apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines.\nWhile incremental and continual learning work on evolving data distributions with the addition of memory constraints ((Rebuffi et al., 2017; Castro et al., 2018) and derivative works), knowledge distillation ( Rolnick et al., 2018) and similar works) or other requirements, this work is a departure into using negative mining and focused training to improve learning on a fully available dataset.\nIn incremental/continual learning works, often the amount of data used to retrain the network is small compared to the original dataset while in LILAC we fully use the entire dataset, distinguished by Seen and Unseen labels.\nThus, it avoids data deficient learning.\nFurther, works like Bucher et al. (2016) ; Li et al. (2013) ; Wang & Gupta (2015) emphasize the importance of hard negative mining, both in size and diversity, in improving learning.\nAlthough the original formulation of negative mining was based on imbalanced data, recent object detection works have highlighted its importance in contrasting and improving learning in neural networks.\nTo summarize, our main contributions in LILAC are as follows,\n• we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples, • our method adaptively compensates incorrectly labelled samples by softening their target distribution which improves performance and removes external computational overheads, • we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods.\nIn the incremental phase, we initially replace the ground-truth labels of several class using a constant held-out label.\nGradually, over the course of several fixed intervals of training we reveal the true label.\nWithin a fixed interval of training, we keep constant two sets of data, \"Seen\", whose groundtruth labels are known and \"Unseen\", whose labels are replaced by a fake value.\nWhen training, Illustration of the evolution of data partitions in the incremental label introduction phase for a four label dataset.\nIn the first incremental step, only one label is used for training while the remaining data use label 4.\nA short period of training is performed with this fixed setup, where data from U is uniformly sampled to match the number of samples from S, in every mini-batch.\nThe final incremental step depicted is equivalent to batch learning since all the labels are available to the network.\nOnce all the ground-truth labels are revealed we begin the adaptive compensation phase described in Sec. 2.2.\nmini-batches are uniformly sampled from the entire training set, but the instances from \"Unseen\" classes use the held-out label.\nBy the end of the final interval, we reveal all ground-truth labels.\nWe now describe the incremental phase in more detail.\nAt the beginning of the incremental label introduction phase, we virtually partition data into two mutually exclusive sets, S : Seen and U : Unseen, as shown in Fig. 
1 .\nData samples in S use their ground-truth labels as target values while those in U use a designated unseen label, which is held constant throughout the entire training process.\nLILAC assumes a random ordering of labels, Or(M ), where M denotes the total number of labels in the dataset.\nWithin this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b.\nThe remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user.\nTraining in the incremental phase happens at fixed intervals of E epochs each.\nWithin a fixed interval, the virtual data partition is held constant.\nEvery mini-batch of data is sampled uniformly from the entire original dataset and within each mini-batch, labels are obtained based on their placement in S or U. Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U since all data points use the same designated label.\nFinally, the curated mini-batches of data are used to train the neural network.\nAt the end of each fixed interval, we reveal another set of m groundtruth labels and move samples of those classes from U to S after which the entire data curation and training process is repeated for the next interval.\nIn this work, we proposed LILAC which rethinks curriculum learning based on incrementally learning labels instead of samples.\nThis approach helps kick-start the learning process from a substantially better starting point while making the learned embedding space amenable to adaptive negative logit compensation.\nBoth these techniques combine well in LILAC to show the highest performance on CIFAR-10 for simple data augmentations while easily outperforming batch and curriculum learning and label smoothing on comparable network architectures.\nThe next step in unlocking the full potential of this setup is to extend this setup to include a confidence measure on the predictions of network so that it can handle the effects of dropout or partial inputs.\nIn further expanding LILAC's ability to handle partial inputs, we aim to explore its effect on standard incremental learning (memory constrained) while also extending it applicability to more complex neural network architectures.\nA LILAC: ALGORITHM Table 8 : The table captures the effect of varying the number of epochs used for the fixed training intervals in the incremental label introduction phase.\nAcross CIFAR-10 there is an obvious peak after which the mean value decreases.\nHowever, in STL-10 there seems to be a consistent increase, with the assumption of minor noise.\nFinally, in CIFAR-100 there isn't a clear pattern.\nFrom the results in Table 8 , we observe that the choice of E is dependent on the dataset.\nThere isn't an explicit pattern that can be used to select the value of E without trial runs.\nFurther, the available run-time is an important constraint when select E from a range of values since both m and E affect it.", "role": "user" }, { "content": "A novel approach to curriculum learning by incrementally learning labels and adaptively smoothing labels for mis-classified samples which boost average performance and decreases standard deviation.", "role": "assistant" } ]
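The incremental-label and adaptive-compensation mechanics described in this row are concrete enough to sketch in code. What follows is a minimal PyTorch interpretation written for illustration, not the authors' implementation; the helper names, the uniform smoothing distribution, and the softening weight `epsilon` are assumptions.

```python
# Sketch of LILAC's two phases as described above: (1) map classes whose
# ground-truth labels are not yet revealed to one held-out fake label;
# (2) soften the targets of samples the current model misclassifies.
import torch
import torch.nn.functional as F

def lilac_targets(labels, revealed, unseen_label):
    """labels: (B,) int64; revealed: 1-D tensor of currently Seen classes."""
    seen_mask = torch.isin(labels, revealed)
    fake = torch.full_like(labels, unseen_label)
    return torch.where(seen_mask, labels, fake), seen_mask

def adaptive_compensation_loss(logits, labels, epsilon=0.1):
    """Cross-entropy with softened targets only for misclassified samples."""
    num_classes = logits.size(1)
    one_hot = F.one_hot(labels, num_classes).float()
    soft = (1.0 - epsilon) * one_hot + epsilon / num_classes  # softer target
    wrong = logits.argmax(dim=1).ne(labels).unsqueeze(1)      # on-the-fly
    targets = torch.where(wrong, soft, one_hot)
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

labels = torch.tensor([0, 3, 1])
targets, _ = lilac_targets(labels, revealed=torch.tensor([0, 1]), unseen_label=3)
```

Mini-batch rebalancing between S and U (matching sample counts per batch, as the row describes) would sit in the data loader rather than in the loss and is omitted here.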
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.4757
[ { "content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nThe Cochrane Collaboration’s tool for assessing risk of bias in randomised trials\nAbstract 1:\nFlaws in the design , conduct , analysis , and reporting of r and omised trials can cause the effect of an intervention to be underestimated or overestimated . The Cochrane Collaboration ’s tool for assessing risk of bias aims to make the process clearer and more\n\n\nTitle 2:\nEvidence for biological effects of metformin in operable breast cancer: a pre-operative, window-of-opportunity, randomized trial\nAbstract 2:\nMetformin may reduce the incidence of breast cancer and enhance response to neoadjuvant chemotherapy in diabetic women . This trial examined the effects of metformin on Ki67 and gene expression in primary breast cancer . Non-diabetic women with operable invasive breast cancer received pre-operative metformin . A pilot cohort of eight patients had core biopsy of the cancer at presentation , a week later ( without treatment ; internal control ) , then following metformin 500-mg o.d . for 1 week increased to 1-g b.d . for a further week continued to surgery . A further 47 patients had core biopsy at diagnosis were r and omized to metformin ( the same dose regimen ) or no drug , and 2 weeks later had core biopsy at surgery . Ki67 immunohistochemistry , transcriptome analysis on formalin-fixed paraffin-embedded cores and serum insulin determination were performed blinded to treatment . Seven patients ( 7/32 , 21.9 % ) receiving metformin withdrew because of gastrointestinal upset . The mean percentage of cells staining for Ki67 fell significantly following metformin treatment in both the pilot cohort ( P = 0.041 , paired t-test ) and in the metformin arm ( P = 0.027 , Wilcoxon rank test ) but was unchanged in the internal control or metformin control arms . Messenger RNA expression was significantly downregulated by metformin for PDE3B ( phosphodiesterase 3B , cGMP-inhibited ; a critical regulator of cAMP levels that affect activation of AMP-activated protein kinase , AMPK ) , confirmed by immunohistochemistry , SSR3 , TP53 and CCDC14 . By ingenuity pathway analysis , the tumour necrosis factor receptor 1 ( TNFR1 ) signaling pathway was most affected by metformin : TGFB and MEKK were upregulated and cdc42 downregulated ; mTOR and AMPK pathways were also affected . Gene set analysis additionally revealed that p53 , BRCA1 and cell cycle pathways also had reduced expression following metformin . Mean serum insulin remained stable in patients receiving metformin but rose in control patients . This trial presents biomarker evidence for anti-proliferative effects of metformin in women with breast cancer and provides support for therapeutic trials of metformin\n\n\nTitle 3:\nPresurgical Trial of Metformin in Overweight and Obese Patients with Newly Diagnosed Breast Cancer\nAbstract 3:\nIntroduction : We conducted a presurgical trial to assess the tissue-related effects of metformin in overweight/obese breast cancer ( BC ) patients . Methods : Metformin 1,500 mg daily was administered to 35 nondiabetics with stage 0–III BC , body mass index ( BMI ) ≥ 25 kg/m2 . The primary endpoint was tumor proliferation change ( i.e. , ki-67 ) . Tumor proliferation change was compared to untreated historical controls , matched by age , BMI , and stage . 
Results : There was no reduction in ln(ki-67 ) after metformin ( p = .98 ) or compared to controls ( p = .47 ) . There was a significant reduction in BMI , cholesterol , and leptin . Conclusion : Despite no proliferation changes , we observed reductions in other relevant biomarkers\n\n\nTitle 4:\nMetformin intervention in obese non-diabetic patients with breast cancer: phase II randomized, double-blind, placebo-controlled trial\nAbstract 4:\nPrevious observational studies have suggested that metformin in diabetes patients may reduce breast cancer risk more than the reductions from other anti-diabetes medications . This randomized , double-blind , placebo-controlled trial was performed to evaluate the efficacy of metformin for controlling physical and metabolic profiles related to prognosis and adverse events in non-diabetic breast cancer patients . Female breast cancer patients ( N = 105 ) , at least 6 months post-mastectomy , with obesity ( ≥25 kg/m2 ) and/or pre-diabetes ( fasting blood sugar levels ≥100 mg/dL ) , were randomly assigned to three groups ( placebo , metformin 500 mg , and metformin 1000 mg ) stratified by tamoxifen use . A linear mixed model for repeated measurements among three groups and ANOVA for profile differences during 6 months of treatment were used for the intention-to-treat analysis . The metformin 1000 mg group had a significantly greater decline in glucose and HbA1c levels between treatment week 0 and month 6 ( p = 0.008 and 0.009 , respectively ) , and the declines increased with an increase in body mass index ( BMI ) level ( p interaction with BMI = 0.007 and 0.067 , respectively ) . A marginally significant different effect from the metformin 1000 mg treatment was detected for glucose and HbA1c levels ( p interaction = 0.084 and 0.063 , respectively ) in the intention-to-treat analysis . Metformin 1000 mg treatment had a favorable effect on controlling glucose and HbA1C levels in obese non-diabetic breast cancer patients , indicating prognostic importance . Further trials are needed to elucidate the risk – benefit ratio of long-term use of metformin\n\n\nTitle 5:\nDual effect of metformin on breast cancer proliferation in a randomized presurgical trial.\nAbstract 5:\nPURPOSE Metformin is associated with reduced breast cancer risk in observational studies in patients with diabetes , but clinical evidence for antitumor activity is unclear . The change in Ki-67 between pretreatment biopsy and post-treatment surgical specimen has prognostic value and may predict antitumor activity in breast cancer . PATIENTS AND METHODS After tumor biopsy , we randomly allocated 200 nondiabetic women with operable breast cancer to either metformin 850 mg/twice per day ( n = 100 ) or placebo ( n = 100 ) . The primary outcome measure was the difference between arms in Ki-67 after 4 weeks adjusted for baseline values . RESULTS Overall , the metformin effect on Ki-67 change relative to placebo was not statistically significant , with a mean proportional increase of 4.0 % ( 95 % CI , -5.6 % to 14.4 % ) 4 weeks apart . However , there was a different drug effect depending on insulin resistance ( homeostasis model assessment [ HOMA ] index > 2.8 , fasting glucose [ mmol/L ] × insulin [mU/L]/22.5 ; P(interaction ) = .045 ) , with a nonsignificant mean proportional decrease in Ki-67 of 10.5 % ( 95 % CI , -26.1 % to 8.4 % ) in women with HOMA more than 2.8 and a nonsignificant increase of 11.1 % ( 95 % CI , -0.6 % to 24.2 % ) with HOMA less than or equal to 2.8 .
A different effect of metformin according to HOMA index was noted also in luminal B tumors ( P(interaction ) = .05 ) . Similar trends to drug effect modifications were observed according to body mass index ( P = .143 ) , waist/hip girth-ratio ( P = .058 ) , moderate alcohol consumption ( P = .005 ) , and C-reactive protein ( P = .080 ) . CONCLUSION Metformin before surgery did not significantly affect Ki-67 overall , but showed significantly different effects according to insulin resistance , particularly in luminal B tumors . Our findings warrant further studies of metformin in breast cancer with careful consideration to the metabolic characteristics of the study population\n\n\nTitle 6:\nThe Effects of Metformin and Weight Loss on Biomarkers Associated With Breast Cancer Outcomes\nAbstract 6:\nBackground This study investigated the effects of metformin and weight loss on biomarkers associated with breast cancer prognosis . Methods Overweight/obese postmenopausal breast cancer survivors ( n = 333 ) were randomly assigned to metformin vs placebo and to a weight loss intervention vs control ( ie , usual care ) . The 2 × 2 factorial design allows a single randomized trial to investigate the effect of two factors and interactions between them . Outcomes were changes in fasting insulin , glucose , C-reactive protein ( CRP ) , estradiol , testosterone , and sex-hormone binding globulin ( SHBG ) . The trial was powered for a main effects analysis of metformin vs placebo and weight loss vs control . All tests of statistical significance were two-sided . Results A total of 313 women ( 94.0 % ) completed the six-month trial . High prescription adherence ( ie , ≥80 % of pills taken ) ranged from 65.9 % of participants in the metformin group to 81.3 % of those in the placebo group ( P < .002 ) . Mean percent weight loss was statistically significantly higher in the weight loss group ( -5.5 % , 95 % CI = -6.3 % to -4.8 % ) compared with the control group ( -2.7 % , 95 % CI = -3.5 % to -1.9 % ) . Statistically significant group differences ( ie , percent change in metformin group minus placebo group ) were -7.9 % ( 95 % CI = -15.0 % to -0.8 % ) for insulin , -10.0 % ( 95 % CI = -18.5 % to -1.5 % ) for estradiol , -9.5 % ( 95 % CI = -15.2 % to -3.8 % ) for testosterone , and 7.5 % ( 95 % CI = 2.4 % to 12.6 % ) for SHBG . Statistically significant group differences ( ie , percent change in weight loss group minus placebo group ) were -12.5 % ( 95 % CI = -19.6 % to -5.3 % ) for insulin and 5.3 % ( 95 % CI = 0.2 % to 10.4 % ) for SHBG . Conclusions As adjuvant therapy , weight loss and metformin were found to be a safe combination strategy that modestly lowered estrogen levels and advantageously affected other biomarkers thought to be on the pathway for reducing breast cancer recurrence and mortality", "role": "user" }, { "content": "We did not observe non-linear changes in the dose – response relationship between metformin and biomarkers as outcomes .\nConclusions Breast cancer patients receiving metformin as treatment for diabetes showed significant reduction in levels of insulin , fasting glucose , CRP , HOMA , leptin , BMI , and Ki-67", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.7274
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey on Urban Traffic Anomalies Detection Algorithms \nChapter Title: A. STATISTICAL APPROACHES \n1. Abstract of Cited Paper (BIB001): Novel methods for implementation of detector-level multivariate screening methods are presented. The methods use present data and classify data as outliers on the basis of comparisons with empirical cutoff points derived from extensive archived data rather than from standard statistical tables. In addition, while many of the ideas of the classical Hotelling's T2-statistic are used, modern statistical trend removal and blocking are incorporated. The methods are applied to intelligent transportation system data from San Antonio and Austin, Texas. These examples show how the suggested new methods perform with high-quality traffic data and apparently lower-quality traffic data. All algorithms were implemented by using the SAS programming language. \n2. Abstract of Cited Paper (BIB002): This paper studies statistical aggregation procedures in the regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing estimators. We consider here three different types of aggregation: model selection (MS) aggregation, convex (C) aggregation and linear (L) aggregation. The objective of (MS) is to select the optimal single estimator from the list; that of (C) is to select the optimal convex combination of the given estimators; and that of (L) is to select the optimal linear combination of the given estimators. We are interested in evaluating the rates of convergence of the excess risks of the estimators obtained by these procedures. Our approach is motivated by recently published minimax results [Nemirovski, A. (2000). Topics in non-parametric statistics. Lectures on Probability Theory and Statistics (Saint-Flour, 1998). Lecture Notes in Math. 1738 85-277. Springer, Berlin; Tsybakov, A. B. (2003). Optimal rates of aggregation. Learning Theory and Kernel Machines. Lecture Notes in Artificial Intelligence 2777 303-313. Springer, Heidelberg]. There exist competing aggregation procedures achieving optimal convergence rates for each of the (MS), (C) and (L) cases separately. Since these procedures are not directly comparable with each other, we suggest an alternative solution. We prove that all three optimal rates, as well as those for the newly introduced (S) aggregation (subset selection), are nearly achieved via a single \"universal\" aggregation procedure. The procedure consists of mixing the initial estimators with weights obtained by penalized least squares. Two different penalties are considered: one of them is of the BIC type, the second one is a data-dependent l 1 -type penalty. \n3. Abstract of Cited Paper (BIB003): Nonparametric Bayesian mixture models, in particular Dirichlet process (DP) mixture models, have shown great promise for density estimation and data clustering. Given the size of today's datasets, computational efficiency becomes an essential ingredient in the applicability of these techniques to real world data. We study and experimentally compare a number of variational Bayesian (VB) approximations to the DP mixture model. In particular we consider the standard VB approximation where parameters are assumed to be independent from cluster assignment variables, and a novel collapsed VB approximation where mixture weights are marginalized out. 
For both VB approximations we consider two different ways to approximate the DP, by truncating the stick-breaking construction, and by using a finite mixture model with a symmetric Dirichlet prior. \n4. Abstract of Cited Paper (BIB004): This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. \n5. Abstract of Cited Paper (BIB005): Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning. \n6. Abstract of Cited Paper (BIB006): Traffic data collections are exceedingly useful for road network management. Such collections are typically massive and are full of errors, noise and abnormal traffic behaviour. These abnormalities are regarded as outliers because they are inconsistent with the rest of the data. Hence, the problem of outlier detection (OD) is non-trivial. This paper presents a novel method for detecting outliers in large-scale traffic data by modelling the information as a Dirichlet process mixture model (DPMM). In essence, input traffic signals are truncated and mapped to a covariance signal descriptor, and the vector dimension is then further reduced by principal component analysis. This modified signal vector is then modelled by a DPMM. Traffic signals generally share heavy spatial-temporal similarities within signals or among various categories of traffic signals, and previous OD methods have proved incapable of properly discerning these similarities or differences. The contribution of this study is to represent real-world traffic data by a robust DPMM-based method and to perform an unsupervised OD to achieve a detection rate of 96.67% in a ten-fold cross validation. \n7.
Abstract of Cited Paper (BIB007): It is meaningful to detect outliers in traffic data for traffic management. However, it is a massive task for people to distinguish outliers in a large-scale database. In this paper, we present two methods: the Kernel Smoothing Naïve Bayes (NB) method and the Gaussian Mixture Model (GMM) method, to automatically detect any hardware errors as well as abnormal traffic events in traffic data collected at a four-arm junction in Hong Kong. Traffic data was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then projected to a two-dimensional (2D) (x,y)-coordinate plane by Principal Component Analysis (PCA) for dimension reduction. We assume that inlier data are normally distributed. As such, the NB and GMM methods are successfully applied in outlier detection (OD) for traffic data. The kernel smooth NB method assumes the existence of kernel distributions in traffic data and uses Bayes' Theorem to perform OD. In contrast, the GMM method believes the traffic data is formed by the mixture of Gaussian distributions and exploits confidence regions for OD. This paper would address the modeling of each method and evaluate their respective performances. Experimental results show that the NB algorithm with Triangle kernel and the GMM method achieve up to 93.78% and 94.50% accuracies, respectively.\n\nLiterature Review:", "role": "user" }, { "content": "Statistical analysis models such as the Gaussian aggregation model BIB003 , principal component analysis BIB005 , stochastic gradient descent BIB006 , and Dirichlet Process Mixture BIB004 , are based on the fact that, in general, inlier flows follow some statistical process represented by an alternative hypothesis, while the outlier flows deviate from this statistical mechanism and respect the null hypothesis. For detecting outliers in large-scale urban traffic data, Ngan et al. BIB007 proposed a Dirichlet Process Mixture Model (DPMM). The set of all flow values F = {f 1 , f 2 , . . . , f |F| } is projected into an n-dimensional space, where the i th dimension is defined by the flow values {f i , . . . , f i+w }, where w is the window length projection such that (1 ≤ w ≤ |F| and n = |F| + w). The n dimensions are then entered in a Principal Component Analysis (PCA) kernel to reduce and transform the traffic data space into a two-dimensional (2D) (x, y) coordinate plane. In this step, the covariance matrix among the variables of the n dimensions is computed, and the eigenvalues are then determined and sorted from the highest to the lowest. This provides the dimensions in the order of significance. The two highest-significance dimensions are considered while the rest are ignored. The obtained flow vector represented by the two dimensions is injected into the Dirichlet process to detect flow outliers. Thus, the clusters are estimated using G ∼ DP(H, α), with α being the concentration parameter, and H the hypothesis base distribution defined by H = {φ, µ}, where φ and µ are the mixture density covariance and the mixture weights of the data, respectively. The clusters with a high number of flow values are considered as normal, and the other clusters are labeled as outliers. Lam et al. BIB008 proposed a Kernel Smoothing Naive Bayes (KSNB) approach to automatically determine any errors as well as abnormal traffic in data from Hong Kong. The authors assumed that inlier flow values followed a kernel smooth distribution.
The KSNB model automatically determines regions formed by kernel distributions and then considers them as inliers. In contrast, any flow value outside of those regions is considered to be an outlier. The kernel estimator for the set of flow values F is defined over a fitted line with slope m and intercept b = F̄ − m T̄, where F̄ and T̄ are the averages of the flow values and of their time indices, respectively. If the score of F i is greater than 1, then it is considered an outlier; otherwise, it is considered a normal flow value. Turochy and Smith proposed a Multivariate Statistical Quality Control (MSQC) approach for traffic congestion outlier detection. This approach took other traffic variables that contributed to the congested case, such as the average speed and the occupancy rate, instead of using a single variable represented by the flow values. For more details about how to compute these variables, we refer the readers to BIB001 . The historical traffic flows T are fitted to the F-distribution F_{|T|, |T|−p}(α), where p is the number of variables (in this case set to 3), and α is the confidence significance level. When the new observation flows x = {x flow , x speed , x occupancy } are detected, T with the corresponding x are projected to the F-distribution with the α value. If the alternative hypothesis is accepted then x is considered a normal flow; otherwise, the score of x is computed as T^2(x) = (x − x̄)^t S^{−1} (x − x̄), where S is the covariance matrix defined by (x − x̄)(x − x̄)^t . If the score is greater than the cutoff threshold, then x is considered to be an outlier; otherwise, it is a normal flow. Park et al. BIB002 proposed a Multiple Blocks on Multivariate Statistical Quality Control (MB-MSQC) approach to deal with the variability problem of flow during the hours of the day. For example, in almost all urban cities, there is an increase in traffic between 6:00 to 9:00 and 16:00 to 19:00. Thus, the set of flow values is grouped into five distinct blocks: (B 1 : 00:00 to 6:00, B 2 : 6:00 to 9:00, B 3 : 9:00 to 16:00, B 4 : 16:00 to 19:00, and B 5 : 19:00 to 00:00). Afterwards, the MSQC approach described above is independently applied on each block of flows. This algorithm has been tested on traffic data from San Antonio and Austin, USA. According to the authors, the results revealed the superiority of MB-MSQC compared to MSQC in terms of precision.", "role": "assistant" } ]
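The MSQC procedure summarized at the end of this row reduces to a short computation: score a new observation x = (flow, speed, occupancy) by its Hotelling T^2 distance from historical traffic and flag it against an F-distribution control limit. The sketch below is a hedged illustration; the variable names, the alpha level, and the toy data are assumptions, not values from the surveyed papers.

```python
# MSQC-style multivariate outlier scoring, per the description above.
import numpy as np
from scipy import stats

def msqc_outlier(x, history, alpha=0.01):
    """history: (n, p) array of past (flow, speed, occupancy) rows."""
    n, p = history.shape
    mean = history.mean(axis=0)
    cov = np.cov(history, rowvar=False)
    diff = x - mean
    t2 = diff @ np.linalg.inv(cov) @ diff  # Hotelling T^2 score
    # Upper control limit for a new observation, from the F-distribution.
    ucl = p * (n - 1) * (n + 1) / (n * (n - p)) * stats.f.ppf(1 - alpha, p, n - p)
    return t2 > ucl, t2

rng = np.random.default_rng(0)
hist = rng.normal([500.0, 60.0, 0.3], [40.0, 5.0, 0.05], size=(500, 3))
flagged, score = msqc_outlier(np.array([700.0, 20.0, 0.9]), hist)
```

The block-wise MB-MSQC variant would simply maintain one `history` array per time-of-day block and apply the same scoring within each block.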
science.bioasq_factoid_qa
science.bioasq_factoid_qa.362
[ { "content": "Answer the following biomedical question based on the given context. Provide only the answer without additional explanation.\n\nQuestion: Which species of bacteria did the mitochondria originate from?\n\nContext: Recently, α-proteobacteria have been shown to possess virus-like gene transfer agents that facilitate high frequency gene transfer in natural environments between distantly related lineages. This system could have driven the genomic integration of the mitochondrial progenitor and its proto-eukaryote host and contributed to the evolutionary mosaic of genes seen in modern-day prokaryotic and eukaryotic genomes.\nAlthough the Alphaproteobacteria are thought to be the closest relatives of the mitochondrial progenitor, there is dispute as to what its particular sister group is.\nMore detailed phylogenetic analyses with additional Alphaproteobacteria and including genes from the mitochondria of Reclinomonas americana found matches of mitochondrial genes to those of members of the Rickettsiaceae, Anaplasmataceae, and Rhodospirillaceae families.\nBiologists agree that the ancestor of mitochondria was an alpha-proteobacterium.\nMitochondria originated by permanent enslavement of purple non-sulphur bacteria.\nPhylogenetic analyses based on genes located in the mitochondrial genome indicate that these genes originated from within the alpha-proteobacteria.\nThe strong relationship with alpha-proteobacterial genes observed for some mitochondrial genes, combined with the lack of such a relationship for others, indicates that the modern mitochondrial proteome is the product of both reductive and expansive processes.\nAccumulating evolutionary data point to a monophyletic origin of mitochondria from the order Rickettsiales.\nEvolutionary analyses of proteins encoded in the genome contain the strongest phylogenetic evidence to date for the view that mitochondria descend from alpha-proteobacteria.\nThe functional profiles of these genes show similarities to those of mitochondrial genes: no genes required for anaerobic glycolysis are found in either R. prowazekii or mitochondrial genomes, but a complete set of genes encoding components of the tricarboxylic acid cycle and the respiratory-chain complex is found in R. prowazekii.\nPhylogenetic analyses indicate that R. prowazekii is more closely related to mitochondria than is any other microbe studied so far.\nThe phylogenetic analysis supports the hypothesis that mitochondria are derived from the alpha-proteobacteria and more specifically from within the Rickettsiaceae. We have estimated that the common ancestor of mitochondria and Rickettsiaceae dates back to more than 1500 million years ago.\nGOBASE also includes a fully reannotated genome sequence of Rickettsia prowazekii, one of the closest bacterial relatives of mitochondria, and will shortly expand to contain more data from bacteria from which organelles originated.\nThe genome sequence of Rickettsia prowazekii and the origin of mitochondria.\nAlthough mitochondria derive from alpha-proteobacteria, many proteins acting in this organelle did not originate from bacteria.", "role": "user" }, { "content": "Biologists agree that the ancestor of mitochondria was an alpha-proteobacterium.", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.77218
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nproteins encoded by genomes generally function after adopting proper 3d structures . a rapid increase in the number of entirely sequenced genomes led to an unprecedented growth in the number of hypothetical proteins resulting from genome annotation .\nprotein structures and functions can be inferred from amino acid sequences by using advanced computer programs .\nthere is no doubt in the importance of structural and functional annotations of hypothetical proteins .\nthe gtop project was started in 1999 as reported ( 1 ) and was taken over by the dna data bank of japan ( 2 ) in 2007 , under which the database has been continuously updated .\ngtop is a database that provides protein annotation of 3d structures and functions based on similarity searches against pdb ( 3 ) , scop ( 4 ) , and swiss - prot ( 5 ) , 2d structure predictions , pfam ( 6 ) protein families , prosite ( 7 ) functional motifs , prediction of trans - membrane regions , and others .\nthere are several databases of the 3d structures of all the genome - encoded proteins .\nfor example , superfamily ( http://supfam.mrc-lmb.cam.ac.uk/superfamily/ ) ( 8) provides scop domain assignments to proteins encoded by completely sequenced genomes\n. a collection of comparative protein 3d structure models is available at modbase ( http://modbase.compbio.ucsf.edu/modbase-cgi/index.cgi ) ( 9 ) in some entirely sequenced genomes .\ngene3d ( http://gene3d.biochem.ucl.ac.uk/gene3d/ ) ( 10 ) makes public cath - based domain assignments and functional annotations to proteins in more than 500 genomes .\nfunctional and domain assignments including intrinsically disordered ( i d ) regions can be found at pedant ( http://pedant.gsf.de/ ) ( 11 ) . from the previous report\n, we have added a large body of data and tools to gtop , for example i d region assignments , exon information on eukaryotic proteins , an efficient mechanism to search within a user - specified set of genomes , and tools for phylogenetic profile search . since its inception ,\ngtop has employed a user - friendly interface to let the user grasp features of a query protein at a glance .\na gtop user can readily obtain comprehensive structural and functional data of all the proteins encoded by entirely sequenced genomes .\na list of the genomes stored in gtop is available at http://spock.genes.nig.ac.jp/~genome/org.html , together with the abbreviations of organism names used in the database . in the 2002 paper\nthe database has grown to cover a total of 797 genomes , with 41 , 466 , 114 and 176 genomes of archaea , eubacteria , eukaryota and bacteriophages , respectively .\nthe following data are subject to regular renewal : ( i ) amino acid sequences encoded by genomes newly sequenced after the previous update , ( ii ) amino acid sequences that existed in the previous version but were subsequently modified and ( iii ) reference databases such as pdb , scop , swiss - prot , prosite , and pfam whose new versions were released .\nthe sequences fallen in category ( ii ) were recalculated to keep annotations up - to - date . update category ( iii )\nis crucial to keep annotations up - to - date , because most annotations in gtop are obtained by homology search programs or those based on homology search .\nthe main focus of gtop is structural annotations made by homology searches against the pdb and scop databases . 
although gtop used psi - blast\n( 12 ) in the previous report , it now employs reverse - psi - blast ( 13 ) , as this method gives comparable results in drastically reduced computation time .\nhmm searches using the superfamily profiles ( 8) of scop domains were additionally conducted , as they are particularly effective in identifying small domains such as dna binding domains .\nfigure 1 presents a time course of the number of the genomes stored and the average fractions of proteins with 3d annotations made by blast and reverse - psi - blast .\nthe fraction of sequences with alignments to pdb shows a steadily increasing trend , reflecting the growth of the pdb database . the fraction aligned by reverse - psi - blast exceeds that by blast , reflecting the higher sensitivity of the former method .\nhowever , one should note that in this statistics a sequence is considered to be annotated if it has at least one pdb hit by blast or reverse psi - blast and it may have large tracts of structurally undetermined regions . when statistics is evaluated residue - wise , the fractions of regions aligned to pdb sequences in the latest version in human and escherichia coli proteins are 47% and 64% , respectively . \n\nfigure 1 . the time courses of the number of genomes included and the fraction of the sequences with homologs in the pdb .\nthe line graphs represent the ratios of the sequences with homologs in the pdb , while the column graph stands for the number of genomes in gtop .\nthe scales for the fraction and the number of genomes are shown at the right and left ends , respectively .\nthe blue , green , and red lines correspond to fruit fly , e. coli , and the overall average , respectively .\nthe solid and dotted lines respectively show the ratios obtained using reverse psi - blast , and those using blast .\nas most proteins do not entirely consist of structural domains , the fraction of residues with structural assignments will not reach unity ; outside of globular domains there exist i d regions that assume no specific 3d structures by themselves , and tend to contain active regions in proteins involved in crucial biological processes such as signal transduction and transcriptional regulation ( 14 - 16 ) .\nrecent research revealed that i d regions exist predominantly on the cytoplasmic side of eukaryotic proteins ( 17 ) , play important roles in cell signaling , transcriptional control ( 18 ) .\nwe predicted i d regions in proteins stored in gtop by the disopred2 ( 19 ) program and presented them .\nfigure 2a shows a gtop screen shot of human androgen receptor , a typical protein with long i d regions .\nas this example illustrates , gtop graphically displays complex domain architectures of eukaryotic proteins composed of structural domains and i d regions .
\n\n( a ) the domain assignments of the human androgen receptor are presented in color bars to facilitate intuitive grasp of molecular architecture of the protein .\nthis is a typical protein with long i d regions : the n - terminal half of the protein consists mainly of i d regions ( 18,22 ) , consistent with the i d regions predicted by disopred2 ( gray bars on the line marked by disopred ) .\n( b ) a structurally aligned region of the same protein is shown in the exon view . this page can be obtained by clicking on the characters \n1t7ra circled in figure 2a , and by clicking on the exon display and 3d ( jmol - applet ) buttons in the top section of the pop - up screen .
the user can conduct a search for a specific feature that are present in certain species and/or absent in others ; for example , a search for a scop domain present in all the eubacterial species and absent in all the eukaryotic species in gtop .\nthe summary section of gtop also offers comparative statistics , which has the ratio of 3d annotations in each genome , the frequencies of scop folds , superfamilies , and families , pfam domains and prosite motifs .\nthe tools for keyword , homology , and text searches in gtop were thus modified so that the user can reduce search time through selection of the genomes in which to conduct a search .\nthe user can easily specify organisms with the use of check boxes placed next to organism names .\nan annotation summary of each protein , consisting of abbreviated one - line descriptions , is saved in a master file .\nmaster file information for each protein is displayed below a gtop diagram of the type shown in figure 2a .\nall the available data of each genome have been compiled in one file , freely downloadable from ftp://spock.genes.nig.ac.jp / pub / gtop/. explanations of the meanings for each header can be found at http://spock.genes.nig.ac.jp/~genome/mas-doc.html .\ndespite the wealth of currently available structural data and use of sensitive programs , considerable fractions of most proteins have neither structural domains nor i d regions assigned .\nwe are currently developing a system to accurately classify the fraction into structural domains and i d regions .\nexcitingly this will result in reliable identification of structural domains whose 3d structures remain undetermined .\nwe expect that the installation of this system will provide further insights into the protein structure .\nwe are also considering incorporation of protein protein interaction data to enrich gtop further .\nthe gtop database is supported in part by the target protein research program from the ministry of education , culture , sports , science and technology of japan , and in part by the bioinformatics research and development project from the japan science and technology agency .\nfunding for open access publication charge : the ministry of education , culture , sports , science and technology of japan .", "role": "user" }, { "content": "the genomes to protein structures and functions ( gtop ) database ( http://spock.genes.nig.ac.jp/~genome/gtop.html ) freely provides an extensive collection of information on protein structures and functions obtained by application of various computational tools to the amino acid sequences of entirely sequenced genomes . \n gtop contains annotations of 3d structures , protein families , functions , and other useful data of a protein of interest in user - friendly ways to give a deep insight into the protein structure . from the initial 1999 version \n , gtop has been continually updated to reap the fruits of genome projects and augmented to supply novel information , in particular intrinsically disordered regions . \n as intrinsically disordered regions constitute a considerable fraction of proteins and often play crucial roles especially in eukaryotes , their assignments give important additional clues to the functionality of proteins . 
\n additionally , we have incorporated the following features into gtop : a platform independent structural viewer , results of hmm searches against scop and pfam , secondary structure predictions , color display of exon boundaries in eukaryotic proteins , assignments of gene ontology terms , search tools , and master files .", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.61358
[ { "content": "Write an abstract for this chemistry paper.\n\nTitle:\nSynthesis of dihydroimidazole tethered imidazolinethiones and their activity as novel antagonists of the nuclear retinoic acid receptor-related orphan receptors (RORs)\n\nPaper sections:\n\n<p>The nuclear retinoic acid receptor-related orphan receptors (RORs) (NR1F1, NR1F2, and NR1F3), in particular ROR-gamma (RORγ ), have recently been the focus of attention to develop novel treatments of metabolic, inflammatory, and autoimmune diseases. Genetic studies demonstrated a key role for RORγ t (a truncated form of RORγ expressed exclusively in the thymus) in the differentiation of pro-inflammatory Th17 cells1; RORα was subsequently shown to also be essential for Th17 cell differentiation.2 A growing body of evidence suggests a pathogenic role for Th17 in several autoimmune diseases, including multiple sclerosis, rheumatoid arthritis, inflammatory bowel disease, type I diabetes, and psoriasis. 3 The discovery that the RORγ inverse agonists SR1001, digoxin, and ursolic acid could suppress Th17 differentiation and delay the onset of disease in an experimental autoimmune encephalomyelitis mouse model of multiple sclerosis,4–6 demonstrated that targeting RORγ and/or RORα might be an attractive strategy to treat inflammatory autoimmune diseases. During the last decade, numerous compounds have been discovered that modulate RORγ transcriptional activity, and are the focus of therapeutic developments by several pharmaceutical companies (reviewed in7). Here we report the synthesis and screening results of dihydroimidazole tethered imidazolidinethiones to identify novel antagonists of RORs. We generated CHO-K1 stable cell lines expressing a 5x-UAS-luciferase reporter and a Gal4-ROR-LBD for all three RORs (α , β , and γ ) under the control of a tetracycline repressor. These cell lines were used for the screening of all synthesized compounds and we identified several molecules with micromolar activity as RORγ /α dual antagonists, which will serve as starting point for future computer-guided combinatorial chemistry approaches to generate potent and selective RORγ inhibitors.</p><p>The synthesis of dihydroimidazole tethered imidazolidinethione compounds is outlined in Scheme 1.9 Starting from resin-bound orthogonally protected Fmoc-Lys(Boc)-OH 1, the Boc group was deprotected with a solution of trifluoroacetic (TFA) acid in dichloromethane (DCM) and the free ε-amine was acylated with a variety of Boc-amino acids (diversity R1) in the presence of diisopropylcarbodiimide (DICI) and benzotriazole (HOBt). The Boc group was deprotected and the generated amine was acylated with different commercially available carboxylic acids (diversity R2). The resin-bound polyamides 2 were treated with excess borane in tetyrahydrofurane to generate the corresponding resin-bound polyamines 3. The primary amine of the tetraamines 3 was selectively protected with 2-acetyldimedone (Dde-OH) and the resin-bound polyamines was treated with thiocarbonyldiimidazole to afford the corresponding imidazolidinthione 4. 
Following Dde deprotection with a hydrazine in DMF, the free amine was acylated with different carboxylic acids (diversity R3) and the generated compound 5 was treated with phosphorous oxychloride (POCl3) to yield following cleavage of the solid support the corresponding dihydroimidazole tethered imidazolidinethione compounds 6.</p><p>We performed the synthesis of 116 individual compounds and evaluated them on TRex-CHO-luciferase-RORγ cells at 2 μg/ml; 8 compounds elicited significant toxicity as determined by resazurin assay (data not shown) and were discarded for further analysis. 30 compounds that inhibited doxycycline-dependent luciferase stimulation by 70% or more were titrated in a confirmatory assay against RORγ and the top 15 compounds were followed up with a second dose response with a 6 point 2-fold serial dilutions starting at 8 μg/ml, which corresponds to 15–18.7 μM as maximum concentration. This secondary titration experiment was performed in all Gal4-RORγ , RORα , and RORβ stable cell lines and the IC50 values obtained are shown in Fig. 1.</p><p>Compound 6a showed the lowest IC50 towards RORγ (5.1 μM), but it lacked selectivity as it also inhibited transcriptional activity by RORα with comparable efficacy (6.4 μM) and RORβ with slightly higher IC50 (8.3 μM). Compounds 6b and 6c also inhibited RORγ with similar potency (5.6 and 5.8 μM, respectively); whereas 6b was a dual RORγ /α antagonist with lower potency against RORβ , compound 6c showed a slight selectivity towards RORγ with a 2-fold lower IC50. Interestingly, all these compounds share the same R1 (L-Phe) and R2 (phenylacetic acid) groups, varying only in the R3 position. In general, most of the active compounds in this class could be categorized as dual RORγ /α antagonists, with a moderately higher IC50 values against RORβ cells. Compounds 6n and 6o showed some selectivity towards RORγ , but their potency was compromised.</p>\n\nCell lines and luciferase assays\n<p>TRex-CHO-K1 cells (Thermo Scientific), which express the tetracycline repressor, were used to generate stable cell lines expressing a luciferase reporter gene under the control of a 5x-UAS response element (pGL-4.31 [luc2P/Gal4UAS/Hygro], Promega). Following transfection and a two weeks selection with hygromycin, TRex-CHO-UAS-luc cells were transfected with a pcDNA4/TO expression vector (Thermo Scientific) encoding the Gal4-DBD-ROR-LBD fusion proteins containing the following ROR LBDs: mRORγ (NR1F1) (Pro-261-end), hRORα (NR1F2) (Pro-211-end), or rRORβ (Pro 205-end). The pcDNA4/TO expression vector allows for tetracycline controlled expression of the corresponding Gal4-ROR proteins as well as for selection of stably transfected cells with Zeocin. Individual clones for each of the Gal4-RORγ , Gal4-RORα , and Gal4-RORβ cell lines were isolated using a standard serial dilution protocol. Clones with the highest doxycycline-dependent stimulation of luciferase activity were selected for all functional experiments described below. 
For TRex-CHO-Gal4-RORγ-Luc cells, we also selected for clones that showed sensitivity to SR2211 as reported in the literature.8 All cell lines were maintained in complete Ham's F12 medium containing 10% fetal bovine serum (FBS), 10 μg/ml blasticidin, 100 μg/ml hygromycin, and 100 μg/ml Zeocin, and were used within 2 months of culture without any loss of doxycycline-dependent stimulation of luciferase activity.</p><p>For screening dihydroimidazole tethered imidazolinethione compounds, cells were resuspended in Ham's F12 assay medium containing 2% charcoal-treated FBS and no selection drugs. Cells were seeded in 30 μl of assay medium at a density of 10,000 cells per well in 384-well white plates with clear bottoms (Greiner). Cells were allowed to attach for 2 h, followed by compound treatment and stimulation with 20 ng/ml doxycycline for an additional 16–20 h. Appropriate solvent controls, negative controls with no doxycycline, and blanks with no cells were included in every plate. Prior to measuring luciferase activity, 6 μl of 0.15 mg/ml resazurin (Cayman Chem.) in PBS were added and cells were incubated for 1 h at 37 °C. Fluorescence intensity was measured at 545 nm excitation/590 nm emission using a CLARIOstar reader (BMG Labtech) as a measure of cell viability, to detect any potential cytotoxic effect of the test compounds. Following the resazurin assay, the medium was removed and 20 μl of BriteLite-Plus luciferase reagent (PerkinElmer) were added; luminescence was measured in a CLARIOstar reader using a 384 aperture spoon and a 0.1 s measuring interval. Luminescence was normalized by the fluorescence intensity and the percentage of inhibition was calculated with respect to control cells stimulated with doxycycline in the presence of solvent control (0.1% DMF). All primary screens were performed in triplicate with a final concentration of 2 μg/ml. Active compounds were subjected to a dose-response confirmatory assay followed by a second 6-point titration experiment with quadruplicate data points. IC50 values were calculated using GraphPad Prism (GraphPad Software).</p>", "role": "user" }, { "content": "Targeting the transcriptional activity of nuclear hormone receptors has proven an effective strategy to treat certain human diseases, and these receptors have become a major focus point to develop novel therapies for the treatment of cancer, inflammation, autoimmune diseases, metabolic disorders, and others. One family of nuclear receptors that has attracted the most interest in recent years is the retinoic acid receptor-related orphan receptors (RORs), in particular RORγ. RORγ is a critical regulator of the immune system and RORγ antagonists have shown activity in animal models of inflammatory autoimmune diseases. Here we present the synthesis and biological evaluation of dihydroimidazole tethered imidazolinethiones. We have identified several dual RORγ/α and pan-ROR antagonists with significant activity in cellular assays that could serve as starting points for future optimization efforts to generate potent and selective RORγ modulators.", "role": "assistant" } ]
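The IC50 values this row attributes to GraphPad Prism come from fitting the 6-point, 2-fold dilution series described above with a sigmoidal dose-response model. Below is a hedged sketch of an equivalent four-parameter logistic fit; the example inhibition values, the parameterization, and the starting guesses are assumptions rather than the paper's data.

```python
# Four-parameter logistic (4PL) dose-response fit to read off an IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Rises from 'bottom' toward 'top' as concentration increases.
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = 8.0 / 2 ** np.arange(6)                                # 8 ... 0.25 ug/ml
inhibition = np.array([92.0, 80.0, 55.0, 30.0, 12.0, 5.0])    # % inhibition
p0 = [0.0, 100.0, float(np.median(conc)), 1.0]                # starting guesses
params, _ = curve_fit(four_pl, conc, inhibition, p0=p0, maxfev=10000)
print(f"IC50 ~ {params[2]:.2f} ug/ml")
```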
science.qasper_extractive_qa
science.qasper_extractive_qa.805.1
[ { "content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine\n\nMultimedia or spoken content presents more attractive information than plain text content, but it's more difficult to display on a screen and be selected by a user. As a result, accessing large collections of the former is much more difficult and time-consuming than the latter for humans. It's highly attractive to develop a machine which can automatically understand spoken content and summarize the key information for humans to browse over. In this endeavor, we propose a new task of machine comprehension of spoken content. We define the initial goal as the listening comprehension test of TOEFL, a challenging academic English examination for English learners whose native language is not English. We further propose an Attention-based Multi-hop Recurrent Neural Network (AMRNN) architecture for this task, achieving encouraging results in the initial tests. Initial results also have shown that word-level attention is probably more robust than sentence-level attention for this task with ASR errors.\n\nIntroduction\nWith the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test.\nThe listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like “What is name of the highest mountain in Taiwan?”. Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. 
More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously.\nWith the fast development of deep learning, neural networks have been successfully applied to speech recognition BIBREF8 , BIBREF9 , BIBREF10 or NLP tasks BIBREF11 , BIBREF12 . A number of recent efforts have explored various ways to understand multimedia in text form BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . They incorporated attention mechanisms BIBREF16 with Long Short-Term Memory based networks BIBREF19 . In the Question Answering field, most of the works focused on understanding text documents BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Even though BIBREF24 tried to answer questions related to movies, they only used the text and images in the movies for that. It seems that none of them has studied or focused on comprehension of spoken content yet.\n\nProposed Approach\nThe overall structure of the proposed model is in Fig 2 . The input of the model includes the transcriptions of an audio story, a question and four answer choices, all represented as word sequences. The word sequence of the input question is first represented as a question vector $V_Q$ in Section \"Experiments\" . With the question vector $V_Q$ , the attention mechanism is applied to extract the question-related information from the story in Section \"Story Attention Module\" . The machine then goes through the story by the attention mechanism several times and obtains an answer selection vector $V_{Q_n}$ in Section \"Hopping\" . This answer selection vector $V_{Q_n}$ is finally used to evaluate the confidence of each choice in Section \"Answer Selection\" , and the choice with the highest score is taken as the output. All the model parameters in the above procedure are jointly trained with a target of 1 for the correct choice and 0 otherwise.\n\nStory Attention Module\nFig. 3 (B) shows the attention mechanism which takes the question vector $V_Q$ obtained in Fig. 3 (A) and the story transcriptions as the input to encode the whole story into a story vector representation $V_{S}$ . The story transcription is a very long word sequence with many sentences, so we only show two sentences each with 4 words for simplicity. There is a bidirectional GRU in Fig 3 (B) encoding the whole story into a story vector representation $V_{S}$ . The word vector representation of the $t$ -th word $S_{t}$ is constructed by concatenating the hidden layer outputs of forward and backward GRU networks, that is $S_t = [y_{f}(t) \\Vert y_{b}(t)]$ . Then the attention value $\\alpha _t$ for each time index ${t}$ is the cosine similarity between the question vector $V_{Q}$ and the word vector representation $S_{t}$ of each word, that is $\\alpha _t = \\cos (V_{Q}, S_{t})$ . With attention values $\\alpha _t$ , there can be two different attention mechanisms, word-level and sentence-level, to encode the whole story into the story vector representation $V_{S}$ .\nWord-level Attention: We normalize all the attention values $\\alpha _t$ into $\\alpha _t^\\prime $ such that they sum to one over the whole story. Then all the word vectors $S_{t}$ from the bidirectional GRU network for every word in the story are weighted with this normalized attention value $\\alpha _{t}^\\prime $ and summed to give the story vector, that is $V_{S} = \\sum _{t}\\alpha _{t}^{\\prime }S_{t}$ .\nSentence-level Attention: Sentence-level attention means the model collects the information only at the end of each sentence.
Therefore, the normalization is only performed over those words at the end of the sentences to obtain $\\alpha _t^{\\prime \\prime }$ . The story vector representation is then $V_{S} = \\sum _{t=eos}\\alpha _t^{\\prime \\prime }*S_{t}$ , where only those words at the end of sentences (eos) contribute to the weighted sum. So $V_{S} = \\alpha _4^{\\prime \\prime }*S_4 + \\alpha _8^{\\prime \\prime }*S_8$ in the example of Fig. 3 .\n\nAnswer Selection\nAs in the upper part of Fig. 2 , the same way previously used to encode the question into $V_Q$ in Fig. 3 (A) is used here to encode the four choices into the choice vector representations $V_A$ , $V_B$ , $V_C$ , $V_D$ . Then the cosine similarity between the output of the last hop $V_{Q_n}$ and each choice vector is computed, and the choice with the highest similarity is chosen.\n\nBaselines\nWe compared the proposed model with some commonly used simple baselines in BIBREF24 and the memory network BIBREF16 . $\\bullet $ Choice Length: The most naive baseline is to select a choice based on the number of words in it, without listening to the stories or looking at the questions. This included: (i) selecting the longest choice, (ii) selecting the shortest choice or (iii) selecting the choice with the length most different from the rest of the choices. $\\bullet $ Within-Choices similarity: With the vector representations for the choices in the pre-processing of Section \"Experimental Setup\" , we computed the cosine distance among the four choices and selected the one which is (i) the most similar to or (ii) the most different from the others.\n $\\bullet $ Question and Choice Similarity: With the vector representations for the choices and questions in the pre-processing of Section \"Experimental Setup\" , the choice with the highest cosine similarity to the question is selected. $\\bullet $ Sliding Window BIBREF24 , BIBREF32 : This model tries to find a window of $W$ utterances in the story with the maximum similarity to the question. The similarity between a window of utterances and a question was the averaged cosine similarity of the utterances in the window and the question, computed with their GloVe vector representations. After obtaining the window with the largest cosine similarity to the question, the confidence score of each choice is the average cosine similarity between the utterances in the window and the choice. The choice with the highest score is selected as the answer.\n $\\bullet $ Memory Network BIBREF16 : We implemented the memory network with some modifications for this task to find out whether it was able to handle this task. The original memory network didn't have an embedding module for the choices, so we used the question module of the memory network to embed the choices. Besides, in order to have the memory network select the answer out of four choices, instead of outputting a word as in its original version, we computed the cosine similarity between the output of the last hop and the choices to select the closest choice as the answer. We shared all the parameters of the embedding layers in the memory network to avoid overfitting. Without this modification, very poor results were obtained on the testing set. The embedding size of the memory network was set to 128; stochastic gradient descent was used as in BIBREF16 with an initial learning rate of 0.01. The batch size was 40. The number of hops was tuned from 1 to 3 on the development set.\n\nResults\nWe used the accuracy (number of questions answered correctly / total number of questions) as our evaluation metric.
The results are shown in Table 1 . We trained the model on the manual transcriptions of the stories, and tested it on the testing set with both manual transcriptions (column labelled “Manual”) and ASR transcriptions (column labelled “ASR”).\n $\\bullet $ Choice Length: Part (a) shows the performance of the three models selecting the answer with the longest, shortest or most different length, ranging from 23% to 35%.\n $\\bullet $ Within Choices similarity: Part (b) shows the performance of the two models selecting the choice which is the most similar to or the most different from the others. The accuracies are 36.09% and 27.87% respectively.\n $\\bullet $ Question and Choice Similarity: In part (c), selecting the choice which is the most similar to the question only yielded 24.59%, very close to random guessing.\n $\\bullet $ Sliding Window: Part (d), the sliding window, is the first baseline model that considers the transcriptions of the stories. We tried window sizes {1,2,3,5,10,15,20,30} and found the best window size to be 5 on the development set. This implies that the information useful for answering a question probably lies within 5 sentences. The accuracies of 31.15% and 33.61% with and without ASR errors respectively show how the ASR errors affected the results; the task here is too difficult for this approach to achieve good results.\n $\\bullet $ Memory Network: The results of the memory network in part (e) show that this task is relatively difficult for it, even though the memory network was successful in some other tasks. However, its performance of 39.17% accuracy was clearly better than all the approaches mentioned above, and it is interesting that this result was independent of the ASR errors; the reason is under investigation. The accuracy was 31% when we didn't use the shared embedding layer in the memory network.\n $\\bullet $ AMRNN model: The results of the proposed model are listed in part (f), for the word-level and sentence-level attention mechanisms respectively. Without the ASR errors, the proposed model with sentence-level attention gave an accuracy as high as 51.67%, with word-level attention slightly lower. It is interesting that without ASR errors, sentence-level attention is about 2.5% higher than word-level attention, very possibly because aggregating the information of a whole sentence is more useful than attending carefully to every single word, especially for the conceptual and high-level questions in this task. Paying too much attention to every single word may be a bit noisy. On the other hand, the 34.32% ASR error rate affected the sentence-level model more than the word-level model. This is very possibly because the incorrectly recognized words may seriously change the meaning of whole sentences. However, with word-level attention, when a word is incorrectly recognized, the model may be able to attend to other correctly recognized words to compensate for the ASR errors and still come up with the correct answer.\n\nAnalysis on a typical example\nFig 4 visualizes the attention weights obtained for a typical example story in the testing set, with the proposed AMRNN model using word-level or sentence-level attention on manual or ASR transcriptions respectively. The darker the color, the higher the weight. Only the small part of the story where the models' responses differed notably is shown. This story was mainly about the thick clouds and some mysteries of Venus.
The question for this story is “What is a possible origin of Venus' clouds?\" and the correct choice is “Gases released as a result of volcanic activity\". In the manual transcription case (left half of Fig 4 ), both models, with word-level or sentence-level attention, answered the question correctly and focused on the words/sentences most informative for the question. The sentence-level model successfully captured the sentence including “...volcanic eruptions often omits gases.”, while the word-level model captured some important key words like “volcanic eruptions\", “emit gases\". However, in the ASR case (right half of Fig 4 ), the ASR errors misled both models into putting some attention on irrelevant words/sentences. The sentence-level model focused on the irrelevant sentence “In other area, you got canyons...\", while the word-level model focused on some irrelevant words such as “canyons\" and “rift malaise\", but still captured some correct important words like “volcanic\" or “eruptions\" and answered correctly. From the darkness of the colors, we can observe that the problem caused by ASR errors was more serious for sentence-level attention when capturing the key concepts needed for the question. This may explain why in part (f) of Table 1 the degradation caused by ASR errors is smaller for the word-level model than for the sentence-level model.\n\nFigure 2: The overall structure of the proposed Attention-based Multi-hop Recurrent Neural Network (AMRNN) model.\n\nTable 1: Accuracy results of different models\n\nQuestion: What is the new task proposed in this work?", "role": "user" }, { "content": "{\"answer\": [\" listening comprehension task \"], \"evidence\": [\"With the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today\\u0027s machine can perform with such a test.\", \"The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like \\u201cWhat is name of the highest mountain in Taiwan?\\u201d.
Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously.\"]}", "role": "assistant" } ]
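The attention and answer-selection steps described in the row above reduce to a few lines of array code. The sketch below is a minimal illustration, not code from the dataset or the paper: the bidirectional-GRU encoders are replaced by precomputed vectors, softmax normalization of the cosine scores is an assumption (the excerpt only says the attention values are normalized to sum to one), and the function names `story_vector` and `pick_answer` are hypothetical.

```python
# Minimal sketch of word-level / sentence-level attention and answer selection
# as described in the AMRNN excerpt above. NumPy only; the bidirectional GRU
# encoders are assumed to have already produced the vectors below.
import numpy as np

def cosine(u, v):
    # Cosine similarity, with a small epsilon to avoid division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def story_vector(S, V_Q, eos_mask=None):
    """Attention-weighted story vector V_S.

    S        -- (T, d) array of per-word encodings S_t = [y_f(t) || y_b(t)]
    V_Q      -- (d,) question vector
    eos_mask -- optional (T,) boolean array marking end-of-sentence words;
                if given, sentence-level attention is used, else word-level.
    """
    alpha = np.array([cosine(S[t], V_Q) for t in range(len(S))])
    if eos_mask is not None:
        alpha = np.where(eos_mask, alpha, -np.inf)  # only eos words contribute
    w = np.exp(alpha - np.max(alpha))   # softmax normalization (an assumption)
    w /= w.sum()                        # weights sum to one
    return w @ S                        # V_S = sum_t w_t * S_t

def pick_answer(V_Qn, choice_vectors):
    # The choice closest (by cosine similarity) to the last-hop output wins.
    return int(np.argmax([cosine(V_Qn, c) for c in choice_vectors]))

# Toy usage with random vectors standing in for GRU outputs.
rng = np.random.default_rng(0)
S, V_Q = rng.normal(size=(8, 16)), rng.normal(size=16)
eos = np.array([False, False, False, True, False, False, False, True])
V_S_word = story_vector(S, V_Q)        # word-level attention
V_S_sent = story_vector(S, V_Q, eos)   # sentence-level attention
print(pick_answer(V_S_word, rng.normal(size=(4, 16))))
```

Passing an end-of-sentence mask switches the same routine from word-level to sentence-level attention, mirroring the two variants compared in Table 1 of the excerpt.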
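The sliding-window baseline from the same excerpt is similarly compact. Again a hedged sketch under stated assumptions: `W=5` is the window size the excerpt reports as best on the development set, the GloVe-style utterance vectors are assumed precomputed, and `sliding_window_answer` is a hypothetical name.

```python
# Hypothetical sketch of the sliding-window baseline described above: find the
# window of W utterances most similar to the question (averaged cosine
# similarity of precomputed utterance vectors), then score each choice
# against that window.
import numpy as np

def _cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def _avg_cos(window, v):
    # Average cosine similarity between every utterance in the window and v.
    return sum(_cos(u, v) for u in window) / len(window)

def sliding_window_answer(utt_vecs, q_vec, choice_vecs, W=5):
    # Slide over all windows of W utterances; keep the one closest to the question.
    starts = range(max(1, len(utt_vecs) - W + 1))
    best = max(starts, key=lambda i: _avg_cos(utt_vecs[i:i + W], q_vec))
    window = utt_vecs[best:best + W]
    # Confidence of each choice = its average similarity to the chosen window.
    return int(np.argmax([_avg_cos(window, c) for c in choice_vecs]))
```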