Heat Transfer Enhancement by Shot Peening of Stainless Steel

In heat exchange applications, heat transfer efficiency can be improved by surface modification. Shot peening is one of the cost-effective methods for producing different levels of surface roughness. The objectives of this study were to investigate the influence of surface roughness on heat transfer performance and to understand how the shot peening process parameters affect the surface roughness. The considered specimens were 316L stainless steel hollow tubes with smooth and rough surfaces. Computational fluid dynamics (CFD) simulation was used to observe the surface roughness effects. The CFD results showed that the convective heat transfer coefficients had linear relationships with the peak surface roughness (Rz). Finite element (FE) simulation was used to determine the effects of the shot peening process parameters. The FE results showed that the surface roughness increased at higher sandblasting speeds and larger sand diameters.

Introduction

Stainless steels have been used in a wide range of applications across the construction, transportation, medical, nuclear, and chemical industries due to their excellent properties. Mainly because of its ability to resist corrosion, this material has long been used in virtually all cooling waters and many chemical environments. One of the most common uses of stainless steel is in heat exchangers, because it works well in high-temperature conditions (resistance to corrosion, oxidation, and scaling). Stainless steel surfaces are generally also easy to clean. Most importantly, this material is economical in terms of cost and long-term maintenance. The heat transfer efficiency of exchangers can be improved by modifying the geometry of the heat exchange tube/plate and altering fluid flow patterns. Recently, surface modification or texturing has shown its potential in many technological developments, such as friction reduction, biofouling, artificial parts, and even stem cell research. As a result, the trend of using surface modification/texturing for heat transfer enhancement has been on the rise, and some of the reviewed literature is presented below. The influences of dimple/protrusion surfaces on heat transfer were investigated by Jing et al. Chen et al. found that an asymmetric dimple with skewness downstream was better than the symmetric shape in heat exchange. Asymmetric flow structures were numerically evaluated by Turnow et al., and the heat transfer was found to be improved with the asymmetric vortex structures. Du et al. discovered that the dimple location significantly affected the flow structure and heat transfer. Enhanced heat transfer by dimples was also found in the work of Zheng et al. Another study observed that a bleed hole in a dimpled channel helped improve the heat transfer, and a further study numerically investigated the 3D turbulent flow and convective heat transfer of a dimpled tube. Both dimples and protrusions were investigated by the same research group, which found that both the dimpled and protruded surfaces promoted flow mixing, providing a better heat transfer rate. The authors went on to study the effect of the teardrop surface and found an improvement in flow mixing. Other dimple and surface-texture shapes for heat transfer enhancement in various conditions can also be found in many research studies. Looking at the effects of surface roughness on heat transfer, many studies have shown similar findings.
Dierich and Nikrityuk observed that the roughness influenced the surface-average Nusselt number, and the authors also introduced a heat transfer efficiency factor. Pike-Wilson and Karayiannis did not find a clear relationship between the heat transfer coefficient and surface roughness. Ventola et al. proposed a heat transfer model taking into account the size of the surface roughness and turbulent fluid flow. Tikadar et al. looked at heat transfer characteristics of a roughened heater rod. They found that there was an abrupt increase in the heat transfer coefficient at the transition region from the smooth to the rough surface area. Several manufacturing processes can create surface textures on metals, for instance laser texturing, rolling, elliptical vibration texturing, and extrusion forging and extrusion rolling processes. Shot peening has been widely used to create surface irregularities (surface roughness) on metal parts. Most of the research studies on the shot peening of stainless steels have focused on residual stress, fatigue and corrosion, surface characteristics, and tribology. The main interest of this research is to determine the heat transfer efficiency of a shot peened surface on a stainless steel tube. Most of the literature mentioned above used the finite volume method (FVM) to determine the heat transfer characteristics of textured surfaces. In addition, many studies utilized the finite element method (FEM) to understand the effects of shot peening process parameters. The FVM was carried out in this study to analyze the heat convection performance of a shot peened surface in comparison with a smooth one. Then, the FEM was used to predict the shot peening parameters that would provide the enhanced heat transfer surface.

Heat Transfer of Pinned Fins

Since the heat exchanger of interest in this study was a 316L stainless steel tube, the heat transfer testing apparatus shown in Figure 1 was used. Table 1 presents the material properties of the considered tube. The primary purpose of the heat transfer experiment was to determine the heat convection performance of the tube surfaces: smooth surface and rough surface. The fin specimens were hollow tubes 21.34 mm in diameter, 100.00 mm in length, and 2.87 mm in thickness. Note that both ends of each fin were covered with thin circular plates. The smooth surfaces were prepared by machining and polishing to obtain a peak surface roughness (Rz) of 0.015 µm. The rough surfaces were prepared by machining and shot peening to obtain a peak surface roughness (Rz) of 25 µm by using a sand diameter of 350 µm. In each test, four fins were pinned to the heater. The fan was installed below the heater at the Air Inlet location. Three thermocouples were attached at the following locations: Inlet Temperature (Tin), Mid-Point Temperature (Tmid), and Outlet Temperature (Tout) through the Air Outlet. The thermocouples (N-type) with an accuracy of ±1% over the measured temperature span were used to record the temperatures. The temperature of the tested fins could be varied by changing the Heat Input. Once the desired temperature of the fins was set, the fan was turned on to provide the airflow passing the heated fins. The carried heat would flow past both Tmid and Tout, and the measured temperatures would be used to calculate the heat convection performance of the tube surfaces.
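For reference, the fin dimensions above imply an exposed surface area per fin (outer cylindrical surface plus the two end plates, and neglecting the roughness contribution) of roughly

A_fin ≈ π D L + 2 π (D/2)^2 = π (0.02134 m)(0.100 m) + 2 π (0.01067 m)^2 ≈ 7.4 × 10^-3 m^2.

This is only an estimate under the stated assumptions; the paper's own area definition (for example, whether the end plates are counted) may differ.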
The finite volume method (FVM) was applied in this research to evaluate and predict the heat convection performance of various surfaces. The main components and dimensions of the FVM simulation are illustrated in Figure 2. The descriptions, symbols, and units of the necessary parameters used in this research are presented in Table 2. An equation for the convective heat transfer coefficient (hconv) must be developed to evaluate the heat transfer performance of different surfaces. According to the energy balance, expressions were written for the total air mass flow, the power of the heater, the heater temperature (using Tin = Tair), and the heat transfer rate; from these, the convective heat transfer coefficient could be determined. By defining a new constant C, the convective heat transfer coefficient could be rewritten in compact form. In addition, Qin could be obtained from

∫ Qheater dt ≈ Qavg · theat.

As a result, the convective heat transfer coefficient of the pinned fins could be determined from these relations. Note that the C value could be calculated by using the variable values in Table 3. The variables Qavg, Vair, theat, and Tin were input variables. In addition, Tout = Tmid if the measured temperature point was at the Mid-Point Temperature (Tmid).
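Since the original equation set is not reproduced above, the following is only a minimal sketch of a textbook energy-balance estimate of the convective heat transfer coefficient, not the paper's exact formulation; the duct cross-section, fin area, air properties, and all numerical values are assumptions for illustration.

```python
# Minimal sketch (not the paper's exact formulation) of an energy-balance
# estimate of the convective heat transfer coefficient.
RHO_AIR = 1.184   # air density [kg/m^3], assumed value near 25 deg C
CP_AIR = 1007.0   # air specific heat [J/(kg K)], assumed value

def h_conv_estimate(v_air, a_duct, a_fin, t_in, t_out, t_heater):
    """Estimated convective heat transfer coefficient in W/(m^2 K)."""
    m_dot = RHO_AIR * v_air * a_duct        # air mass flow rate [kg/s]
    q = m_dot * CP_AIR * (t_out - t_in)     # heat picked up by the air stream [W]
    dt = t_heater - t_in                    # driving temperature difference [K]
    return q / (a_fin * dt) if dt > 0 else 0.0

# Hypothetical numbers, only to show the call signature (four fins, area per
# fin taken from the rough estimate above):
print(h_conv_estimate(v_air=1.21, a_duct=0.01, a_fin=4 * 7.4e-3,
                      t_in=25.0, t_out=32.0, t_heater=80.0))
```

The paper instead folds the fixed quantities into a single constant C, with the corresponding values tabulated in Table 3.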
Computational Fluid Dynamics (CFD) Modeling

The common tool used with the FVM to analyze heat transfer performance is computational fluid dynamics (CFD). ANSYS CFX 2020 was the commercial software used to evaluate the heat transfer characteristics in this study, as shown in Figure 3. Although there were four fins in the experiments, only the two fins of the right half were modeled because of the symmetric fluid flow along the horizontal direction. The half-geometry was modeled in two dimensions (Half 2D), and the surface roughness along the circumference of each fin was modeled as shown. Figure 4 illustrates the enlarged cross-sectional view of the circumference that represented the actual surface roughness values. The total number of meshing elements was in the 10 million range to accurately capture the fidelity of the surface roughness variations. The k-epsilon turbulence model was used to provide the airflow characteristics. The total energy assumption was applied, and a transient analysis was performed. In the actual heat transfer experiment, the heater was turned on for 20 min (theat) to establish a uniform initial temperature of the fins. Afterward, the fan was turned on according to the set Air Velocity (Vair), and the temperatures at Tin, Tmid, and Tout were recorded every minute over a 10-min period. The same process was set up in the CFD modeling, and the validation conditions of the CFD model are presented in Table 4. The average values of the recorded temperatures were calculated and compared to determine the validity of the CFD model.

Convective Heat Transfer Coefficients of Different Surface Roughness

The prediction of the convective heat transfer coefficients (hconv) of different surface roughness was then carried out by using the validated CFD model. The primary factors considered here were surface roughness (Rz) and airflow speed (Vair). Table 5 presents the considered conditions for the convective heat transfer coefficient prediction. The CFD results were then used to calculate the hconv value of each condition.
Shot Peening Finite Element (FE) Simulation

Once the relationships between the surface roughness and heat transfer characteristics of 316L stainless steel were established, the tube surfaces had to be modified to provide the precise surface roughness desired. Nevertheless, controlling the shot peening process to achieve such precision is generally difficult, because the influences of the process parameters are not well understood and quantified, particularly on tube (curved) surfaces. As a result, FE simulation of the shot peening process was carried out to observe the effects of Sand Diameter (DS), Impact Angle, and Impact Velocity (VI), as displayed in Figure 5. The commercial MSC.DYTRAN 2019 software was used in the FE analysis. The tube surface was modeled with a 3D hexahedral mesh (587,664 nodes and 566,272 elements). The geometry of the tube was based on the pinned fin from the heat transfer experiment. Note that the tube surface was smooth. The tube model was deformable, and the properties of the tube are shown in Table 1. The material model of the tube was elastic-plastic. The shot peening sand was modeled as a rigid circular (ball) shape having a 2D quadrilateral mesh (4648 nodes and 4646 elements). Initially, a sand ball was located on top of the tube surface. Afterward, the ball was blasted at a set impact angle and impact velocity. Then, the sand ball was removed, revealing a deformed or dimpled surface on the tube. Note also that this study only considered a single-shot sandblasting impact to determine the influences of the shot peening parameters. Table 6 presents the range and values of the considered shot peening parameters. The deformation results obtained from the FE simulations were then used to obtain the surface roughness values (Rz) of each condition.
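The FE model itself is not reproduced here, but the reasoning that larger sand diameters and higher blasting speeds carry more impact energy can be illustrated with a small sweep over hypothetical parameter values. The sand density and the diameter, angle, and velocity levels below are assumptions, not the values of Table 6.

```python
import itertools
import math

# Hypothetical sketch of the parameter sweep behind the single-shot FE runs:
# for each combination of sand diameter, impact angle, and impact velocity,
# compute the kinetic energy of one sand ball and its component along the
# surface normal.  RHO_SAND is an assumed silica-sand density.
RHO_SAND = 2650.0  # kg/m^3, assumed

diameters_um = [100, 200, 350]   # sand diameter DS [um]
angles_deg = [30, 60, 90]        # impact angle measured from the surface [deg]
velocities = [50, 100, 150]      # impact velocity VI [m/s], assumed range

for d_um, ang, v in itertools.product(diameters_um, angles_deg, velocities):
    r = d_um * 1e-6 / 2.0
    mass = RHO_SAND * (4.0 / 3.0) * math.pi * r ** 3        # sand ball mass [kg]
    e_kin = 0.5 * mass * v ** 2                             # total impact energy [J]
    e_normal = e_kin * math.sin(math.radians(ang)) ** 2     # energy along the surface normal
    print(f"D={d_um:3d} um  angle={ang:2d} deg  V={v:3d} m/s  "
          f"E={e_kin:.2e} J  E_normal={e_normal:.2e} J")
```

Because the ball mass scales with the cube of the diameter and the energy with the square of the velocity, the sweep reproduces the qualitative trend reported below that diameter and velocity dominate the dimple depth.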
Effects of Surface Roughness on Heat Convection

The results of the CFD model validation against the heat transfer experiment are presented in Figure 6. Note that the results of the CFD calculations were mesh-independent. In Figure 6a, the temperature-velocity plot at Tout of the experimental and simulated surfaces is illustrated. The errors between the experiments and simulations at Tout ranged from 1% to 7% (Figure 6b). The temperature-velocity plot at Tmid is displayed in Figure 6c, and the errors at Tmid are illustrated in Figure 6d. The highest error value at Tmid was approximately 3%. Considering the error values at both Tout and Tmid, the CFD model was considered valid in this study. In Figure 7, the temperature contour maps of the smooth and rough surfaces at varying flow speeds are shown. At V = 0 m/s, the condition was considered natural or free convection because the fan was not running, allowing the heated airflow to pass the measured temperature points naturally. If the air velocities were set to 1.21 or 2.42 m/s, the conditions were forced convection. In the free convection conditions, the rough surfaces provided higher temperatures at both Tmid and Tout than the smooth surfaces did. In the forced convection scenarios, the temperatures at Tmid and Tout of the smooth surfaces dropped significantly in comparison to those of the rough surfaces. The results implied that the rough surfaces provided a better heat transfer performance. The velocity contour maps in Figure 8 could also be used to explain the phenomena. In the natural convection conditions (no influence of air velocity), the heated airflows over the rough surfaces were already higher than those over the smooth surfaces, leading to the higher temperatures at Tmid and Tout. The main reason was the increased surface areas of the pinned fins, which allowed more air to exchange heat. With the influence of air velocity (forced convection), the increase in airflow speed generally led to reduced temperatures, which could be clearly noticed for the smooth surfaces. At higher airflow velocities, the small vortices (air swirling) around the dimpled areas accumulated into a large swirl on the downstream side of the rough surfaces. This large air circulation area (vortex shedding) had a lower air velocity, allowing more air to exchange heat with the fins. As a result, more airflow could carry heat from the rough surfaces to the measured temperature points. On the contrary, the smooth surfaces did not develop the large vortex shedding area downstream. Thus, only a fraction of low-velocity airflow was generated downstream around the smooth fins. As a result, there was a lower amount of air to exchange heat at high airflow velocities. The predicted temperatures and the convective heat transfer coefficients (hconv) of varying surface roughness are presented in Figure 9. The CFD results of the predicted temperatures were used to calculate the hconv of each condition. A higher value of hconv represented a higher heat transfer efficiency. According to the figure, the hconv values of the smooth surfaces were equal to zero in the natural convection cases.
In the forced convection cases, the hconv values increased with airflow velocity and surface roughness. Comparing the hconv values between Tmid and Tout, the hconv values at Tout were lower, since the measured location was further away from the heat source. Increasing the airflow velocity from 1.21 to 2.42 m/s caused the hconv values to double. If the surface roughness was increased up to 250 µm, hconv could be increased by up to 155% at V = 1.21 m/s and 192% at V = 2.42 m/s. Most importantly, if the rough surface conditions were considered (Rz = 25 to 250 µm), linear relationships between the hconv and Rz values could be observed. Although the hconv values increased linearly with Rz, the surface roughness could not be increased without limit to enhance the heat transfer efficiency further. The most critical issues were the manufacturability of such a high-depth surface and the heat-transfer limits. In this study, the ratio of surface roughness (Rz = 250 µm) to pipe diameter (21.34 mm), or RSD, was approximately 0.01. In the forced convection conditions, the ratio of convective heat transfer coefficients between rough and smooth surfaces, or hconv-RS, ranged from 1 to 2.9. Thus, the ratio between hconv-RS and RSD ranged from 100 to 290, which could be used for comparison with other tubes having different diameters and surfaces. The Reynolds (Re) numbers of the considered conditions in this study, used to determine the fluid behavior, could be calculated from

Re = dair Vair D / µair,

where dair is the density of air, Vair is the velocity of air, D is the tube diameter, and µair is the dynamic viscosity of air. The Re numbers of the investigated surfaces at various airflow speeds can be seen in Figure 10. In the forced convection conditions, higher surface roughness led to higher friction and higher pressure loss (more turbulent flow), which generally reduced the Re numbers. In the free convection conditions, the increased surface roughness did not affect the Re numbers. Note that the Re numbers at Rz = 250 µm were not zero (approximately 32).
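As a minimal sketch of this calculation, the Reynolds numbers for the two forced-convection velocities can be reproduced with standard air properties; the density and viscosity values below are assumed, not taken from the paper.

```python
# Reynolds number Re = d_air * V_air * D / mu_air for the tube diameter used
# in this study; air properties are assumed values near room temperature.
d_air = 1.184      # air density [kg/m^3]
mu_air = 1.85e-5   # dynamic viscosity of air [Pa s]
D = 21.34e-3       # tube (fin) outer diameter [m]

for v_air in (1.21, 2.42):
    re = d_air * v_air * D / mu_air
    print(f"V_air = {v_air:.2f} m/s  ->  Re = {re:.0f}")
```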
Effects of Shot Peening Parameters on Surface Roughness

The effects of the shot peening parameters on surface roughness are presented in Figure 11. Note that the results of the FE calculations were mesh-independent. The Rz values are plotted against the impact angles at various impact velocities in Figure 11a,b. The Ra values are plotted against the impact angles at different impact velocities in Figure 11c,d. The FE results demonstrated that increasing sand diameters and impact velocities led to an increase in both Ra and Rz values in all cases. Larger sand diameters and higher blasting speeds created higher momentum, leading to higher impact energy and causing larger deformed (dimpled) areas. However, increasing the impact angle led to reduced impact areas and thus decreasing Ra and Rz values. Based on the results, it could be noticed that using sand diameters of 100 µm up to 350 µm resulted in Ra and Rz values ranging from 1 µm up to 18 µm, which would provide increased hconv values of up to 50% depending on the airflow velocity. However, sand diameters larger than 200 µm would be recommended due to their more considerable impact on the dimpled areas. If only small sand diameters were available, increasing the sandblasting speed would help increase the surface roughness values.

Discussion

The results of this work were divided into two parts: the effects of surface roughness on heat transfer efficiency (convective heat transfer coefficient), and the effects of the shot peening parameters on the surface roughness. The results of these two parts connected the actual performance of the heat exchange surface to the processing parameters. This connection had not been well established for heat exchanger manufacturers prior to this study.
The linear relationship found between the convective heat transfer coefficient and the peak surface roughness (Rz) values provided a general guideline for how a heat exchanger surface could be developed to meet higher performance requirements. The effects of the shot peening process parameters on the Rz values were helpful in determining manufacturability and production costs. In summary, this research work provided a clear linkage between the desired heat transfer performance and surface dimensional control and processing. Since the shot peening process considered in this study was single-shot sandblasting, it was intended only to observe the primary influences of the considered factors. A multiple-shot sandblasting study investigating mixed sand diameters at controlled speeds is currently ongoing. The results of this future work should be more applicable to actual shot peening processes for obtaining any desired surface roughness values, particularly on tube profiles. The ultimate impact of this continuing research would be to provide more significant benefits to energy-saving systems.

Conclusions

This project investigated the effects of surface roughness on the convective heat transfer coefficients of 316L stainless steel tubes and the effects of the shot peening parameters on the surface roughness. CFD simulation was carried out to study the influences of the surface roughness at varying airflow velocities. In the free convection conditions, the pinned fins' increased surface areas allowed more air to exchange heat. In the forced convection conditions, more airflow could carry heat from the rough surfaces to the measured temperature points. Linear relationships between the peak surface roughness (Rz) and the convective heat transfer coefficients were found. The influences of the shot peening process parameters (sand diameter, impact angle, and impact velocity) on the tube surfaces were investigated by using FE simulation. The results showed that increasing the sand diameter and impact velocity increased the surface roughness. An increase in the convective heat transfer coefficient values of up to 50% could be obtained by using sand diameters of 100 µm up to 350 µm, resulting in shot peened surfaces having Ra and Rz values ranging from 1 µm up to 18 µm. The established links among heat transfer efficiency, surface roughness, and shot peening parameters in this study could be used to enhance heat transfer efficiency.
The oligosaccharides of the glycoprotein pheromone of Volvox carteri f. nagariensis Iyengar (Chlorophyceae)

The sexuality-inducing glycoprotein of Volvox carteri f. nagariensis was purified from supernatants of disintegrated sperm packets of the male strain IPS-22 and separated by reverse-phase HPLC into several isoforms which differ in the degree of O-glycosylation. Total chemical deglycosylation with trifluoromethanesulphonic acid yields the biologically inactive core protein of 22.5 kDa. This core protein possesses three putative binding sites for N-glycans which are clustered in the middle of the polypeptide chain. The N-glycosidically bound oligosaccharides were obtained by glycopeptidase F digestion and were shown by a combination of exoglycosidase digestion, gas-chromatographic sugar analysis and two-dimensional HPLC separation to possess the following definite structures: (A) Man beta 1-4GlcNAc beta 1-4GlcNAc; (B) (Man alpha)3 Man beta 1-4GlcNAc beta 1-4GlcNAc Xyl beta; (C) (Man alpha)2 Man beta 1-4GlcNAc beta 1-4GlcNAc; (D) (Man)2Xyl(GlcNAc)2. Two of the three N-glycosidic binding sites carry one B and one D glycan. The A and C glycans are shared by the third N-glycosylation site. The O-glycosidic sugars, which make up 50% of the total carbohydrate, are short chains (up to three sugar residues) composed of Ara, Gal and Xyl and are exclusively bound to Thr residues.
Canine leptospirosis in Canada, test-positive proportion and risk factors (2009 to 2018): A cross-sectional study

Over the past decade, there has been an apparent increased frequency and widened distribution of canine leptospirosis in Canada; however, this has been minimally investigated. Availability and clinical uptake of Leptospira polymerase chain reaction (PCR)-based testing of dogs in Canada may provide important insight into the epidemiology of this canine and zoonotic infectious disease. Study objectives were to evaluate clinical canine Leptospira PCR test results from a large commercial laboratory to determine temporal and spatial distribution in Canada and identify dog, geographic and temporal risk factors for test-positive dogs. This cross-sectional study analyzed data obtained from IDEXX Laboratories, Inc. on 10,437 canine Leptospira PCR tests (blood and/or urine) submitted by Canada-based veterinarians (July 2009 to May 2018). Multivariable logistic regression was used to identify risk factors for test-positive dogs. Test-positive proportion varied widely annually (4.8-14.0%) and by location. Provinces with the highest test-positive proportion over the study period were Nova Scotia (18.5%) and Ontario (9.6%), with the prairie provinces (Manitoba and Alberta combined) having the lowest proportion (1.0%); the northern territories could not be evaluated due to limited testing. In the final model, dog age, sex, breed, month and year the test was performed, and location (urban/rural, province) of the practice submitting the sample were significant predictors of a positive Leptospira PCR test. Dogs less than one year of age (OR = 2.1; 95% CI: 1.6-2.9), male sex (OR = 1.3; 1.1-1.5), toy breed (OR = 3.3; 2.5-4.4), and samples submitted from an urban practice (OR = 1.3; 1.0-1.8) had the greatest odds of a positive Leptospira PCR test as compared to referent groups. Significant two-way interactions between province-month and year-month highlight the complex spatial and temporal influences on leptospirosis occurrence in this region. Our work suggests a high incidence of canine leptospirosis regionally within Canada. Identifiable dog and location factors may assist in future targeted prevention efforts.

Introduction

Leptospirosis is a globally important zoonotic disease. Spread primarily by the urine of animal host species, leptospirosis was historically diagnosed predominantly in dogs that had rural lifestyles (e.g., living on livestock farms, taking part in rural outdoor activities such as field trials). The epidemiology of canine leptospirosis has evolved over recent years, with five serovars (each with varying reservoir host species) identified as most important for canine health in North America. Peridomestic wildlife species (e.g., rodents, raccoons), as well as dogs, are reservoirs of key Leptospira serovars, supporting the increased recognition of leptospirosis as an important disease of dogs residing in strictly urban environments. Different Leptospira serovars are present in different areas of North America, likely reflecting regional variations in the epidemiology of the disease. The incidence and seroprevalence of Leptospira spp. in dogs appear to be increasing, particularly in North America. Clinical disease in dogs may be severe, and therapy frequently entails costly treatment and long-term monitoring. Further, infected dogs may serve as a source of infection for other animals and people, and common environmental exposure may allow dogs to serve as a sentinel for human risk.
Despite the importance of leptospirosis for canine health, the epidemiology of this disease in dogs is poorly understood. This knowledge gap is particularly evident in Canada, where canine leptospirosis has been minimally studied, with existing studies limited by region and date. Further, recent anecdotal data suggest increased disease incidence in eastern and Atlantic Canada, with a large suspected outbreak in Nova Scotia, Canada in 2017. Multiple diagnostic methods have been developed to identify Leptospira-infected dogs. Unfortunately, diagnosis can be challenging with some testing methodologies based on antibody response, making it difficult to differentiate clinical disease from prior exposure or vaccination. Most prior studies of leptospirosis in dogs have used such antibody-based tests (e.g., the microscopic agglutination test; MAT). In recent years, polymerase chain reaction (PCR) testing for leptospirosis has become increasingly used in clinical veterinary medicine. The PCR test may reduce the interpretation challenges commonly encountered with antibody-based tests, extending such benefits to population-based studies of the disease. At present, diagnosis is typically confirmed through consistent clinical signs, suggestive clinicopathologic changes (thrombocytopenia, renal and/or liver enzyme elevations, dilute urine), response to appropriate antimicrobials, and either PCR testing (urine, blood, or both) and/or serology (ideally, paired acute and convalescent microscopic agglutination test (MAT) titers, or in-clinic ELISA (IgG or IgM)). Prevention of disease is most effectively accomplished through avoidance of contaminated environments. However, the ability to completely avoid contaminated areas is challenging and typically impractical. Leptospira vaccination is generally considered non-core, and administration relies on practitioners' level of awareness and ability to make an appropriate risk assessment of the dog based on location, lifestyle, and other factors. Given the scarcity of Canadian-specific publications on leptospirosis, anecdotal information regarding increased disease incidence in eastern and Atlantic Canada, and the importance of reliable data to inform dog owner and veterinarian risk assessment and targeted prevention, there is a clear need for further work in this area. Similar habitat risks and approaches to prevention may be applicable to dog and human disease, and dogs may serve as sentinels for human health risks. Thus, addressing research gaps for dogs may have applications to human leptospirosis prevention. The objectives of our study were to evaluate a clinical dataset of canine Leptospira PCR test results to determine temporal (monthly, annual) and spatial distribution in Canada, and to identify dog, geographic, and temporal risk factors for PCR test-positive dogs.

Material and methods

This was a cross-sectional study that used 10,437 PCR test results for canine leptospirosis. Tests were submitted between July 1, 2009 and May 1, 2018 to IDEXX Laboratories, Inc. Data were obtained from the reported results of routine clinical tests (IDEXX RealPCR Test) performed on blood and/or urine samples from dogs, submitted from veterinary clinics in Canada. Permission to access and use the data was obtained from IDEXX Laboratories, Inc. The PCR test used has been validated in dogs, with reported high sensitivity and specificity (92% and 99%, respectively, using MAT as the gold standard).
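To make the reported assay performance concrete, the sketch below converts the stated sensitivity (92%) and specificity (99%) into predictive values for an assumed pre-test probability; the pre-test probability is hypothetical and is not a figure from this study.

```python
# Illustrative only: predictive values from sensitivity, specificity, and an
# assumed pre-test probability of disease (simple Bayes calculation).
def predictive_values(sens, spec, pretest):
    ppv = sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    npv = spec * (1 - pretest) / ((1 - sens) * pretest + spec * (1 - pretest))
    return ppv, npv

ppv, npv = predictive_values(sens=0.92, spec=0.99, pretest=0.10)  # pretest assumed
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```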
In summary, the PCR test is based on IDEXX's proprietary real-time PCR oligonucleotides (IDEXX Laboratories, Westbrook, Maine). Hap-1 gene sequences were aligned, a region was selected for primer and hydrolysis probe design, and real-time PCR was run with standard primer and probe concentrations using the Roche LightCycler 480 Probes Master mastermix (Version 3.0, Applied Biosystems). The test detects Leptospira spp. DNA from only the recognized pathogenic strains, due to the presence of the hap1 gene, including L. interrogans and L. kirschneri. The same test was used over the study period. All dogs were assumed to be client-owned, and diagnostics and treatments were at the discretion of the client. Specific clinical data were unavailable, but it was presumed that most dogs were tested due to the presence of clinical signs suggestive of leptospirosis. Data were available in electronic database format, with data on the month and year the test was performed, dog signalment (age in days, sex/reproductive status, breed), Canada Post forward sortation area (FSA; first three postal code characters) for the submitting veterinary clinic, test result, and dog unique identifier. Dog unique identifiers were created by IDEXX Laboratories, Inc. based on a combination of dog name, owner name and clinic ID and were verified by a study author (JWS) for entries with the same unique identifier based on signalment and FSA information. The postal code of the dog's residence and vaccination status were unknown. Repeat entries were removed. An entry was considered a repeat if the same dog (based on unique identifier, signalment, and FSA) was tested more than once in a given calendar year or, for December and January entries, spanning two calendar years. If the test outcomes for a set of repeat entries were the same, the most recent entry was retained in the dataset and additional entries were removed. If the test outcomes differed for a set of repeat entries, a single positive (most recent) entry was retained. From these data, variables were derived for the dog's age in years at the time of testing (≤ 1.0, 1.1-4.0, 4.1-7.9, ≥ 8.0), AKC breed group (sporting, herding, hound, non-sporting, terrier, toy, working, mixed; based on the breed listed, and categorized as mixed if more than one breed was listed), month when testing was performed, and province/region. Although geographically NS is included within Atlantic Canada, it was separated for analysis due to anticipated differences in the epidemiology of canine leptospirosis between these regions.

Data maps and analysis

Data mapping. To visualize the spatial distribution of testing and positive canine Leptospira test results, the frequency of tests performed, frequency of test-positive dogs, and test-positive proportion of dogs were separately mapped by FSA for all years combined, using FSA boundary files from Statistics Canada and ArcGIS version 10.2.2 (Environmental Systems Research Institute). Calculating incidence-type measures is challenging with companion animals, as the owned canine population in Canada is unknown. In this circumstance, the human population was used as a proxy for the canine population; we calculated this measure by dividing the number of positive canine Leptospira PCR tests over the study period for a given FSA by the 2016 human census population for that FSA (reported as test-positive dogs per 100,000 people). Test-positive dogs per 100,000 people by FSA was mapped as described above.
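The repeat-entry rule described above can be sketched roughly as follows; the column names (dog_id, test_date, positive) are hypothetical, and the special handling of December/January entries spanning two calendar years is omitted for brevity.

```python
import pandas as pd

# Sketch of the repeat-entry rule: within each dog-and-calendar-year group,
# prefer a positive result, then the most recent test.  Not the authors' code.
def drop_repeats(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["year"] = df["test_date"].dt.year
    # Sort so that positive and most recent entries come first within a group.
    df = df.sort_values(["positive", "test_date"], ascending=[False, False])
    return df.drop_duplicates(subset=["dog_id", "year"], keep="first")
```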
Potential 'hot spots' of canine Leptospira in Canada were identified visually as areas (FSAs) with a relatively increased number of cases, increased test-positive proportion, and increased number of cases per human capita as compared to surrounding FSAs. Data analysis. The test-positive proportion of leptospirosis at the dog level was calculated overall and for subgroups (province/region, year, month) by dividing the number of positive canine Leptospira PCR tests by the total number of tests. Ninety-five percent Clopper-Pearson confidence intervals were calculated. Years for which only partial-year data were available (2009 and 2018) were excluded from annual descriptive statistics (test-positive proportion) but were included in all model building. The association between dog, temporal, and spatial variables and a positive Leptospira PCR test was explored using logistic regression models. The main outcome of interest was a positive canine Leptospira PCR test. Descriptive statistics, odds ratios (OR), and 95% confidence intervals (CI) for the ORs were calculated for all variables. Univariable logistic regression models were built, and variables with a likelihood ratio test P-value < 0.2 were eligible to be tested for inclusion in the final multivariable model. Spearman's rank correlation (Phi coefficient for two dichotomous variables) was performed between all predictors eligible for multivariable analysis. When predictors were highly correlated (correlation coefficient ≥ |0.80|), one variable was retained based on perceived importance/relevance for drawing conclusions from the analysis. A final multivariable logistic regression model was built using a backwards stepwise approach. Confounding was assessed when removing variables from the multivariable model. Variables were kept in the model as confounders if their removal changed the coefficients of one or more retained terms by ≥ 20%. Statistical significance was based on a likelihood ratio test P-value < 0.05. Biologically relevant two-way interactions between variables retained in the final multivariable model were assessed for significance using a likelihood ratio test. Predicted probabilities and associated 95% CIs for a positive test result were graphed to visualize interaction terms. Model fit was assessed with the Hosmer-Lemeshow goodness-of-fit test. Stata 16 (StataCorp, College Station, TX) was used for analysis.

Results

A total of 19,066 PCR test results were available over the study timeframe. Removal of 8,629 repeat entries was performed, the vast majority of which (7,882; 91%) were exact repeats except for sample source (urine, blood), resulting in 10,437 Leptospira PCR test results used in the analyses. Most records (8,454; 81%) were complete, with AKC breed group being the most frequently missing data element (8,807; 84% present) (Table 1). The population of dogs tested was 52% male and had a mean age of 6.9 years (SD 3.9; range 0.1-20). The number of PCR tests submitted increased each year (full calendar years 2010: 223; 2017: 2,581), with the greatest annual increase between 2010 and 2011 (263% positive change). Of the total 1,620 FSAs in Canada, samples were reported from 788 FSAs (48.6%; from which there were a median of 6 samples/FSA, range 1-283/FSA; Fig 1).
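The modelling workflow summarized in the Data analysis section can be sketched as follows; the column names and formula terms are hypothetical, and this is not the authors' Stata code.

```python
import numpy as np
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Sketch of the described workflow: univariable screening keeps predictors
# with a likelihood-ratio test P < 0.2; the multivariable model then adds the
# reported two-way interactions (province x month, year x month).
def lrt_pvalue(data, term):
    """Likelihood-ratio test P-value for a single categorical predictor."""
    full = smf.logit(f"positive ~ C({term})", data=data).fit(disp=0)
    null = smf.logit("positive ~ 1", data=data).fit(disp=0)
    return chi2.sf(2 * (full.llf - null.llf), df=full.df_model)

# eligible = [t for t in ["age_group", "sex", "breed_group", "province",
#                         "urban", "month", "year"] if lrt_pvalue(df, t) < 0.2]
# final = smf.logit("positive ~ C(age_group) + C(sex) + C(breed_group) + C(urban)"
#                   " + C(province)*C(month) + C(year)*C(month)", data=df).fit(disp=0)
# print(np.exp(final.params))   # exponentiated coefficients, i.e. odds ratios
```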
In the univariable analysis, dog signalment (age, sex, AKC breed group), location (province, rural/urban status), and time of testing (month, year) were significant predictors of a positive Leptospira test result (all P < 0.02) and were retained in the final multivariable model (all P < 0.05; Table 1). In addition, the two-way interactions of province × month (the odds of a positive test result in each month depended on the province where the test was performed) and year × month (the odds of a positive test result in each month depended on the year the test was performed) were both significant predictors (each P < 0.04) when added to the main-effects model and were thus retained in the final model. The final model fit the data (Hosmer-Lemeshow P = 0.88). In the multivariable model, younger dogs were at significantly increased odds of being Leptospira-positive as compared to elderly dogs (referent ≥ 8.0 yr), with dogs less than or equal to one year of age having the greatest odds of infection (OR = 2.1; 95% CI 1.6-2.9). Province, month, and year were included in interaction terms and therefore visualized with margins plots (Figs 6 and 7). From January through August, the predicted probabilities of dogs testing positive for Leptospira were relatively low (generally < 10%) with minimal annual deviations (exception: July-August 2017), while in the latter half of the calendar year (Sept-Dec) the predicted probabilities were generally higher (> 10%) with more pronounced annual deviations (Fig 6). Differing effects of time of year (month) were noted among the provinces. Ontario, British Columbia, Quebec, and Nova Scotia revealed an increased predicted probability of dogs testing positive for Leptospira in the fall/winter (September-December; Fig 7). This was most pronounced in Ontario and Nova Scotia, which had the greatest peak predicted probabilities (~40%), while a mildly increased peak predicted probability was noted in British Columbia. Limited data made it difficult to accurately predict the probabilities of dogs testing positive for Leptospira in the prairie and Atlantic provinces.

Discussion

There have been few studies of canine leptospirosis in Canada. As such, the epidemiology of the disease in the country remains poorly defined and limited to a single geographical area (Ontario). In the United States, recent MAT-positive prevalence for canine leptospirosis was estimated to be 14% between 2000 and 2014. Another US study, evaluating canine Leptospira PCR tests submitted through a commercial diagnostic laboratory (2009 to 2016), found an overall test-positive proportion of 5.4%. Our PCR-based work identified an overall Canadian canine leptospirosis test-positive proportion of 8.4%. While it is important to acknowledge that clinical data were not available for our study, preventing us from confirming that test-positive dogs were clinically affected leptospirosis cases, we presume that the results of the PCR testing likely reflect clinical disease. This is because Leptospira testing would be predominantly performed in dogs with signs of disease. As the PCR test used only detects Leptospira spp. nucleic acid of pathogenic strains, a positive test result in a dog with clinical disease supports recent infection. The analysis of PCR-based data lessened challenges commonly observed with leptospirosis MAT interpretation (e.g., interference related to vaccination and exposure). Prior studies have consistently noted annual and geographic fluctuations in the occurrence of canine leptospirosis.
Similarly, we noted pronounced annual variation in the test-positive proportion (4.8-14%). One of these variations was consistent with an anecdotally reported outbreak of canine leptospirosis in the Halifax region of Nova Scotia. Additionally, we noted marked variation in the canine leptospirosis test-positive proportion across the Canadian regions. These regional variations may reflect "hot spots" for canine leptospirosis with consistently elevated disease risk and locations likely to experience future elevated risks. However, regional variation in clinician awareness and testing patterns (e.g., only testing dogs with a high suspicion of leptospirosis, versus testing dogs along the continuum of suspicion) may also be responsible for these variations. Multiple factors are considered to influence the test-positive prevalence of canine leptospirosis. These reported factors have included dog location (i.e., urban vs. rural), month of testing, monthly rainfall at the time of testing, and use of prevention strategies (e.g., increased vaccination efforts). The Canadian provinces with the highest test-positive proportion for canine leptospirosis in our study were Ontario and Nova Scotia. Similarly, the developed case maps visualized likely areas of leptospirosis 'hot spots' in these two regions (a high number of cases and a high number of cases per human capita in given FSAs). These findings align with previous studies from the United States that observed clusters of cases and increased seroprevalence/test-positive proportion in specific regions. These US-based studies have indicated that increased rainfall, flooding, and proximity to bodies of water in these regions, along with the presence of reservoir hosts, could explain the observed regional distribution of canine leptospirosis. It is likely that similar environmental factors are associated with (perhaps responsible for) the Canadian distribution we observed; however, further investigation is needed to confirm this. Similar to other studies, we observed that dogs from urban areas were at increased odds of testing positive for Leptospira as compared to those from rural regions. This could be due to encroaching wildlife populations, or other factors such as veterinary healthcare-seeking behaviors or socioeconomic status, leading to exposure to area wildlife or domestic cats, which may act as Leptospira-carrying reservoir hosts in these regions. These urban wildlife (e.g., rodents, raccoons) and feline reservoirs have been identified as purported risk factors for canine leptospirosis. Further work identifying the regional distribution of serovars and serovar-reservoir relationships, perhaps targeting wildlife and feline reservoirs, would be useful to further guide prevention efforts, potentially including vaccine development. Risk factor evaluation in our work shared similarities with the recent US-based study evaluating Leptospira PCR data. Significant predictors of a positive leptospirosis test were younger age and male sex. Male sex has been repeatedly identified as a risk factor for canine leptospirosis, as demonstrated in a recent systematic review/meta-analysis. Our current work adds toy and terrier breeds to this 'higher-risk canine profile', a finding suggested in a previous study (i.e., dogs weighing <15 pounds (6.8 kg) had the greatest odds of being diagnosed with leptospirosis).
Further work examining vaccine coverage in smaller dogs in Canada, especially from urban centers, would be useful to determine whether lower vaccination coverage may be playing a role in leptospirosis risk in these breeds. Historically, there have been concerns about increased adverse events in these breeds following leptospirosis vaccination, and while recent data suggest these fears are largely unfounded with the current canine leptospirosis vaccines, concerns anecdotally persist. Seasonal variation in canine leptospirosis has been observed in prior studies, with an increase in prevalence/proportion test-positive from late summer to early fall. Potential explanations for such seasonal variations include changes in precipitation or temperature that impact the survival of Leptospira, or seasonal canine activities or Leptospira reservoir host behaviors/movements that increase exposure risk for dogs. This seasonal effect was observed in our work; we noted a trend that dogs were more likely to be test-positive September through December in zones with greater seasonal temperature variation (e.g., Quebec and Ontario) as opposed to those without this variation (e.g., British Columbia). This finding is consistent with the recent US-based PCR study, but contrasts with previous MAT-based work. Similar to other observational studies of this type, there are limitations inherent to our work. Leptospirosis testing was performed based on clinician-owner decisions, which may have introduced various biases, including temporal, regional, and canine signalment-related testing approaches. Signalment data were provided by testing veterinarians or support staff, which may introduce data entry errors as well as potential documentation biases varying with the breed listed in the data (e.g., listed as a single breed when in fact mixed breed). Another limitation is that our data were acquired from a single commercial laboratory, possibly leading to regional under-representation, and thus they might not be representative of the population as a whole. This could lead to locations for which canine leptospirosis testing data were not available (e.g., few or no test results in certain regions of Canada), resulting in regions with unmeasured Leptospira occurrence and a potential lack of generalizability of our work. However, it is likely that these regions are of limited consequence to the overall conclusions of our work due to the historic and widespread sample submission coverage of the country (as observed by FSA test submissions, especially considering the human and dog distribution in the country). Further, maps were created to provide estimates of risk levels of canine leptospirosis across Canada. These estimates are subject to various errors and biases, including likely changes in test use/availability over the study period and regional and temporal differences in at-risk canine populations. Another limitation of this dataset and other studies of this type is the lack of dog clinical data and recent travel history. We assumed that samples were received from dogs presenting to veterinary practices with clinical disease consistent with leptospirosis and from an exposure relatively close to the submitting veterinary practice. In conclusion, this work identified focal regions of canine leptospirosis in Canada, with the highest test-positive proportion (and related hot spots) in Ontario and Nova Scotia.
The case maps and identified risk factors will allow practitioners and dog owners to identify areas of high risk for leptospirosis exposure and occurrence where dogs live, visit, and perform, supporting targeted prevention efforts.
Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models

In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.

Introduction

The unsupervised generation of data is a dynamic area of machine learning and an active research frontier. Besides generating a desired distribution, one often wants to guide the generative model towards certain desired criteria. For example, when generating music one might wish to reward the model for choosing certain melodic patterns. In the case of molecular design, besides generating valid molecules, one may want to optimize molecular properties to screen for their potential in solar cells, batteries, or OLEDs. One way to impose arbitrary objectives on generative models is via naive reinforcement learning (RL), where we define hard-coded rewards and treat the model as a player taking actions in a game-like setting. Unfortunately, depending on the objective, this approach may lead to unphysical or uninteresting samples. Following the chemistry example, compounds can be represented as SMILES strings: text sequences that encode the connectivity graph of arbitrary molecules. SMILES have grammar rules based on chemical bonding, which can lead to valid expressions such as "N#Cc1ccn1" or invalid ones such as "[C[[[N", encoding a non-plausible molecule. In this setting, a simple objective function such as "molecules should be valid" might skew our model to create monotonous repetitions of valid characters and generate strings such as "CCCN", "CCCCN", "CCCCCN", which are valid but not very interesting molecules in terms of chemical diversity. Previous work has relied on specific modifications of the objective function to reach the desired properties. For example, in order to increase the number of generated valid molecules, Ranzato et al. added penalties for molecules with unrealistically large carbon rings (size larger than 6), molecules shorter than any training set sample, or molecules with fewer carbons than any molecule in the training set. Without penalty or reward terms, RL can easily get stuck around local maxima which can be very far from the global maximum reward. This type of reward optimization depends highly on experimentation as well as domain-specific knowledge. In the Objective-Reinforced Generative Adversarial Network (ORGAN) introduced in this work, we explore the addition of an adversarial approach to the reinforcement learning setting, using generative adversarial networks (GANs) to bias the generative process. GANs are a family of generative models proposed by Goodfellow et al. which are able to generate compelling results in a number of image-related tasks.
The proposed ORGAN model adds a GAN discriminator term to the reinforcement learning reward function. The generator is trained to maximize a weighted average of two rewards: the "objective," which is hard coded and unchanging, and the discriminator, which is dynamically trained along with the generator in an adversarial fashion. While the objective component of the reward function ensures that the model selects for traits that maximize the specified heuristic, the changing discriminator part does not let the model lock onto certain modes and thereby generate uninteresting or repetitive data. In order to implement the above idea, we build on SeqGAN, a recent work that successfully combines GANs and RL in order to apply the GAN framework to sequential data. While our implementation uses recurrent neural networks (RNNs) to generate sequential data, in theory our model can be adapted to generate any type of data, as long as the GAN is trained via RL. We implement our model in the context of molecule and music generation, optimizing several different metrics. Our results show that ORGAN achieves better objective scores than maximum likelihood estimation (MLE) and SeqGAN, without sacrificing the diversity of the generated data. Model As illustrated in Figure 1, the informal idea of ORGAN is that the generator is trained via policy gradient to maximize two rewards at the same time: one that improves the hard-coded objective and another that tries to fool the discriminator in a GAN setting. More formally, the discriminator D_φ is a Convolutional Neural Network (CNN) parameterized by φ. We feed both real and generated data to it and update D_φ like we would any classifier. The generator G_θ is an RNN parameterized by θ, using Long Short Term Memory (LSTM) cells, that generates length-T sequences Y_{1:T} = (y_1, ..., y_T). Let R(Y_{1:T}) be the reward function defined for full-length sequences; we will define it later in this section. We treat G_θ as an agent in a reinforcement learning context. Its state s_t is the currently produced sequence of tokens Y_{1:t} and its action a is the next token y_{t+1} to select. The agent's stochastic policy is given by G_θ(y_t | Y_{1:t−1}) and we wish to maximize its expected long-term reward J(θ) = E[R(Y_{1:T}) | s_0, θ] = Σ_{y_1} G_θ(y_1 | s_0) · Q(s_0, y_1), where s_0 is a fixed initial state. Q(s, a) is our action-value function that represents the expected reward at state s of taking action a and following our current policy G_θ to complete the rest of the sequence. For any full sequence Y_{1:T} we have Q(s = Y_{1:T−1}, a = y_T) = R(Y_{1:T}), but we also wish to calculate Q for partial sequences at intermediate timesteps, considering the expected future reward when the sequence is completed. In order to do so, we perform an N-time Monte Carlo search with the canonical rollout policy G_θ, represented as MC^{G_θ}(Y_{1:t}; N) = {Y^1_{1:T}, ..., Y^N_{1:T}}, where Y^n_{1:t} = Y_{1:t} and Y^n_{t+1:T} is stochastically sampled via the policy G_θ. Now Q becomes Q(s = Y_{1:t−1}, a = y_t) = (1/N) Σ_{n=1}^{N} R(Y^n_{1:T}), with Y^n_{1:T} ∈ MC^{G_θ}(Y_{1:t}; N), for t < T. Following the original SeqGAN work, in order to apply reinforcement learning to an RNN, an unbiased estimation of the gradient of J(θ) can be derived as ∇_θ J(θ) ≃ Σ_{t=1}^{T} E_{y_t ∼ G_θ(y_t | Y_{1:t−1})} [ ∇_θ log G_θ(y_t | Y_{1:t−1}) · Q(Y_{1:t−1}, y_t) ]. Finally, in ORGAN we simply define the reward function for a particular full sequence Y_{1:T} as R(Y_{1:T}) = λ · D_φ(Y_{1:T}) + (1 − λ) · O(Y_{1:T}), where D_φ is the discriminator and O is the objective representing any heuristic that we like. When λ = 0 the model ignores D_φ and becomes a naive RL model, whereas when λ = 1 it is simply a SeqGAN model. Algorithm 1 shows pseudocode for the proposed model. Highlighted in blue are the specific differences between SeqGAN and our model. All the gradient descent steps are done using the Adam algorithm.
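The reward mixing and the Monte Carlo estimate of Q translate into only a few lines of code. The sketch below is an illustration of the formulas above, not the authors' implementation; rollout, discriminator_prob, and objective are hypothetical callables standing in for the LSTM generator's completion sampler, the CNN discriminator D_φ, and the hard-coded heuristic O.

import numpy as np

def organ_reward(seq, discriminator_prob, objective, lam):
    # R(Y_{1:T}) = lam * D_phi(Y_{1:T}) + (1 - lam) * O(Y_{1:T});
    # lam = 0 recovers naive RL, lam = 1 recovers SeqGAN.
    return lam * discriminator_prob(seq) + (1.0 - lam) * objective(seq)

def q_estimate(prefix, seq_len, rollout, discriminator_prob, objective,
               lam, n_rollouts=16):
    # Monte Carlo estimate of Q(s = prefix[:-1], a = prefix[-1]):
    # complete the prefix n_rollouts times with the current policy G_theta
    # and average the full-sequence rewards R(Y^n_{1:T}).
    if len(prefix) == seq_len:  # full sequence: the reward is exact
        return organ_reward(prefix, discriminator_prob, objective, lam)
    totals = []
    for _ in range(n_rollouts):
        completion = rollout(prefix, seq_len)  # sample y_{t+1:T} ~ G_theta
        totals.append(organ_reward(prefix + completion,
                                   discriminator_prob, objective, lam))
    return float(np.mean(totals))

A policy-gradient step would then weight ∇_θ log G_θ(y_t | Y_{1:t−1}) by these Q estimates, as in the gradient expression above.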
Note that our implementation on top of SeqGAN is merely because we are working with sequential data. In theory, the ORGAN model can be used with most types of GAN. (Algorithm 1, abridged: pre-train G using MLE on the training set S; generate negative samples using G for training D; pre-train D by minimizing the cross entropy; then repeat: for g-steps, generate a sequence Y_1 ...) Experiments We compare ORGAN with three other methods of training RNNs: SeqGAN, Naive RL, and Maximum Likelihood Estimation (MLE). Each of the four methods is used to train the same architecture. All training methods involve a pre-training step of 240 epochs of MLE. The MLE baseline simply stops right after pre-training, while the other methods proceed to further train the model using the different approaches. For each dataset, we first build a dictionary mapping the vocabulary - the set of all characters present in the dataset - to integers. Then we preprocess the dataset by transforming each sequence into a fixed-size integer sequence of length N, where N is the maximum length of a string present in the dataset plus around 10% additional characters to increase flexibility. Every string with length smaller than N is padded with "_" characters. Thus the input to our model becomes a list of fixed-size integer sequences. Molecules In this work, we used three different chemistry datasets consisting of SMILES strings representing molecules in pharmaceutical contexts: Drug-like - a random subset of 15k drug-like molecules from the ZINC database of 35 million commercially available compounds for virtual screening, typically used for drug discovery. The maximum sequence length is 121 and the alphabet size is 37. Small mols - a random subset of 15k molecules from the set of 134 thousand stable small molecules. This is a subset of all molecules with up to nine heavy atoms (C, O, N, F) out of the GDB-17 universe of 166 billion organic molecules. The maximum sequence length is 31 and the alphabet size is 25. Tiny mols - a smaller subset of the Small mols dataset containing all molecules with fewer than 12 atoms. The maximum sequence length is 29 while the alphabet size is 22. When choosing reward metrics we picked qualities that are normally desired for drug discovery: Novelty: a function that returns a value of 1 if a SMILES string encodes a valid molecule that is outside of the training set, 0.3 if it is merely valid, and 0 if it is invalid. Diversity: a function from 0 to 1 that measures the average similarity of a molecule with respect to a random subset of molecules from the training set; the closer the value is to 0, the less diverse the molecule is. Solubility (log P): a function that measures the solubility of a molecule in water, normalized to the range 0 to 1 based on experimental data; the value is computed via RDKit's LogP function. Synthesizability: a normalized version of the synthetic accessibility score as implemented in RDKit, a measure based on molecular complexity and fragment contributions that estimates how hard or how easy it is to make a given molecule. Figure 3 shows some of the generated molecules. In Figure 2 we can observe that the reward is indeed inducing a bias in the generated data: each plot is optimized for a particular objective (shown as a bold line), and that particular reward grows the fastest before plateauing at a maximum. We also find that some metrics improve over time along with the optimized reward.
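As a concrete illustration of how one of these reward metrics can be scored, the novelty reward described above (1 for a valid molecule outside the training set, 0.3 for a merely valid one, 0 otherwise) could be computed with RDKit roughly as follows. This is a sketch of our reading of the metric, not the code used in the experiments, and it assumes the training set is stored as canonical SMILES strings.

from rdkit import Chem

def novelty_reward(smiles, training_set):
    # training_set: a set of canonical SMILES strings from the training data.
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0                      # invalid SMILES, no reward
    canonical = Chem.MolToSmiles(mol)   # canonicalise before the membership test
    return 1.0 if canonical not in training_set else 0.3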
This co-movement of the metrics is expected, since many of the rewards are not independent. We can also observe that novelty presents the same pattern. This is again not surprising, since novelty essentially counts valid sequences, which are necessary for all other rewards. (Table 2 lists the SMILES strings for the molecules illustrated in Figure 3, with each column coming from a different training algorithm; at a glance, the naive RL molecules seem much less interesting than the SeqGAN and ORGAN ones because they are less diverse and possess repetitive substructures.) Meanwhile, Figure 3 illustrates the role that λ plays in the generative process. While the maximum of the druglikeness reward lies in the naive RL setting (λ = 0), we can see that the generated molecules are quite simple. By increasing λ we end up with a lower (but still high) druglikeness while generating more acceptable molecules. This pattern is observed for almost all metrics, and sometimes the maximum also lies in the intermediate regime. In our experiments we also noted that naive RL has different failure scenarios. For instance, when trained on the Small mols dataset, it learned to generate longer sequences with monotonous patterns like "CCCCCCCCCCCCCCCCC" or "CCCCOCCCCCOCCCCCC". When trained on the drug-like dataset, however, the naive RL model learned to generate sequences significantly shorter than those in the training set, such as the single-atom molecule "N". The GAN component can easily prevent either of these failure scenarios, since the discriminator can learn to penalize string batches that do not look like the training set (in this case, they do not have the same average length), as seen in the average lengths of sequences generated by each of the models in Table 1. Music To further demonstrate the applicability of ORGAN, we extended its application to music. We used ABC notation, which allows music to be expressed in a text format and facilitates reading the dataset and analyzing its contents. In this work we use the Nottingham dataset, filtering out sequences longer than 80 notes. We generate songs optimizing three different metrics: Tonality: this measures how many perfect fifths are in the generated music. A perfect fifth is defined as a musical interval whose frequencies have a ratio of approximately 3:2. These provide what is generally considered pleasant note sequences due to their high consonance. Melodicity: in order of decreasing consonance, we have the following intervals: perfect fifth, perfect fourth, major sixth, major third, minor third, minor sixth, major second, minor seventh, and minor second. We decided that, for an interval to be considered melodic, it must be one of the top three in the above list. Note that tonality is a subset of melodicity, as maximizing tonality also helps maximize melodicity. Ratio of Steps: a step is an interval between two consecutive notes of a scale. An interval from C to D, for example, is a step. A skip, on the other hand, is a longer interval. An interval from C to G, for example, is a skip. By maximizing the ratio of steps in our music, we adhere to conjunct melodic motion; our rationale here is that by increasing the number of steps in our songs we make our melodic leaps rarer and more memorable. Moreover, we calculate diversity as the average pairwise edit distance of the generated data, following the approach of Habrard et al.
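To make these music metrics concrete, the sketch below scores a melody given as a list of MIDI pitch numbers; parsing ABC notation into pitches is assumed to happen elsewhere, and the exact normalisation used in the experiments may differ.

def intervals(pitches):
    # Absolute interval sizes, in semitones, between consecutive notes.
    return [abs(b - a) for a, b in zip(pitches, pitches[1:])]

def tonality(pitches):
    # Fraction of consecutive intervals that are perfect fifths
    # (7 semitones, a frequency ratio of roughly 3:2).
    ivs = intervals(pitches)
    return sum(1 for i in ivs if i == 7) / max(len(ivs), 1)

def ratio_of_steps(pitches):
    # Fraction of intervals that are steps (1 or 2 semitones),
    # i.e. conjunct melodic motion rather than skips.
    ivs = intervals(pitches)
    return sum(1 for i in ivs if i in (1, 2)) / max(len(ivs), 1)

# Example: a C major scale fragment is all steps and contains no fifths.
print(tonality([60, 62, 64, 65, 67]), ratio_of_steps([60, 62, 64, 65, 67]))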
We do not attempt to maximize the diversity metric explicitly, but we keep track of it to shed light on the trade-off between metric optimization and sample diversity in the ORGAN framework. Table 3 shows quantitative results comparing ORGAN to the other baseline methods when optimizing for three different metrics. ORGAN outperforms SeqGAN and MLE on all three metrics. Naive RL achieves a higher score than ORGAN for the Ratio of Steps metric, but it underperforms in terms of diversity, as naive RL tends to generate very simple rather than diverse songs. In this sense, similar to the molecule case, although the naive RL ratio-of-steps score is higher than ORGAN's, the actual generated songs can be deemed much less interesting. By tweaking λ, the ORGAN approach allows one to explore the trade-off between maximizing the desired objective and maintaining diversity or "interestingness." Besides the expected correlation between Tonality and Melodicity, we also noticed an inverse relationship between Ratio of Steps and either of the other two. This is because two consecutive notes, which is what qualifies as a step, do not have the frequency ratios of perfect fifths, perfect fourths, or major sixths, which are responsible for increasing Melodicity. Figure 4 shows the distribution of the tonality of data sampled from ORGAN with different values of λ and from the training set (the x axis represents the tonality metric, while the y axis represents the frequency of that particular tonality in the sampled data). As λ increases, the curves become smoother, since the discriminator forces the model to approach the structure of the training set. Without the discriminator component, naive RL (λ = 0) creates a distribution that does not seem very realistic because of its lack of diversity. In all cases, however, the RL component of ORGAN successfully skews the model towards data with higher values of tonality. Conclusion and Future Work In this work, we have presented a general framework which builds on the recent advances of Generative Adversarial Networks to optimize an arbitrary objective in a sequence generation task. We have shown that ORGAN improves desired metrics, achieving better results than RNNs trained via either MLE or SeqGAN. More importantly, we are able to tune the data generation towards a particular reward function while using the adversarial setting to keep the data non-repetitious. Moreover, ORGAN is much easier to use as a black box than similar objective optimization models, since one does not need to introduce multiple domain-specific penalties to the reward function: many times a simple objective "hint" will suffice. Future work should attempt to formalize ORGANs from a theoretical standpoint in order to understand when and how they converge. It is crucial to understand when GANs converge in general, which is still an open question. Future work should also do more to understand the influence of the choice of heuristic on the performance of the model. Finally, future work should extend ORGANs to work with data that is not sequential, such as images. This requires framing the GAN setup as a reinforcement learning problem in order to add an arbitrary (not necessarily differentiable) objective function. We believe this extension to be quite promising since real-valued GANs are currently better understood than sequence-data GANs.
Permutation Separations and Complete Bipartite Factorisations of K_{n,n} Suppose p < q are odd and relatively prime. In this paper we complete the proof that K_{n,n} has a factorisation into factors F whose components are copies of K_{p,q} if and only if n is a multiple of pq(p+q). The final step is to solve the "c-value problem" of Martin. This is accomplished by proving the following fact and some variants: for any 0 ≤ k < n, there exists a sequence (π_1, π_2, ..., π_{2k+1}) of (not necessarily distinct) permutations of {1, 2, ..., n} such that each value in {−k, 1−k, ..., k} occurs exactly n times as π_j(i) − i for 1 ≤ j ≤ 2k+1 and 1 ≤ i ≤ n.
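The combinatorial statement can be checked by brute force for small n and k. The sketch below is purely illustrative (it plays no role in the proof): it tests whether a given sequence of 2k+1 permutations has each value in {−k, ..., k} occurring exactly n times among the differences π_j(i) − i.

from collections import Counter

def has_separation_property(perms, n, k):
    # perms: 2k+1 permutations of {1, ..., n}, each a tuple with perm[i-1] = pi(i).
    assert len(perms) == 2 * k + 1
    diffs = Counter(p[i - 1] - i for p in perms for i in range(1, n + 1))
    # Each of the 2k+1 values in {-k, ..., k} must occur exactly n times,
    # which also forces every difference to lie in that range.
    return all(diffs[v] == n for v in range(-k, k + 1))

# n = 3, k = 1: the permutations (2,1,3), (1,3,2), and (2,1,3) again
# (repeats are allowed) give each of -1, 0, 1 exactly three times.
print(has_separation_property([(2, 1, 3), (1, 3, 2), (2, 1, 3)], n=3, k=1))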
The Global Trade in Live Cetaceans: Implications for Conservation Cetaceans (small whales, dolphins, and porpoises) have long been popular performers in oceanaria. Captive cetaceans have also been used for research and employed in military operations. In some jurisdictions cetacean display facilities have been phased out or prohibited, and in the US and Hong Kong a high proportion of the whales and dolphins now in captivity have been captive-bred. A large, growing and increasingly opportunistic trade in dolphins and small toothed whales nevertheless exists, its centres of supply having shifted away from North America, Japan, and Iceland to the Russian Federation and developing nations in Latin America, the Caribbean, West Africa, and Southeast Asia. Demand for live captures is being driven by: a new wave of traditional-type oceanaria and dolphin display facilities, as well as travelling shows, in the Middle East, Asia, Latin America, and the Caribbean; increasingly popular programs that offer physical contact with cetaceans, including the opportunity to feed, pet, and swim with them; and the proliferation of facilities that offer dolphin-assisted therapy to treat human illness or disability. Rigorous assessment of source populations is often lacking, and in some instances live capture is adding to the pressure on stocks already at risk from hunting, fishery bycatch, habitat degradation, and other factors. All too often, entrepreneurs appear to be taking advantage of lax (or non-existent) regulations in small island states or less developed or politically unstable countries to supply the growing global demand for dolphins and small whales. The regulation of trade in live cetaceans under CITES is fraught with problems, not least the poor quality of reporting and the lack of a rigorous mechanism for preparation, review, and evaluation of non-detriment findings. Preparation of this article was supported by WDCS, the Whale and Dolphin Conservation Society. The authors are especially grateful to Cathy Williamson and Philippa Brakes for contributing data, ideas, and helpful critical comments.
Imaging sparse scatterers through a multi-frequency CS approach In this paper an inverse scattering technique based on the multi-task Bayesian Compressive Sensing is presented within a multi-frequency framework. After recasting the problem in a probabilistic sense, the solution to the imaging problem is determined by means of an efficient Relevant Vector Machine coupled with a contrast source inversion procedure. Selected numerical results are discussed to assess and compare the efficiency and robustness of the proposed strategy with respect to the state-of-the-art techniques.
Hypoconnectivity networks in schizophrenia patients: a voxelwise meta-analysis of rs-fMRI In the last few years, the eld of brain connectivity has focused on identifying biomarkers to describe different health states and to discriminate between patients and healthy controls through the characterization of brain networks. A particularly interesting case, because of the symptoms' severity, is the work done with samples of patients diagnosed with schizophrenia. This meta-analysis aims to identify connectivity networks with different activation patterns between people diagnosed with schizophrenia and healthy controls. Therefore, we collected primary studies exploring whole brain connectivity by functional magnetic resonance imaging at rest in patients with schizophrenia compared to healthy people. Thus, we identied 25 high-quality studies that included a total of 1285 people with schizophrenia and 1279 healthy controls. The results indicate hypoactivation in the right precentral gyrus and in the left superior temporal gyrus of people with schizophrenia compared with the control group. These regions have been linked to decits in gesticulation and the experience of auditory hallucinations in people with schizophrenia. A study of heterogeneity demonstrated that the effect size was inuenced by the sample size and type of analysis. These results imply new contributions to the knowledge, diagnosis, and treatment of schizophrenia both clinically and in research. Introduction Schizophrenia is the most important severe mental health disorder and implies an extraordinary health problem. Risk factor identi cation and etiological studies remain unresolved in the scienti c response. The genetic and neurobiological factors that have been associated with schizophrenia are quite heterogeneous. Several studies have focused on the dopaminergic dysfunction hypothesis concerning schizophrenia. For example, dopaminergic hypoactivity has been associated with the prefrontal cortex with negative symptomatology (Buckley and Castle, 2015). Currently, there is no biological marker for the diagnosis of schizophrenia. Thus, the early identi cation of people with a high risk of schizophrenia poses a major public health challenge before they begin to manifest the symptomatology of the disorder. Consequently, a better understanding of this disorder's neurological substrates can help to identify better strategies for early diagnosis and psychological and individualized pharmacological treatment (Nickl-Jockschat and Abel, 2016). Functional magnetic resonance imaging in the resting state (rs-fMRI) has proven to be a promising tool to contribute to the diagnosis of several disorders, such as autism (), attention and hyperactivity de cit disorder (), major depression disorder (Craddock, Holtzheimer, Hu and Mayberg, 2009) and schizophrenia Chyzhyk, Savio and Graa, 2015;;;Qureshi, Oh and Lee, 2019). Thus, it is a noninvasive technique and does not require the active collaboration of the patient (Lee, Smyser, and Shimony, 2013), which is especially important for evaluating brain activity in those populations that have affected their cognitive performance. In the study of connectivity in rest and schizophrenia, evidence has shown signi cant differences in patients compared to healthy populations. More speci cally, the disconnection hypothesis has been studied, a framework in which the diversity of symptoms typical of schizophrenia is conceptualized as a result of disconnections in neural networks (Friston and Frith, 1995). 
In line with this hypothesis, alterations in the default mode network (DMN), the most prominent resting network (Woodward, Rogers and Heckers, 2011), have been shown. In addition, a reduction in precuneus connectivity with other areas was noted in patients with schizophrenia compared to the control group. Connectivity intensity in this area is negatively correlated with the severity of negative symptoms, more speci cally with the apathy domain (). Gangadin et al. also demonstrated a reduction in the connectivity of the hypocampal-mesencephal-striate network in patients with schizophrenia. In addition, patients showed increased long-range positive connectivity in the right middle frontal gyrus (MFG) and short-range positive connectivity in the right MFG and right superior medial prefrontal cortex, which are brain regions in the anterior DMN. Hua et al. noted decreased connectivity between the thalamus and prefrontal cortex and cerebellum but an increase in the connectivity of the thalamus and the motor cortex in patients with schizophrenia. In addition, studies of bilateral asymmetric connectivity have noted that patients with a predominance of positive symptomatology showed signi cantly more asymmetry to the left hemisphere. Instead, the predominance group of negative symptoms showed more asymmetry to the right. These results suggest that predominantly positive and predominantly negative schizophrenia may have different neural bases and that certain regions in the frontal and temporal lobes, as well as the gyrus and precuneus, play an essential role in mediating the symptoms of this disorder (). Additionally, Chen et al. showed evidence that cerebellum disconnection is network-speci c; that is, the group of patients with schizophrenia showed decreased cerebellum connectivity with the prefrontal lobe and more corticocerebellar connectivity with regions involved in sensory-motor processing, which may be indicative of the de ciencies in inhibition observed in people with schizophrenia. In addition, Li et al., also with schizophrenia patients, showed reduced insula connectivity with the sensory cortex and putamen compared to people with a high risk of psychotic disorder. Complementarily, schizophrenia patients have shown increased connectivity between the posterior cingulate cortex and the inferior left gyrus, mid-left frontal gyrus, and mid-left temporal gyrus. Conversely, schizophrenia patients have shown decreased connectivity in the executive control network and the dorsal attention network. These results show that resting-state network connectivity is altered in patients with schizophrenia, so the alterations are characterized by reduced segregation between the DMN and the executive control networks in the prefrontal cortex and temporal lobe (Woodward, Rogers and Heckers, 2011). This study found no statistically signi cant distinctions in the connectivity of the salience network; instead, Huang et al. showed evidence of hyperconnectivity of the salience network and the prefrontal cortex and cerebellum, as well as hypoconnectivity between the cortico-striatal-thalamic-cortical subcircuit and the salience network. In recent years, some meta-analyses have explored rs-fMRI in patients with schizophrenia compared to control groups. For example, Xiao et al. 
showed evidence that people with schizophrenia had increased connectivity, estimated with regional homogeneity (ReHo), in the right superior frontal and right superior temporal gyrus, as well as decreased ReHo connectivity in the right fusiform gyrus, left superior temporal gyrus, left postcentral gyrus, and right precentral gyrus (focused on ReHo studies). Dong et al. conducted a meta-analysis showing that patients with schizophrenia presented hypoconnectivity in the DMN, affective network (AN), ventral attentional network (VAN), thalamic network (TN), and somatosensory network. They also showed hypoconnectivity between van and TN, VAN and DMN, VAN and the frontoparietal (FN), between FN and TN, and between FN and DMN. Only hyperconnectivity was found between the AN and VAN (focused on seed-based analysis studies). The abovementioned Li et al. showed evidence through a meta-analysis that supports hypoconnectivity in certain brain networks in schizophrenic patients. More speci cally, the self-referential network (superior temporal gyrus) and DMN (right medial prefrontal cortex and left precuneus and anterior cingulate) focused on independent component analysis (ICA) studies. Finally, Gong et al. showed that people with schizophrenia presented a decreased amplitude of low frequencies (ALFF) in the bilateral postcentral gyrus, bilateral precuneus, left inferior parietal gyrus, and right occipital lobe. In addition, they found an increased ALFF in the right-handed, left inferior frontal gyrus, left inferior temporal gyrus, and right anterior cingulated cortex. To our knowledge, no meta-analysis includes rs-fMRI studies involving the whole brain, as well as studies that use different analysis techniques (ICA, ReHo, ALFF, falFF, etc.). Moreover, given the incongruences between the studies in this eld, the aim of this meta-analysis is to identify functional connectivity networks of the whole brain using a paradigm of rs-fMRI in patients with schizophrenia compared to healthy people (without any neurological or psychiatric disorder). Thus, it is expected that patients diagnosed with schizophrenia will show statistically signi cant differences in functional connectivity compared to healthy people. In addition, a secondary objective is to analyze the relationship between the effect size and mediator variables, such as sample size, age, gender, etc. Methods Study selection. Two independent investigators performed a bibliographic search using the following databases: PubMed, Web of Science (WoS), Psycinfo, Google Scholar, and Scopus. Additionally, the Boolean algorithm with the keywords used is presented in Supplementary Appendix 1. We included studies published until February 28, 2021. This meta-analysis was conducted according to the "Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA)" guidelines. The inclusion criteria for the studies were as follows: 1) they were published in English or Spanish; 2) the full text was available; 3) they were a primary study in a human population; 4) they included a patient group diagnosed with schizophrenia following the DSM-IV criteria or the structured clinical interview for DSM-IV (SCID); 5) they compared brain activation between schizophrenia patients and healthy people; 6) they used rs-fMRI; or 7) the studies reported Montreal Neurological Institute (MNI) or Talairach coordinates of the whole brain contrast comparing schizophrenia persons and control subjects. 
The exclusion criteria were as follows: 1) systematic reviews or meta-analysis; 2) methodologic studies; 3) patients with schizophrenia with other psychiatric or neurological disorders; or 4) studies focused on dynamic connectivity or use graph analysis or any other technique that does not identify coordinates. The studies were screened out as shown in Figure 1. Our search yielded a total of 3563 studies . Subsequently, 2106 duplicate papers were removed through Mendeley, and 64 duplicate papers were removed via Rayyan. A total of 1234 studies were excluded after title/abstract screening because they did not meet the inclusion criteria. Later, 159 articles were sought to read the full text, but 8 studies could not be found. Thus, after the full-text screening, 73 studies were excluded because they did not provide information about the peak activation coordinates, 16 because they did not report on the statistics associated with the coordinates, and 37 because they did not analyze the whole brain. Finally, only 25 studies matching our inclusion criteria were included and are marked with an * in the reference list. In addition, we obtained a 100% rate of agreement between the two investigators for the study search and selection. Voxel-Wise Meta-analysis. We used seed-based d mapping (SDM) software (available at http://www.sdmproject.com) to analyze the differences between schizophrenia patients and healthy subjects. The approach details have been described in Radua and Mataix or Mller et al.. First, the reported peak coordinates of all functional differences, which were statistically signi cant at the whole brain level in these studies, were chosen. We ensured that all included studies used the same statistical threshold throughout the whole brain to avoid possible bias toward regions with liberal thresholds. Thus, we considered the minimum threshold to be de ned by a.001 signi cance value and Student's t reference value with the degrees of freedom of each study estimated by the conventional expression (n 1 +n 2 -2). Second, we recreated peak coordinates for each study with a standard MNI map of the group difference effect size based on their peak t value by means of a nonnormalized Gaussian kernel to the voxels near the peak, which assigns higher values to the voxels closer to peaks. Third, the mean map was obtained by voxelwise calculation of the study map random-effects mean, weighted by the sample size. Fourth, to correctly balance the sensitivity and speci city, we used the p value of 0.05 as the main threshold with an additional peak height of z = 1. Jackknife sensitivity analysis was performed to test the replicability of the results. After the calculation of Cohen's d and the con dence interval (CI) analysis of the different papers was performed, a descriptive analysis of every paper result was resumed in different images to clarify the results obtained in every included study. Quality Assessment. We assessed the quality of the included studies using a checklist consisting of 11 items that focused on the clinical characteristics of the participants, the neuroimaging and data analysis methodology, the results, and the conclusions of the studies. The quality assessment scale is shown in Supplementary Appendix 2. This checklist was based on previous metanalyses and has been described elsewhere (Shepherd, Matheson, Laurens and Green, 2012;). One author reviewed the included studies and determined a complete rating. 
The resulting scores were discussed between two investigators, and a consensus quality score was obtained. Results Studies included in the meta-analysis. Supplementary Appendix number 3 shows the data obtained in each study to describe each mediator variable for each analyzed paper. Table 1 shows the basic descriptive statistics of the mediator variables. We want to highlight that the total number of patients analyzed assumes a signi cant number (n = 1285) and a control group (n = 1279). With regard to the techniques used to estimate connectivity networks, the most common are ALFF (20.68%) and ReHo (34.48%). Meta-analysis Result. In Figures 2 and 3, the forest plot shows the effect size of each study, as well as the total mean of the effect size and a con dence interval of 95%. Conversely, if the effect size was negative, the patient group showed decreased activation compared to the healthy group. Therefore, we can see that the most signi cant positive effect size found is 1.715, with the upper limit being 2.448. On the other hand, the highest negative effect size is -1.673, with the lower limit being -2.361. Notably, in the case of negative differences, the work of Turner et al. and Fryer et al. show a lower width of the con dence interval, and the same work by Turner et al. shows the most accurate interval in the case of positive differences. In this meta-analysis, people with schizophrenia did not show any hyperactivation compared to controls. However, they show decreased activation in the right precentral gyrus, speci cally in the Brodmann area (BA) 4. In addition, they also show hypoactivation in the left superior temporal gyrus corresponding to BA 22. Thus, two clusters were found, one with 640 voxels and one with 150 voxels. The results are displayed in Table 2. In addition, Figure 4 shows the graphical representation of the areas that are hypoactivated (visualized with BrainNet Viewer; Xia, Wang and He, 2013; http://www.nitrc.org/projects/bnv/). Note that the size of the node is proportional to the voxels it represents. That is, the larger the node, the more voxels there are in that area and the larger the region it represents. It is also worth noting that the nodes represented in blue correspond to the right precentral gyrus (BA 4), and the yellow nodes refer to the left superior temporal gyrus (BA 22). (Table 3) was carried out to check the replicability of the results. This analysis revealed that the right precentral gyrus (with coordinates of 50, -10, 40) was replicable in all 29/29 datasets (each dataset with one study left out) and that the left superior temporal gyrus (-58, -20, 4) was replicable in 26/29 datasets (the results were not con rmed only in 3 of the simulations performed, that is, 89.66% reliability). Therefore, we can establish a very high reliability of the results obtained. Reliability analysis. A jackknife sensitivity analysis Publication bias analysis. In Figures 5 and 6, we can see funnel plots wherein publication bias is graphically displayed. As we can see, the graphs suggest that there is no publication bias since the points (which correspond to the effect sizes of each study) are distributed uniformly on one side and on the other side of the value 0 of the abscises axis (effect size). Therefore, in our case, we can see how studies have been published with increasingly signi cant effects. In addition, Table 4 shows the values of Z statistics, as well as their signi cance (p =.504 and p =.751), which indicates that there is no bias. 
Heterogeneity analysis. To determine the possible heterogeneity between the studies included in the present meta-analysis, Q and I 2 statistics were estimated. The results obtained are in (Table 5), where the values of tau () representing the variance of the effect size distribution are displayed. In addition, we can see that Q statistics, from both positive and negative peaks, are statistically signi cant (p <.001). In this way, there is heterogeneity between the different studies, so it is appropriate to explore the mediator variables that could explain this phenomenon. On the other hand, it must be pointed out that the degree of heterogeneity is calculated by the I 2 index, as we can see are values that indicate a moderate degree of heterogeneity. As mentioned above, the fact that there is heterogeneity between the studies included in the metaanalysis leads us to perform an analysis of possible mediator variables that could explain the variability between the effect sizes (here considered in absolute value). Since more than one statistically signi cant effect is present in some works, the effect size of each article analyzed is estimated by the mean of the effect size included in each paper. The following variables were explored: type of data analysis used in studies (ReHo, ALFF, fALFF, etc.), sample size, age, and sex of the patient group and control group, total, general, positive, and negative PANSS scores applied to the group of patients with schizophrenia, illness duration, and quality of the studies analyzed. Categorical variables. First, Welch's t-test was used to analyze the relationship between the effect size and the type of analysis (ReHo or ALFF, the rest of the techniques were underrepresented and were eliminated from this analysis). The results indicate that there was a statistically signi cant relationship between the effect size and the type of analysis used (t = -2.381; df = 13.927; p uni =.016; r =.538). In fact, the mean effect size in ReHo is d = 1.242, and in ALFF studies, d =.964, which suggests that studies using ReHo can obtain a greater effect size than those using ALFF, and this effect has high intensity according to Cohen's criteria. Quantitative variables: Meta-regression analysis. A meta-regression was performed to analyze whether the quantitative variables described above (Table 1) had a statistically signi cant impact. Thus, Table 6 presents the minimum quadratic estimates of each meta-regression to evaluate the effect of each mediator variable on the estimate of the effect size. From the above table, it follows that only sample sizes are statistically signi cant predictors of the effect size heterogeneity. The negative value of the regression coe cients indicates that works with higher sample sizes obtain lower values of the effect. In summary, the analysis performed with mediator variables indicates that, in the studies analyzed, the effect size is higher in ReHo estimated with smaller samples. Discussion To our knowledge, this is the rst meta-analysis to study rs-fMRI in the whole brain in patients with schizophrenia compared to healthy people, including studies that use different analysis techniques. The results indicate that there was hypoactivation of the right precentral gyrus and left superior temporal gyrus in patients with schizophrenia compared to the control groups. These results are congruent with other ndings. Dong et al. 
also showed evidence in his metaanalysis (only ALFF studies) of a reduction in connectivity in the left superior temporal gyrus in patients with schizophrenia. Similarly, another meta-analysis also showed evidence of decreased ReHo in this area (). Additionally, in a systematic review, connectivity alterations were found in this area in studies performed with both rs-fMRI and task fMRI (). In addition, children of schizophrenic patients also show reduced activation of the left superior temporal gyrus during hearing comprehension (). It should be noted that dysfunction in this region has been related to the presence of auditory hallucinations in patients with schizophrenia (;Hugdahl, Lberg and Nygrd, 2009). More speci cally, Plaze et al. showed evidence that the anterior area of the left superior temporal gyrus is part of the brain network associated with the perception of auditory hallucinations in patients with schizophrenia, indicating that activity in this cortical region may be related to the severity of hallucinations. Activation of this area has also been demonstrated during the experience of auditory verbal hallucinations (). Consistent with our results, Gong et al. showed evidence of ALFF alteration in the right precentral gyrus. In addition, Li et al. found hypoconnectivity between the right precentral gyrus, which is involved in motor function, and the postcentral and precentral gyrus and cerebellum. Additionally, Xiao et al. showed evidence of decreased ReHo in the right central gyrus. In addition, hypoactivation in this area has been shown in relatives of people with schizophrenia compared to the control group (Scognamiglio and Houenou, 2014). It should be noted that dysfunctions in praxis networks in patients with schizophrenia, which includes the right precentral gyrus, correlate with de cits in the gesticulation of these patients (). Our results are congruent with the meta-analysis of Gong et al., and we found that the duration of illness was not related to the effect size. Nevertheless, unlike the meta-analysis of Gong et al., we did not nd that any of the PANSS scores were related to the effect size. There are some limitations in this meta-analysis. First, we did not include those studies that did not report the coordinates or those that did not report the associated statistics. In addition, we have not taken into account the different subtypes of schizophrenia, which would be interesting for future research. Additionally, some analysis techniques used by the studies that we included were underrepresented and could not be taken into account when evaluating whether they could predict the effect size. There were several missing values regarding PANSS scores.
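For reference, the heterogeneity statistics reported above (Cochran's Q and the I² index) follow standard meta-analytic formulas; the sketch below illustrates that computation from per-study effect sizes and variances, and is not SDM's internal implementation.

import numpy as np

def heterogeneity(effect_sizes, variances):
    # Cochran's Q: weighted sum of squared deviations from the pooled mean,
    # with inverse-variance weights. I^2: share of total variability that is
    # attributable to between-study heterogeneity rather than sampling error.
    d = np.asarray(effect_sizes, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    d_bar = np.sum(w * d) / np.sum(w)          # fixed-effect pooled estimate
    q = float(np.sum(w * (d - d_bar) ** 2))
    df = len(d) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Example with made-up effect sizes (Cohen's d) and their variances:
q, i2 = heterogeneity([1.2, 0.9, 1.7, -0.4], [0.05, 0.08, 0.10, 0.06])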
Throughput maximization of ad-hoc wireless networks using adaptive cooperative diversity and truncated ARQ We propose a cross-layer design which combines truncated ARQ at the link layer and cooperative diversity at the physical layer. In this scheme, both the source node and the relay nodes utilize an orthogonal space-time block code for packet retransmission. In contrast to previous cooperative diversity protocols, here cooperative diversity is invoked only if the destination node receives an erroneous packet from the source node. In addition, the relay nodes are not fixed and are selected according to the channel conditions using CRC. It will be shown that this combination of adaptive cooperative diversity and truncated ARQ can greatly improve the system throughput compared to the conventional truncated ARQ scheme and fixed cooperative diversity protocols. We further maximize the throughput by optimizing the packet length and modulation level and will show that substantial gains can be achieved by this joint optimization. Since both the packet length and modulation level are usually discrete in practice, a computationally efficient algorithm is further proposed to obtain the discrete optimal packet length and modulation level.
Ulcerative colitis complicated by autoimmune hepatitis-primary biliary cholangitis-primary sclerosing cholangitis overlap syndrome. Autoimmune liver diseases mainly include autoimmune hepatitis (AIH), primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC), and overlap syndromes. Patients with IBD are commonly also affected by autoimmune liver diseases. We describe a rare case of UC complicated by AIH-PBC-PSC overlap syndrome. The male patient had a long history of UC. After being admitted to hospital, the patient was found to have abnormal liver function and was diagnosed with AIH-PBC-PSC overlap syndrome by liver puncture biopsy. This case will help us learn more about patients diagnosed with IBD complicated by autoimmune liver diseases.
Measurement of tt̄ and single top quark production cross sections in CMS With a delivered integrated luminosity of around 140 fb⁻¹ at a center-of-mass energy of 13 TeV in the CMS experiment during Run 2, almost 300 million top quarks and top antiquarks were produced. As top quarks can be produced through either the strong or the electroweak interaction, they are a suitable tool to probe the strong and electroweak sectors of the standard model. Precision measurements of top quark pair (tt̄) and of single top quark production cross sections deliver constraints on several standard model parameters, e.g., the top quark mass, the strong coupling α_S, and the parton distribution functions. In addition, a sufficient amount of data has been collected to search for rare tt̄ production modes. In this contribution, recent measurements of the differential tt̄ cross sections and the latest inclusive and differential cross section measurements for single top quark production in association with a W boson performed by the CMS Collaboration are presented, as well as the search for exclusive tt̄ production using the CMS-TOTEM Precision Proton Spectrometer.
Physical forcing of nitrogen fixation and diazotroph community structure in the North Pacific subtropical gyre Dinitrogen (N2)-fixing microorganisms (termed diazotrophs) exert important control on the ocean carbon cycle. However, despite increased awareness of the roles of these microorganisms in ocean biogeochemistry and ecology, the processes controlling variability in diazotroph distributions, abundances, and activities remain largely unknown. In this study, we examine 3 years (2004–2007) of approximately monthly measurements of upper ocean diazotroph community structure and rates of N2 fixation at Station ALOHA (22°45′N, 158°W), the field site for the Hawaii Ocean Time-series program in the central North Pacific subtropical gyre (NPSG). The structure of the N2-fixing microorganism assemblage varied widely in time, with unicellular N2-fixing microorganisms frequently dominating diazotroph abundances in the late winter and early spring, while filamentous microorganisms (specifically various heterocyst-forming cyanobacteria and Trichodesmium spp.) fluctuated episodically during the summer. On average, a large fraction (∼80%) of the daily N2 fixation was partitioned into the biomass of <10 μm microorganisms. Rates of N2 fixation were variable in time, with peak N2 fixation frequently coinciding with periods when heterocystous N2-fixing cyanobacteria were abundant. During the summer months, when sea surface temperatures exceeded 25.2°C and concentrations of nitrate plus nitrite were at their annual minimum, rates of N2 fixation often increased during periods of positive sea surface height anomalies, as reflected in satellite altimetry. Our results suggest mesoscale physical forcing may comprise an important control on variability in N2 fixation and diazotroph community structure in the NPSG.
Perennial Language Learners or Competent Language Users: An Investigation of International Students' Attitudes towards Their Own and Native English Accents English is widely used as a global language. The traditional monolithic model of English has been challenged as the World Englishes (WE) and English as a lingua franca (ELF) paradigms challenge the ownership of English. With this newly emerging status quo, English language teaching (ELT) should also recognize the diversity and dynamism of English. This article discusses students' attitudes towards their own and native English accents, and describes the influence of English accents in ELT. Data were collected using semi-structured interviews with nine international students from Cambodia, China, Indonesia, Malaysia, and Sri Lanka who were studying at a university in Southern Thailand. The derived data were analysed using qualitative content analysis. The findings revealed that most students still perceived their accents as being deficient, and they believed that native speakers' English accents were the norm of English use and the ultimate learning goal. Thus, entrenched native ideology was still persistent among these students. The article also provides some implications for pronunciation teaching from a WE and ELF framework with the Teaching of Pronunciation for Intercultural Communication (ToPIC). It is hoped that an awareness of English as a global language can be fostered, and that ToPIC can be applied to ELT in more contexts to reflect the global status of English.
Traumatic work related mortality among seafarers employed in British merchant shipping, 1976–2002 Aims: To establish the causes and circumstances of all traumatic work related deaths among seafarers who were employed in British merchant shipping from 1976 to 2002, and to assess whether seafaring is still a hazardous occupation as well as a high risk occupation for suicide. Methods: A longitudinal study of occupational mortality, based on official mortality files, with a population of 1 136 427 seafarer-years at risk. Results: Of 835 traumatic work related deaths, 564 were caused by accidents, 55 by suicide, 17 by homicide, and 14 by drug or alcohol poisoning. The circumstances in which the other 185 deaths occurred, including 178 seafarers who disappeared at sea or were found drowned, were undetermined. The mortality rate for the 530 fatal accidents that occurred at the workplace from 1976 to 2002, 46.6 per 100 000 seafarer-years, was 27.8 times higher than in the general workforce in Great Britain during the same time period. The fatal accident rate has declined sharply since the 1970s, but the relative risk of a fatal accident was still 16.0 in 1996–2002. There was no reduction in the suicide rate from 1976 to 1995, which was comparable to that in most high risk occupations in Britain, but there has been a decline since 1995. Conclusions: Although there was a large decline in the fatal accident rate in British shipping, seafaring has remained a hazardous occupation compared to the general workforce. Further prevention should focus on improvements in safety awareness among seafarers and shipping companies, reductions in hazardous working practices, and improvements in care for seafarers at risk of suicide.
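The headline accident rate quoted above follows directly from the reported counts; a short sketch of the person-years arithmetic (using only figures from the abstract) is shown below, with the general-workforce rate backed out from the stated relative risk.

deaths = 530                        # workplace fatal accidents, 1976-2002
person_years = 1_136_427            # seafarer-years at risk
rate = deaths / person_years * 100_000
print(f"{rate:.1f} fatal accidents per 100 000 seafarer-years")   # ~46.6

# Implied general-workforce rate, given the reported relative risk of 27.8:
general_rate = rate / 27.8          # ~1.7 per 100 000 worker-years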
Study on Direct-Driven Wind Power System Control Strategy The development of direct-driven wind power system using permanent magnet synchronous generator (PMSG) is very fast, and the back-to-back converter has been paid much attention for its excellent performance. The work principle of the generator-side converter (GSC) and control strategy of PMSG are explained in detail, and the steady-state and dynamic performances are analyzed by simulation. The experimental prototype is built to achieve generation and motor state operation, and the vector control of PMSG is realized by generator-side converter. The simulation and experiment results show that using PWM converter as generator-side converter for direct-driven wind power system with PMSG, can achieve good control performance for PMSG and provide possibility to supply excellent power energy transmitted to power grid.
Getting the right right: redefining the centre-right in post-communist Europe Existing literature on the centre-right in Eastern and Central Europe is small and fragmentary, in contrast with the voluminous, detailed and often sophisticated comparative literatures on the Left and the Far Right in the region. A review and synthesis of the existing literature suggests the possibility of a definition of the Right and Centre-Right in the region, which can both accommodate its diversity and provide a shared framework for analysis. The Centre-Right should be understood as neither an atavistic throwback to a pre-communist past nor a product of the straightforward assimilation of Western ideologies. Rather, it is a product of the politics of late communism, domestic reform, European integration and post-Cold War geopolitical realignment, which has powerfully reshaped historical influences and foreign models. © 2004 Taylor & Francis Ltd.
A Content Analysis of Military Psychology: 2002–2014 Content analysis of articles published in professional journals is a viable method to assess the trends and topics a profession deems to be important. Military psychology does not involve only one subspecialty of psychologists, so research from many different perspectives has contributed to the field. The purpose of this manuscript is to present a post-9/11 content analysis of articles published in Military Psychology to identify critical issues and trends in the research and practice of military psychology. A total of 379 articles were analyzed; the majority were empirical (n = 304, 80.2%), and most of these employed quantitative methods (n = 283, 93.1%). The primary key topics were personnel (air force, army, military, and navy; n = 166), military (psychology, training, veterans, etc.; n = 104), and career issues (e.g., employee, interests, job, vocation, etc.; n = 57). Trends and directions for the future of military psychology are also considered.
The effect of technology readiness in IT adoption on organizational context among SMEs in the suburbs of the capital The role of Information Technology (IT) in the digital era is critical in the business environment. IT provides convenience in the process of managing a Small and Medium Enterprise (SME), so that companies have a competitive advantage in current economic development. Some research has discussed the impacts and uses of IT adoption in SMEs, in particular in the suburbs bordering the national capital. The purpose of this study is to identify the impact of IT and the factors that influence technology readiness for IT adoption in the organizational context among SMEs located in the suburbs of Jakarta. This article follows a quantitative research approach based on case studies and structured questionnaires. The results of SEM analysis using the SmartPLS3 application show that awareness of technology, local government support, SME management support, and financial support are essential factors in IT adoption in SMEs. This article also examines the limitations SMEs face in using IT and makes recommendations on how to overcome them. It is hoped that this research will contribute to the development of information systems models for both academics and practitioners.
Chopping secondary mirror control systems for the W. M. Keck Telescopes The Keck 1 chopping secondary was built by the Palo Alto Research Laboratories of the Lockheed (now Lockheed Martin) Missiles and Space Company. The only software component of the delivered system is a proprietary error correction algorithm; Keck wrote software to generate acceleration-limited azimuth and elevation demands, to rotate these demands as a function of telescope position, to interact with the error correction system, and to mange hardware start-up and shutdown. The Keck 2 chopping secondary, also built by Lockheed, was originally conceived as an infrared fast steering mechanism (IFSM) and is simpler than the Keck 1 system, with lower power and acceleration limits and, therefore, lower chop amplitude and frequency specifications. As far as possible, it provides the same external interfaces as the Keck 1 system. A new EPICS- based telescope control system has been written for Keck 2 and was retrofitted on Keck 1 in March 1997. The Keck 1 chopper control software has been converted to the EPICS environment and, at the same time, altered so that the same software supports both choppers. This conversion has retained as much as possible of the complex real-time code of the old system while at the same time fully utilizing EPICS facilities. The paper presents more details of both the old and the new systems and illustrates how the new system is simpler than the old as well as being much better integrated into the overall telescope control system. Operational experience is presented.
A primitive parathyroid adenoma has been studied by electron microscopy, analytical ion microscopy, and electron probe X-ray analysis. A number of lysosomal structures have been observed in the cells. Observation of unstained ultrathin sections shows that these lysosomes contain two varieties of structures: dense homogeneous droplets and very dense, small granulations. Aluminium associated with phosphorus has been detected in high concentration in the small granulations. The relations between aluminium and parathyroid function, and the possible role of aluminium in the pathology of the parathyroid gland, remain to be clarified.
Water Quality Assessment of Anchar Lake, Srinagar, India Abstract The aim of this study was to ascertain the current condition of the Anchar lake water body in the Indian state of J&K in terms of water quality using some main parameters such as pH, TDS, EC, DO, and nitrates content. For the years 2019 and 2020, samples were obtained for two seasons: summer and winter. The quantitative analysis of the experimental results indicates a general increasing trend and considerable variance in nitrates content, as well as a gradual decrease in pH, indicating that the lakes acidity is increasing, but only within the basicity range, with real values approaching neutrality: TDS and EC content suggest a very favorable situation, but when the overall parameters are tested, they show a defect. Since the sampling sites were well aerated, the dissolved oxygen content showed a growing pattern, and as a result, this metric proved to be useless in deciding the overall scenario in the lake. In the winter, the longitudinal trend line indicates a 10% decrease in pH, while in the summer, it shows a 4.4 percent decrease in pH. In winters, the longitudinal trend line reveals a 6.7 percent growth in nitrate content, while summers see a marginal decline. In the winter, the longitudinal trend line shows a 7% rise in dissolved oxygen, while in the summer, it shows a uniform trend.
Social Knowledge: The Study of Three Processes of Metamorphosis Social knowledge is more dynamic than natural science. A full recognition of this character is the precondition for upholding the validity of statements in social knowledge. In order to maintain the validity of such statements and to avoid the metamorphosis of social knowledge into other theoretical constructs, this paper, based on referring to the ideal type of social knowledge, aims to describe and explain three processes whereby social knowledge is metamorphosed into theoretical dogmatism, theoretical alienation, and theoretical slavery. Introduction According to the viewpoint of some outstanding philosophers of social sciences and some salient sociologists of knowledge, social knowledge is a form of knowledge directed toward a historical and social context. Also, such knowledge has a special function in society (see: Soroush, 2005;Glover, ;Schutz, 1967Schutz, & 1980Berger and Luckmann, 1966;Braybrooke,1986;Hollis, 1994;Little, 1991;Rosenberg, 1995). Based on this point of view the ideal type of social knowledge is required to have an updated historical, sociological, and functional validity. Due to such characteristic of social knowledge, the main aim of this essay is to clarify and explain one the most important challenges in regard to the preserving of social knowledge validity. This challenge includes three processes through which the above-mentioned types of validity are undermined. These three processes lead social knowledge to transform into the other forms of theoretical constructs: Theoretical dogmatism, theoretical alienation, and theoretical slavery. Theoretical Dogmatism In general, it should be noted that social knowledge is closely situated in its historical time and therefore carries an aura of specific temporality. This means that the present knowledge is hardly applicable to a past historical period. Similarly, the past knowledge can hardly solve the problems and challenges in the present time. As Karl Mannheim contends in Ideology and Utopia, social knowledge is the product of a specific historical condition. Accordingly, any form of knowledge which is decontextualized from its historical condition and considered valid for other periods is most likely liable to turn into a type of theoretical construct known as "Dogmatism". It seems that describing and explaining this process sheds more light on the historicity of social knowledge. Social knowledge becomes transformed into dogmatism whenever it's cognitive and motivational basis changes in the course of history while the social knowledge itself remains unchanged and therefore becomes ossified. Substantiating this claim requires asking two important questions. Firstly, what is exactly meant by cognitive and motivational basis of social knowledge? Secondly, how is the mechanism of the relationship between the changes in those basic assumptions and the transformation of social knowledge into dogmatism? With regard to the first question, it must be noted that different human communities face various forms of issues, problems, challenges, and chances in their historical evolution. These conditions can be categorized into theoretical and practical levels. Among the many reactions to these conditions, the reactions of the thinkers and scientists of a society are most significant. They try to reflect and react to the conditions in a most efficient manner. 
Therefore, it can be maintained that the theoretical and practical conditions constitute the motivational foundation for producing a set of ideas, more accurately, social knowledge. The reality and reliability of such forms of social knowledge are closely related to the existing general human understanding and knowledge. Accordingly, this variable can be considered the cognitive basis for ideas and knowledge. It is clear by now that there are two variables in the production and formation of social knowledge, namely, types of problems, and the level of human knowledge. It should be also mentioned that there are occasions when ideas produced by thinkers become guidelines for social understanding and praxis by some social groups. The acceptance of these ideas is dependent upon certain cognitive and motivational assumptions too. In fact, these assumptions held by the society's thinkers to the production of ideas and then the same assumptions held by social groups to select and consume the recently produced ideas. In general, when some social groups accept these ideas in terms of their explanatory and normative aspects and use them as the blueprint for social praxis, those ideas turn into ideology. To put it more accurately, ideology is a type of idea or knowledge which works as the basis for social understanding and praxis by some social groups. Accordingly, it can be said that given the variables influencing ideas, ideology is the product of problems and a specific level of knowledge. Then from a logical perspective, the validity of ideology depends on the credibility of the basic assumptions. 2 Let us now turn to the second question, namely, the relationship between the changes in the basic assumptions and the transformation of knowledge into dogmatism. In this regard, it should be noted that theoretical and practical problems, as well as the level of human knowledge under the influence of different factors, are changed. Such changes are stronger and more extensive in the contemporary world. These changes create conditions which are by nature different from previous conditions. Therefore, the existing ideas fall short of dealing with the challenges of the new conditions. In such cases, a group of the society's thinkers embarks on producing new knowledge or tries to adjust and modify the existing knowledge. These theoretical, practical, and cognitive changes logically necessitate changes in the ideology which is based on previous conditions so that it can adapt itself to the new changes and reconstruct itself again. In other words, the proponents of a certain ideology need to update their configuration of social understanding and praxis in the light of new changes. In fact, by accepting these changes, they need to translate appropriate ideas from the realm of theory into the realm of ideology. Having done so, social knowledge and consequently ideology would be able to maintain their historical validity and functionality in the face of a new condition. Otherwise, knowledge and ideology would lose their logical validity and become dogmatism. In fact, dogmatism is the result of a condition in which some people present solutions which are either applicable only for problems in a different historical past or are hardly the best possible solutions for the new problems from the perspective of the evolution of human cognition and knowledge. 
In these cases, social knowledge loses its organic relationship with the changes in realities and cognitive conditions, and although it may carry the name of social knowledge, it is nothing but theoretical dogmatism. In light of the above questions, the demarcation line between social knowledge and dogmatism is clear by now. Accordingly, the basis of social knowledge is the theoretical and practical problems and the general level of human cognition and knowledge. However, dogmatism is rooted in the other variables which are discussed in the following. In fact, a study of the history of social knowledge reveals their origin and helps us to distinguish between social knowledge and theoretical dogmatism. 3 In other words in the case of dogmatism, knowledge, and ideology have been deprived of their logic and instead of having theoretical and practical efficiency for social understanding and praxis become leant on the other variables.4 Accordingly, it is necessary to protect social knowledge against dogmatism. This is only possible through a constant evaluation of the validity of social knowledge for other historical periods. Based on the mentioned points, it is useful to explain the emergence and formation of dogmatism in more details. Clearly, this can help us in finding ways to deal with the problem of dogmatism. It seems that there are two types of factors in the emergence of dogmatism, namely, objective factors and subjective ones. The first objective cause is the influence of dogmatists among the circle of social scientists and therefore the reproduction of dogmatist procedures under the title of knowledge and ideology. It should be added that perhaps one of the most important challenges for the ideologies is the infiltration of dogmatist views among the circle of ideologues. The influence of such people can make the ideology seem irrational through the detachment of ideology from its motivational and cognitive basis and therefore hinder the process of reformation. The second objective factor is the impact of power relations. With regard to the role of power relations in the creation of dogmatism, it should be noted that one of the causes behind the emergence of dogmatism includes the strong ties between knowledge and ideology in one hand, and different forms of social and political power of bearers of such knowledge and ideology on the other. It is obvious that all dominant ideologies make power, wealth and social status for some special groups. For most people, the proponents and followers of a certain ideology have the necessary requirements to be deemed as competent enough to possess higher social positions and power. However, when the motivational and cognitive foundation of an ideology changes and ideological transformation becomes a public demand, that is, the ideology of period one requires modifications and readjustments, the proponents of that ideology who used to enjoy certain prerogatives find their positions unstable and at risk. In fact, the legitimacy of their power is questioned in these cases. Therefore, they have three options. The first option is to completely leave their previous positions and be replaced by the harbingers of new ideology (Ideology two). This rarely happens because the proponents of ideology one are hardly willing to let go of their positions, that is, due to many reasons they do not voluntarily step aside from their ideological stances. 
Second, these people may join the followers of ideology two by recognizing the existing changes and therefore gain legitimacy even in the context of ideology two. This rarely happens given the fact that those in the positions of power and wealth do not have the sufficient time to become aware of the changes happening outside of the circles of power and wealth. The third option assumes that those in positions of power (material, social, and symbolic) can fight the ideological changes in order to maintain the legitimacy of their social positions in the previous ideological context. This latter option is usually opted for. However, since changes in the motivational and cognitive foundations are inevitable, this latter option turns their ideology into nothing but dogmatism. The last objective cause is the separation of ideology from its environment. There are cases where the rise of dogmatism is the result of the wide gap between the thinkers and ideologists of a society and the changes in the surrounding environment. It is clear that people adjust their actions and behaviors in terms of their relation to the environment in which they live. Accordingly, if they do not interpret correctly the changes in the environment, their reactions will not be based on the realities, and therefore might seem irrational. From this perspective, dogmatism emerges as the result of the insufficient knowledge of people with regard to their environment, as it is called as with regard to the motivational and cognitive changes of the social knowledge and ideology of their time. However, it should be emphasized that given the growing communication media in today's world few people fall victim to this variable and therefore it may seem as an overstatement to say that dogmatism is the result of this factor in some wide scale. As for the subjective causes of dogmatism, the first factor is the lack of rational-scientific analysis of the accuracy and efficiency of social knowledge and ideology. As iterated earlier, transformation in the motivational and cognitive foundation of knowledge and ideology logically leads to the transformation of knowledge and ideology themselves. Accordingly, having a rational mindset to the recognition of this necessity is highly significant. However, there are people who contend that ideology can be considered rationally, scientifically, or logically. In this view, it is considered as an eternal truth. Such an understanding of ideology severs the relationship between ideology and its motivational and cognitive foundation. Ideology becomes sacred and instead of adapting the ideology to the needs and demands of the society, the latter is curtailed or even sacrificed in the name of ideology. This is exactly what is meant by the metamorphosis of social knowledge into dogmatism. The above problem indicates the existence of a set of wrong assumptions about ideology and the consequent transformation of ideology into dogmatism. This is even more evident in the case of religious ideologies because religious ideologies are naturally based on sacred texts. Accordingly, including the parameter of time in understanding and interpreting these texts may be labeled as distortion, misinterpretation, and deconstruction, evoking harsh reactions. However, different studies strongly support the idea that religious ideologies and on a broader level all religious texts are interpreted and understood in relation to our motivational and cognitive assumptions. 
Therefore, if these assumptions change, religious ideology will change too. Resisting the openness to new interpretations is against the teachings of religious belief itself (Soroush, 1996 b). The second subjective cause addresses the issue of the mental tendency for stability and resistance to change. The subjective willingness to maintain the stability of knowledge and ideology may lead to dogmatism. This is usually because of the fear of making mistakes. With regard to the way in which this variable may cause dogmatism it should be noted that whenever an ideological agent reaches the conclusion that his/her ideological stance needs some adjustments due to some changes in its motivational and cognitive principles, certain emotional and mental pressures will be imposed on the agent including his/her fear of mistake. The agent might think that if he/she the agent tries to change his/her ideological stance, it is possible to make a mistake and consequently there will follow some irreparable damages. Therefore, subjective dilemmas and doubts will undermine the determination of the agent in changing the ideological stance he/she used to believe in. ideological change, these people need to reconsider such a subjective inclination for stability. It is axiomatic that remaining committed to the principle of stability will prevent ideological readjustment and therefore will lead to dogmatism. Readjusting and updating one's ideology requires not only brave decision, toleration of negative reactions, keeping one's mind open to the true information, fighting one's subjective and personal tendencies for maintaining the ideological benefits and questioning the legitimacy of an ideology, it also requires overcoming the fear of making mistakes. However, this should not mean the impulsive change of ideology without considering the correctness of one's decision. Rather, it means that when after investigating the motivational and cognitive changes of an ideology, one should not hesitate to change his/her previous stance because of conservatism or fear of making mistakes. The damages caused by dogmatism are much more serious than the possible damages caused by one's fear of making mistakes. To prevent dogmatism, one needs to overcome the fear and accept the responsibilities of making a decision to initiate ideological reforms. So far, we have sufficiently discussed the process whereby social knowledge is turned into dogmatism. In the following section, the process in which social knowledge is transformed into theoretical alienation will be taken into consideration. Theoretical alienation One of the characteristics of social knowledge is its close relationship with the sociological condition. Taking into account of the correspondence of social knowledge with a local situation and a specific group, that is, the little chance of universal social knowledge is important in analyzing and understanding this type of knowledge. Accordingly, sociologists of knowledge usually emphasize the links between social knowledge and a specific social condition. Emile Durkheim has defined knowledge as the reflection of social conditions (Hamilton, 1998;Kafi, 2004: 248-249). Sociologists like Max Scheler have made a distinction between the form and content of knowledge and have argued that the construction of form is influenced by social variables (Alizadeh, 2004: 187-200). 
Karl Mannheim has argued that one of the conditions for the validity of social knowledge is its symmetry with the socio-cultural context in which it is produced (Azhdarizadeh, 2004: 211-229). As mentioned earlier, it seems that the relationship between social knowledge and its sociological context can be considered from two aspects. The first aspect concerns the correspondence of knowledge with the place in which it is produced. The second aspect concerns the relationship between social knowledge and the groups about which the knowledge is produced. According to this view, a specific type of social knowledge may be applicable to a certain place or group while it falls short in explaining the conditions of another place or group. Therefore, in addition to historical limitations, social knowledge has sociological limitations and only applies to a specific condition. This means that we need to evaluate the validity of social knowledge in terms of its sociological applicability. As the above discussion implies, the weakening of the sociological validity of social knowledge leads to the problem of what is known as theoretical alienation. To define theoretical alienation, it can be said that it happens when the motivational and cognitive basis of social knowledge in terms of sociological context changes while the knowledge itself does not readjust itself in relation to these changes and tries to remain attached to the previous social condition. Given the fact that we have already discussed the notion of motivational and cognitive basis of knowledge, a description of the process of the transformation of social knowledge into theoretical alienation would suffice in this case. It should be mentioned that when the theoretical, practical, and cognitive problems of human life change and the existing ideas fall short in coping with the new condition, some thinkers propose the adaptation of solutions from similar contexts as a way of overcoming the problems and issues of their own context. Influenced by such thinkers of the society, some people may select what they have adapted from another context, as an updated or even a new ideology. Consequently, there will be a favorable, low-cost, and short-term transformation in knowledge. However, since there are always major differences between societies and the social conditions of groups, such an adaptation is hardly successful. It may only be successful when the similarities between the local and communal contexts of the societies are examined in details. Therefore, if done without sufficient study, there will emerge as a consequence a type of knowledge which does not have any exact correspondence with the problems, issues, and cognitive difficulties of the new place or group. In short, it will not be able to solve the existing problems. This is the reason why instead of using the term "social knowledge" the concept of "theoretical alienation" can be used. In this view, alienation is the result of the decision of some people who think that appropriating a solution belonging to a different social place or group, could be able to solve the problems and issues of their own. It is clear that similar to the process whereby knowledge is transformed into dogmatism, also in this case knowledge is severed from the changes in problems and evolution of cognition. Although it may still be considered as "knowledge" it is in reality a form of theoretical alienation. Similar to dogmatism, the basis of alienation is different from the basis of social knowledge. 
Theoretical and practical problems and the level of human cognition are the origin of knowledge. However, alienation is rooted in something else which will be discussed in the following. With regard to the factors influencing the transformation of social knowledge into theoretical alienation, it should be mentioned that they are remarkably similar to those factors involved in the transformation of social knowledge into dogmatism. These factors can be divided into two groups, namely, objective, and subjective. Similar to the causes of dogmatism, the objective causes of theoretical alienation consist of three variables. First, the influence of alienated thinkers and intellectuals into the circle of social thinkers and ideologists and the consequent reproduction of alienated constructs in the form of knowledge and ideology. As explained in the discussion of dogmatism, welcoming those who accept a certain knowledge or ideology without a rational or logical reason can have dire consequences. It should be highly emphasized that the infiltration and growing influence of such people may make knowledge and ideology digressed from its rational and logical course. In fact, when the link between the motivational and cognitive basis and knowledge and ideology is severed, there will be little hope for reformation. The second factor concerns the influence of power relations. Regarding the role of power in the creation of knowledge or theoretical alienation, it should be noted that a certain group of intellectuals, social theorists, and ideologists who are usually not in positions of power may assume that by replacing their local and communal ideology with the popular ideologies from around the world, they can dispossess power from their opponents and possess it themselves. In fact, this group adopts the line of thought that considers dominant global discourses as a way of providing legitimacy and social power for themselves. Doing this for possessing power separates the relationship between ideology from its motivational and cognitive basis. The kind of knowledge which is produced as a consequence of this process is not able to bring about ideological change and instead turns knowledge into theoretical alienation. The third objective cause concerns the separation of ideologists from their social environment. Alienation happens when due to the lack of accurate understanding of the social groups the ideologists and intellectuals are not able to effectively connect themselves to the society. Therefore, they are not able to make an appropriate reaction to the incumbent changes and by suggesting solutions which are cut from the motivational and cognitive basis of their social condition, they produce ideological alienation. However, as mentioned earlier, given the fact that information as become easily accessible in today's world, such variable does not have far-reaching influence. With regard to the subjective causes of alienation, we could focus on two factors. The First one is the lack of rational-scientific analysis of the accuracy and efficiency of social knowledge and ideology. Whenever there is not sufficient critical examination of the applicability of some adapted ideas, or they are not evaluated with empirical, historical, and interpretive methods, one logically can expect alienation. To put it differently, such ideas should be evaluated in terms of their relation to a specific place and social groups so that their appropriateness and applicability becomes clear. 
However, the problem is that instead of choosing this critical approach, some people ignore the differences for the sake of similarities. This methodological error gradually leads to the production of theoretical alienation instead of effective knowledge and ideology. Similar to dogmatism, the second subjective cause is the tendency for stability and resistance to change. Occasionally, the tendency to import ideas belonging to another place and social groups is the result of the fear of making mistakes. How this variable creates alienation is almost clear. If the ideologists are aware of the differences between their own context and the context from which their ideas are adapted, but believe that modifying the adapted ideas may distort its totality, they would eschew from changing them and therefore alienation will most likely be produced. Accordingly, having the courage to think critically and to have the determination for bringing innovation, when the differences between two social contexts are clear, is an important factor for being able to effectively and appropriately deal with the problems of one's own social conditions. Theoretical slavery Social knowledge, like other forms of human knowledge, is ultimately at the service of specific goals and objectives. There are certain ideologies and social ideas which are primarily produced for achieving specific goals. However, it should be pointed out that the functionality and purposiveness are more significant in social knowledge because it has very strong influence on the social theoretical frames. When the goal and duty of knowledge are not authentic at the time of producing, the possibility of using this knowledge for achieving authentic goals and functions is almost ruled out.5 It should be noted that according to some point of views the authentic function and goal of social knowledge is to help the human being to reach the perfection via solving theoretical and practical problems. If social knowledge lacks this aspect, it does not have one of the important factors for the validity of social knowledge. According to Max Scheler knowledge has three parts. First, it is the knowledge of control and achievement of goals and objectives. Second, it is knowledge of essence and culture. Third, it is the knowledge of the reality of salvation. The first part of knowledge is found in science, the second in philosophy and metaphysics and the third in religion. Scheler believes that there is a hierarchy in the types of knowledge. The knowledge of salvation is at the top, and then comes knowledge of essence, and finally knowledge of control. In this view, each type of knowledge serves a higher order of knowledge (Alizadeh, 2004 a: 183-184). Based on Scheler's idea it can be argued that the ultimate purpose and authentic function of knowledge are nothing but human salvation. Accordingly, such purposiveness and functionality can be used as significant criteria for evaluating the accuracy of social knowledge. However, it is clear that in many instances the purpose of social knowledge is determined by power relations. Therefore, more often than not, social knowledge is governed by the interests of people in positions of power. The prevalence of this phenomenon has convinced thinkers like Michel Foucault to conclude that knowledge is basically at the service of power (Alizadeh, 2004 b: 322-329). 
Given this important phenomenon, it is necessary to discuss the ways in which the functions of knowledge change and how it is possible to prevent the equation of knowledge and power. As mentioned earlier, social knowledge is the product of some motivational factors and cognitive assumptions. The content of these factors and assumptions has shown two different paths during history. On the first path, certain issues and problems have drawn the attention of social thinkers whose solutions would either satisfy their theoretical concerns or would contribute to the improvement of public welfare. Contrary to this, on the second path, solving certain issues and problems would, in fact, serve the interests of people in the position of power. In fact, most of the social thinkers have often faced this dilemma. The irony is that concerning oneself with the public issues and problems would hardly result in any real rewards for the social thinkers. On the contrary, serving the power interests would always end up in considerable privileges and rewards. True social thinker would choose the first path, that is, dealing with the problems of the people. For them, social thinking should always maintain its legitimacy. However, there are also social thinkers who have served power. To be able to have a logical evaluation of the above dilemma it can be said that what the first group of thinkers do has stronger basis in the logic of social thinking. For this group, the condition for the validity and truthfulness of knowledge is its legitimate function. In this view, the ultimate objective and function of knowledge and therefore its validity depend on serving human values (Habermas, 1987quoted in Soroush, 2005. In contrast to this, the second group of thinkers who serve the interests of power relations, distort the logic of knowledge in terms of its objective and function and therefore imprison thought in the cells of power. The above phenomenon can be referred to as "theoretical slavery". To illustrate the working of this process, it is to be said that the motivational and cognitive basis of knowledge. Idealistically, is intended to bring about human salvation and perfection by producing knowledge, including social knowledge. This knowledge is then transformed into ideology through social acceptance. However, in the course of time, in addition to the ideal motivational and cognitive basis of knowledge, other types of motivational and cognitive bases emerge which are at the service of power. With the appearance of this new phenomenon, social thinkers diverge into two groups. The first group remains committed to the true objective of knowledge, namely, human salvation. The second group chooses to serve the interest of people in power and help produce what is known as "instrumental knowledge". This latter type of knowledge is used as an instrument by the people in power to reach their own goals. The social thinkers may gain certain privileges by doing so; however, they pay a high price by imprisoning their thoughts and minds in the web of power relations. In such conditions, theoretical slavery takes the place of an ideology based on true knowledge. While carrying the name of social knowledge, this type of knowledge is nothing but theoretical slavery in the interests of people in power. It may be useful to discuss the causes of process whereby social knowledge is transformed into theoretical slavery. The factors causing this problem can be divided into two groups, namely, objective and subjective. 
The objective causes of theoretical slavery consist of three important variables. Similar to the two other explained processes, the first variable concerns the influence of theoretical slaves into the circle of social thinkers and the consequent reproduction of theoretical slavery in the form of knowledge and ideology. Theoretical slaves accept a certain type of knowledge and ideology. However, their approach to knowledge and ideology is instrumental and irrational. For them, knowledge and ideology should serve power and can be used as an instrument to gain and exert dominance over others. A disproportionate level of such influence may marginalize true knowledge and ideology. In such a context, theoretical slavery is camouflaged as true knowledge and ideology. The second variable concerns the influence of the relationship between power and knowledge. Those in power have always tried to dominate the minds of social thinkers and use them for instrumental purposes. This is intensified by the material and economic needs of the thinkers. Those in power can enslave the social thinkers through economic means. There are also social thinkers who have ambitions to gain power themselves. However, it should be noted that in most cases these social thinkers are not only enslaved themselves by the instruments of power, but they also deprive the other people from the possibility of reaching true knowledge. Because under the impact of their cooperation with the people in power, such people get an opportunity to suppress true thinkers and then introduce the instrumental knowledge as a true knowledge. Therefore, in this condition, the only knowledge which will be produced is instrumental. The third objective cause refers to the contemporary socialization of the social thinkers in terms of the ultimate aim of the production knowledge. Nowadays, universities around the world hardly ever address the question of the ultimate aim of knowledge. Most people are educated with the notion that knowledge is only limited to the understanding of the phenomenon and the reality of the world. In this view, the goal, use and function of knowledge do not matter and is only a personal matter. Therefore, there is no systematic education about the use and function of scientific discoveries which lead to arbitrary appropriation of knowledge by everyone. In this condition, knowledge can be purchased by those in power and there seems to be no problem in the enslavement of knowledge to power. With regard to the subjective causes of the emergence of theoretical slavery, we can mention two things. The first cause concerns the prevalence of irrational evaluation of the ultimate goal of knowledge. This phenomenon is the result of illogical socialization of the modern thinkers which has been already discussed. However, it is not limited to socialization process. The point is that when we rationally accept that the ultimate goal of knowledge is human salvation and perfection, theoretical slavery becomes a kind of digression from the principles of rational thought. Accordingly, following rational thinking is the precondition for resisting slavery by power instruments. Therefore, one of the important factors in the emergence of theoretical slavery is the lack of rational evaluation of the ultimate goal of knowledge. The second subjective cause in the emergence of theoretical slavery is the fear of power. This is especially evident in totalitarian societies. 
In such conditions, social knowledge is not allowed to go beyond the limits imposed by the instruments of power with regard to official forms of knowledge. Therefore, autonomous and free thinkers may be repressed. Fear of repression and persecution demotivates most thinkers. It is clear that such conditions only lead to the production of instrumental knowledge and theoretical slavery. Conclusion The aim of the present essay was to briefly discuss and explain the processes whereby social knowledge is turned into negative forms. This does not mean that all aspects of the issue have been examined. Rather, an introductory remark was intended to initiate further researches. Since social knowledge is historical and situated in a specific socio-cultural context, the emphasis of the present essay was on the necessity of continuous evaluation of the historical, sociological and functional validity of social knowledge. It was argued that lack of attention to this issue may lead to processes whereby knowledge is transformed into theoretical dogmatism, theoretical alienation, and theoretical slavery. As explained, there are various subjective and objective causes which lead the social knowledge to transform into mentioned negative forms. So, these causes necessarily should be controlled in order to reach a valid social knowledge.
AIM To determine whether caregivers of Alzheimer's disease (AD) patients on donepezil treatment are more satisfied with the orally disintegrating tablet (ODT) formulation than with the film-coated tablets. PATIENTS AND METHODS Multicenter, cross-sectional study of patients with probable AD by DSM-IV or NINCDS-ADRDA criteria, on monotherapy with donepezil, either ODT or film-coated tablets. Satisfaction with treatment was assessed with the caregiver self-administered generic Treatment Satisfaction with Medicines Questionnaire (SATMED-Q) (range: 0, no satisfaction, to 100, maximal satisfaction), in total and in six dimensions: undesirable effects, efficacy, medical care, medication ease and convenience, medication impact on daily activities, and overall satisfaction. RESULTS 546 patients were enrolled (9.6% institutionalized); 64.8% were women; mean age was 78.2 +/- 6.5 years; disease evolution was 22.5 +/- 24.6 months; Mini-Mental State Examination (MMSE) mean score: 18.5 +/- 5; 67.9% were on film-coated tablets and 32.1% on ODT. After adjusting for MMSE and time on treatment, caregivers of patients on ODT showed a significantly higher SATMED-Q total score (74.5 +/- 11.8 vs. 70.4 +/- 12.3; p < 0.0004) as well as higher scores for medication ease and convenience (84.9 +/- 16.4 vs. 79.8 +/- 17.6; p = 0.0059), impact of medication on daily activities (50.2 +/- 22.8 vs. 43.7 +/- 25.5; p = 0.0006), and satisfaction with medical care (79.4 +/- 19.5 vs. 75.6 +/- 21.8; p = 0.04894). 91.6% of caregivers of patients on ODT (versus 82.9% of those on film-coated tablets; p = 0.023) stated that taking the medication was easy for their relatives. CONCLUSIONS Results show that caregivers of AD patients on donepezil treatment are more satisfied with ODT than with film-coated tablets, especially due to its greater ease of use.
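The adjusted comparison described above (ODT versus film-coated tablets, controlling for MMSE and time on treatment) corresponds to a standard covariate-adjusted linear model. The sketch below shows that kind of analysis with a hypothetical data frame in place of the study data; the column names and values are assumptions for illustration only.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical example records; the real study data are not reproduced here.
df = pd.DataFrame({
    "satmedq_total": [74, 71, 69, 77, 68, 72, 75, 70],
    "formulation":   ["ODT", "FCT", "FCT", "ODT", "FCT", "ODT", "ODT", "FCT"],
    "mmse":          [19, 17, 20, 18, 16, 21, 15, 18],
    "months_on_tx":  [12, 30, 24, 8, 36, 10, 20, 28],
})

# Covariate-adjusted comparison: total satisfaction score regressed on
# formulation, adjusting for MMSE and time on treatment.
model = smf.ols("satmedq_total ~ C(formulation) + mmse + months_on_tx", data=df).fit()
print(model.summary())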
RISK DISCLOSURE ON ACCOUNTING AND FINANCIAL REPORTING STANDARDS: CONTENT ANALYSIS OF THE BORSA ISTANBUL (BIST) MANUFACTURING SECTOR The information and data produced by the accounting system reflect the changes in the economy and the sector, as well as the effects of the decisions taken within the company on the business structure and operating results. The risks that companies face in the market are important for users of financial information. The standards established for international accounting practice in this regard also state that the information disclosed to the public should include the risks to which companies are exposed. The aim of this study is to analyze the risk disclosures of companies that prepare their financial statements according to international accounting standards in Turkey and to shed light on the relationship between accounting data and risk. In this context, the risk disclosures of companies in the BIST manufacturing sector for 2020 were analyzed in terms of content. As a result of the analysis, data on the derivatives covered by IFRS 9, the status of hedge accounting in the financial statements, and the risk types arising from financial instruments under IFRS 7 are presented. Accordingly, it was determined that 32% of the companies use derivative instruments and 20% apply hedge accounting. In addition, the qualitative and quantitative disclosures of companies regarding credit, liquidity, market and other risks for 2020 were also analyzed. In the footnote disclosures on risks arising from financial instruments, the largest amount of data related to foreign currency risk. It was observed that 50% of the companies that make credit risk disclosures do not provide a maturity analysis. Finally, the explanations on interest rate risk, other price risk and capital risk were analyzed in terms of content.
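The percentages reported above follow from a simple content-analysis tally over coded financial statements. A minimal sketch of such a tally is given below, using a hypothetical coding table rather than the study's dataset; the column names and values are assumptions for illustration.

import pandas as pd

# Hypothetical coding table: one row per company, boolean flags recorded
# while reading the 2020 financial statement footnotes (illustrative only).
coded = pd.DataFrame({
    "company":           ["A", "B", "C", "D", "E"],
    "uses_derivatives":  [True, False, True, False, False],
    "hedge_accounting":  [True, False, False, False, False],
    "credit_risk_note":  [True, True, False, True, True],
    "maturity_analysis": [True, False, False, True, False],
})

# Share of companies using derivatives and applying hedge accounting.
print(f"Derivative users: {coded['uses_derivatives'].mean():.0%}")
print(f"Hedge accounting: {coded['hedge_accounting'].mean():.0%}")

# Among companies with a credit risk disclosure, share lacking a maturity analysis.
credit = coded[coded["credit_risk_note"]]
print(f"Credit-risk disclosers without maturity analysis: {(~credit['maturity_analysis']).mean():.0%}")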
Impact of an EMR-Based Daily Patient Update Letter on Communication and Parent Engagement in a Neonatal Intensive Care Unit. OBJECTIVE To evaluate the impact of using electronic medical record (EMR) data in the form of a daily patient update letter on communication and parent engagement in a level II neonatal intensive care unit (NICU). STUDY DESIGN Parents of babies in a level II NICU were surveyed before and after the introduction of an EMR-generated daily patient update letter, Your Baby's Daily Update (YBDU). RESULTS Following the introduction of the EMR-generated daily patient update letter, 89% of families reported using YBDU as an information source; 83% of these families found it "very useful", and 96% of them responded that they "always" liked receiving it. Rates of receiving information from the attending physician were not statistically significantly different pre- and post-implementation, 81% and 78%, respectively (p = 1). Though there was no statistically significant improvement in parents' knowledge of individual items regarding the care of their babies, a trend towards statistical significance existed for several items (p <.1), and parents reported feeling more competent to manage information related to the health status of their babies (p =.039). CONCLUSION Implementation of an EMR-generated daily patient update letter is feasible, resulted in a trend towards improved communication, and improved at least one aspect of parent engagement-perceived competence to manage information in the NICU.
The Determinants of Knowability Many propositions are not known to be true or false, and many phenomena are not understood. What determines which propositions and phenomena are perceived as knowable or unknowable? We tested whether factors related to scientific methodology (a proposition's reducibility and falsifiability), its intrinsic metaphysics (the materiality of the phenomenon and its scope of applicability), and its relation to other knowledge (its centrality to one's other beliefs and values) influence knowability. Across a wide range of naturalistic scientific and pseudoscientific phenomena (Studies 1 and 2), as well as artificial stimuli (Study 3), we found that reducibility and falsifiability have strong direct effects on knowability, that materiality and scope have strong indirect effects (via reducibility and falsifiability), and that belief and value centrality have inconsistent and weak effects on knowability. We conclude that people evaluate the knowability of propositions consistently with principles proposed by epistemologists and practicing scientists.
Larynx, hypopharynx and mandible injury due to external penetrating neck injury. Esophageal and laryngeal injuries due to ballistic trauma are seldom encountered. Ballistic external neck traumas generally result in death. The incidence of external penetrating neck injuries may vary between 1 in 5,000 and 1 in 137,000 patients among emergency service referrals. Vascular injuries, esophagus-hypopharynx perforations, laryngotracheal injuries, bony fractures, and segmentations may be encountered in external neck traumas. Here we report a 27-year-old male patient who was referred to our emergency department and presented with a hyoid bone fracture, multiple mandibular fractures, and hypopharynx perforation due to a ballistic external neck injury.
Tough Choices: Exploring Decision-Making for Pregnancy Intentions and Prevention Among Girls in the Justice System Despite California's declining teen pregnancy rate, teens in the juvenile justice system have higher rates than their nonincarcerated counterparts. This study explored domains that may shape decision-making for pregnancy prevention in this group. Twenty purposively selected female teens with a recent incarceration participated in hour-long semistructured interviews about their future plans, social networks, access to reproductive health services, and sexual behavior. Transcripts revealed that, contrary to the literature, desire for unconditional love and lack of access to family planning services did not mediate decision-making. Lack of future planning, poor social support, and limited social mobility shaped youths' decisions to use contraceptives. Understanding this group's social location and the domains that inform decision-making for pregnancy intentions and prevention provides clues to help programs predict and serve this population's needs.
The HgFET: a new characterization tool for SOI silicon film properties Summary form only given. SOI starting-wafer characterization relies heavily on non-destructive measurements such as thickness, uniformity, and lifetime. Leakage current through the BOX is sometimes measured and used to determine an electrical defect density. The electrical quality of the Si film is less well known. One device that can be used to assess Si film properties is the "pseudo-FET", in which point contacts made to the film act as the source and drain while the substrate and BOX act as the gate electrode and oxide. However, the point contacts act as Schottky barriers and the characteristics are pressure sensitive, somewhat limiting the properties that can be measured. A new version of the pseudo-FET called the HgFET is described here, in which a combination of broad-area Hg electrodes and a special surface treatment is used to overcome the limitations of point contacts. The HgFET can be used for quality control of the starting Si film, yielding the electron and hole mobilities, the BOX charge, the interface state density, the doping level, the hole and electron transconductances, the flatband voltage, the linear and saturated threshold voltages, and the mobility versus field.
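As an illustration of the kind of parameter extraction such a device supports, the sketch below pulls a low-field electron mobility and a linear threshold voltage out of a hypothetical linear-regime transfer curve, using the textbook pseudo-MOSFET relation I_D = f_g * mu * C_box * (V_G - V_T) * V_D. The geometry factor f_g, the oxide thickness, and all numerical values are assumptions made for the example, not data from the paper.

import numpy as np

# Hypothetical linear-regime transfer curve of a pseudo-MOSFET-style device
# (all values illustrative; not measurements from the paper).
f_g = 0.75                        # assumed geometry factor for the electrode layout
c_box = 3.45e-13 / 400e-7         # BOX capacitance per area, F/cm^2 (~400 nm SiO2)
v_d = 0.1                         # drain bias in the linear regime, V
v_g = np.linspace(0.0, 10.0, 101)
mu_true, v_t_true = 550.0, 1.2    # "unknowns" used only to synthesize the curve
i_d = f_g * mu_true * c_box * np.clip(v_g - v_t_true, 0.0, None) * v_d  # A per square

# Extraction: the peak transconductance gives the mobility, and extrapolating
# the above-threshold linear region of I_D to zero current gives V_T.
g_m = np.gradient(i_d, v_g)
mu_extracted = g_m.max() / (f_g * c_box * v_d)
on_region = v_g > v_t_true + 1.0                      # points well above threshold
slope, intercept = np.polyfit(v_g[on_region], i_d[on_region], 1)
v_t_extracted = -intercept / slope

print(f"mobility ~ {mu_extracted:.0f} cm^2/Vs, threshold ~ {v_t_extracted:.2f} V")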
Heavy MSSM Higgs Interpretation of the LHC Run I Data We review that the heavy CP-even MSSM Higgs boson is still a viable candidate to explain the Higgs signal at 125 GeV. This is possible in a highly constrained parameter region that will be probed by LHC searches for the CP-odd Higgs boson and the charged Higgs boson in the near future. We briefly discuss the new benchmark scenarios that can be employed to maximize the sensitivity of the experimental analysis to this interpretation. Introduction The discovery of a SM-like Higgs boson in Run I of the Large Hadron Collider (LHC) marks a milestone in the exploration of electroweak symmetry breaking (EWSB). Within experimental and theoretical uncertainties, the properties of the new particle are compatible with the Higgs boson of the Standard Model (SM). Looking beyond the SM, the light CP-even Higgs boson of the Minimal Supersymmetric Standard Model (MSSM) is also a perfect candidate, as it possesses SM Higgs-like properties over a significant part of the model parameter space, with only small deviations from the SM in the Higgs production and decay rates. Here we review that the heavy CP-even Higgs boson of the MSSM is likewise a viable candidate to explain the observed signal at 125 GeV (the "heavy Higgs case", which has been discussed in Refs.). At lowest order, the Higgs sector of the MSSM can be fully specified in terms of the W and Z boson masses, M_W and M_Z, the CP-odd Higgs boson mass, M_A, and tan β ≡ v_2/v_1, the ratio of the two neutral Higgs vacuum expectation values. However, higher-order corrections are crucial for a precise prediction of the MSSM Higgs boson properties and introduce dependences on other model parameters; see e.g. Refs. for reviews. In the heavy Higgs case all five MSSM Higgs bosons are relatively light, and in particular the lightest CP-even Higgs boson has a mass (substantially) smaller than 125 GeV with suppressed couplings to gauge bosons. We review whether the heavy Higgs case in the MSSM can still provide a good theoretical description of the current experimental data, and which parts of the MSSM parameter space are favored. We also discuss the newly defined benchmark scenarios in which this possibility is realized, in agreement with all current Higgs constraints. Theoretical basis In the supersymmetric extension of the SM, an even number of Higgs multiplets, consisting of pairs of Higgs doublets with opposite hypercharge, is required to avoid anomalies due to the supersymmetric Higgsino partners. Consequently the MSSM employs two Higgs doublets, denoted by H_1 and H_2, with hypercharges −1 and +1, respectively. After minimizing the scalar potential, the neutral components of H_1 and H_2 acquire vacuum expectation values (vevs), v_1 and v_2; without loss of generality, the vevs can be assumed to be real and non-negative. The two-doublet Higgs sector gives rise to five physical Higgs states. Neglecting CP-violating phases, the mass eigenstates correspond to the neutral CP-even Higgs bosons h, H (with M_h < M_H), the CP-odd A, and the charged Higgs pair H±. At lowest order, the MSSM Higgs sector is fully described by M_Z and two MSSM parameters, conveniently chosen as M_A and tan β. Higher-order corrections to the Higgs masses are known to be sizable and must be included in order to be consistent with the observed Higgs signal at 125 GeV.
In order to shift the mass of h up to 125 GeV, large radiative corrections are necessary, which require a large splitting in the stop sector and/or heavy stops. The stop (sbottom) sector is governed by the soft SUSY-breaking mass parameters M_t̃L and M_t̃R (M_b̃L and M_b̃R), where SU(2) gauge invariance requires M_t̃L = M_b̃L, the trilinear coupling A_t (A_b), and the Higgsino mass parameter μ. The "heavy Higgs case", i.e. the case in which the heavy CP-even Higgs boson gives rise to the signal observed at 125 GeV, can only be realized in the alignment-without-decoupling limit. In the so-called Higgs basis (see Ref. for details and citations), the scalar Higgs potential can be expressed in terms of the Higgs-basis fields H_1 and H_2, with its most important terms made explicit. The quartic couplings Z_1, Z_5 and Z_6 are linear combinations of the quartic couplings that appear in the MSSM Higgs potential expressed in terms of H_1 and H_2. The mass matrix of the neutral CP-even Higgs bosons is then given by Eq. (2.3). The alignment-without-decoupling limit is reached in the "heavy Higgs case". The possibility of alignment without decoupling has been analyzed in detail in Refs. (see also the "τ-phobic" benchmark scenario in Ref.). It was pointed out that exact alignment via |Z_6| ≪ 1 can only happen through an accidental cancellation of the tree-level terms with contributions arising at the one-loop level (or higher). Parameter scan and observables The results shown below have been obtained by scanning the MSSM parameter space. To achieve a good sampling of the full MSSM parameter space with O(10^7) points, we restrict ourselves to the eight MSSM parameters, called the pMSSM8, most relevant for the phenomenology of the Higgs sector. Here μ denotes the Higgs mixing parameter, M_l̃3 (M_l̃1,2) is the diagonal soft SUSY-breaking parameter for the scalar leptons of the third (second and first) generation, and M_2 denotes the SU(2) gaugino soft SUSY-breaking parameter. The scan assumes furthermore that the third-generation squark and slepton parameters are universal. The remaining MSSM parameters are fixed; the high values for the squark and gluino mass parameters, which have a minor impact on the Higgs sector, are chosen in order to be in agreement with the limits from direct SUSY searches. The parameter space is scanned with uniformly distributed random values of the eight input parameters over the ranges given in Tab. 1. We calculate the SUSY particle spectrum and the MSSM Higgs masses using FeynHiggs (version 2.11.2), and estimate the remaining theoretical uncertainty (e.g. from unknown higher-order corrections) in the Higgs mass calculation to be 3 GeV. Following Refs., we demand that all points fulfill a Z-matrix criterion, (|Z^2L_21| − |Z^1L_21|)/|Z^1L_21| < 0.25, in order to ensure a reliable and stable perturbative behavior in the calculation of propagator-type contributions in the MSSM Higgs sector. The Z-matrix definition and details can be found in Ref. The observables included in the fit are the Higgs-boson mass and the Higgs signal rates (evaluated with HiggsSignals). The total χ² is evaluated as described in Ref., where experimental measurements are denoted with a hat. Results for the "heavy Higgs case" Based on the χ² evaluation described above, the best-fit point, shown as a star below, and the preferred parameter regions are derived. Points with Δχ²_H < 2.30 (5.99) are highlighted in red (yellow), corresponding to points in a two-dimensional 68% (95%) C.L.
region in the Gaussian limit. The best-fit point has a χ²/dof of 73.7/85, corresponding to a p-value of 0.87, i.e. the heavy Higgs case presents an excellent fit to the experimental data. In Fig. 1 we review the correlations among the heavy Higgs signal rates R^{P(H)}_{XX} defined in Eq. (4.1). Here XX = VV, γγ, bb, ττ (with V = W±, Z) denotes the final state of the Higgs decay and P(H) denotes the Higgs production mode. It can be seen that the heavy Higgs case can reproduce the SM case (R^{P(H)}_{XX} = 1), but also allows for some spread in the individual rates. Figure 1: Correlations between signal rates for the heavy Higgs case. The best-fit point is shown as a black star; points with Δχ²_H < 2.3 are shown in red and points with Δχ²_H < 5.99 in yellow. The MSSM parameter space for the heavy Higgs scenario is shown in Fig. 2. The left plot indicates the preferred regions in the M_A-tan β plane, where one can see that 140 GeV ≲ M_A ≲ 185 GeV must be fulfilled, while tan β ranges between ~6 and ~11. The right plot shows the preferred regions in the X_t/M_S-m_t̃1 plane. Here the heavy Higgs case makes a clear prediction, with 300 GeV ≲ m_t̃1 ≲ 650 GeV and X_t/M_S ≈ −1.5. Some properties of the light CP-even Higgs boson are shown in Fig. 3. The left plot shows the light Higgs boson coupling to massive gauge bosons relative to the SM value. One can see that the coupling squared is suppressed by a factor of 1000 or more, rendering its discovery via e+e− → Z* → Zh at LEP impossible. The right plot gives BR(H → hh) for M_h ≲ M_H/2. Here it is shown that the BR does not exceed 20%, and thus does not distort the coupling measurements of the heavy Higgs at ~125 GeV too much. Updated benchmark scenarios In Ref. an updated set of benchmarks for the heavy Higgs case was presented, superseding the experimentally excluded low-M_H scenario. The parameters of the three new benchmark scenarios are given in Tab. 2. The low-M_H^alt− (low-M_H^alt+) scenario is defined in the μ-tan β plane with M_H± < (>) m_t, while the low-M_H^alt v scenario has a fixed μ and is defined in the M_H±-tan β plane. The experimentally allowed parameter space in the three benchmark scenarios is shown in Fig. 4. The red, orange and blue regions are disfavoured at the 95% C.L. by LEP light Higgs h searches, LHC H/A → τ+τ− searches and LHC t → H+b → (τν)b searches, respectively. The green area indicates parameter regions that are compatible with the Higgs signal (at ~95% C.L., see Ref.). While being "squeezed" by the different searches, Fig. 4 shows that the heavy Higgs case remains a valid option, with the interesting feature of a light CP-even Higgs below 125 GeV. We hope that the new benchmark scenarios facilitate the search for these light Higgs bosons as well as for the heavier, not yet discovered Higgs bosons in Run II. Conclusions We have briefly reviewed the case that the Higgs boson observed at ~125 GeV is the heavy CP-even Higgs boson of the MSSM, as recently analyzed in Ref. The analysis uses an eight-dimensional MSSM parameter scan to find the regions in the parameter space that best fit the experimental data. It was found that the rates of the heavy CP-even Higgs boson are close to the SM rates, but can still differ by 20% or more while yielding a good fit. Parameters such as M_A, tan β or m_t̃1 are confined to relatively small intervals, making clear predictions for Higgs and SUSY searches.
The light CP-even Higgs boson escaped the LEP searches via a tiny coupling to SM gauge bosons, and the decay H → hh is sufficiently suppressed not to impact the heavy Higgs boson rates too strongly. Three new benchmark scenarios have been reviewed, which were defined to facilitate the experimental searches at LHC Run II.
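Two display equations referenced in the discussion above, Eq. (2.3) for the CP-even mass matrix in the Higgs basis and Eq. (4.1) for the signal-rate ratios, follow standard conventions in the alignment-without-decoupling literature. The forms below are reconstructions under those conventions, not verbatim quotations from the proceedings.

% CP-even Higgs mass matrix in the Higgs basis (cf. Eq. (2.3));
% reconstruction using the standard convention, with v^2 = v_1^2 + v_2^2
% and Z_1, Z_5, Z_6 the Higgs-basis quartic couplings.
\begin{equation}
\mathcal{M}^2 =
\begin{pmatrix}
 Z_1 v^2 & Z_6 v^2 \\
 Z_6 v^2 & M_A^2 + Z_5 v^2
\end{pmatrix},
\qquad
\text{alignment without decoupling} \;\Leftrightarrow\; |Z_6| \ll 1 .
\end{equation}

% Signal-rate ratio for production mode P(H) and final state XX (cf. Eq. (4.1));
% reconstruction using the usual normalization to a SM Higgs of the same mass.
\begin{equation}
R^{P(H)}_{XX} =
\frac{\sigma\bigl(P(H)\bigr)\,\mathrm{BR}\bigl(H \to XX\bigr)}
     {\sigma\bigl(P(h_{\mathrm{SM}})\bigr)\,\mathrm{BR}\bigl(h_{\mathrm{SM}} \to XX\bigr)},
\qquad XX = VV,\ \gamma\gamma,\ b\bar{b},\ \tau\tau .
\end{equation}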
Survey of part-of-speech tagger for mixed-code Indian and foreign language used in social media Received Apr 29, 2019 Revised Aug 28, 2019 Accepted Oct 6, 2019 A Part-Of-Speech Tagger (POS Tagger) is a tool that scans text in a specific language and assigns parts of speech to individual words (and other tokens), such as verb, adjective, noun, etc.; more fine-grained POS tags, such as 'noun-plural', are used in computational applications. Basically, the goal of a POS tagger is to assign linguistic (mostly grammatical) information to sub-sentential units, called tokens, as well as to words and symbols (e.g. punctuation). This paper presents a survey of POS taggers used for code-mixed Indian and foreign languages. The various methods, procedures, and features required to devise a POS tagger for code-mixed foreign and especially Indian languages are studied, and observations related to them are reported. INTRODUCTION The language of communication in social media is often mixed in nature, where individuals blend their regional language with English, and this practice is found to be extremely popular. Natural language processing (NLP) works to extract information from these texts, where Part-of-Speech (POS) tagging plays a key role in recovering the structure of the written text. One purpose of POS tagging is to disambiguate homonyms. Taggers use several kinds of information, including dictionaries, lexicons, and rules. A word may be a member of more than one category; lexicons list the type or types of a specific word. For example, the word "address" is both a verb and a noun. Taggers use probabilistic evidence to resolve this ambiguity for the actual word. A POS tagger can be used as a preprocessor in text processing. Text retrieval and indexing require POS information. Language processing needs POS tags to choose the correct pronunciation. POS taggers are also used for building tagged corpora. Language processing methods for code-switched text were first attempted in the early 1980s, whereas code-switching in social media text began to be studied in the late 1990s. Still, code-switching in conventional texts was too rare to attract much interest from the computational linguistics community, and only recently did it emerge as a research topic in its own right, with a code-switching workshop at EMNLP 2014. Solorio and Liu proposed a simple but well-designed solution of labeling code-mixed English-Spanish text twice (once with a tagger for each language) and then combining the outputs of the language-specific taggers to obtain the optimal word-level tags. A POS labeling system for English-Hindi mixed-code social media content has been presented in. Efforts have also been made on English-Bengali and English-Hindi data. Nelakuditi performed two different kinds of experiments: first, POS taggers based on machine learning, and second, combining the POS taggers of the individual languages. POS tagger tools have been designed for various languages, but for code-mixed Indian and foreign languages very little work has been performed, and with unsatisfactory accuracy. This paper presents a review of such work, organized into the next four sections. Sections 2 and 5 describe the techniques and approaches involved in the implementation of POS taggers for code-mixed Indian and foreign dialects. Section 3 summarizes efforts made to implement CM POS taggers for Indian languages.
Challenges to implement code-mixed POS tagger is presented in section 4. VARIOUS APPROACHES AND TECHNIQUES USED TO IMPLEMENT CODE-MIXED POS TAGGER FOR INDIAN AND FOREIGN DIALECTS India is homegrown to number of dialects. Language changes and variety in dialect prompt frequent mixing of code in India. Hence, Indians are polyglot by habituation with necessity, and frequently change mix tongues in social media circumstances, that possess additional problems for automatic Indian social media text processing. Requirement for any kind of NLP applications especially in this context Code-Mixed Part-of-speech (CM-POS) labelling is essential. Relating to it, I present a report on various POS tagger approaches and techniques used to implement code-mixed POS tagger for Indian and foreign Languages. Jamatia and Das experimented by using classification algorithms based on four machine learning technique to the undertaking exercise: Conditional Random Fields (CRF), with Sequential Minimal Optimization (SMO), Nave Bayes (NB), and Random Forests (RF). For the Conditional Random Fields they tried the MIRALIUM 1 application, whereas the other three were the applications in WEKA 2 and reported effectuation on the complete dataset (2,583 utterances), after 5-fold cross-validation of all the ML methods using both fine-grained (FG) and coarse-grained (CG) tag sets and noticed that all the ML methods have further problems with HI-EN alternation. In the Machine learning based POS taggers experiment Nelakuditi et. al used three types of Machine Learning techniques for designing the POS tagger viz, Support Vector Machines (SVM), Bayes classification (Bay) and Conditional Random Fields (CRF), with different groupings and distinctions. In second experiment of joining POS taggers of individual languages, CMU's Twitter POS tagger for English with POS tagger developed at LTRC, that is a part of the shallow parser tool 3 for Telugu were used and then finally reported accuracies. Kamal Sarkar, developed HMM-based POS tagging system which is founded on Trigram Hidden Markov Model that uses data from the vocabulary, and some other word level attributes to improve the comment possibilities of the known along with unknown tokens. He gives in to scores for Hindi-English, Bengali-English and Tamil-English Language duos. His scheme has been skilled and tried on the datasets provided for ICON 2015 shared task. In the constrained mode, his technique gains average overall accuracy (averaged over all three language pairs) of 75.60% which is very close to other participating two systems (76.79% for IIITH and 75.79% for AMRITA_CEN) which ordered larger than his system. In the unrestricted mode, his system gets typical overall accuracy of 70.65% which is also nearby to the system (72.85% for AMRITA_CEN) that obtained average overall accuracy highest. Vyas et. al conducted three different experiments: In the first experiment, by assuming the language identities and normalized/transliterated forms of the words, POS tagging is performed. It gives an idea of the accuracy of POS tagging task, if normalization, transliteration and language identification could be done perfectly. Experiments have been conducted with two different POS taggers for English: the Stanford POS tagger and the Twitter POS tagger. In the next experiment, by assuming that only the language identity of the words are known for Hindi their own model is applied to generate the back transliterations. 
Vyas et al. conducted three different experiments. In the first experiment, POS tagging is performed under the assumption that the language identities and the normalized/transliterated forms of the words are known; this gives an idea of the accuracy achievable if normalization, transliteration, and language identification could be done perfectly. Experiments were conducted with two different POS taggers for English: the Stanford POS tagger and the Twitter POS tagger. In the second experiment, only the language identity of the words is assumed to be known; for Hindi, their own model is applied to generate the back-transliterations, while for English the Twitter POS tagger is applied directly to handle social media text. In the third experiment, nothing is assumed to be known: a language identifier is applied first, and based on the language detected, either the Hindi transliteration module and the Hindi POS tagger, or the English tagger, is applied. They also stated that although the matrix-language information is not used in any of their experiments, it could potentially be useful for POS tagging and could be explored in the future.

For their constrained and unconstrained training and result submissions, Pimpale and Patel used the Stanford POS tagger and machine learning algorithms, namely Decision Tree J48, Decision Tree Random Forest, Naive Bayes, and Multilayer Perceptron, respectively. They concluded that the method performs well for the constrained submission, but the lack of good-quality training data prevents it from doing much more; using distributed vector representations of words in feature engineering would allow them to exploit unlabeled data for training. Sequiera et al. explored machine learning approaches for POS tagging of Hindi (Hi)-English (En) code-mixed text from social media, starting by replicating the experiments reported in earlier work and reconfirming the results on the dataset. Extending the feature set used by Solorio and Liu, they carried out numerous feature selection experiments and proposed a joint approach to POS tagging. Kamal Sarkar also proposed a POS tagging system for social media texts; it is based on Conditional Random Fields (CRF) trained with a rich feature set that includes contextual features, orthographic features, punctuation features, and word-length features. He concluded that his system performs well across all three language pairs, and expressed the hope that a proper choice of features, along with a suitable combination of machine learning algorithms, would further improve its performance.

Sharma and Motlani experimented with code-mixed POS tagging of Indian social media text using machine learning techniques. Building a POS tagger as a constrained system gave them an accuracy of 75.04% when evaluated on the new test dataset, while using additional resources, namely an unconstrained system, the POS tagger outperformed the constrained system with an accuracy of 80.68%. For training and testing of both types of systems they used ten-fold cross-validation and computed the best model parameter values by performing a grid search over all the parameters. Finally, for the other two pairs, namely BN-EN (Bengali-English) and TA-EN (Tamil-English), the accuracy measured was 79.84% and 75.48% respectively, using the developed and submitted constrained systems. Sisodiya used a pipeline approach: for language identification, a logistic-regression-based classifier and a CRF; for back-transliteration, the Google API; and for POS tagging, a CRF++-based Hindi POS tagger developed by IIT Kharagpur (a minimal sketch of this kind of routing pipeline is given below).
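The following is a hypothetical sketch of the pipeline approach described above (word-level language identification, back-transliteration for Hindi, then routing each chunk to a language-specific POS tagger). The helper functions are placeholders invented purely for illustration; they do not correspond to the actual components used by Vyas et al. or Sisodiya.

```python
# Hypothetical pipeline: language ID -> transliteration (Hindi) -> per-language tagging.

def identify_language(token):
    """Placeholder word-level language identifier: returns 'hi' or 'en'."""
    hindi_hints = {"hai", "nahi", "bahut", "achha", "kya", "ye"}
    return "hi" if token.lower() in hindi_hints else "en"

def back_transliterate_to_devanagari(token):
    """Placeholder for a romanized-Hindi -> Devanagari transliteration module."""
    return token  # a real system would call a transliteration model or API here

def hindi_pos_tag(tokens):
    """Placeholder for a monolingual Hindi POS tagger."""
    return [(t, "HI-TAG") for t in tokens]

def english_pos_tag(tokens):
    """Placeholder for a monolingual English (social media) POS tagger."""
    return [(t, "EN-TAG") for t in tokens]

def tag_code_mixed(tokens):
    # 1. Split the utterance into contiguous same-language chunks.
    chunks, current, current_lang = [], [], None
    for tok in tokens:
        lang = identify_language(tok)
        if lang != current_lang and current:
            chunks.append((current_lang, current))
            current = []
        current_lang = lang
        current.append(tok)
    if current:
        chunks.append((current_lang, current))

    # 2. Route each chunk to the tagger for its detected language.
    tagged = []
    for lang, chunk in chunks:
        if lang == "hi":
            chunk = [back_transliterate_to_devanagari(t) for t in chunk]
            tagged.extend(hindi_pos_tag(chunk))
        else:
            tagged.extend(english_pos_tag(chunk))
    return tagged

print(tag_code_mixed(["this", "movie", "bahut", "achha", "hai"]))
```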
Singh and Kanskar employed supervised word-level classification with and without contextual clues, and sequence labeling using Conditional Random Fields, alongside the implementation of a simple unsupervised dictionary-based method. A simple language-detection-based heuristic was used in which the text is first separated into chunks of tokens belonging to one language, and each chunk is then categorized according to its language and labeled by the POS tagger for that language. In this heuristic, language-identified and transliterated text is labeled by an English monolingual tagger, and one of the two candidate labels is then selected for an utterance based on heuristics over the output of several language detection techniques.

Ghosh et al. listed the various steps involved in the POS labeling task using the CRF++ toolkit and the Stanford POS tagger, including chunking and lexicons for the dominant languages. They also concluded that the Bengali-English and Hindi-English results are higher than those for Tamil-English because of differences in the labels used in the Tamil-English gold-standard files. Barman divided the experiments into four parts: baselines for POS tagging, pipeline systems, stacking systems, and a joint model. They performed five-fold cross-validation on the data and reported mean cross-validation accuracy, investigating the use of hand-crafted features as well as features that can be obtained from monolingual POS taggers (stacking), and carried out experiments with different groupings of these feature sets. They described a trilingual code-mixed corpus with POS annotation. Performing POS tagging with state-of-the-art methods and investigating a factorial CRF (FCRF)-based joint model, they found that the best stacking method (S2), which uses the joint features, performs better than the joint model (FCRF) and the pipeline systems. They observed that joint modeling outperforms the pipeline systems in their experiments, but the FCRF falls behind the best POS labeling system, S2; more training data would possibly help the FCRF achieve better performance than S2.

Gupta et al. proposed a system that uses a comprehensive set of features for POS labeling. The feature set was used to build a POS model, with a Conditional Random Field (CRF) as the underlying classifier; CRF++, an implementation of CRF, was used to carry out the experiments. Since CRF++ relies on a declared feature template, a series of cross-validated experiments was run on the training data set to discover the optimal feature template. However, they tuned the feature template on the English-Hindi data set only and used the optimal model for all three code-mixed language pairs (English-Hindi, English-Bengali, and English-Telugu); a minimal sketch of a CRF tagger with such a rich feature set is given at the end of this section. Bhargava et al. experimented with similar approaches to implement POS taggers for the English-Telugu, English-Hindi, and English-Bengali language pairs, with slight variations, and reported the achieved accuracies.
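The following sketch illustrates a CRF tagger with a rich feature set of the kind described above (contextual, orthographic, punctuation, and word-length features). The surveyed systems used CRF++ with declarative feature templates; this stand-in uses the sklearn-crfsuite library instead, and the miniature code-mixed corpus is invented purely for illustration.

```python
# Illustrative CRF sketch with contextual, orthographic, punctuation, and word-length features.
import string
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    feats = {
        "word.lower": w.lower(),
        "suffix3": w[-3:],
        "prefix2": w[:2],
        "length": len(w),                                      # word-length feature
        "is_upper": w.isupper(),                               # orthographic features
        "is_title": w.istitle(),
        "has_digit": any(c.isdigit() for c in w),
        "is_punct": all(c in string.punctuation for c in w),   # punctuation feature
    }
    # Contextual features from the neighbouring tokens.
    feats["prev.word"] = sent[i - 1].lower() if i > 0 else "<BOS>"
    feats["next.word"] = sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>"
    return feats

def sent_features(sent):
    return [word_features(sent, i) for i in range(len(sent))]

# Toy code-mixed training data, invented purely for illustration.
train = [
    (["mujhe", "ye", "movie", "achhi", "lagi", "!"], ["PRON", "DET", "NOUN", "ADJ", "VERB", "PUNCT"]),
    (["this", "song", "bahut", "awesome", "hai", "."], ["DET", "NOUN", "ADV", "ADJ", "VERB", "PUNCT"]),
]
X_train = [sent_features(toks) for toks, _ in train]
y_train = [tags for _, tags in train]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

test = ["ye", "song", "awesome", "hai", "!"]
print(list(zip(test, crf.predict([sent_features(test)])[0])))
```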
VARIOUS APPROACHES AND TECHNIQUES USED TO IMPLEMENT CODE-MIXED POS TAGGER FOR FOREIGN LANGUAGES

Not much effort can yet be seen towards implementing code-mixed POS taggers for foreign languages. Solorio and Liu only predicted potential code-alternation points as a step towards developing more accurate systems for processing code-mixed English-Spanish text. Such mixing of languages is rarely found elsewhere in the world to the extent that it occurs in India.

CHALLENGES TO IMPLEMENT CODE-MIXED POS TAGGER

Building code-mixed POS (CM-POS) taggers for Indian languages is a particularly interesting problem in computational linguistics because of the lack of accurately annotated training corpora. More sophisticated language processing techniques are required for POS tagging, capable of drawing inferences from more subtle linguistic information. From a linguistic point of view, meaning arises from the distinctions between linguistic units, including words, phrases, and so on. These distinctions are of two types: paradigmatic (concerning substitution) and syntagmatic (concerning positioning). To implement a code-mixed POS tagger, all of these distinctions also need to be taken into account.

CONCLUSION

The survey shows that, in general, researchers use various machine learning techniques along with existing POS taggers to implement CM POS taggers for Indian and foreign languages. Work on code-mixed Indian languages has only just begun and much more remains to be done; a practical tool for code-mixed POS tagging is not yet available on the internet.