laser engraved wooden panels | resonance speakers | amplifier | microphone | sensors | computer

Panauditum is an installation revolving around the role sound plays in the surveillance we encounter in our society. While we are very aware of the value and privacy of our appearance and image, we are less aware of how valuable and intimate the sound we produce is. Our voices can be seen as fingerprints, each with its own unique, distinguishable character. There have been many cases of equipment such as smartphones, headphones and smart home devices secretly listening to us and our environments, without our knowledge and without our permission. This relates closely to the concept of the Panopticon:

“The Panopticon is a type of institutional building and a system of control designed by the English philosopher and social theorist Jeremy Bentham in the late 18th century. The scheme of the design is to allow all (pan-) inmates of an institution to be observed (-opticon) by a single watchman without the inmates being able to tell whether or not they are being watched. Although it is physically impossible for the single watchman to observe all the inmates’ cells at once, the fact that the inmates cannot know when they are being watched means that they are motivated to act as though they are being watched at all times.” (wikipedia.org/wiki/Panopticon)

The Guardian wrote an article on this concept in relation to our current times in July 2015: “What does the panopticon mean in the age of digital surveillance?”

The Panauditum is the auditory version of the Panopticon: it functions as the observer by listening to the visitors of the museum while they are unaware of whether they are being monitored, either by the installation or by a smart device they are carrying. The installation reflects on this notion and raises awareness by confronting the visitor with small fragments of voices.
Through fragmentation of the voices the semantic content is lost and only the character, the fingerprint, of the voice remains. With these fragments the Panauditum builds an evolving composition of speech and electronic artefacts of communication devices. This soundscape is emitted by the resonating wooden panels, whose aesthetically pleasing and intriguing decorations win the trust of the visitor and lure them in, similar to how the equipment we use pleases us with its aesthetics and friendly appearance. When a visitor approaches the installation closely, the Panauditum responds by giving back a fragment of longer duration, which reveals some semantic content and a clearly recognisable recording – giving a clue about the surveilling nature of the installation. This tension between the seemingly beautiful and friendly appearance and the intrusive functionality is deepened by the almost conscious behaviour and interactivity of the Panauditum. To avoid an indifferent form of interactivity, the recordings of the visitor are not simply delayed but are organised in a time- and chance-based buffer system, whose various states of behaviour have been composed in Max/MSP. These states control the parameters of the granular synthesis that manipulates the recordings, as well as the quadraphonic spatialisation of the sound across the four resonating wooden panels. The behaviour ranges from very musical polyrhythmic structures to abstract clouds of voices. With this combination of composition, randomness and interactivity Van den Broek aims to create an installation which appears to have a consciousness, giving the visitor an impression of a neural network and a smart system. This deepens the experience of having to relate to an intelligent entity of control, just like the hidden structures of surveillance in our society.
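The time- and chance-based buffer described above could be sketched roughly as follows. This is a speculative Python illustration of the general idea (age-weighted random recall of recorded fragments), not Van den Broek's actual Max/MSP patch; every class name and parameter here is invented for the example.

```python
import random

class FragmentBuffer:
    """Hypothetical sketch of a time-chance fragment buffer: voice
    fragments are stored with an age counter, and playback picks a
    fragment at random with probability weighted by its age."""

    def __init__(self, max_fragments=64):
        self.max_fragments = max_fragments
        self.fragments = []  # each entry is [age_in_ticks, samples]

    def record(self, samples):
        """Store a new voice fragment, evicting the oldest when full."""
        self.fragments.append([0, samples])
        if len(self.fragments) > self.max_fragments:
            self.fragments.pop(0)

    def tick(self):
        """Advance time: every stored fragment gets one tick older."""
        for frag in self.fragments:
            frag[0] += 1

    def choose(self):
        """Pick a fragment for playback; older fragments become
        gradually more likely to resurface."""
        if not self.fragments:
            return None
        weights = [age + 1 for age, _ in self.fragments]
        return random.choices(self.fragments, weights=weights, k=1)[0][1]
```

In the installation the equivalent logic would live inside the Max/MSP patch, with the chosen fragment handed on to the granular-synthesis and quadraphonic-spatialisation stages.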
CNX Selects RISE with SAP | SAP News Center
Feature by Lloyd Adams
August 17, 2022

Pittsburgh-based natural resource company CNX has served as an energy leader in the region for the better part of two centuries. In recent years especially, the company has had to draw on its deep history of innovation and growth to drive the industry forward, as sustainability concerns around responsibly sourced gas and methane emissions have continued to grow. However, in order to do so effectively, CNX needed to rethink its approach to enterprise resource planning (ERP) technology and business management. It needed to work to reduce overall risk, centralize and streamline its data, and modernize its company-wide processes. It needed to go to the cloud.

Today, through RISE with SAP, the company’s productivity, efficiency, and effectiveness are steadily increasing, giving CNX more time to focus on its mission: delivering quality, sustainable energy solutions for its customers, today and well into the future.

Cloud Transformation in a Transforming Industry

CNX has made significant sustainability investments for decades, a necessity to keep pace in the contemporary natural resource industry landscape. In fact, the company has been net-zero since 2013. Continuing to move forward, though, required CNX to invest in intelligent business management technology that could push it even further onto the cutting edge. Its transformation plans mainly revolved around three key considerations, according to CNX Vice President of Information Technology Al Jones:

- De-risking the company’s ongoing technology road map
- Building a stronger support model

“Those things really resonated as we were evaluating RISE with SAP,” Jones said. “Once we saw the value, we saw a way forward, and then we were able to really get things moving.”

“RISE-ing” to the Challenge

RISE with SAP is designed to adapt and change, so that companies like CNX can use it to get to where they need to go.
For CNX, cloud transformation on its own timeline and its own terms was an appealing proposition for managing its significant projected cash flow in 2022.

“Technology is like air to the business that we run,” Jones said. “To be able to embrace it at our own pace while continuing to deliver our product to market is invaluable and vital to our business’s continuous growth.”

CNX’s ERP technology supports over 180 applications and seven main value streams, turning previously disparate systems into a cohesive data warehouse and a single source of truth.

“We’re now poised for a successful future in the cloud,” Jones said. “We have a more structured and disciplined approach, and it is one that offers reliable visibility into our data and enables us to both effectively strategize and execute on our company’s goals going forward.”

System to Support a Sustainable Future

The lower total cost of ownership with RISE with SAP also equips CNX to pursue other organizational sustainability goals, including two areas of significant investment for the company, according to Jones. Both revolve around the company’s continued push past net-zero when it comes to methane emissions.

“We’ve invested heavily in a data project designed to track and report our emissions with far greater accuracy,” Jones said. “In the process, we plan to replace pneumatically controlled valves with electrically controlled ones, and our technology systems will serve as the backbone.”

The company is also embracing drone-based and satellite-based technology to track its methane emissions, as well as to maintain and strengthen its role as a responsible supporter of natural gas.

What’s Next for CNX

Opportunities now abound for the leading natural resource company. CNX’s upgraded approach to ERP provides it with not only a better understanding of its data and decision making but also more time for its people to focus on strategic projects that can bring in greater revenue.
Projects previously in the background, such as the creation of an energy incubator, can now take center stage as proficiency increases across the entire company.

“We are on a journey to do good for our customers,” Jones said. “This brings us further along on that journey, a journey that’s possible because of the technology we have and the system we’ve built to support it.”

Lloyd Adams is managing director of the East Market Unit at SAP.

Tags: Cloud ERP, RISE with SAP, Sustainability
TOC:
1) Quick overview of my 300 Ohm impedance Folded Bazooka Antenna.
2) Quick info on building it.
3) Email response on performance, feed.

Gang, I really don't have the time today, but just couldn't contain myself. My Bazooka antenna is now a Folded Bazooka, fed with 300 Ohm TV line to the balanced output of my ant. tuner. Boy am I getting out! And actually copying QRP stations as never before. I have been getting sig reports 2 units higher than I have sent -- and from stations I couldn't have even heard before!

Most of you recall I had a 40 meter dipole, improperly fed with 300 Ohm TV line to minimize losses for multiband operation. I built the standard Bazooka ant. and fed it in exactly the same manner so the test comparison would be valid. The result was that the Bazooka ant. was slightly better than my dipole, probably due to the larger conductor provided by driving the shield of the 100% dual braid high quality coax I used.

Well, Saturday morning after the WSN-40 net, I went for the gold -- added the extra wire for a Folded Bazooka to present a 300 Ohm match to my feedline. WOW! As I said earlier, this puppy is hot. I knew there was a little loss due to the 75 Ohm to 300 Ohm mismatch at the ant. terminals, but having eliminated that mismatch, everything is being delivered to the antenna. Feedline radiation is at an all-time minimum. We be in business, folks! I've had my Drake TR-3 ALC at about 40 dB compression just to keep my ears from ringing.

As an added bonus, I appear (deductively) to be able to use the Folded Bazooka on 80 meters. Last night my 5 watts was copied S9 from my Santa Rosa, CA QTH in Northern California to Los Angeles in the south. I won't be contesting on 80 meters; this was just a check whether the added element would act as a 1/4 wave element driven from the ends, or at least inductively coupled to the primary bazooka feed. Appears to be about 60 percent efficient on 80 meters with the 40 meter Folded Bazooka.
Well, my previous dipole was unusable on 80 meters, so this is definitely an improvement. Tests on 20 meters --- forget it.

I just had to share this. At the very least, make yourself a nice folded dipole with 300 Ohm feed and do some of your own experiments with a non-bazooka style folded dipole.

Gleefully Submitted, Ed Loranger, we6w -- Recipient of coveted Samuel F. B. Morse Award, NTTC Pensacola, FL 1977. 72/73 de we6w qrp es CW ONLY; Member: QRP-L/ARCI/Norcal/ARS/AR (From Non-Ham to Extra in one Day.)

--------------------------------------------------------------------------

2) Building a crude 300 Ohm input impedance Folded Bazooka Antenna.

NOTE: I suggest extra effort be made to get the coax cut to the correct length, compensating for velocity factor. Use a noise bridge or antenna analyzer.

---- This will get you in the ballpark for 7040 kHz (QRP freq):

Get some old coax with decent braid in it for the shield. Cut about 46 feet. Find the center, cut the braid but leave the center conductor and its teflon insulation intact. Pull about 2 inches of each side of the braid out and twist, forming the input connections for your 300 Ohm TV twinlead. Lay this 46 feet on the ground like a dipole. Go to one of the far ends, pull off some of the plastic sheath, separate the center and shield and solder them together as a short. Repeat for the other side. Now, add 11 feet of wire to each end of the coax dipole. You now have a 66 to 67 foot long regular dipole, but the coax forms the radiating portion for about 23 feet on each leg, then has a short to the center conductor which reflects a high impedance 1/4 wavelength away at the center feed point.

Now, connect your SWR meter or antenna analyzer DIRECTLY to the ant. feedpoint. Keep the SWR meter close to the antenna. Using a rig with a VERY GOOD 50 Ohm output impedance (i.e. not an HW-8 or other rig having a tank capacitor in the output), set the rig to transmit about 1 watt at 7000 kHz if your license lets you go there.
At the least, try BELOW 7040 kHz so you can prune the antenna ends and see improvement. The antenna will be optimized when you get a 1.5:1 SWR reading, that is: a 75 Ohm resonant antenna on a 50 Ohm rig. Prune each end carefully and in 1/2 inch increments. Cut a piece from one end, carry that cut piece to the other end of the antenna and use the cut piece to measure your next cut. This ensures you keep the line lengths the same. After you get a 1.5:1 VSWR at the lower frequency of say 7000 kHz, begin taking only 1/4 or 3/8 inch cuts from each end, and as you get close to 7040 kHz, leave the rig there and get your 1.5:1 VSWR. That is the basic standard Double Bazooka.

If you are like me, you want more efficiency and fewer losses, and use 300 Ohm or greater transmission line. A folded antenna exhibits 4 times the standard feedpoint impedance of 73 Ohms, so the impedance of the Folded Bazooka will be about 300 Ohms. If you went to a tri-folded Bazooka, you could use 650 Ohm line easily!!! Using the same wire you used for the dipole ends, measure the wire to equal the length of your dipole. This wire will be soldered from one end to the other of the dipole and have 4 to 6 inch spacers every 4 feet or so. I passed the wire through holes drilled in the plastic block which held the center of the antenna too.

HINTS: The coax length is ACTUALLY a huge variable and you should really cut the 1/2 wavelength of coax with a noise source or antenna analyzer. Somehow, you have to find the true velocity factor of the coax. It is an electrical half wavelength and much shorter than a 1/2 wave dipole. But at resonance, the coax alone is too short for an antenna; that is why you extend the coax with wire. Off resonance, the shorted coax section transforms the antenna input impedance in an attempt to keep it at 75 Ohms across the band. It does this rather well. The measurements I gave you can get you up and running with 85% VF coax.
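The 1.5:1 target above is simply the mismatch of a 75 Ohm resonant antenna on a 50 Ohm rig. A quick sanity check of that arithmetic (an illustrative Python snippet, not part of the original post; it holds only for purely resistive loads):

```python
def vswr(z_load, z_source=50.0):
    """VSWR of a purely resistive load on a purely resistive source:
    the larger of the two impedance ratios."""
    ratio = z_load / z_source
    return max(ratio, 1.0 / ratio)

print(vswr(75.0))   # 1.5 -- the 75 Ohm resonant antenna on a 50 Ohm rig
print(vswr(50.0))   # 1.0 -- a perfect match
```

So when the meter settles at 1.5:1 you have hit the dipole's natural 75 Ohm resonance, not a mistuned antenna.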
If you use some other coax, you need to SEVERELY shorten the coax section!!!! Use the following formulas to get in the ballpark:

#1) 492/F(MHz) = half wavelength in feet in free space (FSWL).
85% VF coax length = FSWL * 0.85 = actual coax length needed.
66% VF coax length = FSWL * 0.66 = actual coax length needed.

Short the ends of the coax and then open the middle braid for feeding. Add wires to the ends such that the total length = half wave in free space (#1). Trim the ends while monitoring VSWR until 75 Ohm resonance is obtained, i.e. VSWR = 1.5:1. Add the folded wire element. Radiate well! 72, Ed

------------------------------------------------------------------------

In response to an email about Folded Bazooka performance, I wrote:

Yup, I'm really happy right now. By the way, all this success and my antenna is still only 35 feet up. To truly be optimized, it should be up 1/2 wavelength at 66 feet. But that's not possible at my QTH and rental unit. The antenna is entirely homebrew. It is the published design described as a "Broadband Dipole" in the index of my 1974 Antenna Book. When you see it and read, they state "Sometimes referred to as a Double Bazooka Antenna." I did not use the open wire at the ends as depicted in the drawing. But I did add a single wire from one end to the other to form the Folded Bazooka. This is the trick for the 300 Ohm feedline. It is transformer action. A single wire dipole has a 75 Ohm (73) input impedance. Adding a second wire converts this to 300 Ohms: the input impedance is the turns ratio squared times Zin, so with 2 wires, 2 * 2 * 75 = 300 Ohms. If you had 3 wires then the input impedance is 9 * 75 = 675 Ohms, great for that large spaced OPEN WIRE feedline. I'll do that some day since the losses for 600 Ohm line are even lower. More detailed construction information is in the Antenna Handbook. Be sure to measure the SWR very carefully when you are tuning up. And remember, do the SWR measurements up in the clear as best as possible, and with the SWR meter directly connected to the feedline.
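Plugging the post's numbers into these formulas gives the ballpark dimensions below (an illustrative Python sketch, not part of the original post; the lengths still need pruning against an SWR meter, and the folded-element impedances use the 73 Ohm dipole figure quoted above):

```python
def half_wave_ft(freq_mhz):
    """Free-space half wavelength in feet: 492 / F(MHz)."""
    return 492.0 / freq_mhz

def coax_section_ft(freq_mhz, velocity_factor):
    """Electrical half wave of coax, shortened by its velocity factor."""
    return half_wave_ft(freq_mhz) * velocity_factor

def folded_feed_impedance(n_wires, z_dipole=73.0):
    """Folded-element feed impedance: (number of wires)^2 times the
    plain dipole's ~73 Ohm feedpoint impedance."""
    return n_wires ** 2 * z_dipole

f = 7.040  # 40 meter QRP calling frequency, MHz
print(half_wave_ft(f))           # ~70 ft: total antenna length to aim for
print(coax_section_ft(f, 0.85))  # ~59 ft of 85% VF coax
print(coax_section_ft(f, 0.66))  # ~46 ft of 66% VF coax (the post cuts "about 46 feet")
print(folded_feed_impedance(2))  # 292.0 -> the ~300 Ohm folded feed
print(folded_feed_impedance(3))  # 657.0 -> suits 600-675 Ohm open-wire line
```

Note how the 66% VF figure lands right at the 46 feet of coax cut in the construction notes, with wire added to each end to reach the full free-space half wave.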
A good start with 100% braided, 50 Ohm coax is about 45 feet; open the shield at the middle. Connect your 300 Ohm (75 for single bazooka) feedline to each shield. Leave the coax center wire alone. Short each far end of the coax to the center conductor at the coax end. Then solder a 12 foot piece of 12 AWG wire to each end. Trim the 12 foot wires as you monitor SWR at 7.000 MHz (the low end). As you get close, start reducing the cuts at each end to about 1/2 inch. The 300 Ohm feedline and Folded Bazooka may have a slightly better bandwidth than the 75 Ohm fed single bazooka dipole, but should be similar in signal performance at the design frequency. Of course, if you have an antenna analyzer, you are WAY ahead of me! Wish I had one..... I'm thinking of building a noise bridge for my antenna work. Soon. Need to get a scope too. Darn test equip. costs money, and me with 4 kids, spouse and IRS issues... Take care, friend. Review some of the QRP-L archives for last week if you want more info. I made some construction posts then. 72 my friend, Ed Loranger, we6w
It’s important to be open to different ideas – to see what works best in your circumstances. Experiment with ways of developing solutions that are culturally legitimate, as well as practically effective.

Self-determination at Murdi Paaki

The Murdi Paaki Regional Assembly (MPRA) is the peak structure representing the interests of Aboriginal and Torres Strait Islander peoples across 16 communities in western New South Wales. MPRA sees self-determination as the key to the success of its governance model. The model demonstrates true community control: the Aboriginal people of the region determine the composition of their local CWP, they choose the methods to bring that model together and they choose who represents them on the Regional Assembly. People volunteer their time, and those who participate are genuinely interested in making a change for their communities. The model is evolutionary; it is not competitive leadership but a traditional style of leadership in which leadership roles are earned through respect, integrity and transparency. For the MPRA, this self-determination is what makes its governance model so effective:

“The key to the way forward is in the concepts and rights that we have implied into the terms ‘self-determination’ and ‘sovereignty’ when we use those words to describe a vision of what we would like our communities to be like and the way we want to live our lives as Indigenous peoples. The starting point for self-determination and Aboriginal sovereignty is the way in which it is expressed by Indigenous peoples at a grass-roots level.
It is a bottom-up, rather than top-down, approach … The focus of our commitment is to strengthen the role and participation of the 16 major and seven smaller communities in regional decision-making and service delivery in ways more directly relevant to the circumstances of the region’s Aboriginal people and to improve outcomes for them.”

The MPRA has grown since its inception in 2004. It has expanded to provide an extensive range of functions that aim to improve the economic, cultural and social status of Aboriginal and Torres Strait Islander peoples and communities across its service region. As a result, many other MPRA organisations have been created. These incremental initiatives have allowed the MPRA to gradually assert and exercise self-governance.¹

¹ Murdi Paaki Regional Assembly, “Charter of Governance,” September 2015, 6, [link]

Read more about MPRA’s goal of self-determination and Aboriginal sovereignty in their Charter of Governance.
(r/DestinyTheGame) I agree with you, I would hate to see my newly ascended (3 weeks ago) Fatebringer nerfed because of how ridiculous the Thorn is. Another option would be to completely disallow primary exotics from ToO. They do have the ability, however, to adjust stats for certain weapons, so I'm sure that if they do any adjusting they will be able to change it for the Thorn solely.
The Massachusetts state medical examiner is trying to figure out why 12-year-old Joshua Thibodeau collapsed on the soccer field and later died at an area hospital. Officials said Joshua did not have any indication of a pre-existing medical condition, and he had a medical waiver clearing him to attend the camp. It’s a tragic story we hear all too often. But as a pediatric cardiologist, I can’t help but wonder if Joshua had ever undergone a cardiac screening – and what may have been found if he did.

Some studies suggest that sudden cardiac death (SCD) in sports is a very rare event, but the reality is, these deaths are much more common than previously reported. And when it comes to your children, are you willing to take a chance? Many have argued that the benefit of cardiac screening simply does not outweigh the financial burden, approximately $200, placed on the individual. But saving lives and saving money do not have to be seen as mutually exclusive. The implementation of a well-designed cardiac screening program, both comprehensive and detailed and including an electrocardiogram (EKG), may be more cost effective than previously thought. In addition to screening for the major culprit of SCD, hypertrophic cardiomyopathy, less common but equally deadly cardiac conditions may also be detected. The program should also screen for preventable chronic illnesses, which are likewise responsible for taxing the already strained health care system.

But the truth is, EKGs are not a foolproof way of detecting heart disease in children. And the reason is, physicians commonly make mistakes in EKG readings for children. A study published this month in The Journal of Pediatrics suggests that mistakes are occurring too often in the reading of EKGs for children and young athletes. Unfortunately, some in the medical community seize upon the flaws in childhood screening to conclude that it is just not worth screening children.
I am flabbergasted that more professionals do not view these documented flaws in test reading as an urgent call to do better. I have proposed a systematic approach to screening children that has both accountability and education built into the system. Designing screening programs that are specialized to accomplish a specific medical task can reduce mistakes and increase the accuracy of results. This approach also reduces the cost of screening and, most important, does a better job of protecting our children’s health.

We live in a culture where millions can be spent to perfect the aluminum baseball bat to ensure a greater number of home runs, even though it increases the risk of injury and death to our children. Let’s begin to get our priorities straight and put our efforts where it counts: our children’s health and future security.

Dr. Robert J. Tozzi is the chief of pediatric cardiology and founder of the Pediatric Center for Heart Disease at Hackensack University Medical Center in Hackensack, New Jersey. He is also the director of the Gregory M. Hirsch Hypertrophic Cardiomyopathy Center and a Fox News contributor. He is the co-author of several papers published in refereed research journals and has lectured extensively in his field at numerous professional conferences. To learn more, visit his website at DRTOZ.com.
Effect of interface friction on the mechanics of indentation of a finite layer resting on a rigid substrate

Copper strips of 2.5 mm thickness resting on stainless steel anvils were normally indented by wedges under nominal plane strain conditions. Inflections in the hardness-penetration characteristics were identified; these inflections separate stages, each with its own typical mechanics of deformation. The stages were identified by studying the distortion of 0.125 mm spaced grids inscribed on the deformation plane of the strip. The sensitivity of hardness and deformation mechanics to wedge angle and to the interfacial friction between strip and anvil was investigated within the framework of existing slip-line field models of indentation of semi-infinite and finite blocks.
(Reddit r/AskReddit) To answer the question, here is my weird dream that happened a little while ago: In this weird dream, I got transported back to my schoolboy days, and in the dream I see lots and lots of fucking. The school was taking students from elementary to high school, as well as the entire staff (including the custodians, the front desk lady, and cafeteria workers), on a field trip but forgot to get me and my male friends who I had crushes on onto the multiple buses for the school's field trip LMFAO. So one thing led to another and we somehow managed to disable the school's camera system for several hours, and then it was a wild sex party in the school.
User: Write an educational story (3-5 paragraphs) targeted at young children using simple words. The story should be inspired from this text snippet: “Process: - Your thermostat senses that the room temperature has fallen below your thermostat setting - Your thermostat sends a signal to your heating system to provide heat for your home - When your oil burner is engaged, heating oil travels from the tank to the burner by way of a pump - Here the oil becomes a fine mist mixed with air - The fuel/air mixture is then injected into the burner where it is ignited in the combustion chamber - The warm air is disbursed through vents, hot water through baseboards, or by steam radiators - Question: suppose the room has less heat happens, how will it affect less fine mist is mixed with air. How does the supposed perturbation influence the second effect mentioned. Answer by more, less or no effect The answer to this question is: If the room has less heat, it means the temperature has fallen below the thermostat setting. In this situation, the thermostat will send a signal to the heating system to provide heat for the room. As a result, the oil” The story doesn’t have to be addressing everything in the snippet, it is there just for inspiration. The story should have the following features: - Science integration: embed basic science concepts within the story, explaining them through the characters' adventures and discoveries. For example, if the story includes a scene where characters are looking at the sky, you could have them wonder why it's blue and explain the physics behind in grade school level. - Dialogue: include at least one dialogue and insightful conversation. - Unexpected twist: conclude with a twist that doesn't resolve as hoped, but leaves a clear lesson about life and science. Do not start with classic sentences like "Once upon a time", be creative. Assistant: It was a chilly winter day in the town of Thermville. 
Timmy the Thermostat lived happily on the wall of a cozy cottage, always keeping an eye on the temperature. One morning, Timmy noticed something strange – the room felt colder than usual! He knew he had to do something. Timmy quickly sent a message down to his friend Ollie the Oil Burner. "Ollie," Timmy said, "the room needs some warmth! Can you help me out?" "Of course, Timmy!" replied Ollie enthusiastically. "I'm here to keep things nice and toasty." And so, Ollie started working, drawing up oil from the storage tank. With the help of a tiny pump, the oil turned into a fine mist that mixed with air. This fuel-air mixture traveled to the combustion chamber, where it burst into flames. The heat generated warmed the house, making its way through vents, baseboard heaters, or steam radiators. However, since the room wasn't very warm to begin with, only a small amount of oil became mist. There wasn't enough fuel-air mix for the fire to grow big and strong. Feeling concerned, Timmy asked Ollie, "What would happen if we can't make a lot of that fine mist? Will our friends still get warm?" Ollie sighed, realizing that they needed to produce more heat despite having limited resources. They both agreed that even when circumstances were challenging, cooperation between them was essential to create warmth for their beloved townsfolk. So, while the solution might not have been perfect, together they learned the importance of resourcefulness and teamwork during tough times.
Obviously the figures for bombing (as, indeed, the figures for bombardment) are in no way comparable to those reached in later campaigns against enemy-held islands farther south. It must be remembered, however, that the air forces involved in the Aleutians Campaign were never large, and that weather conditions offered impediments to air activity which were matched in no other theater of war. The obstacles overcome by our airmen in conducting their ceaseless attacks would have been considered insuperable at the outset of the war. It is now known that on 8 June Rear Admiral Akiyama issued orders for the abandonment of Kiska. There was no way in which our high command could learn of this enemy decision, so plans for the seizure of the island went forward rapidly during the month of June, while the 11th Air Force dropped 262 tons of bombs with the loss of but two planes. (The bomb weight was held to this low figure by execrable weather.) During the major part of the period from 24 May to 15 August, a task group of cruisers and destroyers was on station north or south of Kiska. Frequently task groups operated both north and south of the island. From 8 June a destroyer blockade was maintained continuously, with the exception of 23 and 24 July, when a submarine was on patrol to the west. Admiral Robert C. Giffen). Ships involved were the Wichita (F) (Capt. John J. Mahoney), Louisville (now commanded by Capt. Alexander S. Wotherspoon), San Francisco (Capt. Albert F. France), Santa Fe (Capt. Russell S. Berkey), Hughes (Lt. Comdr. Herbert H. Marable), Lansdowne (Lt. Comdr. Francis J. Foley), Morris (Lt. Comdr. Edward S. Burns), and Mustin (Lt. Comdr. Earl T. Schreiber). The two last-named destroyers took no part in the bombardment proper, having been assigned as anti-submarine screen for the main force. No enemy opposition was encountered, except sporadic and ineffective antiaircraft fire directed against spotting planes. 
Target areas were thought to have been thoroughly covered, although observation of results was not generally possible because of overcast and other factors. Particular attention was devoted to coast defense batteries believed to be located on North Head and Little Kiska, antiaircraft batteries at Gertrude Cove and South Head, and the Main Camp area (see chart, 113). Under the fairly good visibility conditions, the 6-inch coast defense guns on Little Kiska were regarded as the primary threat to our vessels. The Santa Fe, with her superior volume of fire, was therefore assigned to lead the cruiser column, smothering this target during the first few minutes, after which she was to swing around to take position astern of the other cruisers. It was, of course, impossible to tell whether this procedure was responsible for the fact that the enemy batteries did not fire. Planes from Amchitka were bombing Kiska during the bombardment, and this may have preoccupied Japanese personnel to an extent which prevented opening fire. It is more likely, however, that the enemy did not wish to disclose his positions. Not long after firing ceased, the fog closed in. Had the bombardment been delayed an hour, no air spot would have been possible. In the course of the shelling, the following ammunition was expended: 312 rounds 8-inch 55-caliber, 256 rounds 6-inch 47-caliber, 1,158 rounds 5-inch 38-caliber, and 92 rounds 5-inch 25-caliber (about 100 tons).

* * *

During the rest of July, Kiska was bombarded a number of times by destroyers of our blockade force, as follows:

9 July -- Aylwin, 100 rounds
10 July -- Monaghan, 100 rounds
14 July -- Monaghan, 100 rounds
15 July -- Monaghan, 100 rounds
20 July -- Aylwin and Monaghan, 200 rounds
30 July -- Farragut and Hull, 200 rounds

Little return fire was received and there was no conclusive evidence of important damage to the enemy. "However, the purpose, that of harassing the enemy, was accomplished," according to CINCPAC.
The battleship group approached from the north and the cruiser group from the south. Army planes bombed the island during the approach. Task Group George fired on the main camp and Little Kiska for 21 minutes, while Task Group Gilbert fired for 18 minutes on batteries at North Head, South Head, Sunrise Hill, and the submarine base. Enemy resistance was negligible, and the bombardment proceeded exactly according to plan. Indirect fire was used throughout, though the weather was exceptionally clear. Gunnery performance was excellent. The only disquieting occurrences were "frequent reports of submarines caused by at least two porpoises and three whales sighted in the area between the bombardment track and Kiska Harbor." Aerial photos indicated that all targets had been well covered. The Commanding General, 11th Air Force, flew past Kiska in the afternoon and reported that the entire area from north of Salmon Lagoon to south of Gertrude Cove was on fire as a result of shells and bombs. Ammunition expended by the task groups was as follows (about 212 tons):

Caliber                        Task Group George    Task Group Gilbert
5-inch 51 caliber HC                 272                   --
5-inch 38 caliber AA common          942                  615
5-inch 25 caliber AA                 132                1,145

No enemy batteries fired on Task Group George. Only one battery, composed of four 75-mm. guns, was believed to have directed fire at Task Group Gilbert. Radar contacts had been reported at ranges of about 20,000 yards. Starshells were used, but the target was never seen, and daylight searches by aircraft and ships failed to reveal anything. Radar officers later suggested that the contacts might have been caused by triple-trip echoes from Amchitka, about 110 miles away, brought about by unusual atmospheric conditions. Using this hypothesis in the case of the New Mexico, target course and speed could have been developed by the use of ranges of about 23,000 yards, instead of 223,000 (the distance to Amchitka), which finally gave target course approximately parallel to the American force and speed slightly less.
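The radar officers' multiple-trip-echo hypothesis is simple modular arithmetic: an echo returning from beyond the set's unambiguous range is displayed at the true range modulo that interval. A hedged sketch, where the 100,000-yard unambiguous-range interval is our assumption, chosen only because it reconciles the 223,000- and 23,000-yard figures quoted in the text:

```python
def apparent_range(true_range_yd: int, unambiguous_yd: int) -> int:
    """Multiple-trip radar echo: a target beyond the unambiguous range
    is displayed at the true range modulo the unambiguous-range interval."""
    return true_range_yd % unambiguous_yd

# Amchitka at 223,000 yards displayed as a ~23,000-yard contact
print(apparent_range(223_000, 100_000))  # -> 23000

# sanity check of the "about 110 miles" figure, read as nautical miles
print(round(223_000 * 0.9144 / 1852))   # -> 110
```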
The heavy ships of Task Group Gilbert alone punished the phantoms with 518 14-inch shells, 485 8-inch, 25 5-inch 38-caliber, and 76 5-inch 25-caliber. At 0840 on 29 July a Catalina made radar contact with 7 ships about 200 miles northwest of Attu. Contact was maintained until 1045 and then lost. Because of the fog, the vessels could never be identified; perhaps they were engaged in evacuating Kiska. The weather was clear at sea level, with a slight surface haze. The ceiling was about 1,000 feet. The bombardment was coordinated with bombing by 18 Liberators which lasted from 1610 to 1700. Task Group Baker covered targets in the area of Gertrude Cove, Main Camp, the west end of Little Kiska, and the South Head batteries. Task Group King covered North Head and the submarine base. There was no retaliatory fire. Ammunition expended by Task Group King and Task Group Baker totaled about 185 tons. Bombardments were also conducted during this period by the destroyers of the Kiska blockade, two of which remained continually on station. Ten such bombardments were executed between 2 and 15 August by the Abner Read, Aylwin, Farragut, Hull, Monaghan, and Phelps. A total of 994 rounds of 5-inch ammunition was expended.
45. On a whale.
Compiled and formatted by Patrick Clancey
Turkish Journal of Mathematics

On Certain Varieties of Semigroups

Andreas TIEFENBACH
Middle East Technical University, Department of Mathematics, 06531, Ankara-TURKEY

Abstract: In this paper we generalize the class of completely regular semigroups (unions of groups) to the class of local monoids, that is, the class of all semigroups whose local subsemigroups $aSa$ are local submonoids. The lattice of subvarieties of this variety, $\mathbf{L}(\mathcal{L}(\mathcal{M}))$, covers another lattice isomorphic to the lattice of all bands ($[x^2 = x]$). Every band variety $\mathcal{U}$ has as image the variety $F$-$\mathcal{U}$, which is the class of all semigroups where $F$ is a $\mathcal{U}$-congruence ($a \, F \, b \Leftrightarrow aSa = bSb$). It is shown how one can find the laws for $F$-$\mathcal{U}$ for a given band variety $\mathcal{U}$. The laws for $F$-$\mathcal{B}$ are given, and it is shown that $F$-$\mathcal{RB} = \mathcal{L}(\mathcal{G})$ (where $\mathcal{L}(\mathcal{V}) := \{S : aSa \in \mathcal{V} \ \forall a \in S\}$). Turk. J. Math., 22, (1998), 145-152.
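The abstract's central objects are concrete enough to compute with. A minimal sketch (our illustration, not from the paper) of the local subsemigroups $aSa$ and the relation $a \, F \, b \Leftrightarrow aSa = bSb$ on two toy semigroups given by their multiplication rules:

```python
# Local subsemigroups aSa = {a*s*a : s in S} and the relation
# a F b  <=>  aSa = bSb, computed on toy examples (illustration only).

def local_subsemigroup(S, mul, a):
    return frozenset(mul(mul(a, s), a) for s in S)

def f_classes(S, mul):
    classes = {}
    for a in S:
        classes.setdefault(local_subsemigroup(S, mul, a), []).append(a)
    return sorted(sorted(v) for v in classes.values())

# In a group, aSa = S for every a, so F is the universal relation:
Z3 = [0, 1, 2]
print(f_classes(Z3, lambda x, y: (x + y) % 3))   # -> [[0, 1, 2]]

# In a left-zero band (x*y = x), aSa = {a}, so F is equality:
LZ = ["a", "b"]
print(f_classes(LZ, lambda x, y: x))             # -> [['a'], ['b']]
```

The two extremes illustrate the range of the construction: the $F$-classes can collapse everything (groups) or separate everything (bands).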
From r/formula1: Death is a part of life. Yes. I have seen horrible death before too. They aren't kids, for Christ's sake. Let them be. The sport itself is already extremely dangerous. Any accident has the potential to shoot dangerous debris through the halo, yet we still use a halo instead of a fully covered cockpit. Please stop thinking that lawyers and insurance liabilities have to dictate everything.
Answer to a question by Fujita on Variation of Hodge Structures

We first provide details for the proof of Fujita's second theorem for K\"ahler fibre spaces over a curve, asserting that the direct image $V$ of the relative dualizing sheaf splits as the direct sum $V = A \oplus Q$, where $A$ is ample and $Q$ is unitary flat. Our main result then answers in the negative the question posed by Fujita whether $V$ is semiample. In fact, $V$ is semiample if and only if $Q$ is associated to a representation of the fundamental group of $B$ having finite image. Our examples are based on hypergeometric integrals.

Introduction

Important progress in classification theory was stimulated by a theorem of Fujita, who showed ([Fujita78a]) that if $X$ is a compact Kähler manifold and $f : X \to B$ is a fibration onto a projective curve $B$ (i.e., $f$ has connected fibres), then the direct image sheaf $V := f_*\omega_{X|B}$ is a semipositive (i.e., nef) vector bundle on $B$, meaning that each quotient bundle $Q$ of $V$ has degree $\deg(Q) \geq 0$. In the note [Fujita78b] Fujita announced the following stronger result: Theorem 1. (Fujita, [Fujita78b]) Let $f : X \to B$ be a fibration of a compact Kähler manifold $X$ over a projective curve $B$, and consider the direct image sheaf $V := f_*\omega_{X|B}$. Then $V$ splits as a direct sum $V = A \oplus Q$, where $A$ is an ample vector bundle and $Q$ is a unitary flat bundle. Fujita sketched the proof, but referred to a forthcoming article concerning the positivity of the so-called local exponents.

Date: November 14, 2013. AMS Classification: 14D07-14C30-32G20-33C60. The present work took place in the realm of the DFG Forschergruppe 790 "Classification of algebraic surfaces and compact complex manifolds"; the second named author was supported by the DFG grant DE 1442/4-1.
After Fujita's articles there appeared Kawamata's articles [Kaw81], [Kaw82], which proved the conjecture $C_{n,1}$ (the subadditivity of Kodaira dimension for such fibrations, $\mathrm{Kod}(X) \geq \mathrm{Kod}(B) + \mathrm{Kod}(F)$, where $F$ is a general fibre), demonstrating the semipositivity also for the direct images of the higher powers of the relative dualizing sheaf, $W_m := f_*(\omega_{X|B}^{\otimes m}) = f_*(\mathcal{O}_X(m(K_X - f^*K_B)))$. Kawamata's calculations are more directly related to Hodge theory, and especially a simple lemma, concerning the degree of line bundles on a curve whose metric grows at most logarithmically around a finite number of singular points, played a crucial role for semipositivity. Later Kawamata extended his result to the case where the dimension of the base variety of the fibration is greater than one ([Kaw02]). A first purpose of our article is to provide the missing details concerning the proof of the second theorem of Fujita, using Kawamata's lemma and some crucial estimates given by Zucker ([Zuc79]) for the growth of the norm of sections of the $L^2$-extension of Hodge bundles. It is important to have in mind Fujita's second theorem in order to understand the question posed by Fujita in 1982 (Problem 5, page 600 of [Katata83], Proceedings of the 1982 Taniguchi Conference). Saying that a vector bundle $V$ is semi-ample means that the hyperplane divisor $H$ on $P := \mathrm{Proj}(V)$ is such that there exists a positive integer $m$ with $|mH|$ base-point free. In our particular case, where $V = A \oplus Q$ with $A$ ample and $Q$ unitary flat, it simply means that the representation of the fundamental group $\rho : \pi_1(B) \to U(r, \mathbb{C})$ associated to the flat unitary rank-$r$ bundle $Q$ has finite image (cf. theorem 9). The main purpose of this article is to show that the question by Fujita has a negative answer. Theorem 3.
There exist surfaces $X$ of general type endowed with a fibration $f : X \to B$ onto a curve $B$ of genus $\geq 3$, and with fibres of genus 6, such that $V := f_*\omega_{X|B}$ splits as a direct sum $V = A \oplus Q_1 \oplus Q_2$, where $A$ is an ample rank-2 vector bundle, and the flat unitary rank-2 summands $Q_1, Q_2$ have infinite monodromy group (i.e., the image of $\rho_j$ is infinite). In particular, $V$ is not semi-ample. The surfaces in question are constructed using hypergeometric integrals associated to a cyclic group of order 7, and the non-finiteness of the monodromy is a consequence of the classification due to Schwarz ([Schw73]). A minor contribution of the present paper is given also by the following result. Theorem 4. Let $f : X \to B$ be a Kodaira fibration, i.e., $X$ is a surface and all the fibres of $f$ are smooth curves, not all isomorphic to each other. Then the direct image sheaf $V := f_*\omega_{X|B}$ has strictly positive degree; hence $H := R^1 f_*(\mathbb{C}) \otimes \mathcal{O}_B$ is a flat bundle which is not nef (i.e., not numerically semipositive). The underlying philosophy that theorems 4 and 9 convey is that the behaviour of flat bundles which are not unitary is rather wild.

1. Preliminaries and reduction to the semistable case

1.1. Semipositive vector bundles on curves. Let $B$ be a smooth complex projective curve, and assume that $V$ is a holomorphic vector bundle over it, which we identify with its sheaf of holomorphic sections. Definition 5. Consider the projective bundle $P := \mathrm{Proj}(V) = \mathbb{P}(V^\vee)$, and the tautological divisor $H$ such that, $p : P \to B$ being the natural projection, $p_*(\mathcal{O}_P(H)) = V$. Then $V$ is said to be ample (respectively nef, semi-ample) if the divisor $H$ is ample (respectively nef, semi-ample). Recall that obviously ample implies semi-ample, and semi-ample implies nef, while $H$ is nef if and only if $H$ is in the closure of the ample cone, or, equivalently, $H \cdot C \geq 0$ for every effective curve $C$. As we shall recall, the conditions nef and numerically semipositive are equivalent.
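A concrete instance (our illustration, not from the paper) of the dichotomy between nef and ample just described: take $B = \mathbb{P}^1$ and the rank-2 bundle

```latex
\[
  V \;=\; \mathcal{O}_{\mathbb{P}^1}(a)\,\oplus\,\mathcal{O}_{\mathbb{P}^1},
  \qquad a \ge 0 ,
\]
```

so that $P = \mathrm{Proj}(V)$ is the Hirzebruch surface $\mathbb{F}_a$. Every quotient bundle of $V$ has degree $\geq 0$, but the quotient $\mathcal{O}_{\mathbb{P}^1}$ has degree exactly $0$, so $V$ is nef yet not ample; correspondingly $H$ is nef and satisfies $H \cdot \Gamma = 0$ on the section $\Gamma$ of $P$ corresponding to the degree-zero quotient.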
(1) Observe that if $U$ is a quotient bundle of $V$, then $\mathrm{Proj}(U)$ embeds in $\mathrm{Proj}(V)$ and the tautological divisor restricts to the tautological divisor; hence if $V$ is ample (respectively, nef) then each quotient bundle $U$ is also ample (respectively, nef). We give an alternative proof of a result of Hartshorne (Theorem 2.4, page 84 of [Hart71]), which is important for our purposes. Proposition 7. A vector bundle $V$ on a curve is nef if and only if it is numerically semi-positive, i.e., if and only if every quotient bundle $Q$ of $V$ has degree $\deg(Q) \geq 0$, and $V$ is ample if and only if every quotient bundle $Q$ of $V$ has degree $\deg(Q) > 0$. Proof. One implication was essentially observed in greater generality in (1) of remark 6, except that we should show that a nef bundle has nonnegative degree, and an ample bundle has strictly positive degree. By the Leray-Hirsch theorem the cohomology of $P$ is a free module over the cohomology of $B$, and its Chow ring is isomorphic to $\mathbb{Z}[F, H]/(F^2,\, H^r - d\,H^{r-1}F)$, where $r$ is the rank and $d$ the degree of $V$. By the same theorem, for every quotient bundle $Q$ of rank $k$ and degree $d'$ we obtain a projective subbundle $P' := \mathrm{Proj}(Q)$ such that the Chow ring of $P'$ equals $\mathbb{Z}[F, H]/(F^2,\, H^k - d'H^{k-1}F)$. Step 2: if $H$ is not nef, then there exists an irreducible curve $C' \subset P$ such that $H \cdot C' < 0$. The curve $C'$ cannot be contained in a fibre, since $H$ is ample on $F$; hence there exists a finite morphism from the normalization $C$ of $C'$ to $B$, $f : C \to B$. The pull-back of $V$, $W := f^*(V)$, has a quotient line bundle $L$ corresponding to the section $C \subset \mathrm{Proj}(W)$, and $\deg(L) = H \cdot C' < 0$. The pull back of a stable bundle is semistable; hence from step 2 we obtain an inclusion $L^\vee \to f^*(V^\vee)$, and therefore the slope of $L^\vee$, which is strictly positive if $H$ is not nef, is smaller than or equal to the slope $\mu(E_1)$. Hence $V$ has a quotient bundle $(E_1)^\vee$ with strictly negative degree. Step 4: Let us work out the respective cones $\mathrm{Eff}$ of effective curves, resp. $\mathrm{Nef}$ of nef divisors, for $P$.
The latter is a cone in the vector space $NS(P)$ with basis $H, F$, and it is the dual of the cone spanned by effective curves in the dual vector space $N_1(P)$, where we take as basis $L := F \cdot H^{r-2}$ (a line contained in a fibre) and $\Gamma$, where $\Gamma$ is a minimal section, i.e., such that $\Gamma \cdot H =: m$ is minimal (observe that $m \geq 0$ if $H$ is nef). Since the cone $\mathrm{Eff}$ contains $L$ and $\Gamma$, and since $F$, being movable, is nef but not ample, $F$ is a boundary ray of $\mathrm{Nef}$, while $L$ is a boundary ray for $\mathrm{Eff}$. Assume that a curve $aL + b\Gamma$ is effective: then intersecting with $F$, which is movable, hence nef, we get $b \geq 0$, and indeed $b > 0$ unless the curve is contained in a fibre (hence a multiple of $L$). Hence we may assume that the other boundary ray of $\mathrm{Eff}$ is spanned by $\Gamma - aL$, where $a \geq 0$. Its orthogonal divisor class is given by $0 = (xF + yH) \cdot (\Gamma - aL) = x + y(m - a) \Leftrightarrow x = y(a - m)$, hence it is the class $(a - m)F + H$. We get that $H$ is nef (respectively, ample) if and only if $a - m \leq 0 \Leftrightarrow m \geq a$ (resp., $m > a$). Step 5: Assume now that $H$ is nef and not ample: we want to conclude that $\mu(E_1) = 0$, thus concluding that there is a degree zero quotient $V \to (E_1)^\vee$. By Step 4, we get that $a = m$ and that there are irreducible curves $C'$ with class $-\alpha L + \beta\Gamma$, $\beta > 0$, $\alpha > 0$, as soon as $m\beta > \alpha$. On the normalization $C$ of $C'$ we pull back via $f : C \to B$, and observe that $f^*(V)$ has a line bundle quotient $L$ of degree $C' \cdot H = -\alpha + \beta m$. Again this slope, which is nonnegative, is at least the slope of $E_1$. Take now the limit as $\alpha/\beta$ tends to $m$: then we conclude that the slope of $E_1$ satisfies $\mu(E_1) \leq 0$; since $H$ is nef, we already know that $\mu(E_1) \geq 0$, hence $E_1$ has degree zero. Remark 8. In general an extension $0 \to W \to V \to E \to 0$, where $W$ is ample and $E$ is nef of degree zero, does not split.
In fact, the extension class lies in $H^1(B, W \otimes E^\vee)$, the dual space to $H^0(B, E \otimes W^\vee \otimes K_B)$, which is nonzero if $B$ has genus $g \geq 2$ and $\mathrm{rk}(E) = \mathrm{rk}(W) = \deg(W) = 1$. We give here a direct proof of the characterization of semi-ample unitary flat bundles; one step of the proof is related to a more general theorem of Fujiwara ([Fujiw92]), concerning semi-ample bundles with determinant of Kodaira dimension equal to zero. Theorem 9. Let $H$ be a unitary flat vector bundle on a projective manifold $M$, associated to a representation $\rho : \pi_1(M) \to U(r, \mathbb{C})$. Then $H$ is nef, and moreover $H$ is semi-ample if and only if $\mathrm{Im}(\rho)$ is finite. Proof. Since $H$ is unitary flat, $H$ is a Hermitian holomorphic bundle, and by the principle 'curvature decreases in Hermitian subbundles' (page 79 of [GH78], see also [Dem] Prop. VII.6.10) each subbundle has seminegative degree and each quotient bundle $W$ of $H$ has semi-positive degree; hence $H$ is nef. Assume that $H$ is semi-ample, and let $B$ be a general linear curve section of $M \subset \mathbb{P}^N$, so that by Lefschetz' theorem we have a surjection $\pi_1(B) \to \pi_1(M)$. So, w.l.o.g., we may assume that $M$ is a curve $B$. Step I: we shall show that there exists a finite morphism $p : B' \to B$ such that the pull back $p^*(H)$ is a trivial holomorphic bundle. Step II: a unitary flat vector bundle on a projective curve is a trivial holomorphic bundle if and only if the associated representation is trivial. Steps I and II, when put together with the following lemma 10, stating that the image of $\pi_1(B')$ has finite index in $\pi_1(B)$, suffice to prove the difficult implication: since $\rho$ is trivial on the image of $\pi_1(B')$ by Step II, the image $\mathrm{Im}(\rho)$ is finite. Proof of Step I. Let $P := \mathrm{Proj}(H)$, let $\pi : P \to B$ be the projection, and let $F \cong \mathbb{P}^{r-1}$ be a fibre.
The hypothesis that $H$ is semi-ample means that there exists a positive integer $m \geq 1$ such that the linear system $|mH|$ yields a morphism $\psi : P \to \mathbb{P}^N$ which is finite on each fibre, since $\mathcal{O}_F(mH) \cong \mathcal{O}_{\mathbb{P}^{r-1}}(m)$ is very ample. We may choose $r$ divisors $D_1, \dots, D_r \in |mH|$ such that $D_1 \cap \dots \cap D_r \cap F = \emptyset$. Therefore we find $r$ multi-sections $C_1, \dots, C_r$ of $\pi$. Let $B'$ be an irreducible component of the normalized fibre product $C_1 \times_B C_2 \times_B \dots \times_B C_r$: then the pull back $H'$ of $H$ admits $r$ sections of $\mathcal{O}_{P'}(H')$ yielding a birational map to $B' \times \mathbb{P}^{r-1}$. Hence we get an injective homomorphism $\mathcal{O}_{B'}^r \to H'$ whose cokernel $\mathcal{F}$ is concentrated on a finite set. But then, since $0 = \deg(H') = \mathrm{length}(\mathcal{F})$, we obtain the desired isomorphism $\mathcal{O}_{B'}^r \cong H'$. Proof of Step II. Let $B$ be a projective curve, $\rho : \pi_1(B) \to U(r, \mathbb{C})$ a unitary representation, and $H_\rho$ the associated flat holomorphic bundle. Since $\rho$ is unitary, it is a direct sum of irreducible unitary representations $\rho_j$, $j = 1, \dots, k$. Accordingly, we have a splitting $H_\rho = \oplus_{j=1}^k H_{\rho_j}$. Narasimhan and Seshadri have proven (see corollary 1, page 564 of [NS65]) that each $H_{\rho_j}$ is a stable degree zero holomorphic bundle on $B$. Now, if $\rho$ is nontrivial, there exists an $h$ such that $\rho_h$ is nontrivial. Assuming that we have an isomorphism $H_\rho \cong \mathcal{O}_B^r$, we derive a contradiction: in fact, we would have a surjection $\mathcal{O}_B^r \to H_{\rho_h}$, so that $H_{\rho_h}$ would be globally generated; but a globally generated bundle of degree zero is trivial, contradicting the nontriviality of $\rho_h$. Proof of the easy implication. Conversely, if $\mathrm{Im}(\rho)$ is finite, there exists an étale Galois cover $p : M' \to M$ with group $G := \mathrm{Im}(\rho)$ such that $p^*(H)$ is trivial. We have $P = (M' \times \mathbb{P}^{r-1})/G$. For each point $x \in \mathbb{P}^{r-1}$ we consider the $G$-orbit of $x$, and take a linear form $h'$ such that $h'$ does not vanish on the orbit $Gx$: then the product of the $G$-transforms of $h'$ yields a section of $\mathcal{O}(mH')$ (here $m = |G|$) which is $G$-invariant and does not vanish on $x$. Hence $H$ is semi-ample. Lemma 10. Let $p : B' \to B$ be a finite morphism of curves. Then the image $\Gamma$ of $\pi_1(B')$ has finite index in $\pi_1(B)$.

Semistable reduction.
Assume now that $f : X \to B$ is a fibration of a compact Kähler manifold $X$ over a projective curve $B$, and consider the relative dualizing sheaf $\omega_{X|B} := \omega_X \otimes f^*(\omega_B^{-1})$. By Hironaka's theorem there is a sequence of blow ups with smooth centres $\pi : \tilde{X} \to X$ such that all the reduced singular fibres of $f \circ \pi$ are normal crossing divisors. Therefore we shall assume w.l.o.g. that all the reduced fibres of $f$ are normal crossing divisors. Theorem 11. (Semistable reduction theorem, [KKMsD73]) There exists a cyclic Galois covering of $B$, $B' \to B = B'/G$, such that the normalization $X''$ of the fibre product $B' \times_B X$ admits a resolution $X' \to X''$ such that the resulting fibration $f' : X' \to B'$ has all fibres reduced and normal crossing divisors. Remark 12. At each singular fibre $F = \sum_i m_i C_i$, corresponding to a point $t = 0$ on $B$, the theorem yields a base change $t = \tau^n$, where $m_i \mid n\ \forall i$, and $n \gg 0$. As a notation, we denote by $u : B' \to B$ and $v : X' \to X$ the induced morphisms, and we set $V := f_*\omega_{X|B}$ and $V' := f'_*\omega_{X'|B'}$; the cokernel $u^*(V)/V'$ is concentrated on the set of points corresponding to singular fibres of $f'$. Proposition 13. Since $V$ and $V'$ are semipositive by Fujita's first theorem, if $V'$ satisfies the property that for each degree zero quotient bundle $Q'$ of $V'$ there is a splitting $V' = E' \oplus Q'$ for the projection $p : V' \to Q'$, and $Q'$ is unitary flat, then $V'$ splits as the direct sum $V' = A \oplus Q$, where $A$ is an ample vector bundle and $Q$ is a flat unitary bundle, and the same conclusion holds also for $V$. Proof. It suffices to work locally around each point $P' \in B'$ mapping to a point $P$ corresponding to a singular fibre $F$ of $f$, and to consider the base change $t = \tau^n$, where $n$ may be assumed not to depend on the point $P$. By the Hurwitz formula, our assertion would be proven if the divisor $\sum_{P'} (n-1)F' - R$, supported on the inverse images of the singular fibres of $f$, is effective. Let us work locally around the fibre $F'$ of $f'$ which lies above the point $P'$. Lemma 14.
Let $g : X' \to X''$ be a birational morphism between normal varieties: then $g_*\omega_{X'} \subseteq \omega_{X''}$. The first thing to do is to separate in $R$ the $v$-exceptional divisors and the divisors $D_i$, which are the strict transforms of the $F_i$. Recall in fact that, if $\gamma_i = 0$ is a local equation of $F_i$, then the local equation of $D_i$ in the normalization $X''$ becomes $\tau = 0$. Working again locally, we obtain the required effectivity. We are left to prove the second assertion of proposition 13. For this purpose we consider again the Harder-Narasimhan filtration of $V$, where as usual the slope of $V_1$ is maximal. It gives rise to an exact sequence $0 \to U \to V \to Z \to 0$, where $U$ is ample and $\deg(Z) = 0$. We have then a generically invertible homomorphism $V' \to u^*(V)$ between two vector bundles of the same rank. We set $Q := u^*(Z)$, and observe that $V' \to Q = u^*(Z)$ must be surjective, else $V'$ would have a negative degree quotient. Then, by our assumption, it follows that we have a splitting $V' = E' \oplus Q$. Denote by $B^*$ the complement of the branch locus of $B' \to B$, and by $B'^*$ its inverse image. Since $Z = (u_*(Q))^G$, we have that the restriction $Z|_{B^*}$ is a flat bundle associated to a representation $\rho^* : \pi_1(B^*) \to U(r, \mathbb{C})$. However, since $Z$ is a vector bundle on $B$, the restriction of $\rho^*$ to generators of the kernel of the surjection $\pi_1(B^*) \to \pi_1(B)$ is trivial; hence $\rho^*$ factors through $\rho : \pi_1(B) \to U(r, \mathbb{C})$. Since the restriction of $Q$ to $B'^*$ corresponds to the restriction of $\rho^*$ to $\pi_1(B'^*)$, and it is trivial on the kernel of $\pi_1(B'^*) \to \pi_1(B')$, we have shown that $\rho'$ factors through $\rho$. It follows that $Z$ is a flat bundle: we have in fact seen that $Q$ is a quotient $(\tilde{B}' \times \mathbb{C}^r)/\pi_1(B')$, where $\tilde{B}'$ is the universal covering of $B'$, and where the action is determined by the homomorphism $\rho' : \pi_1(B') \to U(r, \mathbb{C})$; this action extends to the orbifold fundamental group, which is defined (see e.g.
[Cat08], pages 101 and following for more details) by the extension $1 \to \pi_1(B') \to \pi_1^{\mathrm{orb}}(B' \to B) \to G \to 1$. We also saw that, since $Q$ is the pull back of $Z$, the representation on $\mathbb{C}^r$ of the orbifold fundamental group $\pi_1^{\mathrm{orb}}(B' \to B)$ factors through the surjection $\pi_1^{\mathrm{orb}}(B' \to B) \to \pi_1(B)$; therefore $Z = (\tilde{B} \times \mathbb{C}^r)/\pi_1(B)$ is a flat unitary bundle over $B$.

Fujita's second theorem

In this section we shall use some standard differential geometric terminology, which we now recall. Definition 15. Let $(E, h)$ be an Hermitian vector bundle on a complex manifold $M$. Take the canonical Chern connection associated to the Hermitian metric $h$, and denote by $\Theta(E, h)$ the associated Hermitian curvature, which gives a Hermitian form on the complex vector bundle $T_M \otimes E$. Then (see for instance [Laz04], and also [Kob87]) one says that $E$ is Nakano positive (resp. semi-positive) if there exists a Hermitian metric $h$ such that the Hermitian form associated to $\Theta(E, h)$ is strictly positive definite (resp. semi-positive definite). If $M$ is a curve, then $i\Theta(E, h) = \sum_{\lambda,\mu} c_{\lambda,\mu}\, e_\lambda^* \otimes e_\mu \otimes dz \wedge d\bar{z}$, and Nakano and Griffiths positivity (resp. semi-positivity) coincide, since they both boil down to the requirement that the Hermitian matrix $(c_{\lambda,\mu})$ is positive definite (resp. semi-positive); we shall then simply say that an Hermitian vector bundle is positive (resp. semi-positive). These notions then imply respectively ampleness and numerical semi-positivity (nefness) of the bundle $E$. We pass now to Fujita's second theorem. Theorem 17. (Fujita, [Fujita78b]) Let $f : X \to B$ be a fibration of a compact Kähler manifold $X$ over a projective curve $B$, and consider the direct image sheaf $V := f_*\omega_{X|B}$. Then $V$ splits as a direct sum $V = A \oplus Q$, where $A$ is an ample vector bundle and $Q$ is a unitary flat bundle. The details of the proof were never published by Fujita.
Thanks to the auxiliary results shown in the previous section, in particular proposition 13, it suffices to prove the theorem in the semistable case, i.e., where each fibre is reduced and a normal crossing divisor. Proof. We first treat the case where there are no singular fibres, where the underlying idea is simpler. Case 1: there are no singular fibres. In this case $V$ is semipositive, as was shown by Fujita in [Fujita78a]. Another proof via Hodge bundles was given by Griffiths in [Griff-70] (see also [Grif84] and [Zuc82]). The underlying idea runs as follows. $V$ is a holomorphic subbundle of the holomorphic vector bundle $H$ associated to the corresponding local system. In fact, we have that $\bar{V}$ is an antiholomorphic subbundle and $V \oplus \bar{V} \subset H$ is a subbundle such that the Hermitian orthogonal splitting $V \oplus \bar{V}$ identifies $\bar{V}$ with the dual bundle $V^\vee$. The bundle $H$ is flat, hence the curvature $\Theta_H$ associated to the flat connection satisfies $\Theta_H \equiv 0$ (in particular, see [Kob87], proposition 3.1 (a), page 42: all the real Chern classes of a flat bundle vanish). We view $V$ as a holomorphic subbundle of $H$, while $V^\vee \cong H/V$ is a holomorphic quotient bundle of $H$. Using arguments similar to the curvature formula for subbundles (see [Grif84], Lecture 2), Griffiths proves ([Griff-70]; see also corollary 5, page 34 of [Grif84]) that the curvature of $V^\vee$ is semi-negative: its local expression is of the form $-i h'(z)\, dz \wedge d\bar{z}$, where $h'(z)$ is a semi-positive definite Hermitian matrix. In particular we have that the curvature $\Theta_V$ of $V$ is semipositive and, moreover, that the curvature vanishes identically if and only if the second fundamental form $\sigma$ vanishes identically, i.e., if and only if $V$ is a flat subbundle. However, by semi-positivity, we get that the curvature vanishes identically if and only if its integral, the degree of $V$, equals zero. Hence $V$ is a flat bundle if and only if it has degree 0.
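Schematically (signs and normalizations suppressed; this is a sketch of the standard Hodge-bundle computation, not a verbatim quotation of Griffiths' formula), the degree of $V$ is controlled by the second fundamental form $\sigma$ of $V \subset H$:

```latex
\[
  \deg(V) \;=\; \frac{i}{2\pi}\int_{B}\operatorname{tr}\Theta_{V}
  \;=\; c\int_{B}\|\sigma\|^{2} \;\ge\; 0, \qquad c > 0,
\]
```

with equality if and only if $\sigma \equiv 0$, i.e. if and only if $V$ is a flat subbundle of $H$.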
The same result then holds true for each holomorphic quotient bundle $Q$ of $V$, by the following argument. Assume now that $V$ is not ample. By Hartshorne's theorem (proposition 7), there is an exact sequence of holomorphic bundles $0 \to W \to V \to Q \to 0$, where the quotient bundle $Q$ has degree 0. Dualizing, we obtain an exact sequence $0 \to Q^\vee \to V^\vee \to W^\vee \to 0$. It is important to remark that we take here the curvature of a flat, but not unitarily flat, bundle $H$; in particular the principle 'curvature decreases in subbundles' (page 79 of [GH78]) does not hold, since this assumes that we take the curvature associated to an Hermitian metric, while the intersection form on $H$ is not definite. Since $V^\vee$ has semi-negative Hermitian curvature, by the cited principle 'curvature decreases in Hermitian subbundles' (page 79 of [GH78], see also [Dem] Prop. VII.6.10) $Q^\vee$ has semi-negative Hermitian curvature. However, $Q^\vee$ has degree 0, thus the integral of the semi-negative curvature of $Q^\vee$ is zero, so its Hermitian curvature $\Theta_{Q^\vee} \equiv 0$; hence $Q^\vee \cong \bar{Q} \subset \bar{V}$ is a flat subbundle of the flat bundle $H$, and similarly $Q$ is a flat bundle. Since we have an inclusion $\bar{Q} \subset \bar{V} \subset H$ of the flat antiholomorphic subbundle $\bar{Q}$, we obtain by complex conjugation an inclusion of the holomorphic subbundle $Q \subset V$, hence a splitting of the surjection $V \to Q$. Finally, $Q$ is unitary flat, since the intersection form on $V$ is, up to constant, strictly positive definite. Case 2: there are singular fibres, which are normal crossing divisors, and the local monodromy is unipotent, since the fibres are reduced. The treatment of the general case is similar: it suffices to show that the degree of $Q$ is the integral of the curvature form on $B^*$, where $B \setminus B^* =: S$ is the set of critical values of $f$. Recall that the degree of the bundle $Q$ is the degree of its top exterior power, the so-called determinant bundle $\det(Q)$. We use here a well known lemma (see lemma 5, page 61 of [Kaw82], also proposition 3.4, page 11 of [Pet84]): Lemma 18.
Let $L$ be a holomorphic line bundle over a projective curve $B$, and assume that $L$ admits a singular metric $h$ which is regular outside of a finite set $S$ and has at most logarithmic growth at the points $p \in S$ (i.e., if $z$ is a local coordinate at $p$, then $|h(z)| \leq C \log|z|^{-m}$, where $C$ is a positive constant and $m$ is a positive integer). Then the first Chern form $c_1(L, h) := \Theta_h$ is integrable on $B$, and its integral equals $\deg(L)$. Let us briefly recall how such a metric is shown to exist. We have the VHS (variation of Hodge structure) on the punctured curve $B^*$ given by the local system $H^*$, where $X^* := f^{-1}(B^*)$ and $F : X^* \to B^*$ is the restriction of $f$ to $X^*$. Again $V^* := V|_{B^*}$ is a subbundle of the flat bundle $\mathcal{H}^* := H^* \otimes_{\mathbb{Z}} \mathcal{O}_{B^*}$, and we get a subbundle $V^* \oplus \bar{V}^* \subset \mathcal{H}^*$. $\mathcal{H}^*$ is a flat holomorphic bundle and the associated holomorphic connection $\nabla^*$ on $\mathcal{H}^*$ is the so-called Gauss-Manin connection. We then have the Deligne canonical extension $(DH, \nabla)$ of the pair $(\mathcal{H}^*, \nabla^*)$ to a holomorphic vector bundle $DH$ endowed with a meromorphic connection $\nabla$ having simple poles at the points of $S$ and with nilpotent residue matrices. We refer to part II of [Kol86] (see especially section 2 and theorem 2.6) for more details about the presentation of this extension, which we now briefly describe. We let $D$ be the normal crossing divisor $f^{-1}(S)$, and consider the relative De Rham complex $\Omega^{\bullet}_{X|B}(\log D)$ with logarithmic singularities along $D$. The hypercohomology sheaf $DH = R^i f_*(\Omega^{\bullet}_{X|B}(\log D))$ gives an extension of $\mathcal{H}^*$ to $B$. In particular, for $i = n := \dim X - 1$ we have, as proven by Kawamata in [Kaw82] (lemma 1, page 59), $V = f_*(\omega_{X|B}) = f_*(\Omega^n_{X|B}(\log D))$.
As explained in [G-S75], and also in proposition 4.4, page 433 of [Zuc79], logarithmic forms are precisely those holomorphic forms with the property of being square integrable, and this approach was taken up by Zucker [Zuc79], who used the explicit description of the limiting Hodge structure found by Schmid in [Schm73] in order to prove the following result (which is proven in the course of the proof of proposition 5.2, pages 435-436). Lemma 19. For each point $s \in S$ there exists a basis of $V$ given by elements $\sigma_j$ such that their norm in the flat metric outside the punctures grows at most logarithmically. In particular, for each quotient bundle $Q$ of $V$ its determinant admits a metric with growth at most logarithmic at the punctures $s \in S$, and the degree of $Q$ is given by the integral of the first Chern form of the singular metric. Remark 20. Observe that a similar result, but for the determinant of $V$, is used in [Pet84] to show the flatness of $V$ in the case that $\deg(V) = 0$. Therefore we can conclude: since $\deg(Q) = 0$, and since this degree is the integral of the norm of the second fundamental form, which is semipositive, the second fundamental form vanishes identically and $Q$ is a flat sub-bundle. The same argument as the one given for case 1 then shows that we have an inclusion $Q^* \to V^* := V|_{B^*}$. Now $Q^* := Q|_{B^*}$ is a unitary flat subbundle of the flat bundle $\mathcal{H}^*$; in particular the local monodromies at the punctures (the points of $S$), being unitary and unipotent, are trivial: hence $Q^*$ has a flat extension to $B$ which we denote by $\hat{Q}$. Clearly we have an inclusion $\hat{Q} \subset V$, and we obtain a homomorphism $\psi : \hat{Q} \to Q$ by composing the inclusion $\hat{Q} \to V$ with the surjection $V \to Q$. From the fact that $\psi$ is an isomorphism over $B^*$ we infer that $\psi$ is an isomorphism, since $\det(\psi)$ is not identically zero and is a section of a degree zero line bundle. Hence we conclude that the composition of $\psi^{-1}$ with the inclusion $\hat{Q} \to V$ gives the desired splitting of the surjection $V \to Q$.
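The triviality of the local monodromies invoked above reduces to an elementary linear-algebra fact: a matrix which is both unitary and unipotent is the identity,

```latex
\[
  T \in U(r), \qquad (T - I)^{r} = 0 \;\Longrightarrow\; T = I,
\]
```

since a unitary matrix is diagonalizable, while a unipotent matrix has all eigenvalues equal to $1$.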
Corollary 21. Let $f : X \to B$ be a fibration of a compact Kähler manifold $X$ over a projective curve $B$, and consider the direct image sheaf $V := f_*\omega_{X|B}$. Then $V$ splits as a direct sum $V = A \oplus Q_1 \oplus \dots \oplus Q_k$, where $A$ is an ample vector bundle and each $Q_i$ is a flat vector bundle without any nontrivial degree zero quotient. Moreover:
(I) if $Q_i$ has rank equal to 1, then it is a torsion bundle ($\exists\, m$ such that $Q_i^{\otimes m}$ is trivial);
(II) if the genus of the curve $B$ equals 1, then each $Q_i$ has rank 1;
(III) in particular, if the genus of the curve $B$ is at most 1, then $V$ is semi-ample.
Proof. Each time $V$ has a degree zero quotient, this yields a splitting, as shown in theorem 17. Therefore we obtain that $V$ splits as a direct sum as stated. (II) This is immediate, since the fundamental group of a curve $B$ of genus 1 is abelian, hence every representation splits as a direct sum of 1-dimensional representations. (III) A torsion line bundle is semi-ample, and a direct sum of semi-ample vector bundles is semi-ample. Remark 22. Part (III) of the above corollary was proven by Barja in [Barja98]. We proceed now to prove Theorem 4. Let $f : X \to B$ be a Kodaira fibration, i.e., $X$ is a surface and all the fibres of $f$ are smooth curves of genus $g \geq 2$, not all isomorphic to each other. Then the direct image sheaf $V := f_*\omega_{X|B}$ has strictly positive degree; hence $H := R^1 f_*(\mathbb{C}) \otimes \mathcal{O}_B$ is a flat bundle which is not nef (i.e., not numerically semipositive). Proof. Since all the fibres of $f$ are smooth, we have an exact sequence $0 \to V \to H \to V^\vee \to 0$, and it suffices to show that the degree of the quotient bundle $V^\vee$ is strictly negative, or, equivalently, $\deg(V) > 0$. We have that $\deg(V) = \chi(\mathcal{O}_X) - (b-1)(g-1)$, where $g$ is the genus of the fibres of $f$ and $b$ is the genus of $B$. As is well known (see [BPHV]), also the genus $b \geq 2$; therefore $X$ contains no rational curve and is a minimal surface. Since $f$ is a differentiable fibre bundle, we have for the Euler-Poincaré characteristic of $X$: $e(X) = 4(b-1)(g-1)$.
Kodaira ([Kod67]) proved that for such fibrations the topological index σ(X), the signature of the intersection form on H^2(X, R), is positive. By the index theorem (see again [BPHV]) we have σ(X) = (K_X^2 − 2e(X))/3, hence χ(O_X) = (K_X^2 + e(X))/12 = (σ(X) + e(X))/4, and therefore deg(V) = χ(O_X) − (b − 1)(g − 1) = (σ(X) + e(X))/4 − e(X)/4 = σ(X)/4 > 0. Corollary 23. There are flat bundles on curves which are not nef, in particular do not admit a Hermitian metric with semipositive curvature. Proof. See b) of the following remark. In this section we explain how we obtain explicit examples of fibrations where V = f_*ω has a flat summand. Remark 25. (I) Observe that changing the generator of G with its opposite has the effect of replacing τ with its opposite. The curve C has genus 6, and the linear subsystems of the canonical system corresponding to the eigensheaves have a base locus, since the greatest common divisor of the elements in V_1 is w^3. Consider now the Hodge decomposition of the cohomology of C, viewed as a G-representation. We conclude the above discussion with its consequence: Proposition 26. Let f : X → B be a semistable fibration of a surface X onto a projective curve, such that the group G = µ_7 acts on this fibration inducing the identity on B. Assume that the general fibre F has genus 6 and that G has exactly 4 fixed points on F, with tangential characters (1, 1, 1, 4). If we split V = f_*(ω_X|B) into eigensheaves, then the eigensheaves V_1, V_2 are unitary flat rank 2 bundles. Proof. Since the fibration is semistable, the local monodromies are unipotent; on the other hand, they are unitary, hence they must be trivial. This implies that the local systems H^*_1 and H^*_2 have respective flat extensions to local systems H_1 and H_2 on the whole curve B. Denote by H_j := H_j ⊗ O_B the associated holomorphic bundle. Now, by our calculations, V_j = H_j over B^* = B \ S, S being the set of critical values of f. We saw that the norm of a local frame of V_j has at most logarithmic growth at the points p ∈ S. This shows that V_j is a subsheaf of H_j: by semipositivity we conclude that we have equality V_j = H_j.
Counterexamples to Fujita's question
In this section we shall provide two examples of surfaces fibred over a curve, with fibres curves with a symmetry of G := Z/7 as in the preceding section. We consider again the equation z_1^7 = y_1 y_0 (y_1 − y_0)(y_1 − x y_0)^4, x ∈ C \ {0, 1}, but we homogenize it to obtain the equation z_1^7 = y_1 y_0 (y_1 − y_0)(x_0 y_1 − x_1 y_0)^4 x_0^3. The above equation describes a singular surface Σ′ which is a cyclic covering of P^1 × P^1 with group G := Z/7; Σ′ is contained inside the line bundle L_1 over P^1 × P^1 whose sheaf of holomorphic sections L_1 equals O_{P^1×P^1}(1, 1). One may observe that the second projection shows that the surface Σ′ is a ruled surface. Since the branch divisor is not a normal crossing divisor, we blow up the point x_0 = y_0 = 0, obtaining a del Pezzo surface which we denote by Z, while we denote by Σ the normalization of the induced G-Galois cover of Z. Remark 27. The singularities of the normal surface Σ are of three analytical types, which we describe by their analytical equation: (1) z^7 = x^4 y: one for each singular fibre; (2) z^7 = x^3 y: three on the fibre at infinity; (3) z^7 = x y: one on the fibre at infinity. Finally, we let Y be a minimal resolution of singularities of Σ. Therefore Y admits a fibration ϕ : Y → P^1 with fibres curves of genus 6. We shall prove in a later subsection the following Theorem 28. The above surface X is a surface of general type endowed with a fibration f : X → B onto a curve B of genus 3, and with fibres of genus 6, such that V := f_*(ω_X|B) splits as a direct sum V = A ⊕ Q_1 ⊕ Q_2, where A is an ample rank-2 vector bundle, and the unitary flat rank-2 summands Q_1, Q_2 have infinite monodromy group (i.e., the image of ρ_j is infinite).
We shall see in the next section how, varying x, we obtain a rank-2 local system over P^1 \ {0, 1, ∞}, which is equivalent, in view of the Riemann-Hilbert correspondence, to a second order differential equation with regular singular points. Indeed, we shall see that we have in fact a Gauss hypergeometric equation. We denote by X(T) the minimal resolution of the singularities of T. (1.2) In our case, the fibre P^1 intersects transversally the two curves locally given by the equation u = y = 0. Some calculations with the resolution of these quotient singularities (see [BPHV], page 80) show that the fibre of the minimal resolution X(T) of T consists of a smooth curve of genus three tangent to E_2 at the intersection point of E_3 and E_2. We need two blow-ups of this point to obtain that the fibre is a normal crossing divisor. Then the multiplicities of the exceptional divisors are respectively 7, 4, 2, 1, 1: we conclude then that in order to obtain the semistable reduction we must take a covering of the base which is ramified at the point P corresponding to the singular fibre of order divisible by 28. (2.1) The A_6 singularity is resolved by a chain of P^1's with self-intersection equal to −2. The fibre of the minimal resolution X(T) of T consists of a smooth curve of genus three intersecting E_3 and E′_3 transversally at the point E_3 ∩ E′_3, and the sum (E_1 + E′_1) + 2(E_2 + E′_2) + 3(E_3 + E′_3). We need just to blow up the point E_3 ∩ E′_3 to obtain a normal crossing divisor. Since the multiplicities of the seven exceptional divisors in the new chain are 1, 2, 3, 7, 3, 2, 1, we conclude then that in order to obtain the semistable reduction we must take a covering of the base which is ramified at the point P corresponding to the singular fibre of order divisible by 42. Observe that ρ determines a Galois covering φ : C → P with Galois group G := Im(ρ). We have that G = µ_n := {ζ | ζ^n = 1}.
Hence we may write α_s = e^{2πi m_s/n}, where 0 ≤ m_s < n, and we set also ν_s := m_s/n. The equation of C is therefore given by z_1^n = ∏_j (y_1 − s_j y_0)^{m_j}. We have an eigenspace splitting for the direct image of the sheaf of holomorphic 1-forms. We want to relate the above summands to local systems on P. To this purpose, observe that any character χ_h : µ_n → C^*, ζ → ζ^h, defines a rank-one local system L_h on P \ S, associated to the homomorphism ρ_h. Let L be any of the L_h: then we have a Hodge decomposition where H^{(1,0)}(P \ S, L) is the space of differentials of the first kind in H^0(P, j^{mer}_* Ω^1(L)) and H^{(0,1)}(P \ S, L) is the complex conjugate of the corresponding group for the dual (i.e., conjugate) local system (e.g., L_{−h} is the complex conjugate of L_h) (cf. [D-M86], page 19 and prop. 2.20). This implies that the rank two connection ∇ is equivalent to (the connection on P \ {0, 1, ∞} associated to) the Gauß hypergeometric differential equation t(t − 1)f″ + ((α + β + 1)t − γ)f′ + αβf = 0 (cf. [Kohno99], page 163). The latter equation is non-resonant (i.e., the difference of two numbers of {α, β, γ} does not lie in Z), implying that the differential equation and hence its monodromy is irreducible. It has the Riemann scheme given in [Kohno99], page 164. Since the Riemann scheme describes the exponents of a basis of solutions of an ordinary differential equation in their respective Puiseux expansions, this implies that the local monodromy of ∇ at 0, 1 is a homology of order 7 and hence is of order 7 in the associated projective linear group. Recall Schwarz' list of the Gauß hypergeometric differential equations with finite projective monodromy groups [Schw73]. As no two consecutive projective local monodromies of order 7 occur amongst the irreducible cases listed there, we conclude that the monodromy of ∇ is infinite. 4.3. Proof of theorem 28.
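The Riemann scheme itself did not survive extraction; for reference, the Gauß hypergeometric equation above has the generic scheme (cf. [Kohno99]):

```latex
P\,\begin{Bmatrix}
  0        & 1                     & \infty &        \\
  0        & 0                     & \alpha & ;\; t  \\
  1-\gamma & \gamma-\alpha-\beta   & \beta  &
\end{Bmatrix}
```

so the exponent differences at 0, 1, ∞ are 1 − γ, γ − α − β and α − β respectively; a local monodromy which is a homology of order 7 forces the corresponding exponent difference to lie in (1/7)Z \ Z. The specific values of α, β, γ for the present family are not reproduced here.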
We considered a ramified covering ψ : B → P which is locally, at each branch point 0, 1, ∞, of type x → x^7. Then we got f : X → B as the minimal resolution of the fibre product B ×_P Y → B. The fibres of f are smooth curves of genus 6 and X has an action of G ≅ µ_7 which is of type (4, 1, 1, 1) on all smooth fibres. There are only three singular fibres, but around them the monodromy of the rank-2 local systems H^*_1, H^*_2 is trivial: we saw in the previous section that the local monodromy is of order 7, and ψ is ramified of order 7 at these points, so the pulled-back local monodromy is trivial. Hence these extend to rank-2 local systems H_1, H_2 over B. The same argument given in proposition 26 shows then that V_j = H_j for j = 1, 2. We have then V = U ⊕ Q_1 ⊕ Q_2, where we set Q_j := V_j = H_j for j = 1, 2, and U := V_3 ⊕ V_4. Assume that U is not ample, and that it contains a unitary flat summand Q′. Without loss of generality, we may assume that Q′|_{B^*} ⊂ H^*_3. Since H^*_3 = (V_3 ⊕ V_4)|_{B^*} we see that Q′ has rank 1. By the cited result by Deligne ((I) in corollary 21) Q′ would be a torsion line bundle, hence so would V_3 and V_4, and the monodromy of H^*_3 (respectively H^*_4) would be finite. However, the integrals associated to the factor H^*_3 (respectively H^*_4) also satisfy a Gauss differential equation with infinite monodromy (again by Schwarz' list, since the local monodromies at 0, 1, ∞ ∈ P are of order ≥ 7): this gives a contradiction. Remark 31. The same considerations apply to the second family of curves that we introduced, and also to other families of curves.
The crème au beurre, aka buttercream, is one of the basic preparations in pastry: it is a light and smooth mixture of fat and sugar, and it can be used both as a filling and as a decoration. Buttercreams are popular for many American- and English-style cakes (as filling, decoration and levelling layer between the actual cake and the icing), for cupcake frosting, but also for some French classics like the Moka cake, Religieuses or the Christmas Yule logs (bûches de Noël). There are several preparation methods for buttercream, but they all have one step in common: creaming the (soft) butter to incorporate air and make the final cream lighter. Then pasteurized eggs (i.e. "cooked" with a hot sugar syrup) are usually added: either whole eggs, just yolks (for richness) or just whites (for lightness). The main methods are:
- Simple American buttercream: made by creaming the butter and powdered sugar until light and fluffy. Pasteurized eggs and flavours can be added optionally. It is very easy to make and for this reason it is widely used, but it has a strong taste which may or may not be appropriate for your recipe.
- French buttercream: egg yolks are whipped, then pasteurized with a hot sugar syrup (the so-called "pâte à bombe"); only when the mixture has cooled down is the soft butter added and creamed. It is probably the most difficult method to master, but the one with the best flavour and smoothest texture. An Italian meringue can be added to the buttercream for additional lightness.
- Italian buttercream: in this case, only egg whites are used, to make an Italian meringue; when the mixture has cooled down, soft butter is added and creamed.
The meringue adds lightness to the cream.
- Swiss buttercream: it is similar to the Italian buttercream, but the Italian meringue is replaced with a Swiss meringue; this means that egg whites are first heated and lightly whipped with sugar over a bain-marie, then whipped in a mixer until they cool down; finally, butter is incorporated and creamed to make a light and glossy buttercream.
- English buttercream: the butter is added to a mixture similar to a custard sauce (crème anglaise) made with egg yolks, sugar and milk.
- Genoise-style buttercream: it is similar to the French buttercream, but whole eggs are used instead of just yolks.
Of course there is also a way to make buttercream with raw (i.e. not pasteurized) eggs but, due to health and safety concerns, it is rare to find it in commercial bakeries, and it is probably better to avoid it anyway! 🙂
Reddit r/WanderingInn: Thanks for sharing! Enjoyable reading. I must say though that the Niers and fellowship arc was a real highlight for me. A Gnoll watchman getting a goblin-based class, the oldest Antinium soldier working under Niers and learning to lead through Unistasis. Goth goblin! And Numbtongue making some strides in accepting his class and its real potential instead of always feeling like the lesser warrior. + Fals and Garia. Not to mention making the Centaur–Gnoll connection, which I think might be significant now that so many new races are rushing to claim land in Izril. Vol. 9 hype is for real anyway!
Term::TablePrint - Print a table to the terminal and browse it interactively.

Version 0.022

my $table = [
    [ 'id', 'name' ],
    [ 1, 'Ruth' ],
    [ 2, 'John' ],
    [ 3, 'Mark' ],
    [ 4, 'Nena' ],
];

use Term::TablePrint qw( print_table );

print_table( $table );

# or OO style:

use Term::TablePrint;

my $pt = Term::TablePrint->new();
$pt->print_table( $table );

print_table shows a table and lets the user interactively browse it. It provides a cursor which highlights the row on which it is located. The user can scroll through the table with the different cursor keys - see "KEYS". If the table has more rows than the terminal, the table is divided into as many pages as needed automatically. If the cursor reaches the end of a page, the next page is shown automatically until the last page is reached. Also, if the cursor reaches the topmost line, the previous page is shown automatically if it is not already the first one. If the terminal is too narrow to print the table, the columns are adjusted to the available width automatically. If the option table_expand is enabled and a row is selected with Return, each column of that row is output in its own line, preceded by the column name. This can be useful if columns were cut because the terminal was too narrow. To get a proper output, print_table uses the columns method from Unicode::GCString to calculate the string length. The following modifications are made (on a copy of the original data) before the output. Leading and trailing spaces are removed from the array elements, and spaces are squashed to a single white-space: s/\p{Space}+/ /g; In addition, characters of the Unicode property Other are removed. In Term::TablePrint the utf8 warnings are disabled: no warnings 'utf8'; The elements in a column are left-justified if one or more elements of that column do not look like a number, else they are right-justified. The new method returns a Term::TablePrint object.
As an argument it can be passed a reference to a hash which holds the options - the available options are listed in "OPTIONS".

my $tp = Term::TablePrint->new( [ \%options ] );

The print_table method prints the table passed with the first argument.

$tp->print_table( $array_ref, [ \%options ] );

The first argument is a reference to an array of arrays. The first of these arrays holds the column names. The following arrays are the table rows, where the elements are the field values. As a second and optional argument, a hash reference can be passed which holds the options - the available options are listed in "OPTIONS". The print_table subroutine prints the table passed with the first argument.

print_table( $array_ref, [ \%options ] );

The subroutine print_table takes the same arguments as the method "print_table". Keys to move around: • the ArrowDown key (or the j key) to move down, and the ArrowUp key (or the k key) to move up. • the PageUp key (or Ctrl-B) to go back one page, the PageDown key (or Ctrl-F) to go forward one page. • the Home key (or Ctrl-A) to jump to the first row of the table, the End key (or Ctrl-E) to jump to the last row of the table. The Return key closes the table if the cursor is on the header row. If keep_header and table_expand are enabled, the table closes by selecting the first row twice in succession. If the cursor is not on the first row: • with the option table_expand disabled, the cursor jumps to the table head if Return is pressed. • with the option table_expand enabled, each column of the selected row is output in its own line, preceded by the column name, if Return is pressed. Another Return closes this output and goes back to the table output. If a row is selected twice in succession, the pointer jumps to the head of the table, or to the first row if keep_header is enabled. If the width of the window is changed and the option table_expand is enabled, the user can redraw the screen by choosing a row.
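For illustration, several of the options documented below can be combined in a single hash reference passed to print_table. This is a sketch only - the values are arbitrary and not recommendations, and only option names that appear in this documentation are used:

```perl
use strict;
use warnings;
use Term::TablePrint qw( print_table );

my $table = [
    [ 'id', 'name' ],
    [ 1, 'Ruth' ],
    [ 2, 'John' ],
    [ 3, 'Mark' ],
];

# Illustrative option values; see OPTIONS for the defaults.
print_table( $table, {
    add_header     => 0,      # no extra numbered header row
    keep_header    => 1,      # repeat the header on every page
    table_expand   => 1,      # Return expands the selected row
    choose_columns => 1,      # SpaceBar marks columns to print
    binary_filter  => 0,      # print binary data as-is
    min_col_width  => 30,
    max_rows       => 50_000,
    mouse          => 0,
} );
```

The same hash reference can equally be passed to the new method, in which case the options apply to every subsequent print_table call on that object.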
If the option choose_columns is enabled, the SpaceBar key (or the right mouse key) can be used to select columns - see option "choose_columns". Defaults may change in a future release. If add_header is set to 1, print_table adds a header row - the columns are numbered starting with 1. Default: 0. If binary_filter is set to 1, "BNRY" is printed instead of arbitrary binary data. If the data matches the regexp /[\x00-\x08\x0B-\x0C\x0E-\x1F]/, it is considered arbitrary binary data. Printing arbitrary binary data could break the output. Default: 0. If choose_columns is set to 1, the user can choose which columns to print. The columns can be marked with the SpaceBar. The list of marked columns, including the highlighted column, is printed as soon as Return is pressed. If choose_columns is set to 2, it is possible to change the order of the columns. Columns can be added (with the SpaceBar and the Return key) until the user confirms with the -ok- menu entry. Default: 0. If keep_header is set to 1, the table header is shown on top of each page. If keep_header is set to 0, the table header is shown on top of the first page only. Default: 1. max_rows sets the maximum number of used table rows. The used table rows are kept in memory. To disable the automatic limit, set max_rows to 0. If the number of table rows is equal to or higher than max_rows, the last row of the output says "REACHED LIMIT", or "=LIMIT=" if "REACHED LIMIT" doesn't fit in the row. Default: 50_000. Columns with a width less than or equal to min_col_width are only trimmed if it is still required to lower the row width even after all columns wider than min_col_width have been trimmed to min_col_width. Default: 30. mouse sets the mouse mode (see option mouse in "OPTIONS" in Term::Choose). Default: 0. The progress bar threshold: if the number of fields (rows x columns) is higher than the threshold, a progress bar is shown while preparing the data for the output. Default: 40_000. Set the number of spaces between columns.
Default: 2. If the option table_expand is set to 1 and Return is pressed, the selected table row is printed with each column in its own line. If table_expand is set to 0, the cursor jumps to the first row (if not already there) when Return is pressed. Default: 1. Set the string that will be shown on the screen instead of an undefined field. Default: "" (empty string). print_table warns • if an unknown option name is passed. print_table dies • if an invalid number of arguments is passed. • if an invalid argument is passed. • if an invalid option value is passed. • if the first argument refers to an empty array. Perl version: requires Perl version 5.8.3 or greater. Decoded strings: print_table expects decoded strings. Encoding layer for STDOUT: for a correct output it is required to set an encoding layer for STDOUT matching the terminal's character set. Monospaced font: a terminal that uses a monospaced font supporting the printed characters is required. You can find documentation for this module with the perldoc command: perldoc Term::TablePrint. Thanks to the Perl-Community.de and the people from stackoverflow for the help. Matthäus Kiem <[email protected]> Copyright 2012-2015 Matthäus Kiem.
CMS Announces Additional 3 Years to Comply with HCBS Regulations Topics: Announcements, Federal News, HCBS, HCBS Rule. CMS released an informational bulletin noting that states will have an additional three years (until 2022) to comply with the settings requirements of the HCBS regulations. CMS notes that states must still work to gain approval for their Statewide Transition Plans by 2019, but that additional time is being provided to achieve compliance in recognition of the complex work necessary.
Enhancement of the expression of HCV core gene does not enhance core-specific immune response in DNA immunization: advantages of the heterologous DNA prime, protein boost immunization regimen Background Hepatitis C core protein is an attractive target for an HCV vaccine aimed to exterminate HCV-infected cells. However, although highly immunogenic in natural infection, core appears to have low immunogenicity in experimental settings. We aimed to design an HCV vaccine prototype based on core, and devise immunization regimens that would lead to potent anti-core immune responses which circumvent the immunogenicity limitations observed earlier. Methods Plasmids encoding core with no translation initiation signal (pCMVcore); with Kozak sequence (pCMVcoreKozak); and with HCV IRES (pCMVcoreIRES) were designed and expressed in a variety of eukaryotic cells. Polyproteins corresponding to HCV 1b amino acids (aa) 1–98 and 1–173 were expressed in E. coli. C57BL/6 mice were immunized with four 25-μg doses of pCMVcoreKozak, or pCMV (I). BALB/c mice were immunized with 100 μg of either pCMVcore, or pCMVcoreKozak, or pCMVcoreIRES, or empty pCMV (II). Lastly, BALB/c mice were immunized with 20 μg of core aa 1–98 in prime and boost, or with 100 μg of pCMVcoreKozak in prime and 20 μg of core aa 1–98 in boost (III). Antibody response, [3H]-thymidine incorporation, and cytokine secretion by core/core peptide-stimulated splenocytes were assessed after each immunization. Results Plasmids differed in core-expression capacity: mouse fibroblasts transfected with pCMVcore, pCMVcoreIRES and pCMVcoreKozak expressed 0.22 ± 0.18, 0.83 ± 0.5, and 13 ± 5 ng core per cell, respectively. Single immunization with highly expressing pCMVcoreKozak induced specific IFN-γ and IL-2, and a weak antibody response. Single immunization with plasmids directing low levels of core expression induced similar levels of cytokines, strong T-cell proliferation (pCMVcoreIRES), and antibodies in titer 10^3 (pCMVcore).
Boosting with pCMVcoreKozak induced a low antibody response, core-specific T-cell proliferation and IFN-γ secretion that subsided after the 3rd plasmid injection. The latter also led to a decrease in specific IL-2 secretion. The best was the heterologous pCMVcoreKozak prime/protein boost regimen, which generated a mixed Th1/Th2 cellular response with core-specific antibodies in titer ≥ 3 × 10^3. Conclusion Thus, administration of a highly expressed HCV core gene, as one large dose or repeated injections of smaller doses, may suppress the core-specific immune response. Instead, the latter is induced by a heterologous DNA prime/protein boost regimen that circumvents the negative effects of intracellular core expression. Background Globally, an estimated 170 million people are chronically infected with hepatitis C virus (HCV), and 3 to 4 million persons are newly infected each year [1,2]. The human immune system has difficulties in clearing the virus in either the acute or the chronic phase of the infection, with up to 40% of patients progressing to cirrhosis and liver failure [3][4][5][6]. Extensive studies have unraveled important reliable correlates of viral clearance [7][8][9][10][11].
This, together with the growing need to diminish the magnitude of HCV-associated liver disease, served as a basis for intensive HCV vaccine research. A series of HCV vaccine candidates have moved into clinical trials [11]. One such is the peptide vaccine IC41, consisting of a panel of MHC class I and class II restricted epitopes adjuvanted by poly-L-arginine, administered to healthy volunteers [12] and to chronic HCV patients, including non-responders to the standard therapy [13,14]. Another therapeutic vaccine employed peptides chosen individually for their ability to induce the strongest in vitro cellular response [15]. In a further vaccine trial, chronic hepatitis C patients received the recombinant HCV envelope protein E1 [16]. The first clinical trial of an HCV DNA vaccine consisting of a codon-optimized NS3/4A gene administered to chronic hepatitis C patients is currently ongoing (CHRONVAC-C®; http://www.clinicaltrials.gov/ct2/results?term=NCT00563173; http://www.bion.no/moter/Vaccine/Matti_S%E4llberg.pdf). So far, none of the peptide or protein vaccines was able to induce a significant improvement in the health conditions of chronic HCV patients, or a significant decrease of HCV RNA load, specifically if compared to the conventional IFN-based therapy [13,15,16]. The vaccine trials have, however, demonstrated that, when achieved, HCV RNA decline in the vaccine recipients correlates with the induction of a strong IFN-gamma T-cell response [13]. Such a response can best be recruited by DNA vaccines, either alone or with the aid of heterologous boosts [11,17]. Indeed, vaccination of chimpanzees showed the ability to elicit effective immunity against heterologous HCV strains using T-cell oriented HCV genetic vaccines that stimulated only the cellular arm of the immune system [17,18]. Ideally, HCV core could be eliminated by a specific vaccine-induced immune response. It is a strong immunogen, with the anti-core immune response evolving very early in infection [35,36].
Early and broad peripheral and intrahepatic CD8+ T-cell and antibody responses to core/core epitopes are registered in chimpanzees controlling HCV infection, but not in chimpanzees that become chronically infected [37][38][39]. In mice, a potent experimentally induced anti-core immune response conferred partial protection against challenge with core-expressing recombinant vaccinia virus [40]. However, despite high immunogenicity in the natural infection, core does not perform well as an immunogen, specifically if introduced as naked DNA [2,[41][42][43]. Attempts to enhance core immunogenicity by targeting HCV core protein to specific cellular compartments [44], co-immunization with cytokine-expressing plasmids [2,41], adjuvants such as CpG [45], or truncated core gene versions [46] had limited or no success. Prime-boost strategies have been used to increase immune responses to a number of DNA vaccines. Immunization regimens have comprised a DNA prime and a viral vector boost, for instance with vaccinia virus [47,48], adenovirus [49], fowlpox [50,51], and retrovirus [52]. Priming with DNA and boosting with protein is another promising approach. This regimen has been studied for HIV [53,54], hepatitis C virus [55,56], anthrax [57], Mycobacteria [58,59], Streptococcus pneumoniae [60] and BVDV [61]. DNA vaccines and recombinant protein vaccines utilize different mechanisms to elicit antigen-specific responses. Due to the production of antigen in transfected cells of the host, a DNA vaccine induces robust T-cell responses, which are critical for the development of T-cell-dependent antibody responses [62]. DNA immunization is also highly effective in priming antigen-specific memory B cells.
In contrast, a recombinant protein vaccine is generally more effective at eliciting antibody responses than cell-mediated immune responses, and may directly stimulate antigen-specific memory B cells to differentiate into antibody-secreting cells, resulting in the production of high-titer antigen-specific antibodies [63]. Therefore, a DNA prime plus protein boost is a complementary approach that overcomes each of their respective shortcomings. A certain improvement of the immune response was achieved after co-delivery of HCV core DNA and recombinant core [2,40,64]. In this study, we have shown that in DNA immunization, a poor core-specific immune response can be a consequence of high levels of intracellular core expression, and that such a response can be improved by using low-expressing core genes, or single core gene primes in combination with recombinant core protein boosts. Plasmids for expression of HCV core The region encoding aa 1-191 of HCV core was reverse-transcribed and amplified from HCV 1b isolate 274933RU (GenBank accession #AF176573) [65] using oligonucleotide primers: sense GATCCAAGCTTATGAGCACGAATCC and antisense GATCCCTCGAGTCAAGCGGAAGCTGG, containing recognition sites of the HindIII and XhoI restriction endonucleases. The amplified DNA was cleaved with HindIII/XhoI and inserted into pcDNA3 (Invitrogen, USA) cleaved with HindIII/XhoI, resulting in pCMVcore. The region encoding aa 1-191 of HCV core was also reverse-transcribed and amplified from HCV isolate 274933RU using another set of primers that carried the Kozak consensus sequence: sense AGCTGCTAGCGCCGCCACCATGAGCACGAATCCT and antisense GATCGTTAACTAAGCGGAAGCTGGATGG, containing recognition sites of the restriction endonucleases NheI and KspAI, respectively. The amplified DNA was cleaved with NheI/KspAI and inserted into the plasmid pCMVE2/p7-2 [66] cleaved with NheI/XhoI, resulting in pCMVcoreKozak.
The region corresponding to the HCV 5'UTR and coding sequences for aa 1-809 was reverse-transcribed and amplified from HCV 1b isolate AD78P1 (GenBank accession #AJ132997) [67], kindly provided by Prof. M. Roggendorf (Essen, Germany), using sense GACCCAAGCTTCGTAGACCGTGCACCAT and antisense CATGCTCGAGTTAGGCGTATGCTCG primers. The amplified DNA was cleaved with HindIII/XhoI and inserted into pcDNA3 cleaved with HindIII/XhoI, resulting in pCMVcoreIRES. HCV 274933RU core differed from HCV AD78P1 core in positions 70 (H versus R), 75 (T versus A), and 147 (V versus T), respectively. Growth of pcDNA3, pCMVcore, pCMVcoreKozak, and pCMVcoreIRES was accomplished in the E. coli strain DH5alpha. Plasmid DNA was extracted and purified with the EndoFree Plasmid Maxi kit (Qiagen GmbH, Germany). The purified plasmids were dissolved in phosphate buffered saline (PBS) and used for in vitro expression assays and for DNA immunization. Quantification of core expression in mouse cells NIH3T3 cells were transfected with either pcDNA3, pCMVcore, pCMVcoreKozak, pCMVcoreIRES, or pEGFP-N1 (Clontech, CA, USA). The percentage of transfection was evaluated by counting the number of GFP-expressing cells per 500 transfected NIH3T3 cells using a fluorescence Leica DM 6000 microscope (Leica Camera AG, Germany). Cells were harvested 48 h post-transfection, counted, and 10^4 cells were lysed in 2× SDS sample buffer. Lysates and samples containing 1 to 50 ng of recombinant core aa 1-173 (corresponding to p21) were run simultaneously on a 12% SDS-polyacrylamide gel and transferred onto a PVDF membrane for calibration. Blots were blocked overnight in PBS-T with 5% non-fat dry milk, stained with polyclonal anti-core antibodies #35-6 (1:5000), followed by secondary anti-rabbit HRP-conjugated antibodies (DAKOPatts AB, Denmark). Signals were detected using the ECL™ system (Amersham Pharmacia Biotech, Ireland). X-ray films were scanned and processed using ImageJ software (http://rsb.info.nih.gov/ij).
The data are presented as Mean Grey Values (MGV). The core content was quantified by plotting the MGV of each sample onto a calibration curve prepared using recombinant core aa 1-173. After core detection, blots were stripped according to the ECL protocol and re-stained with monoclonal anti-tubulin antibodies (Sigma, USA) and secondary anti-mouse HRP-conjugated antibodies (DAKOPatts AB, Denmark). Core content per transfected cell was evaluated after accounting for the percentage of transfection and normalization to the tubulin content per well. Immunofluorescence staining BHK-21 cells were seeded on chamber slides (Nunc International, Denmark) and transfected as above. 24 h post-transfection, the slides were dried, fixed with acetic acid and ethanol (1:3) for 15 min and rinsed thoroughly in distilled water. Fixed cells were re-hydrated in PBS and incubated for 24 h at 4°C with anti-HCV core rabbit polyclonal antibodies (1:50) in blocking buffer (PBS with 2.5 mM EDTA and 1% BSA). Secondary antibodies were goat anti-rabbit immunoglobulins labeled with TRITC (1:200; DAKO, Denmark). Slides were then mounted with PermaFluor aqueous mounting medium (Immunon, PA, USA) and read using a fluorescence microscope. Mice and immunization The following immunizations were performed: Scheme I Groups of 12 female 8-week-old C57BL/6 mice (Stolbovaya, Moscow Region, Russia) were immunized with a total of 100 μg of pCMVcoreKozak, or empty vector, split into four i.m. injections done at 3-4 week intervals. Control mice were mock-immunized with PBS. Scheme II Female 6-8 week-old BALB/c mice (Animal Breeding Centre of the Institute of Microbiology and Virology, Riga) had 50 μl of 0.01 mM cardiotoxin (Latoxan, France) in sterile 0.9% NaCl injected into their Tibialis anterior (TA) five days prior to immunization.
Groups of 6 to 7 mice were immunized with a single 100 µg dose of either pCMVcoreIRES, pCMVcore, pCMVcoreKozak, or empty vector, all dissolved in 100 µl PBS and applied intramuscularly (i.m.) into the cardiotoxin-treated TA. Control mice were left untreated.

Scheme III. Groups of 5 to 6 female 6-8-week-old BALB/c mice pretreated with cardiotoxin were injected i.m. with 100 µg of pCMVcoreKozak and boosted three weeks later with 20 µg of core aa 1-98 in PBS, or primed and boosted subcutaneously with 20 µg of core aa 1-98 in PBS. Control animals were left untreated.

ELISA

Mice were bled from the retro-orbital sinus prior to, and 2 to 3 weeks after, each immunization, or 5 weeks after a single gene immunization (Scheme II). Peptides corresponding to core aa 1-20, 23-43, or 133-142 were coated onto 96-well MaxiSorp plates (Nunc, Denmark), and recombinant core aa 1-98, 1-152, or 1-173 onto 96-well PolySorp plates (Nunc, Denmark). Coating was done overnight at 4°C in 50 mM carbonate buffer, pH 9.6, at an antigen concentration of 10 µg/ml. After blocking with PBS containing 1% BSA for 1 h at 37°C, serial dilutions of mouse sera were applied to the plates and incubated for an additional hour at 37°C. Incubation was followed by three washes with PBS containing 0.05% Tween-20. Afterwards, plates were incubated with horseradish peroxidase-conjugated anti-mouse antibody (Sigma, USA) for 1 h at 37°C, washed, and the substrate OPD (Sigma, USA) was added for color development. Plates were read on an automatic reader (Multiscan, Sweden) at 492 nm. ELISA performed on plates coated with core aa 1-98, 1-152, or 1-173 gave similar results (data not shown). An immune serum was considered positive for anti-core antibodies whenever its specific OD value exceeded, by at least two-fold, the signals generated both by the pre-immune serum reacting with the core-derived antigen and by the immune serum reacting with a BSA-coated plate, with the assays performed simultaneously.
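The two-fold positivity criterion for the ELISA can be expressed compactly. The helper function and the OD readings below are illustrative only, not the study's data:

```python
def anticore_positive(od_immune, od_preimmune, od_bsa):
    """A serum is scored anti-core positive when its OD492 on the
    core-antigen-coated plate exceeds, by at least two-fold, BOTH controls:
    the pre-immune serum on the same antigen, and the same immune serum
    on the BSA-coated plate (assays run simultaneously)."""
    return od_immune >= 2 * od_preimmune and od_immune >= 2 * od_bsa

# Hypothetical OD492 readings:
print(anticore_positive(0.80, 0.15, 0.20))  # exceeds both controls two-fold -> True
print(anticore_positive(0.35, 0.15, 0.20))  # fails the BSA-plate control -> False
```

Requiring both comparisons guards against scoring a serum positive on the strength of non-specific plate binding alone.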
T-cell proliferation assay

For T-cell proliferation tests, mice were sacrificed and spleens were obtained two weeks after each immunization in Scheme I, and three and five weeks after the last immunization in Schemes II and III. Murine splenocytes were harvested using red blood cell lysing buffer (Sigma, USA), and single-cell suspensions were prepared in RPMI 1640 supplemented with 2 mM L-glutamine and 10% fetal calf serum (Gibco BRL, Scotland) at 6 × 10^6 cells/ml. Cells were cultured in U-bottomed microculture plates at 37°C in a humidified 5% CO2 chamber (Gibco, Germany). Cell stimulation was performed with peptides representing core aa 1-20, 23-43, and 34-42 and recombinant core aa 1-98, 1-152, and 1-173 at concentrations of 3.1, 6.25, 12.5, 25.0, 50.0, and 100 µg/ml, all in duplicate. Concanavalin A (ConA) was used as a positive control at 2 µg/ml. Cells were grown for 72 h, after which [3H]-thymidine (1 µCi per well; Amersham Pharmacia Biotech, Ireland) was added. After an additional 18 h, cells were harvested onto cellulose filters and the radioactivity was measured on a beta counter (Beckman, USA). The results are presented as stimulation indexes (SI), calculated as the ratio of the mean cpm obtained in the presence and in the absence of a stimulator (protein or peptide). Empty-vector-immunized and control mice showed SI values of 0.8 ± 0.4. SI values ≥ 1.9 were considered indicators of specific T-cell stimulation.

Quantification of cytokine secretion

For detection of cytokines, cell culture fluids from the T-cell proliferation tests were collected 24 h (IL-2) or 48 h (IL-4 and IFN-γ) after the start of T-cell stimulation. Detection of cytokines in the cell supernatants was performed using commercial ELISA kits (Pharmingen, BD Biosciences, CA, USA) according to the manufacturer's instructions.
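The stimulation-index arithmetic described above amounts to a simple ratio of mean counts. A minimal sketch, with hypothetical duplicate-well cpm values rather than the study's measurements:

```python
def stimulation_index(cpm_stimulated, cpm_unstimulated):
    """SI = mean cpm of [3H]-thymidine incorporation with stimulator
    divided by mean cpm without stimulator."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cpm_stimulated) / mean(cpm_unstimulated)

# Hypothetical duplicate wells (counts per minute):
si = stimulation_index([5200, 4800], [2400, 2600])
print(round(si, 2))  # 2.0
print(si >= 1.9)     # meets the cutoff for specific T-cell stimulation -> True
```

With the background SI of 0.8 ± 0.4 reported for empty-vector and control mice, the ≥ 1.9 cutoff sits well above background plus its spread.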
Cloning and expression

Plasmids were constructed encoding the core of HCV 1b isolate 274933RU without translation initiation signals (pCMVcore) and with the Kozak translation initiation signal (pCMVcoreKozak). Core with the viral translation initiation signal, the IRES, taken in its natural context was derived from HCV 1b isolate AD78P1 [67]. The viral cores had minimal sequence differences, at positions 70, 75, and 147, all three representing homologous substitutions. Expression from these plasmids was tested both in vitro and in cell cultures. Plasmids pCMVcore and pCMVcoreKozak were used as templates for T7-driven mRNA transcription; the mRNA was translated in vitro in the rabbit reticulocyte lysate system. Both mRNAs generated a translation product of approximately 23 kDa, corresponding to the molecular mass of unprocessed HCV core (p23; data not shown). Next, the core-expressing vectors were used to transfect a series of mammalian cell lines. Western blotting of BHK-21 and COS-7 cells transfected with pCMVcore, pCMVcoreKozak, and pCMVcoreIRES using core-specific antibodies demonstrated an accumulation of proteins with the expected molecular mass of 21 kDa, corresponding to core aa 1-171 cleaved from the full-length core by cellular proteases [71,72] (Fig. 1). Minimal amounts of p23 were also detected, specifically after transfection of BHK-21 with pCMVcore and pCMVcoreIRES (Fig. 1). The overall level of HCV core synthesis in BHK-21 cells was somewhat higher than in COS-7 cells (Fig. 1). In both cell lines, the highest level of core expression was achieved with pCMVcoreKozak (Fig. 1, 2). All cells expressing core and immunostained with core-specific antibodies demonstrated the cytoplasmic granular staining characteristic of the processed p21 form of HCV core [72-74] (Fig. 2). The expression capacity of the vectors was then quantified in murine fibroblasts, to reproduce the DNA immunization that was to be done in mice.
Core expression was assessed on Western blots of SDS-PAAG resolving lysates of NIH3T3 cells transfected with the core-expressing and control plasmids (Fig. 3A and 3B). Images of the Western blots were processed using ImageJ software, and individual bands were represented in arbitrary units (Mean Grey Values, MGV). Their correspondence to core quantity was established using calibration curves built with recombinant core aa 1-173 (see Additional file 1), after normalization to the transfection efficiency and the protein content of the samples. Plasmid pCMVcore, with no translation initiation signals, provided the lowest level of core expression (Fig. 3B). The IRES promoted a two-fold increase, and the Kozak sequence a 35-fold increase, of core expression, with > 15 ng of protein produced per expressing cell (Fig. 3B).

Immunization of mice with HCV core DNA

All plasmids were purified by standard protocols in accordance with GLP practice for the preparation of DNA vaccines, and used in a series of mouse immunization experiments.

HCV core DNA in priming and boosts

The plasmid directing the highest level of core expression was selected, and a pilot experiment defining the strategy of DNA immunization was performed. C57BL/6 mice were immunized four times with 25 µg of pCMVcoreKozak, and core-specific antibody and cellular responses were evaluated. No specific response was registered after the 1st injection (data not shown). The immune response generated after the following three boosts is illustrated in Fig. 4. Three injections of 25 µg led to no increase of the core-specific IgG response over the levels achieved after the first two plasmid injections (Fig. 4A). Three plasmid injections generated a better T-cell proliferative response to core and core-derived peptides than two; however, the response could not be boosted further (Fig. 4B). The IFN-γ and IL-2 responses to core were likewise boosted early on, but again, no further boosting was seen after the initial two pCMVcoreKozak injections (Fig. 4C).
Furthermore, the repeated injections led to a significant decrease of IL-2 secretion in response to splenocyte stimulation by recombinant core and by peptides representing the core N-terminus (p < 0.05; Fig. 4B, and data not shown). Core-specific IL-4 secretion was not detected. Thus, the development of core-specific immune responses occurred within six weeks of the start of immunization; repeated boosts with the HCV core gene did not lead to a significant enhancement of core-specific immunity.

HCV core DNA as a single injection

In the next series of experiments, we selected BALB/c mice as a strain expected to support a better Th2-type response with stronger antibody production [75]. Plasmid pCMVcoreKozak was given as a single 100 µg injection, with the effect of repeated intramuscular DNA boosts substituted by pre-treatment of the injection sites with cardiotoxin [76]. The T-cell proliferative response, antibody production, and cytokine secretion were monitored two and five weeks after immunization. Significant responses in the form of core-specific IFN-γ and IL-2 secretion, exceeding the background levels in empty-vector-immunized mice, were detected five weeks after a single administration of the HCV core gene (Fig. 5). Immunization generated no core-specific T-cell proliferative response and only a low titer of core-specific IgG. The antibody response against HCV core has already been shown to develop slowly [46], mirroring the development of the anti-core antibody response in HCV-infected individuals [77]. Here as well, a slow increase in the level of anti-core antibodies was observed 35 days after a single gene injection as compared to the levels detected at day 21 (data not shown). There was no difference between BALB/c and C57BL/6 mice with respect to core-specific IFN-γ secretion (Fig. 4C versus Fig. 5C) or core-specific IgG production (p > 0.05, Mann-Whitney U-test; Fig. 4A versus Fig.
5A and Additional file 1).

High core gene expression affects the core-specific immune response

The magnitude of the anti-core response suggested that increasing the HCV core gene dose, either by a one-time large-dose injection or by repeated injections of smaller doses, did not significantly enhance core-specific immunity. To determine whether this could be influenced by the core expression level, BALB/c mice were immunized with a single dose of the low-expressing core genes with no translation initiation signals (pCMVcore) or with the IRES (pCMVcoreIRES). The results were compared to immunization with the core gene regulated by the Kozak sequence (pCMVcoreKozak) (Fig. 5). The highest anti-core IgG response was raised in mice immunized with pCMVcore, which directed the lowest level of HCV core expression (Fig. 3; Fig. 5A). It was significantly higher than the antibody response induced by pCMVcoreKozak (p < 0.05); the response in pCMVcoreIRES-immunized mice was intermediate (Fig. 5A). The T-cell proliferative response to core and core-derived peptides was stronger in mice immunized with pCMVcoreIRES (Fig. 5B; p < 0.05). While IL-2 secretion was somewhat higher in mice immunized with the highly expressing pCMVcoreKozak, both DNA immunogens provided a similar level of core-specific IFN-γ secretion (Fig. 5C).

Heterologous DNA prime-protein boost regimen

We aimed to see whether the core-specific immune response could be enhanced not by increasing core gene doses but by using heterologous prime-boost immunization regimens. HCV core protein aa 1-98 and pCMVcoreKozak were used to immunize BALB/c mice either separately or in a DNA prime-protein boost regimen. A high titer of core-specific antibodies was achieved only after the heterologous boost (Fig. 6A).
The heterologous regimen effectively induced a proliferative response, both in SI values (p = 0.034, Mann-Whitney U-test) and in the number of positive T-cell proliferation tests (p = 0.014; Fig. 6B), as well as potent core-specific IFN-γ and IL-2 secretion (Fig. 6C). Core-specific IL-4 secretion was, in all cases, very low. The heterologous regimen induced significant anti-core antibody production (Fig. 6A). Sera of mice primed with pCMVcoreKozak and boosted with core aa 1-98 were analysed for the presence of anti-core antibodies of the IgG, IgG1, IgG2a, IgG2b, and IgM subclasses, and the results were compared to the seroreactivity of mice immunized with a single injection of core protein or core-expressing plasmids (Fig. 7). Mice primed with pCMVcoreKozak and boosted with core protein had significantly higher levels of anti-core IgG than mice immunized with pCMVcoreKozak (p = 0.0006, Mann-Whitney U-test) or pCMVcore (p = 0.002); immunization with pCMVcore gave a higher level of IgG than immunization with pCMVcoreKozak (p < 0.05). The heterologous prime/boost group also had increased levels of anti-core IgG1, although the difference from the control group did not reach significance (p < 0.1). Antibodies of the IgG2a or IgG2b subclasses were not found. Low levels of specific anti-core IgM were observed only in mice immunized with recombinant core aa 1-98 (p < 0.1; Fig. 7); these were higher than in mice primed with core DNA and boosted with core protein (p = 0.05). At the same time, core-immunized mice had no anti-core IgG1 or IgG2 (Fig. 7). Thus, the heterologous core DNA prime/core protein boost regimen preferentially induced anti-core IgG, while protein immunization triggered mostly low-level anti-core IgM.

Discussion

The immune response in DNA immunization depends on the amount of antigen produced from the immunogen in vivo, as predetermined by the gene dose and by the gene's capacity to direct efficient antigen expression [78,79].
Normally, the response increases with increasing dose and efficacy of gene expression (for examples, see [80-83]). However, the DNA immunogen used here encodes not just a structural component of the virus but also a pathogenic factor. HCV core protein interacts with a variety of cellular proteins and can suppress host immunity [32,39,71,84,85]. Of importance for HCV vaccine design was to find out to what extent the immune response to HCV core in DNA immunization is influenced, positively or negatively, by the level of core expression as determined by (i) gene dose and (ii) gene expression efficacy. The first issue was addressed in a series of immunizations in which the same dose of HCV core was given as a single injection or split into multiple injections. We and others have earlier observed that repeated HCV core gene boosts do not lead to an enhancement of the core-specific immune response [42,46,68]. On the contrary, both core-specific IFN-γ and IL-2 production [68] and the anti-core antibody response [2,46,64] appear to be down-regulated. Here as well, the overall comparison between immunizations carried out by single and multiple core gene injections in different mouse strains demonstrated that the outcomes of immunization with one 100 µg dose versus two to four 25 µg core gene doses were quite similar (Fig. 4; see also the summary in Additional file 2). Furthermore, the antibody response was not boosted; the T-cell proliferative response and core-specific IFN-γ secretion could not be boosted beyond the levels reached after the initial two injections, and core-specific IL-2 secretion even appeared to be suppressed. Thus, a core-specific immune response can be achieved after a single DNA immunization, while repeated core gene administration may actually suppress core-specific immunity. The issue of translation efficacy was assessed in single-dose immunizations with plasmids directing different levels of HCV core expression.
There are different ways to increase gene expression efficacy, such as the use of strong promoters, optimal species-specific codons, and manipulation of RNA folding [78,79]. An important factor is the efficacy of translation initiation. In the cap-dependent translation of mammalian genes, it is determined by the sequences flanking the AUG initiator codon. High levels of translation are achieved with the Kozak sequence: a guanine at position +4 and an adenine at -3 from the AUG [86,87]. The alternative mechanisms of initiation site selection on eukaryotic cellular and viral mRNAs, including that of HCV, involve translation initiation from IRESs (internal ribosome entry sites/segments) [88]. Located in the 5'-UTR of the viral genome, the HCV IRES is optimized to hijack ribosomes and translation factors from the host for the translation of the HCV polyprotein [89]. Core is tightly involved in the IRES-mediated regulation of HCV translation, with several regulatory signals localized in both the core protein and the core coding sequence [24,90,91]. Thus, the 5'-end of the HCV genome, incorporating the 5'-UTR and the core coding sequence, was harmonized during evolution to provide the levels of core expression essential for the virus. Both cap- and IRES-dependent translation initiation options were employed in the design of the core DNA immunogens. Eukaryotic expression vectors were constructed encoding the core of HCV 1b without a translation initiation signal (pCMVcore), core preceded by the 5'-UTR of HCV 1b isolate AD (pCMVcoreIRES), and core preceded by the consensus Kozak sequence (pCMVcoreKozak). The latter directed the expression of 35-fold more core than the gene devoid of translation initiation signals, and 16-fold more core than the gene regulated by the IRES. However, despite this considerable difference in core expression capacity, the Kozak- and IRES-regulated DNA immunogens induced similar levels of core-specific IFN-γ secretion (Fig. 5C).
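The Kozak-context rule mentioned above (an adenine at −3 and a guanine at +4 relative to the AUG, with the A of AUG counted as +1) can be checked programmatically. A small sketch with made-up DNA contexts, not the HCV or vector sequences:

```python
def strong_kozak(seq, aug_pos):
    """Return True when the ATG starting at aug_pos sits in a strong Kozak
    context as described in the text: an adenine three nucleotides upstream
    (position -3) and a guanine immediately after the codon (position +4)."""
    if seq[aug_pos:aug_pos + 3] != "ATG":
        raise ValueError("no ATG at the given position")
    return seq[aug_pos - 3] == "A" and seq[aug_pos + 3] == "G"

# Made-up DNA contexts (illustrative only):
print(strong_kozak("GCCACCATGGCT", 6))  # A at -3, G at +4 -> True
print(strong_kozak("GCCTCCATGTCT", 6))  # T at -3, T at +4 -> False
```

The first example matches the consensus-like context (gccaccATGG) that a construct such as pCMVcoreKozak would be designed around; the second lacks both determinants.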
Moreover, while IL-2 secretion was somewhat higher in mice immunized with the highly expressing pCMVcoreKozak, the T-cell proliferative response to core and core-derived peptides was stronger in mice immunized with pCMVcoreIRES (Fig. 5B). Thus, high core expression levels did not promote a better core-specific cellular response. DNA-based immunization can induce a potent antibody response, including virus-neutralizing antibodies [92-96]. However, no significant antibody response has ever been induced by core gene immunization unless it was followed by a protein boost [2,64]. The anti-core antibody titers obtained here after immunization with both cap- and IRES-regulated core genes were also low. Interestingly, however, significantly higher titers of anti-core antibodies were obtained in mice that received the least expressed core gene, devoid of any translation regulation signals (pCMVcore; Figs. 5A, 7). Thus, the use of highly expressing HCV core DNA did not promote an effective core-specific antibody response. Altogether, this points to possible adverse effects of high-level as well as prolonged HCV core gene expression. We have additional data in support of this concept from immunization of C57BL/6 mice with a synthetic truncated HCV core gene devoid of the HCV core nucleotide-sequence-dependent regulatory signals. The latter expressed HCV 1b core at five- to six-fold lower levels than the viral full-length core gene [97] but was nevertheless capable of inducing potent core-specific cellular and antibody responses (http://www.meetingsmanagement.com/dna_2004/index.htm) [98]. DNA immunization with antigens co-expressed in natural virus infection can result in inhibition of both protein expression and the specific immune response [99].
Moreover, pathological effects have been reported after repeated immunization with certain microbial genes; for example, the hsp60 gene of Mycobacterium causes necrotizing bronchointerstitial pneumonia and bronchiolitis in healthy mouse recipients, and multifocal regions of cellular necrosis in the lungs when applied therapeutically [100,101]. HCV core is a factor of HCV pathogenicity. It activates cellular and viral promoters [102], induces ER and mitochondrial stress [103,104], regulates apoptosis [105,106] and tumorigenesis [107,108], and induces abnormal lipid metabolism [109]. In experimental systems, core expression leads to the development of diverse pathological effects, including CD4+ T-cell depletion, liver steatosis, insulin resistance, and hepatocellular carcinoma [33,110]. One notable, although controversial, feature is the capacity of HCV core to suppress host immunity [32,39,84,85]. These features of HCV core may explain why, here, a better immune response was achieved after a single immunization with vectors providing comparatively low HCV core expression. Altogether, this points to the necessity of devising alternative immunization regimens that would help circumvent the possible adverse effects of HCV core. Many approaches can be pursued, with DNA vaccination combined with heterologous protein or recombinant viral boosts considered the most promising [11]. The principle of this strategy is to prime antigen-specific T-cells and then, upon repeated exposure to the specific antigen, induce a rapid T-cell expansion. In heterologous boosts, the encoded antigen is delivered in a different form or by a different vehicle [111]. DNA plasmids are well suited for priming, since they are internalized by antigen-presenting cells and can induce antigen presentation via MHC class I or class II.
Such heterologous regimens can be effective when infection occurs with both viral particles and virus-infected cells, and neither the cellular nor the antibody response is sufficient for sterilizing protection or viral clearance if acting alone. This approach may help circumvent the negative effects of intracellular core expression. Indeed, here, the heterologous DNA prime/protein boost strategy was shown to be advantageous over both immunization with core DNA and immunization with the recombinant core protein (Figs. 6, 7). Protein alone performed even worse than single DNA injections (Figs. 6, 7). Only the heterologous DNA prime/protein boost regimen induced significant core-specific antibody production and a potent T-cell response of a mainly Th1 profile. This may be beneficial, since most correlates of spontaneous HCV clearance are Th1-oriented [32,39,84,85].

Conclusion

These data suggest that the administration of a highly expressed HCV core gene, as well as repeated core gene injections, may hamper the core-specific immune response. The boosting effect of repeated core gene injections is transient, as it disappears with subsequent injections. One possible way to enhance the core-specific response is to deliver limited intracellular amounts of core, either by giving lower plasmid doses or by giving vectors with low expression efficacy. An additional option is the use of a heterologous DNA prime/protein boost regimen, which leads to a potent immune response of a mixed Th1/Th2 type. We are currently testing whether transient HCV core gene expression and the acquisition of anti-HCV core immunity affect the immune status and functionality of the immune system in gene recipients.
Kuwait: Where Democracy Did Not Bring More Freedom

There was a time when Kuwaiti women were burning their abayas and demanding equal rights, when the country's music and theatre scenes were famous throughout the region, and when it was a haven for journalists and writers. No more. Political dissent is increasingly silenced and personal freedoms are limited. Opposition media outlets are being closed down despite contrary court rulings; the executive is overriding the judiciary. New media laws make matters worse. Although Article 36 of the constitution guarantees some freedom of speech, prosecutions for offending the emir and criticizing the state are increasing, resulting in jail sentences and sometimes the revocation of citizenship. Discrimination also continues, including the oppression of the bidoon, the large group of stateless people who are accused of hiding their true (Iraqi, Saudi, or Syrian) nationality in order to take advantage of Kuwait's resources. The debate has now reached the surreal level of "offering" them Comoros Islands citizenship. What happened to the country where the revolutionary Iraqi poet Ahmed Matar once found refuge? Where schools were mixed and whisky was legal? "It is the curse of the Bedouin," a female artist says, referring to the naturalization of large numbers of tribesmen in the 1960s and 1970s to garner support for the ruling family. This caused a demographic shift from a majority of the old merchant/townspeople class (hadhar) to a majority of Bedouins. "They ended up in parliament—this is the curse of democracy," an older Kuwaiti says. "Now we are stuck with their backward ideas." These views are common among the hadhar, who claim to be the only real Kuwaitis, the ones who built the country, and who consider themselves modern and open-minded. Although the Bedouins did bring more conservative elements into society, it is not necessarily true that this resulted in today's deteriorating human-rights situation.
That probably has more to do with the government's long-standing divide-and-rule strategy. The rift between hadhar and Bedouin is just one of many in Kuwait. There are divisions between Kuwaitis (both hadhar and Bedouin) and the bidoon, between Sunni and Shiite, and between Kuwaitis and immigrants. There is deep mistrust between these groups, each accusing another of being after Kuwait's wealth and power. The divide-and-rule strategy has backfired: Bedouins are now the government's fiercest critics. "The power of the rulers is based on jelly," the older Kuwaiti says. So what do the rulers do? They shut up the opposition. Perhaps not entirely unrelated to the influx of Bedouins, Kuwait has become more religious, not less. In a textbook on human rights for law students, we read, "The Islamic concept of freedom of expression is much superior to the concept prevalent in the West. Under no circumstances would Islam allow evil and wickedness to be propagated." It is not only the emir who is untouchable—so is Allah. The government attempted in the late 1990s to introduce human rights into the school curriculum. The subject was to be tackled from three perspectives: Islam, international conventions, and the Kuwaiti constitution, in that order. An official pamphlet providing information on the new module stated, "There are rights that cannot be accepted as they are in conflict with the Sharia." Examples were provided: premarital sex, same-sex marriage, and equality between males and females in inheritance laws. The school initiative, which some hoped would solve the problem of citizenship and social divisions, has largely failed. In 2010 the three-year programme was reduced to one year because younger students were deemed intellectually unprepared for information about democracy, the constitution, and human rights. Meanwhile, women are still not treated as equals, and gays have no rights.
Although homosexuality is not forbidden by law, debauchery is, and courts have prosecuted homosexuals under this provision. It is the conservative part of society, even more than the courts, that gays and lesbians must fear.

The Case of the Domestic Workers

All this goes largely unnoticed by Kuwait's large population of low-paid domestic workers. Their concerns are not with political participation, gender equality, or freedom of expression. Hovering just above the poverty line and unprotected by law, they are exposed to exploitation and violence. Their salaries may or may not be paid, proper food or health care may or may not be provided, they may have a room of their own or sleep in the kitchen, they may be treated as part of the family or as slaves. Everything depends on the whims of their sponsors, the family for which they work and which holds their passports. For them, though, there is a ray of hope. Because the issue is less sensitive politically and religiously, it is one of the few on which Kuwait seems to have made progress, not least because people such as Bibi Nasser Al Sabah, granddaughter of the emir, have been pressing for better laws. Recently, the National Assembly passed a law that will grant domestic workers basic rights, such as a weekly day off, holidays, and a minimum wage. Another law will curtail the commercial recruitment agencies and replace them with agencies controlled by the government. Human-rights organizations have cautiously welcomed the laws, lauding the initiative, the first in the Gulf region, while pointing out that many issues remain unsettled. It is unclear when the law will enter into force and how, if at all, it will be enforced. This may very well be the greatest challenge in a country ruled by wasta, the use of connections to get things done and to make police reports disappear. Yet another curse of the tribal Bedouin culture, some say. It is also unclear where Kuwait is headed.
It is still the freest and most liberal of the Gulf States. Blogs such as the one referred to above about gay rights are allowed. But for how long will the more liberal parts of society stand their ground? Many of them, young and old, say they have given up and spend as much time as possible abroad. "Anywhere, as long as it is not here," they say.

Fanack is an independent media organisation, not funded by any state or interest group, that publishes fact-based analysis and background information about the Middle East and North Africa. To guarantee impartiality, articles on the Chronicle are published without by-lines.
For World Refugee Day today, we're highlighting our Child-Friendly Spaces, which are helping Syrian refugee children play and smile again after the trauma they've been through. Read about a small building tucked into a back street in downtown Irbid, Jordan, where World Vision is helping to bridge the gap between Syrian refugee children and vulnerable kids in Jordan.

It's a sunny Monday morning in the northern Jordanian city of Irbid, located just 15 kilometers from the Syrian border. At a community center in the downtown core, a group of children are drawing maps featuring scenes of their neighborhood, while in another room, younger children delight in playing with toys and putting magnetic letters up on a small blackboard under the watchful eye of a social worker. This could be a typical scene at a day care center in almost any part of the world. But here in Jordan, where the conflict in Syria has brought more than 600,000 refugees into the country, the youngsters taking part in these activities represent an effort by World Vision to help bring together children from two very different realities. At this Child-Friendly Space (CFS), Syrian refugee children play, laugh, and learn together with Jordanian kids from the local community. World Vision and a local partner society operate the program. The CFS operates six days per week and serves approximately 200 children from both sides of the border: about 70 percent of the children taking part are Syrian refugees, while the remaining 30 percent are Jordanian. The society's executive director, Fadi Dawagreh, says that the joint program has been operating since March and that there have been few problems in mixing the two groups of children. He points out that some of the younger Syrian refugees have been in Jordan for up to three years already and have only limited memories of their homeland.
The situation for older refugee children can be more difficult, and Dawagreh says that many of these kids prefer to come to the CFS rather than go to local schools, because here they don't feel intimidated by their lack of knowledge of the Jordanian curriculum. For those facing more serious psychological problems, the society's staff has received training to identify symptoms of potential illness and to refer the children to appropriate agencies for help. Looking at the children laughing and playing together, it's virtually impossible to tell the Syrian and Jordanian children apart. Dawagreh says that his group has worked hard to include vulnerable Jordanian children as well as their Syrian counterparts to help ease tensions in the local community. As the number of Syrian refugees in Jordan has risen dramatically since 2011, pressures have been building in host communities as competition for housing, medical care, and school space continues to grow. While the children at the CFS benefit from the attention they receive and their interaction with other kids, their parents are also among the beneficiaries of the project. Not only does the CFS give their children a safe environment in which to play and learn, but it also provides some much-needed distraction during days that can, at times, seem to drag on forever. As one staff member told me, "parents need breaks from their children, especially when we've seen families of five or six people living in the same two rooms for months and years with little hope for the future." Giving parents even a brief respite from the demands of their children helps families endure the separation from their loved ones and friends back in Syria. As for the future, the current group of children will be able to take advantage of the facilities and services of the CFS in Irbid until late July, when a new group of 200 Jordanian and Syrian children will take their places.
This helps maximize the number of vulnerable children who receive support, and it’s hoped that more Jordanian and Syrian children can be brought into similar programs at World Vision Child-Friendly Spaces in other host communities in Jordan. Join our #WorldVisionChat on Twitter today at 11am PST (2pm Eastern) with our expert Sevil Omer and blogger Matthew Paul Turner about the refugee children of Syria. Donate to our Syrian refugee crisis fund to help Syrian children return to school and find safe places to be children again. Join us in prayer for the people of Syria, especially the children. See our prayer points here.
Fat Thursday (German Fetter Donnerstag, Schmotziger Donnerstag, or in areas where carnival is celebrated Weiberfastnacht; Greek: Τσικνοπέμπτη (Tsiknopempti); Polish: Tłusty czwartek; Hungarian: torkos csütörtök) is a traditional Christian feast marking the last Thursday before Lent and is associated with the celebration of Carnival. Because Lent is a time of fasting, the next opportunity to feast would not be until Easter. It is similar to, but should not be confused with, the French festival of Mardi Gras ("Fat Tuesday"). Traditionally it is a day dedicated to eating, when people meet in their homes or cafés with their friends and relatives and eat large quantities of sweets, cakes and other foods usually not eaten during Lent. Among the most popular national dishes served on that day are pączki in Poland, fist-sized donuts filled with rose marmalade and similar to berliners, and faworki, fried dough fingers served with lots of powdered sugar. In Italy, Giovedì Grasso (Fat Thursday) is also celebrated, but it is not very different from Martedì Grasso (Shrove Tuesday). It is also similar to the Greek custom of Tsiknopempti (loosely translatable as "Barbecue Thursday"), which involves the massive consumption of charred meat on the evening of that Thursday, ten days before the beginning of the Great Lent (the Orthodox stop eating meat a week before Lent starts). In Spain this celebration is called jueves lardero, and in Catalan-speaking areas, dijous gras. In Albacete in central Spain, Jueves Lardero is celebrated with a square pastry called a bizcocho and a round pastry called a mona. In Aragon a meal is prepared with a special sausage from Graus, while in Catalonia the tradition is to eat sweet bunyols. In the Rhineland (Germany), Weiberfastnacht is an unofficial holiday. At the majority of workplaces, work ends before noon. Celebrations start at 11:11 am.
In comparison with Rosenmontag, there are hardly any parades, but people wear costumes and celebrate in pubs and in the streets. Beueler Weiberfastnacht ("washerwomen's carnival") is traditionally celebrated in the Bonn district of Beuel. The tradition is said to have started here in 1824, when local women first formed their own "carnival committee". The symbolic storming of the Beuel town hall is broadcast live on TV. In many towns across the state of North Rhine-Westphalia, a ritual "takeover" of the town halls by local women has become tradition. Among other established customs, on that day women cut off the ties of men, which are seen as a symbol of men’s status. The men wear the stumps of their ties and get a Bützchen (little kiss) as compensation.
This school year, Buffalo Grove High School redesigned a computer laboratory into a collaborative, flexible space that allows staff to enhance reading, writing and critical thinking skills among students. The room, called the BGenius Lab, is one way Buffalo Grove is focusing on literacy, a skill needed in high school and beyond. Literacy coaches Kate Glass and Jeff Vlk help teachers design literacy lessons and often co-teach alongside the curriculum experts or teach the lessons in the lab. The two have connected with every teacher in the building and have seen classes ranging from criminal law to biology, Spanish and world history use the lab, supporting students in prep, regular, honors and Advanced Placement courses. “Our reading strategies give students the tools to be independent learners,” Glass said. The space also functions as a teaching demonstration lab where teachers can observe or model best practices. A lesson, for example, provides and reinforces pre-reading and reading strategies in specific subjects to help students better understand language used in articles. Lessons also focus on teaching students to write in a complex manner in a specific subject. Along with building students’ literacy skills, the lab has created a stronger culture of collaboration among staff. Not only are students learning content-specific literacy strategies, but teachers are learning new concepts, too, creating ongoing professional development. And often, staff walking by the room, located in the center of the building, stop in to watch a literacy lesson in action. The layout and furniture inside the classroom also help forge collaboration. Furniture can be rearranged for instruction and group work, and TV screens in the room allow teachers and students to show work being edited.
From subreddit r/Kubuntu: I used `playerctl` to bind media keys to play, pause, etc., through System Settings > Shortcuts > Custom Global Shortcuts. Then I found that I needed to go into System Settings > Shortcuts > Global Shortcuts > Media Controller and there, under "Play/Pause media playback," set the global shortcut to Custom/None. Then I went into VLC > Tools > Preferences > Hotkeys and, for Play/Pause, set Global to "Media Play Pause." One problem I found with this was that if I was running both VLC and Rhythmbox at the same time, the Play/Pause key only worked for VLC. But if only Rhythmbox was running, then the Play/Pause key worked as expected and desired. So now I just make it a point not to run VLC and Rhythmbox simultaneously, which is no great burden. Good luck.
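For anyone landing here later, the pieces fit together roughly like this (a sketch only; it assumes `playerctl` is installed and your players speak MPRIS, and the `--all-players` flag needs playerctl 2.0 or newer):

```shell
# Commands to bind under the custom global shortcuts in System Settings.
# With no --player flag, playerctl talks to the first MPRIS player it
# finds, which is why VLC "wins" when it runs alongside Rhythmbox.
playerctl play-pause        # toggle whichever player is found first
playerctl next              # skip forward
playerctl previous          # skip back

# Bind this instead to target one player explicitly:
playerctl --player=rhythmbox play-pause

# Or (playerctl >= 2.0) send the command to every running player:
playerctl --all-players play-pause
```

Binding the `--player=rhythmbox` form to a second shortcut is another way around the VLC/Rhythmbox conflict without giving up running them together.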
Data science in travel is one of those fields that few people stop to think about. In fact, most people probably don’t understand what it means or how it could benefit their lives. That lack of understanding is unfortunate, as researchers in this field are working to eliminate the dangers of traffic jams by making them easier to predict and avoid. How is it possible to use data to predict something as seemingly random as a traffic jam? By synthesizing various strands of evidence and presenting them coherently, data scientists can predict when traffic jams occur, where they are likely to happen, and how to avoid them. Here is what you need to know about this concept and how it will benefit you.

Data Scientists Are Looking to Eliminate Traffic Jams

First of all, it’s worth knowing what data science in travel is before delving too far into how it will help reduce traffic jams. Data science is the study of information to solve a problem. Scientists and research specialists use this information to find patterns of behavior that can help them predict when something will happen. These predictions will vary, based on the nature of the data. For example, according to this article in Science Daily, data scientists looking to eliminate traffic jams are investigating data surrounding how jams emerge. They look at when jams are most likely to occur, including the days of the week and the times of day. Then they analyze other elements, such as the areas in which jams occur, to better understand how they happen. In many instances, they are likely to look at the design of the roads in an area and talk to a traffic specialist about how poor road design contributes to congestion. By understanding every possible facet of a traffic jam, these scientists can either eliminate its occurrence or find a way to minimize it.
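To make the idea concrete, here is a toy sketch (hypothetical data and function names, not drawn from the Science Daily study) of the day-of-week/time-of-day pattern analysis described above: tally how often jams were observed in each weekday/hour slot and flag slots whose historical jam frequency crosses a threshold.

```python
from collections import defaultdict

def jam_probability(history):
    """Map each (weekday, hour) slot to the fraction of past
    observations in that slot during which a jam was recorded.
    `history` is a list of (weekday, hour, jammed) tuples."""
    counts = defaultdict(lambda: [0, 0])  # slot -> [jam count, total count]
    for weekday, hour, jammed in history:
        slot = counts[(weekday, hour)]
        slot[1] += 1
        if jammed:
            slot[0] += 1
    return {slot: jams / total for slot, (jams, total) in counts.items()}

def likely_jam(history, weekday, hour, threshold=0.5):
    """Predict a jam when the historical frequency for this slot
    meets the threshold; unseen slots default to no prediction."""
    return jam_probability(history).get((weekday, hour), 0.0) >= threshold
```

Real systems fold in far more signals (road design, weather, incidents), but the core move is the same: turn past observations into per-slot probabilities.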
Scientists around the world are focusing on using this information to create a variety of techniques and technologies that will eliminate the danger of traffic jams. So in what ways is data science in travel working to reduce traffic congestion for you?

How Your Cell Phone May Help

There are many different ways that data science could be used to eliminate traffic jams. One of the most promising is the use of cell phone data to collect valuable information about traffic conditions. A study in Germany equipped the cell phones of hundreds of drivers with a data collection app and gathered vital traffic flow information as they drove. The researchers found that this information could be used to predict potential traffic jams and help drivers avoid them. How does this concept work? By identifying where drivers are located, predicting where they are headed, and giving them a better understanding of the traffic conditions in those areas. The researchers behind this study believed that collecting traffic data in this way could help increase social data on traffic jams, create a more connected driving experience, and reduce travel time, vehicle idling time, and driver fatigue. They believed that these methods could not only help decrease the occurrence of traffic jams but also limit major car accidents. Cell phone data collection has become increasingly attractive to many data scientists as a potential way of avoiding traffic congestion. However, a large number of specialists in data science in travel are also looking to use GPS systems as a way of predicting and limiting the impact of traffic jams in an area.

How Will GPS Help?

GPS systems have become one of the most popular ways of navigating because they help plot routes for drivers who have never been to an area.
However, data scientists are looking to implement Intelligent GPS Navigation to create a successful way of predicting when traffic jams will occur. For example, one study used over 500 researchers to create an algorithm for GPS systems that would improve their navigation accuracy by up to 60 percent. Even more importantly, this study was looking to find a way to help drivers stay out of traffic jams. The idea behind this system was to help prevent dangerous congestion by redirecting drivers to less congested routes. Data science in travel has embraced ideas like this one because they use already existing systems to successfully manage traffic flow. Rather than develop entirely new devices that drivers need to master, focusing on flexible and modern GPS systems provides researchers and data scientists with an easier way to help drivers stay out of dangerous traffic conditions. One of the primary goals of this study was to learn how to predict traffic congestion on both a short-term and long-term basis. The idea was to create a comprehensive understanding of the flow of traffic in an area using GPS monitoring. This data would then be synthesized to form a complete grasp of how traffic is affecting the area. Predictions Have Gotten Incredibly Accurate So just how accurate could data science in travel methods like this be in predicting traffic congestion and jams? More accurate than you might expect. For example, Microsoft and the Federal University of Minas Gerais in Brazil have teamed up to create an algorithm that could predict traffic jams by as much as an hour in advance. Their system, known as the Traffic Prediction Project, is using many of the methods described above to help better understand global patterns in traffic. They are also using data sources as diverse as videos from road cameras, Bing traffic maps, social network posts from drivers, and even accident reports to get a comprehensive and accurate look at the way traffic affects areas of the world. 
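One way to picture the rerouting idea (an illustrative sketch, not the algorithm from the study): scale each road segment's base travel time by a live congestion factor and run an ordinary shortest-path search, so a jammed segment naturally pushes the route onto clearer roads.

```python
import heapq

def fastest_route(graph, congestion, start, goal):
    """Dijkstra over travel times. `graph` maps node -> {neighbour:
    base_minutes}; `congestion` maps (node, neighbour) -> a factor
    (1.0 = free-flowing) that scales the base time. Returns
    (total_minutes, path) or None if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    settled = {}
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in settled and settled[node] <= minutes:
            continue
        settled[node] = minutes
        for nxt, base in graph.get(node, {}).items():
            factor = congestion.get((node, nxt), 1.0)
            heapq.heappush(queue, (minutes + base * factor, nxt, path + [nxt]))
    return None
```

With free-flowing roads the direct route wins; mark one segment as jammed (say, factor 3.0) and the search shifts to the longer but clearer alternative.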
By using this information, they believe that they can create a precise understanding of traffic patterns and use it to predict traffic jams. Their system would use a variety of information outlets to identify potential congestion and to warn drivers as jams are about to form. The time lag would vary, depending on the area, but Microsoft believes prediction times of 15-60 minutes are very possible. What is intriguing about this potential data science in travel breakthrough is the fact that it uses current information and technology to make these accurate predictions. If it is successful, it could open up the world of data science to even stronger advances in traffic jam prediction.

Possible Future Breakthroughs

Who knows what the future holds for this interesting and potentially life-saving science? Is there a chance that traffic jams could be eliminated from the world entirely? It is a possibility, though it is one that data scientists are understandably not quite ready to confirm just yet. There is still a lot of ground to cover before that happens. For example, data science in travel methods, such as predictive driving, could identify entertainment events occurring in an area, when people are likely to go to them, when they are likely to get out, and how this would affect traffic. Breakthroughs like this would be essential for people who live in major cities with sports teams or concert venues. However, many other techniques could be used to predict traffic jams. For example, a real-time view of traffic from drones or satellites could show drivers where traffic is becoming congested and suggest alternative routes around these difficult areas. Breakthroughs like this are promising but are still some time in the future. Last, but not least, a form of driver communication could be implemented using social media sites. This communication method could let drivers stay in touch with others on their current path.
Real-time communication like this would be an essential way of staying in touch and avoiding traffic problems. Drivers could learn more about the traffic affecting an area, how heavy it is getting, whether a jam is likely to occur, and whether or not they should consider an alternative route around a congested road. Even better, this information could be compiled by data scientists and used to improve prediction techniques.

Understanding the World of Data Science

The possibilities outlined in this article are just the tip of the iceberg when it comes to data science. While it can’t predict the future with 100 percent accuracy, it can come surprisingly close. Data scientists are skilled at sorting through important and unimportant pieces of data and using them to build a better understanding of behavior. While there is still a long way to go before they can predict and prevent traffic jams flawlessly, data scientists are working hard every day to make it a reality. As a result, it is worth staying on top of the latest breakthroughs in this field. With the right advances, it is just possible that data science in travel could eliminate traffic jams forever.
8 Tips to Improve Your Writing Skills - November 23, 2019 - Posted by: E-planet Educational Services - Category: Learning Methods

What did you write today? Did you send any emails? Maybe you wrote a note to a classmate or a colleague. Or you completed a written assignment, an essay or a report. And did you just leave a message on a friend’s Facebook wall? Even if you’re not taking exams, there’s just no getting away from writing! That’s why learning to write in English is just as important as learning to speak it. But how do you practise writing? Well, don’t worry, we are here to help!

- Reading Matters

It may sound obvious, but this is a great place to start. Regular reading is a stepping stone to better writing. The best writers are also keen readers, and reading on a regular basis is an easy way to start developing your writing skills. Simply read for pleasure and you’ll pick things up subconsciously. Always choose texts that interest you; learning should never be boring! You can read whatever topics you like. Just make sure you read a lot. Expose yourself to a variety of styles, voices and forms of writing. Luckily, you can easily find lots of graded readers and books of various genres created specifically for learners of foreign languages.

- Learning New Vocabulary

It goes without saying that to write with more confidence and fluency, you need to expand your vocabulary. You may understand the meanings of lots of words in English, but if you want to be able to express yourself clearly, you have to be able to use those words correctly. That’s why bare word lists don’t help much; use example sentences, focus on prepositions, and keep your eyes open for phrases, idioms and collocations. For example, learning the word ‘depend’ is easy, but knowing that ‘you depend on something’ is much more useful!

- Mastering Spelling

English spelling can sometimes be really tricky and confusing! ‘Bare’ and ‘bear’ sound the same. But do you mean naked or a large hairy animal?
A ‘desert’ is a hot and dry place like the Sahara, but add one more letter (and some whipped cream!) and you have ‘dessert’, something sweet to eat! There are a lot of words in English that look or sound alike but have very different meanings. Learn these notorious, commonly confused words! Keeping a list of the words you always confuse is a good idea. And always use a pen and paper; no computer means no autocorrect. This way you have to think about how to spell words correctly rather than relying on technology and smart devices to do it for you!

- Playing Devil’s Advocate

Imagine that you have to write an opinion or a ‘for and against’ essay. What side would you support if you talked about it with a friend? Write about the topic from the opposite or a different point of view. This is what we call ‘playing devil’s advocate’: expressing an opinion which you may not agree with. And that’s a great way to learn how to convey opinions in English. Did we also mention that you’ll probably practise words and phrases which you don’t normally use, since you’re writing from a different perspective? Well, that’s a win-win situation!

- Remembering the Grammar Rules

Without good knowledge of grammar, you simply can’t express your thoughts clearly. Make sure to practise your grammar skills from time to time!

- Making an Outline

Using outlines or diagrams and lists of essential ideas or plot points can give you a precious map to follow as you write. This keeps you focused, saves you time, and stops you deviating from the topic. And if you are taking exams, bear in mind that for an examiner a well-structured essay is like maple syrup on pancakes: that extra, necessary finishing touch!

- Using Transitions & Linking Words

To help make your writing flow, you need to use transition and linking words or phrases. These can help you show the relationship between ideas, connect paragraphs and introduce a summary or a conclusion.

- Just… Doing It!
Get a pencil and paper or sit in front of your computer. The first step to improving your writing is to just write and keep on writing. Remember, practice makes perfect! You could also keep a diary in English. It doesn’t matter what you write, just that you’re making a habit of writing every day. Keep in mind that you can’t learn how to write overnight. Writing is actually pretty similar to working out. The first time you do it, you struggle to even finish! And when improving your writing skills seems hard, don’t be disappointed… Ernest Hemingway (one of the most influential writers of the 20th century) once confided to F. Scott Fitzgerald in 1934: “I write one page of masterpiece to ninety-one pages of shit”! Do you have any other suggestions? Have you ever practised your writing in English? Tell us what’s helped you in the comments section below; we’d love to know what you think! Did you find these tips helpful? Don’t keep this article to yourself. Tell your friends; give it a quick share on your favourite social media!
Coast Chart No. 20 New York Bay and Harbor New York. 34.5 x 28 in (87.63 x 71.12 cm) 1 : 80000

This is a rare 1866 U.S. Coast Survey nautical chart or maritime map of New York City, its harbor, and environs. Though variants of this chart appeared earlier, the present example is most likely the first independent issue of this map in its full completed form. This is moreover among the first 19th-century charts to depict New York City as we know it today, including Manhattan, Queens, Brooklyn, the Bronx and Staten Island. Adjacent Jersey City, Newark and Hoboken are also included. It offers comprehensive inland detail throughout, noting street grids, parks, towns and communities. In addition to inland details, this chart contains a wealth of practical information for the mariner, from oceanic depths to harbors and navigation tips on important channels. It also includes tables of lighthouses and beacons, tides and magnetic declination, as well as detailed sailing instructions. The triangulation for this chart was prepared by J. Ferguson and E. Blunt. The topography is by H. L. Whiting, S. A. Gilbert, A. M. Harrison, F. W. Door, C. Rockwell and J. M. E. Chan. The hydrography was accomplished by R. Wainwright and T. A. Craven. The whole was published in 1866 under the supervision of A. D. Bache, Superintendent of the Survey of the Coast of the United States and one of the most influential American cartographers of the 19th century. It is noteworthy that the present example is on thick stock and backed with linen and is not, therefore, the more common Coast Survey annual report issue, but rather an independent production intended for working mariners. The Office of the Coast Survey (1807 - present), founded in 1807 by President Thomas Jefferson and Secretary of the Treasury Albert Gallatin, is the oldest scientific organization in the U.S. Federal Government.
Jefferson created the "Survey of the Coast," as it was then called, in response to a need for accurate navigational charts of the new nation's coasts and harbors. The spirit of the Coast Survey was defined by its first two superintendents. The first superintendent of the Coast Survey was Swiss immigrant and West Point mathematics professor Ferdinand Hassler. Under the direction of Hassler, from 1816 to 1843, the ideological and scientific foundations for the Coast Survey were established. These included using the most advanced techniques and most sophisticated equipment, as well as an unstinting attention to detail. Hassler devised a labor-intensive triangulation system whereby the entire coast was divided into a series of enormous triangles. These were in turn subdivided into smaller triangulation units that were then individually surveyed. Employing this exacting technique on such a massive scale had never before been attempted. Consequently, Hassler and the Coast Survey under him developed a reputation for uncompromising dedication to the principles of accuracy and excellence. Unfortunately, despite being a masterful surveyor, Hassler was abrasive and politically unpopular, twice losing congressional funding for the Coast Survey. Nonetheless, Hassler led the Coast Survey until his death in 1843, at which time Alexander Dallas Bache, a great-grandson of Benjamin Franklin, took the helm. Bache was fully dedicated to the principles established by Hassler, but proved more politically astute and successfully lobbied Congress to liberally fund the endeavor. Under the leadership of A. D. Bache, the Coast Survey completed its most important work. Moreover, during his long tenure with the Coast Survey, from 1843 to 1865, Bache was a steadfast advocate of American science and navigation and in fact helped found the National Academy of Sciences. Bache was succeeded by Benjamin Peirce, who ran the Survey from 1867 to 1874.
Peirce was in turn succeeded by Carlile Pollock Patterson, who was Superintendent from 1874 to 1881. In 1878, under Patterson's superintendence, the U.S. Coast Survey was reorganized as the U.S. Coast and Geodetic Survey (C & GS) to accommodate topographic as well as nautical surveys. Today the Coast Survey's work continues within the National Oceanic and Atmospheric Administration (NOAA), through the Office of Coast Survey and the National Geodetic Survey. Very good. Working sea chart. Independently issued edition backed with fresh linen. Some minor stains at top margin. Rumsey 5328.000 (1866 edition), 1234.115 (1877 edition). Phillips, 2141. LeGear, Atlases of the United States, L2307. Phillips (America), p. 492.
It's not about the problem of trust. You know... there are a lot of deceivers. If you aren't firm, don't get me into trouble. Bankruptcy is possible! Laurie's a good boy. Congratulations. When did you become a grandma? OK. That's it. What's up? Tell Helen: Don't send him the cheque. Your friend's not reliable. What are you going to do now? Go back to your room, Laurie. I'm talking to you! Fine. What? I mean, what will you do? Calm down. Are you gonna raise that kid? Yeah. You've made up your mind.
Dzian! + Sweatheart @ Tea Bazaar 12/10/10 9pm, FRIDAY, December 10 The Twisted Branch Tea Bazaar 414 E. Main St, Charlottesville, VA Tonight, as you carefully select your attire for the following morn, please don’t forget your dancing shoes! Hide those arctic boots away! It promises to be a balmy 44 degrees! Dzian! opens for Philly-based dress-up band Sweatheart, on their return trek northward after their humble attempt to overthrow Art Basel Miami. As for that thoughtfully curated outfit of yours, you have three equally enticing choices: a) 1980’s Aerobicise; b) 1960’s A Go-Go; or c) some artfully-constructed combination of the two. As for me, I’ll be rolling tom-boy style on my vintage bike. Now either you’re with me or against me… let’s petition the Tea Bazaar to bust open their ceiling and retrofit a full-size glass display case (i.e. cage) for our Nakashi dancers, just like the Whisky a Go Go. Righty-O, be there or be something resembling a square. 9pm, FRIDAY, 12/10/10, Tea Bazaar, $5
Study of Canine Vaccine Antibody Responses

The canine distemper and parvovirus vaccines are high potency, highly effective biological injections that affect the patient’s immune system. Their judicious use should maximally benefit our patients while minimizing overt and subclinical side effects. This article summarizes a review of canine distemper and parvovirus titer test results for 15 years of patient records. Most dogs remain antibody positive for many years and do not require repeated vaccinations. There are many benefits of titer testing, medically, professionally and financially, for the clinician, the owner and the patient. These results lead to vaccine titer testing and vaccination recommendations.

THE REASONS FOR SELECTIVE VACCINATION

Canine distemper virus (CDV) and parvovirus (CPV) vaccines are considered core vaccines due to the severity of these diseases.1 They are modified live, high potency injections that significantly impact the immune health of the patient and can cause adverse reactions.2,3 The immune system is extremely complex, with the cellular and humoral components interacting and balancing each other. When we inject a vaccine, a high potency antigenic load, we initiate a series of reactions that initially suppress the immune system over ten to 14 days, then stimulate it over the next two weeks.4,5 In most patients, the immune system rebalances itself back to normal. When the immune system is suppressed, for any reason, there is an increased susceptibility to infections. When the immune system is stimulated, for any reason, there is a risk, in predisposed patients, of developing allergies or auto-immune disorders.1,3 Current distemper and parvovirus vaccines are modified live virus (MLV), high potency products that are highly effective at immunizing and protecting dogs against these diseases.1,3 These attenuated modified live vaccine viruses multiply in the patient’s body, infecting the dog without causing clinical disease.
They do result in the production of viral-specific antibodies, which can be measured in serum.6,7,8,9,10 These vaccines are now high potency or high titer, with CPV having more than ten million live viruses in each dose. Our clients are questioning why we vaccinate every dog with every vaccine every year. With a selective vaccination approach at the annual examination, we are frequently not giving any injections. To add value to these consultations, we need to give more attention to a thorough history, nutritional information and physical examination, often including an ophthalmologic examination. We also need to listen to our clients, affirm and compliment their good care of their pets, and show compassion and understanding. Laughing with the client strengthens the vet-client bond. I use the acronym ALL for each appointment: Affirm, Laugh, Learn. As Dr. Ronald Schultz, professor and chair of the department of pathobiological sciences at the University of Wisconsin-Madison, states: “Be wise and immunize, but immunize wisely”.11,12 Current (2011) American Animal Hospital Association (AAHA) recommendations for canine distemper and parvovirus vaccination are to vaccinate puppies at 12 and 16 weeks of age, booster at one year if the last puppy vaccine was given at less than 16 weeks of age, then re-vaccinate at intervals of three years or longer.1 However, AAHA then goes on to say that the efficacy of the vaccines is at least five years:

- “Among healthy dogs, all commercially available distemper vaccines are expected to induce a sustained protective immune response lasting at least five years.”
- “Among healthy dogs, all commercially available MLV-CPV-2 vaccines are expected to induce a sustained protective immune response lasting at least five years.”

PERSISTENCE OF DISTEMPER AND PARVOVIRUS ANTIBODIES AFTER VACCINATION

Total immunity against viral diseases includes:
- Local IgA and IgM
- Humoral immunity of IgG antibodies, both those present in the blood and those that can be produced quickly when the antigen is present
- Cellular immunity or memory
- Other mechanisms.

When we measure antibody titers, we are only documenting the IgG antibodies present in the bloodstream. A negative antibody titer does not mean the patient is susceptible to that disease; it means that no antibodies are present in his blood. That patient may or may not be protected.3 At this time in veterinary practice, serum antibody titers are the only tests we can do. We have limited information about the length of vaccine protection, as it requires either challenge or large epidemiological studies, both of which are extremely expensive. Although a number of publications have looked at whether dogs have protective neutralizing antibodies present after vaccination, this author was unable to find any that sequentially studied CDV and CPV titers in client-owned dogs with known vaccination histories.

Objective: To report on the length of time after vaccination that antibodies persist in dogs.

Data collection: The patient records of Aesops Veterinary Care were reviewed from 2000 to 2014 for CDV and CPV antibody titers measured at the annual examination. After collection, the serum samples were refrigerated or frozen (if stored for longer than three days) and sent to commercial laboratories by overnight courier. The titers were done at Idexx Laboratories in Ontario or at Hemopet in California. Both laboratories ran positive and negative controls for each batch of samples tested. The dogs were tested every two to three years.
Positive titers can be due to vaccination and/or exposure to virulent virus in the environment. None of the patients had a history of overt disease that was confirmed as either canine distemper or parvovirus. However, subclinical, mild or undiagnosed disease is possible. For some of these dogs, vaccinations had been given prior to them becoming patients of Aesops Veterinary Care, so the vaccination history came from other veterinarians, shelters, rescues or pound organizations. The vaccination records of new patients were checked; only dogs with known vaccination histories were included.

Results: The number of years between the last vaccination and the last titer test varies from one to 12. Patients can be lost to follow-up, may die, or the owner, with veterinary consultation, may decide to cease testing since no further vaccinations would be given even if the patient tested negative. This may occur if the patient develops a serious disease such as an auto-immune disorder or cancer. Interestingly, no owner resumed vaccinations at Aesops Veterinary Care once they started titer testing.

Group 1: 15 dogs vaccinated as puppies and not vaccinated since.
- All these dogs tested positive for CDV and CPV titers when they were subsequently tested, one or more years later.
- One dog was positive at the one-year test, but three years after her puppy vaccination (the last of which was at 12 weeks of age), she was negative for both CDV and CPV. She was an English cocker spaniel diagnosed as a puppy with persistent right aortic arch that had been surgically corrected. However, her severe megaesophagus remained a lifelong problem resulting in variable nutrition.
- Of the 14 dogs who always tested positive, tests have been done up to nine years post-vaccination. a) Nine dogs were positive five or more years after their puppy vaccination. b) Five dogs were positive eight or nine years after their last vaccination.
Group 2: 31 dogs vaccinated as puppies, given a booster one year later and not vaccinated since.
- Two of these 31 dogs tested negative (7%). A Papillon with epilepsy tested negative for CPV five years after his last vaccination. A German shepherd, also with epilepsy, tested negative for CDV six years after his last vaccination. Both these dogs were receiving phenobarbital to control their seizures.
- Of the dogs who always tested positive, tests have been done up to 11 years post-vaccination. 27 dogs were positive four or more years after their last vaccination, and all 11 who were tested at eight or more years were positive.

Group 3: 30 dogs vaccinated as puppies, given a booster one year later and vaccinated subsequently one or more times.
- Three of these 30 dogs tested negative (10%).
  a) A Weimaraner tested negative for CPV two years after his last vaccination. As a result, he was given a vaccine booster. Two and five years later he tested positive.
  b) A standard Schnauzer tested negative for CDV six years after his last vaccination. He had inflammatory bowel disease controlled by diet and a low dose of glucocorticoids.
  c) A French bulldog tested negative for both CDV and CPV seven years after his last vaccination.
- Of the 27 dogs who always tested positive, tests have been done up to ten years post-vaccination.
  a) 25 dogs were tested and all were positive four or more years after their last vaccination.
  b) 16 dogs were protected eight to eleven years after their last vaccination.

Summary of results: Of the 76 dogs, 64 have been tested four or more years after their last vaccination. Of these 64 tested dogs, 61 were positive, for a protection rate of 95%. Some dogs have remained protected for at least eight years (24) and at least ten years (six). Six dogs tested negative for either CDV and/or CPV. These were all patients with compromised immune systems due either to a medical condition or their breed.
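The summary arithmetic above can be checked with a short script. All counts are taken directly from the text; nothing new is introduced:

```python
# Verify the protection-rate arithmetic reported in the summary of results.
tested_4yr_plus = 64      # dogs tested four or more years after last vaccination
positive_4yr_plus = 61    # of those, dogs with positive titers

protection_rate = positive_4yr_plus / tested_4yr_plus
print(f"Protection rate: {protection_rate:.0%}")  # Protection rate: 95%

# Negative titers across the three groups (1 + 2 + 3 dogs)
negatives = 1 + 2 + 3
print(f"Dogs testing negative: {negatives} of 76")  # Dogs testing negative: 6 of 76
```

61/64 rounds to 95%, and the per-group negatives (one in Group 1, two in Group 2, three in Group 3) sum to the six negative dogs the summary reports.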
There are many reasons for decreased or absent antibody responses.
- Maternal antibody overwhelming the vaccine is an increasing risk due to the high potency of our vaccines. This explains the current AAHA recommendation for the final dose of the initial series between 14 and 16 weeks.1,2
- Subclinical viral infection at the time of vaccination, or no antibody response in the rare animal for unknown reasons, results in an unprotected dog.1,2 It is wise to titer test one year after the last vaccine to ensure the patient did respond.
- Medications with cytotoxic actions can interfere with modified live vaccines, including tetracycline (doxycycline) and clindamycin.
- Cachectic conditions such as malnutrition and major illness result in a poor or absent immune response. These patients should be vaccinated when the problem has been resolved. The same applies to any condition or medication causing immunosuppression.
- Chronic disorders decrease the antibody response to vaccination. These include any cause of decreased nutrition such as megaesophagus, maldigestion, malabsorption, liver disease, and protein-losing diseases.2 This study adds epilepsy controlled with phenobarbital to the list (two dogs).
- Some bloodlines in some breeds have decreased immune systems, including Weimaraners and Rottweilers.1,13 These patients should be titer tested every two years.

These findings confirm the updated selective vaccination recommendations of an annual physical examination and health consultation for each dog.1,2,3 If more than one vaccination injection is required at an annual examination, separate these by four weeks in order to minimize the stress on the dog's immune system and reduce the incidence of adverse reactions.14 Vaccinate at 12 to 13 and 16 to 20 weeks of age, then titer test one year later. Three years after the last vaccination, do a blood titer to see if the dog requires revaccination. Repeat the titer test every three years.
These recommendations are in line with the 2011 AAHA guidelines.1 The benefits of titer testing are that dogs are not vaccinated when they don't require it, owners perceive the veterinarian to be informed, caring and professional, and the veterinarian is rewarded both financially and with the satisfaction of practicing good medicine.

PROTECTION CAN LAST MANY YEARS

Positive titer tests indicate the patient is protected from that infection and disease. The length of time this protection lasts is several years, and possibly the lifespan of the pet.1,3,11,13 It is pertinent to remember that the IgG antibodies are just part of the body's immune protection; they are the part we can measure. In the protected patient, vaccination has no advantages, only potential disadvantages.
Washington: Using the NASA/ESA Hubble Space Telescope, an international team of scientists has identified nine monster stars, far bigger and brighter than our Sun. The team, led by the University of Sheffield, combined images taken with the Wide Field Camera 3 (WFC3) with the unprecedented ultraviolet spatial resolution of the Space Telescope Imaging Spectrograph (STIS) to successfully dissect the young star cluster R136 in the ultraviolet for the first time. R136, which is only a few light-years across and is located in the Tarantula Nebula within the Large Magellanic Cloud, about 170,000 light-years away, hosts many extremely massive, hot and luminous stars. As well as finding dozens of stars exceeding 50 solar masses, this new study revealed a total of nine very massive stars in the cluster, all more than 100 times more massive than the Sun. However, the current record holder R136a1 keeps its place as the most massive star known in the Universe, at over 250 solar masses. The detected stars are not only extremely massive but also extremely bright: together these nine stars outshine the Sun by a factor of 30 million. Lead author Paul Crowther said identifying individual stars in this crowded region of space was only possible because of Hubble, and he praised the work done by astronauts who risked their lives in 2009 to repair the STIS. In 2010 Crowther and his collaborators showed the existence of four stars within R136, each with over 150 times the mass of the Sun. At that time the extreme properties of these stars came as a surprise, as they exceeded the upper-mass limit for stars that was generally accepted at the time. Now, this new census has shown that there are five more stars with more than 100 solar masses in R136. The results gathered from R136 and from other clusters also raise many new questions about the formation of massive stars, as the origin of these behemoths remains unclear. (ANI)
Miękisz wieloramienny

Miękisz wieloramienny (arm parenchyma) is a type of assimilation parenchyma occurring mainly in gymnosperms. The assimilation parenchyma of needle leaves is built of thick-walled cells whose walls fold inward. The cells form layers perpendicular to the axis of the needle, with intercellular spaces between the layers; these are visible only in longitudinal section.

References
Plant tissues
The interior of the Earth is a mystery, especially at greater depths (> 660 km). Researchers only have seismic tomographic images of this region and, to interpret them, they need to calculate seismic (acoustic) velocities in minerals at high pressures and temperatures. With those calculations, they can create 3D velocity maps and figure out the mineralogy and temperature of the observed regions. When a phase transition occurs in a mineral, such as a crystal structure change under pressure, scientists observe a velocity change, usually a sharp seismic velocity discontinuity. In 2003, scientists observed in a lab a novel type of phase change in minerals — a spin change in iron in ferropericlase, the second most abundant component of the Earth's lower mantle. A spin change, or spin crossover, can happen in minerals like ferropericlase under an external stimulus, such as pressure or temperature. Over the next few years, experimental and theoretical groups confirmed this phase change in both ferropericlase and bridgmanite, the most abundant phase of the lower mantle. But no one was quite sure why or where this was happening.

[Figure: Cold, subducting oceanic plates are seen as fast velocity regions in (a) and (b), and warm rising mantle rock is seen as slow velocity regions in (c). Plates and plumes produce a coherent tomographic signal in S-wave models, but the signal partially disappears in P-wave models. Credit: Columbia Engineering]

In 2006, Columbia Engineering Professor Renata Wentzcovitch published her first paper on ferropericlase, providing a theory for the spin crossover in this mineral. Her theory suggested it happened across a thousand kilometers in the lower mantle.
Since then, Wentzcovitch, who is a professor in the applied physics and applied mathematics department, earth and environmental sciences, and the Lamont-Doherty Earth Observatory at Columbia University, has published 13 papers with her group on this topic, investigating velocities in every possible situation of the spin crossover in ferropericlase and bridgmanite, and predicting properties of these minerals throughout this crossover. In 2014, Wentzcovitch, whose research focuses on computational quantum mechanical studies of materials at extreme conditions, in particular planetary materials, predicted how this spin change phenomenon could be detected in seismic tomographic images, but seismologists still could not see it. Working with a multidisciplinary team from Columbia Engineering, the University of Oslo, the Tokyo Institute of Technology, and Intel Co., Wentzcovitch's latest paper details how they have now identified the ferropericlase spin crossover signal, a quantum phase transition deep within the Earth's lower mantle. This was achieved by looking at specific regions in the Earth's mantle where ferropericlase is expected to be abundant. The study was published on October 8, 2021, in Nature Communications. “This exciting finding, which confirms my earlier predictions, illustrates the importance of materials physicists and geophysicists working together to learn more about what's going on deep within the Earth,” said Wentzcovitch. Spin transition is commonly used in materials like those used for magnetic recording. If you stretch or compress just a few nanometer-thick layers of a magnetic material, you can change the layer's magnetic properties and improve the medium's recording properties. Wentzcovitch's new study shows that the same phenomenon happens across thousands of kilometers in the Earth's interior, taking this from the nano- to the macro-scale.
“Moreover, geodynamic simulations have shown that the spin crossover invigorates convection in the Earth’s mantle and tectonic plate motion. So we think that this quantum phenomenon also increases the frequency of tectonic events such as earthquakes and volcanic eruptions,” Wentzcovitch notes. There are still many regions of the mantle researchers do not understand and spin state change is critical to understanding velocities, phase stabilities, etc. Wentzcovitch is continuing to interpret seismic tomographic maps using seismic velocities predicted by ab initio calculations based on density functional theory. She is also developing and applying more accurate materials simulation techniques to predicting seismic velocities and transport properties, particularly in regions rich in iron, molten, or at temperatures close to melting. “What’s especially exciting is that our materials simulation methods are applicable to strongly correlated materials — multiferroic, ferroelectrics, and materials at high temperatures in general,” Wentzcovitch says. “We’ll be able to improve our analyses of 3D tomographic images of the Earth and learn more about how the crushing pressures of the Earth’s interior are indirectly affecting our lives above, on the Earth’s surface.”
Many times it so happens that a project is delayed or needs extra money to complete the work. However, after completing the project, when the organization performs the debrief, it often sees that many of the causes of such problems could have been avoided by proper advance planning, i.e. by adopting risk management principles. A risk management plan is a plan that identifies future risks on a project and prepares a contingency plan to deal with them. Many professionals think that this plan is a separate and isolated plan; however, this is a wrong assumption. A risk management plan is an integral part of the project management plan and is developed along with it. This plan defines the guidelines on how you will identify project risks and the techniques to manage them. The objective of this plan is to minimize the impact of threats and increase the probability of opportunities. Three steps are required to develop the project risk management plan:

1. Identify Project Risks

The first step is to identify the project risks. Here, you will take help from your team members; together you will identify the project risks. Afterwards, you will go to other project stakeholders to brainstorm more risks. You will also go through any "lessons learned" documents from previous similar projects completed by your organization. Once you complete this step, you will record all this information in the "risk register."

2. Analyze Identified Project Risks

In the second step, you will analyze all identified project risks. This step is required so that you can rank and prioritize the risks, which will help you prepare the contingency plan to manage them. Here you will determine the probability of each risk happening along with its impact on the project. This is followed by ranking and prioritizing the risks.
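The probability-and-impact analysis in step 2 can be sketched as a simple scoring pass over the risk register. The risk names and scores below are illustrative assumptions, not taken from the article:

```python
# Minimal sketch of probability-impact ranking on a risk register.
# Entries are hypothetical; real registers would carry owners, triggers, etc.
risks = [
    {"name": "Key supplier delay", "probability": 0.4, "impact": 8},
    {"name": "Scope creep",        "probability": 0.7, "impact": 5},
    {"name": "Currency windfall",  "probability": 0.2, "impact": 6},  # an opportunity
]

# Score each risk as probability x impact, then rank from highest to lowest.
for risk in risks:
    risk["score"] = risk["probability"] * risk["impact"]

ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: {r["score"]:.1f}')
```

The resulting order (here: scope creep first, then the supplier delay, then the opportunity) is what drives which risks get detailed response plans first.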
Risk analysis is a people-oriented process: you ask the experts [and other contractors] for their opinions. Be very careful when incorporating their opinions into the risk analysis calculations. After completing this process, you will again update the risk register.

3. Plan Responses to Manage Identified Risks

In the third and final step, you will plan the responses to manage the identified risks. Risks can be divided into two categories, opportunities and threats, and you have to plan response strategies for both. Some common strategies are as follows:
- Accept: Here, you simply accept the risk and decide to manage it at the moment it happens. This strategy can be applied to both types of risks, i.e. threats and opportunities.
- Avoid: In this strategy, you try to escape the risk. You change the project plan or the scope of the work so that the risk is avoided. This strategy is used with threats.
- Mitigate: Here, you create a strategy to deal with the risk so that its effect is minimized. This strategy is used with threats.
- Enhance: In this strategy, you try to increase the probability of the event happening so that you can realize the opportunity. This strategy is used with opportunities.

Project risk management is an iterative process, and you have to continuously look for new risks throughout the project life cycle. If you find a new risk, you repeat all these processes as described in this article to manage the newly identified risk.

By Fahad Usmani
Photography by Photo Rack

Article author Fahad Usmani, PMP, PMI-RMP is a blogger on project management topics. He writes on his blog at http://pmstudycircle.com to help professionals pass the PMP Exam. Visit his blog to learn about various PMP Exam study notes.
The biggest Viking exhibition to appear at the British Museum in 30 years reveals treasures and secrets of the Vikings as never before. We went along and these were our favourite remarkable facts:

1. They went berserk and fought naked
Ancient legends talk of a particular type of Viking warrior who would bite into their shields before running into battle, often fighting without armour, and even naked. These warriors were given the name 'Berserkers' ('bear skins' in Old Norse) and this was the origin of the phrase 'to go berserk'. You can almost see the madness in the eyes of the walrus ivory figures representing the Berserkers.

2. They never wore horned helmets
Sorry to crush that illusion, but this was a fanciful image popularised in Victorian times. In fact, only plain, conical metal helmets have been discovered from the Viking period.

3. Their weapons were formidable
A long-handled axe was unearthed, and experts say it could fell a horse with just one blow.

4. They got far. Really far
The land of the Vikings was covered with impenetrable forests and mountains, so they took to the seas, and thanks to their well-designed boats, they forged a global network. Artefacts show that they reached as far west as Canada, as far east as central Asia and as far south as Morocco, bringing an exchange of cultures, customs and material goods that changed the world.

5. They made themselves look scary
Warriors would get tattoos, wear eye decorations and file grooves into their teeth and fill them with colour in order to look more formidable as they approached their adversaries.

6. They were an extravagant bunch
They showed off their wealth with increasingly opulent and downright impractical jewellery such as silver brooches that could poke an eye out and neck rings such as one that weighs an almighty 2kg!

7. They weren't always the victorious, fearless warriors we imagine
Excavations in Weymouth, Dorset revealed a mass grave of Vikings who had all been beheaded.
The separated skulls and lack of battle wounds to the skeletons suggest that an entire crew from a longship succumbed without a fight.

Let us know your thoughts on the Vikings in the comment boxes below! The BP exhibition Vikings: life and legend is at the British Museum 6 March – 22 June 2014. Tickets are £16.50 (concessions available) and can be booked on the British Museum website. If you can't reach the exhibition, look out for Vikings Live being shown at cinemas across the UK on 24 April.
Energize Your Body and Soul

Visiting the spa is one of the best ways of rewarding yourself, especially after a week or day of hard work. It's a place for solace, personal discovery and healing; a place for self-restoration away from the everyday world. Let the unique healing effects of highly trained massage therapists and aestheticians help you rejuvenate. Treatments mainly focus on pleasing the senses and nourishing and stimulating the skin, while still giving every client a renewed sense of relaxation. Here are the top five spa services you can pamper yourself with:

1. Facials
One of your best assets is your face, largely because it's what everyone sees first. Therefore, it is vital to seek services that will make you look more radiant and healthy. Facials encompass a very wide range of procedures including moisturizing, toning and massage. One of the most important and effective facial services offered is exfoliation, which can be performed through chemical peeling or microdermabrasion. All facial procedures are mainly designed to get rid of the excess oil and dirt that gets stuck in the skin pores without damaging and drying out the skin.

2. Body wraps
Body wraps are products that are wrapped around various parts of the body. They contain special ingredients that either enter the body to break down toxins and excess fat or simply draw out these dangerous components. For the best results, it is highly recommended that you get body wraps at least once per week for several consecutive weeks. This procedure is meant to improve your body's contours.

3. Foot scrubs
Walking or standing for the better part of the day can leave your feet sore and/or covered in a thick layer of dead cells. You can get smooth and soft feet by visiting the spa for a foot scrub to remove the caked dead cells and dirt. The attendants always pay special care to the heels and toes. You will always come out of feet pampering sessions with pinkish, healthy soles.

4. Massage
Packages for spa services always contain various types of massage. For instance, a massage can be paired with aromatherapy and relaxation music to create a highly effective way of straightening out nerves. With the help of experienced massage therapists, stress and tension just float away. A full body massage is the way to go.

5. Waxing
Nobody wants to look unkempt and hairy. If you have unwanted hair on your arms and legs, the best and most effective way to get rid of it is waxing. In the waxing process, hot wax is applied over the area you have chosen. This results in smooth skin that does not grow hair for at least a couple of weeks.

A day at the spa leaves you feeling refreshed and ready to face the challenges of another busy day or week. Therefore, it is vital to find the time to pamper and rejuvenate your mind and body at Spa Corydon.
Wood turtle recovery in La Mauricie National Park of Canada

Parks Canada crosses park boundaries to help ensure the future of a unique population of Wood turtles near the Shawinigan River, Quebec.

One of Canada's largest known populations of the Wood turtle (Glyptemys insculpta) inhabits an area near the Shawinigan River at the southern edge of La Mauricie National Park of Canada, in Quebec. Among Wood turtle populations in Quebec, the genetic diversity of the Shawinigan River population is unique. Yet human activity and natural predators – the same factors that led to designating the Wood turtle as vulnerable in Canada – threaten the Shawinigan River population. Committed to ensuring the future of this fragile population, biologists at La Mauricie National Park surveyed the Shawinigan River Wood turtles in collaboration with the ministère des Ressources naturelles et de la Faune du Québec, a local environmental group, and graduate students from nearby universities. The survey, which included tracking adult turtles through radio telemetry, discovered that 40% of the females use a single nesting site close to the park's boundary. Should anything happen to this site, the pressure on the population would be extreme. The nesting site has been protected since 1996. Park biologists and volunteers locate and protect the nests, laying down wire netting to keep out predators. Over one three-year period, such efforts allowed more than 700 hatchlings to safely reach the Shawinigan River, compared to fewer than 100 before the project began. The nesting site land was purchased by the Fondation de la faune du Québec in 2000. Protection continues under the Saint-Lawrence Valley Natural History Society, which manages the area.

[Photo: Wood turtle on a rock © Parks Canada / J. Pleau / 2003]

A well-targeted public education program was established to protect the turtles and their habitat through stewardship. Riverside property owners were educated about the importance of this rare population.
Groups organizing outdoor activities in the area were instructed on how to minimize disturbances to the turtles and their habitat. Where logging activities occur on public lands, forestry practices have been adapted to maintain quality habitat. Using demographic, genetic and habitat-use data, biologists at La Mauricie National Park plan the release of juvenile turtles within the park. By increasing the number of turtles on protected land while continuing to maintain their habitat outside the park, Parks Canada and its partners hope to maintain steady population growth. The future of the La Mauricie Wood turtle population looks bright.
Peñas Altas

Peñas Altas may refer to:

Peñas Altas (Villa Corzo), a settlement in the Mexican state of Chiapas
Peñas Altas (San Luis Potosí), a settlement in the Mexican state of San Luis Potosí
Melanotaenia pierucciae

Melanotaenia pierucciae is a species of rainbowfish from northeastern New Guinea.

References
Froese, Rainer and Pauly, Daniel, eds. (2006). "Melanotaenia pierucciae" in FishBase.
Rainbowfishes
Oldtown, in Penobscot County, lies on the west side of Penobscot River, 12 miles north of Bangor. The towns which bound it are Alton and Argyle on the north, Hudson and Glenburn on the west, Orono on the south and Milford on the east. The last is separated from it by the river. The surface of the town is generally quite even; but a hill of the kind known as a "horse back," runs the entire length north and south. Besides the Penobscot, the water-courses are Pushaw and Birch streams. The first is the outlet of Pushaw Lake, which lies on a portion of the west line of the town. Another stream is the so-called Stillwater River, which is fed by Birch and Pushaw streams, and discharges into the Penobscot by three mouths, two of which are in Oldtown, and one in Orono. Between these and the Penobscot are several islands, of which the largest extends from the middle of the town into Orono on the south. Upon the eastern side of this is situated Oldtown village, and on the west the little hamlet of "Pushaw," and at the southern verge of the town Upper Stillwater village, and post-office. The other principal islands are Orson and Orono islands, and Oldtown Island. The latter is the property and the principal residence of the remnant of the Penobscot tribe of Indians. In reply to a letter of inquiry, Chas. A. Bailey Esq., State Agent for the Penobscot Indians, has courteously furnished the following statement respecting them. "The Penobscot Tribe of Indians is located on the islands in the Penobscot River between Oldtown and Lincoln, a distance of 35 miles. There are 146 islands in this river between Oldtown and Mattawamkeag, containing in all 4482 acres, which are reserved for their tribe. Their present number is about 245. They live in frame houses and some have very comfortable and tasty houses. They maintain a tribal form of government, electing annually a governor and a lieutenant-governor, also a delegate to the State Legislature which they are allowed.
Politically, they are divided into two parties; the "Old" or conservative, and the "New" or progressive. Schools are maintained among them; and on Oldtown Island they have a convenient house of worship. In religious faith, they are adherents of the Roman Catholic church, having a priest to care for their religious interests. A community of Sisters of Mercy is established among them, and these have a salutary influence upon their moral and domestic condition. The schools are also taught by them. Agriculture receives some attention under the stimulus of State appropriations. The men are employed as rivermen by those engaged in lumbering, also as guides to tourists in the Maine woods, and as boatmen on the lakes and streams of Northern Maine. The women find constant employment at basket-making; their wares being unique and ornamental in design and workmanship. They frequent the summer resorts along the coast of New England during the "open season" for the purpose of vending their handiwork, and find it quite profitable. The State annually distributes to the tribe about $10,000 under treaty stipulations, and in specific appropriations for the advancement of their moral, intellectual and industrial interests." For further details respecting these see the article on Indians in the first part of this volume. The European and North American Railway connects Oldtown with Bangor. The Bangor and Piscataquis Railroad forms a junction with the former at the village. The village also occupies the larger part of an island in the Penobscot. An excellent bridge across the river at this point connects with Bradley. The Penobscot River here affords what has been called the finest water-power in the United States. In the broad upper outlet of the Stillwater River is the Main Boom of the "Penobscot Boom Association," for the storage of logs. The number of logs held in this boom is usually numbered in the millions. It is currently stated that it originally cost about $100,000.
The object is to stop all the lumber coming down the river, letting it out in small quantities that can be controlled, lest great bodies of it should escape to sea in freshets, and be lost. During the rafting season there are often three hundred men employed upon the logs which come into this boom, assorting them according to ownership, and forming them into rafts, to be floated to the various mills upon the river below. In 1855 there were rafted here 181,000,000 feet. At one time it was estimated that there were six hundred acres of logs in the boom. The lower power in this town is the Great Works Falls, of which the natural fall is formed by two ridges of ledge extending across the river about 80 rods apart with a fall of about 3½ feet each. The river at this point is about 700 feet in width. The Oldtown Falls are at Oldtown village, and consist of a wing dam at the upper part of the village, and a dam on the west stream of the Penobscot which separates the island part from the main village. Other powers are at Upper Stillwater, Cooper's Falls, three miles above the last, and Pushaw Falls, on the Pushaw Stream in the north-western part of the town near Alton. On these different powers are four large mills for long lumber, three for shingles and short lumber, and a grist-mill. The size of these mills will be apprehended better by an enumeration of saws. In 1870 two blocks of mills here formerly owned by Samuel Veazie contained 14 single saws, 5 gang, 3 shingle, 2 clapboard and 4 lath mills. These usually run about seven months in the year, manufacturing in that time 25,000,000 feet of long lumber, 4,500,000 shingles, 1,000,000 clapboards, 13,500,000 laths, pickets, etc. There are also three steam saw-mills. The smaller manufactures consist of two barrel factories, a batteaux, a brush-wood, a sample case, a saw-filing machine, and an oar factory, together with the handicraft work usually found in our villages.
Oldtown village has some handsome residences, and several streets laid out in good style, and beautified with shade and ornamental trees. There is an excellent town hall, with a seating capacity for 1,500 persons. Other villages in the town merit the same description according to their extent. The roads and bridges are generally in excellent condition. The post-offices are Upper Stillwater, West Great Works and Pea Cove. As might be supposed, the principal occupations relate to lumber. The inhabitants are now a homogeneous people, but their parentage embraces a great number of nationalities. Hons. Samuel Coney and Geo. P. Sewall are probably the most distinguished citizens. The central portion of the town has an excellent system of graded schools, from primary to high. The number of public schoolhouses in the town at the present time is nine, valued at $10,000. The churches here are Baptist, Congregational, Methodist, Episcopal, Universalist and Catholic. This town was formerly a part of Orono, but was set off and incorporated March 16th, 1840. The population in 1870 was 4,529. In 1880 it was 3,070. The valuation in 1870 was $684,308. In 1880 it was $528,109. The rate of taxation in the latter year was .031, subject to 10 per cent discount.
Sunday, March 29, 2020

Aspect: A Question of Class (1970)

The Aspect column from the March 1970 issue of the Socialist Standard

Ask any reasonably literate but otherwise typical Lefty for his appraisal of the class structure of capitalist society and he will probably inform you that there are two classes in society — the working class and the capitalist class. There is even a reasonably good chance that he will be able to recite Marx's definition of these from the Communist Manifesto:

By bourgeoisie is meant the class of modern Capitalists, owners of the means of social production and employers of wage-labour. By proletariat, the class of modern wage-labourers who, having no means of production of their own, are reduced to selling their labour-power in order to live. [1]

But give him a couple of minutes more and he will no doubt be at least neck deep in the regular assortment of complex arguments about the relative roles of the working class and the middle class/petty bourgeoisie/salariat which collectively represent his grand design for the socialist revolution. In other words, nearly all left-wing groups exhibit a very definite schizophrenia on the question of social classes. At an abstract, intellectual level they will adhere to the Marxian position, but for all practical purposes (that is, for deciding their overall strategy and their day to day tactics) they rely on an entirely different analysis of society. As an illustration of this we could refer to the ways in which such organisations reacted to the upheaval in France in May 1968. They all argued that this episode had an enormous significance not just because millions of 'workers' showed their dissatisfaction and contempt for Gaullisme through strikes and demonstrations but also because "many sections of French society followed the lead of the workers — footballers, office workers, customs men, hotel workers . . ." [2] Ideas such as these are as dangerous as they are confused.
Not only do they lead to a completely false assessment of what constitutes the socialist revolution but they also serve to reinforce the divisive pressure which capitalism is bound to exert on its workers. They encourage the already widespread belief that wage earners in different sectors belong to different social classes and therefore do not have identical interests — a myth which as much as any other helps to keep capitalism secure. By way of contrast, the Marxist approach of the Socialist Party of Great Britain offers a means of uniting workers around an understanding that all “wage-labourers who, having no means of production of their own, are reduced to selling their labour-power in order to live” are members of the working class and therefore have a common interest in getting rid of capitalism. Perhaps we can clarify this by taking a group of workers regarded by most people as impeccably ‘middle class’ and showing how they are exploited. The dental profession is particularly interesting in this respect because not only are they a relatively small body of workers on whom a considerable amount of data has recently been published but also because they are in the uncommon position of having been ground down into working directly for wages within the last generation. Up till roughly twenty years ago dentists in Britain really were self-employed people. But the transfer of most dental surgeons to the National Health Service in the immediate post-war years meant that they were now nearly all employed by the state. (Out of 15,000 dentists in Britain in 1967 only 500 were estimated to still be working in private practices). Like many other workers, dental surgeons in the general dental services are paid on a piece rate system. There is a complex mechanism for fixing these rates but basically it amounts to the government periodically laying down a target annual net income for the average dentist, providing he works a specified number of hours per year.
The Dental Rates Study Group then draws up a set of fees designed to produce average earnings at the target level. This method of payment has been described by many dentists as the ‘treadmill system’ "for, as more treatment is carried out each year by dentists working faster or more efficiently than previously with the help of technological changes, the scale for that treatment will fall, or at least not be increased to the extent that it might." [3] To give an example of how this works in practice we could mention that the fee being paid for a single surface amalgam filling in the mid-sixties (13s.6d) was less than it had been ten years previously (15s.). How this affects the individual dentist was well summed up in the report just quoted:   The system is such that income is in fact related to the total number of courses [of treatment] an individual practitioner undertakes since the more the profession accomplishes, the lower the income per course. Thus if an individual dentist maintains only a constant performance his income falls. [4] The pressures acting on this group of workers over the last twenty years have given rise to enormous increases in productivity— and an equally enormous upsurge in the rate of exploitation. The total number of courses of treatment carried out in the general dental services by roughly the same number of dentists throughout has more than doubled since 1949, while the cost to the government of providing this treatment has fallen from more than £5m. to around the £2m. mark (at constant 1949 price levels). The ways in which dentists react to this situation are no different from those of other workers. 
Two recent surveys on the attitudes of dentists towards their wages and working conditions [5] showed that 55 per cent objected to the long hours they were forced to work to maintain their incomes, a similar percentage found the pace of the work gruelling, and 78 per cent disliked the restrictions placed on their work by their method of employment. Despite data of this sort, however, many left-wingers would object that although it might be possible by these means to demonstrate that objectively groups like dental surgeons are members of the working class, subjectively they remain intransigently capitalist minded. Such sections, they would say, can be “integrated into the system, to monopoly capital” by the relatively high wages and other benefits which (according to the Leninist theory of imperialism) capitalism can use to buy off parts of the working class in the most advanced countries. And as evidence of the overriding capitalist ideology of professional workers like dentists they would point to the fact that they do not consider themselves working class. Perhaps more than anyone else they differentiate between their own status and that of manual workers and other lower paid strata. But objections such as these can be shown to be sociologically quite naive. The conviction that higher paid workers can be bribed into accepting capitalism by the level of their salaries rests on the assumption that in their attitude towards society they will be predominantly motivated by money or material rewards. If this were the case, it would be clearly shown in their work motivation. Such a theory has a very long pedigree, but at the same time has very little else to recommend it. As Tom Lupton, the professor of industrial sociology at the Manchester Business School, has put it: It is easy to fall into the error of supposing that because the desire for money is by common consent a compelling motive for working that it is also the overriding motive.
Work is a social activity, that is it involves the worker in relationships with others. If the worker is faced with a decision whether to attempt to maximise income or to sacrifice possible gains for the sake of establishing or maintaining satisfying relationships with his workmates, he might well choose the latter course. [6] Researchers like Lupton arrived at these conclusions mainly by studying the working behaviour of piece rate workers in factories. Research into the motivating factors for dentists has given results almost identical to those for other piece rate workers. Thus a recent survey carrying the question ‘What are the things you like most about your work?’ elicited the following responses: These sorts of figures, then, do not give a very impressive backing to the contention that higher paid workers like dentists are primarily concerned with the defence of their supposedly privileged position and must therefore be considered as being distinct from the bulk of the working class. But, for all that, it does remain true that white collar workers (and among them the example we have been using of the dental profession) generally do regard themselves as different from industrial workers. But, of course, this is not a one way process. Blue collar workers reciprocate with similar prejudices about non-industrial workers, regarding them as ‘middle class’ and so on. It is quite illogical to use the common left-wing argument that subjectively white collar workers are outside the working class because they do not identify themselves with their fellow workers. Exactly the same stricture can be applied to blue collar workers. An unbreakable sense of working class solidarity can only spread at the same rate as socialist understanding develops among all sections of the workers. The Socialist Party constantly attempts to foster this unity in its work of analysing capitalism and its class structure and presenting the socialist alternative to present society.
On the other hand, the divisive activity of the Left is entirely symptomatic of their slovenly attitude to Marxist theory. Almost to a man they are committed to ‘leading the working class’. A hard task indeed when they don’t even know what the working class is! John Crump [1.] The Communist Manifesto. SPGB, 1948. p. 60 [2.] Socialist Worker. June, 1968 [3.] The Dental Service. London, 1969. p. 20 [4.] Ibid. p. 27 [5.] British Dental Journal. September, 1969. p. 222 [6.] Industrial Society. Penguin. 1968. p. 297 [7.] Adapted from BDJ. September, 1969 p. 222 1 comment: Imposs1904 said... I understand that before he was an academic the late John Crump was a dentist. PS - That's March 1970 done and dusted.
The riddle is: Why, in so many organisms, do codons turn up at a rate approximately equal to the rate of usage of reverse-complement codons? Take a good look at the symmetry of the following graph (of codon usage rates in Frankia alni, a bacterium that causes nitrogen-fixing nodules to appear on the roots of alder plants). [Figure: Codon usage in Frankia alni. Notice that a given codon's usage corresponds, roughly, to the rate of usage of the corresponding reverse-complement codon.] This graph of codon frequencies in F. alni shows the strange correspondence (which I've commented on before) between codons and their reverse complements. If GGC occurs at a high frequency (which it does, in this organism's protein-coding genes), the reverse-complement codon GCC is also high in frequency. If a codon (say TAA) is low, its reverse complement (TTA) is also low. I've seen this relationship in many organisms (hundreds, by now); too often to be by chance. The question is why codons so often occur in direct proportion to the rate of occurrence of corresponding reverse complements. It doesn't make sense. The notion of base pairing should not come into play when an organism (or natural selection) chooses codons, because all of a protein gene's codons are collinear, on one and the same strand of DNA; base-pairing rules do not play a role in choosing codons. Or do they? I think, in fact, base-pairing does play a role. The answer is obvious, when you think about it. We know that (single-stranded) RNA, if properly constructed, will fold back on itself to form loops and stems: complementary regions will base-pair with each other. Certainly, if secondary structure in mRNA is widespread, it will have consequences for codon selection. Codons in "stem" regions will complement each other.
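To make the reverse-complement relationship concrete, here is a minimal Python sketch that tallies codon usage in a coding sequence and maps each codon to its reverse complement. The input sequence is a toy illustration, not actual F. alni data.

```python
# Tally codon usage in an in-frame coding sequence and pair each
# codon with its reverse complement (its partner on the other strand).
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(codon):
    """Reverse the codon, then complement each base: GGC -> GCC."""
    return "".join(COMPLEMENT[b] for b in reversed(codon))

def codon_frequencies(cds):
    """Relative frequency of each codon in a coding sequence."""
    codons = [cds[i:i+3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

# Toy sequence, not real F. alni data:
freqs = codon_frequencies("GGCGCCGGCTAAGGC")
print(reverse_complement("GGC"))   # GCC
print(reverse_complement("TAA"))   # TTA
```

Plotting each codon's frequency against the frequency of its reverse complement, gene set by gene set, is what produces the symmetric pattern described above.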
And so it's fairly obvious, it seems to me, that a reasonable explanation for the riddle of "reverse complement codon selection" is that secondary structure of mRNA (or possibly single-stranded DNA) is far more pervasive than any of us might have suspected. It's pervasive enough to affect codon usage in the way shown in the graph above. Is there any evidence that secondary structure is widespread? I think there is. If you go looking for complementary sequences inside protein-coding genes in F. alni, for example, you find many. As a probe, I had a script check for intragenic complementing length-12 sequences ("12-mers") in all 6,711 protein-coding genes of F. alni. (I presented pseudocode for the script in an earlier post.) Based on the known base-composition stats of the organism, I expected to find 5,440 such 12-mer pairs by chance. What I found was 6,319 such pairs located in 2,689 genes. (When I looked for complementing 13-mers, I expected to find 1,467 occurring by chance, but instead found 3,592 such pairs in 2,086 genes.) In a previous post, I showed similar results for Sorangium cellulosum (a bacterium with an enormous genome). Previous to that, I showed similar results for Mycoplasma genitalium (which has one of the tiniest genomes of any free-living microbe). But do these regions of internal complementarity affect codon choice? Indeed they do. When I looked at the top 40% of F. alni genes in terms of the number of internal complementing 12-mers, I found a Pearson correlation between codons and reverse-complement codons of 0.889. Looking at the bottom 60% of genes, I found the correlation to be lower: 0.766. These numbers, moreover, were virtually unchanged (0.888 and 0.763) when I re-calculated the Pearson coefficients using expectation-adjusted codon frequencies. That is to say, I used base composition stats to "predict" the frequencies of each codon, then I subtracted the predicted number from the actual number, for each codon. 
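The author presented pseudocode for the 12-mer scan in an earlier post; the sketch below is my own reconstruction of the idea, assuming a simple exact-match definition of complementarity (no wobble pairs, no minimum loop length between the two arms):

```python
# Sketch: count intragenic complementary k-mer pairs in one gene,
# i.e. candidate stem regions where one k-mer could base-pair with
# the reverse complement of another k-mer in the same gene.
from collections import defaultdict

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(s):
    return "".join(COMPLEMENT[b] for b in reversed(s))

def complementary_kmer_pairs(gene, k=12):
    """Count unordered pairs of non-overlapping k-mers in `gene`
    where one k-mer is the exact reverse complement of the other."""
    positions = defaultdict(list)
    for i in range(len(gene) - k + 1):
        positions[gene[i:i+k]].append(i)
    pairs = 0
    for i in range(len(gene) - k + 1):
        rc = revcomp(gene[i:i+k])
        # Only count partners starting at or beyond i + k, so each
        # unordered pair is counted exactly once and never overlaps.
        pairs += sum(1 for j in positions.get(rc, ()) if j >= i + k)
    return pairs
```

Run per gene, counts like these are what allow the kind of ranking used above to split genes into high- and low-complementarity groups, and what can be compared against the number of pairs expected by chance from base composition.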
(Example: The frequency of occurrence of guanine, in F. alni protein genes, is 0.35794, and the frequency of cytosine is 0.37230, hence the expected frequency of GCC is 0.35794 * 0.37230 * 0.37230, or 0.04961. The actual frequency is 0.07802.) The correlation still existed, practically unchanged, after adjusting for expected rates of occurrence of codons. The bottom line is that the correlation between the frequency of occurrence of a given codon and the frequency of its reverse-complement codon, which is otherwise very hard to explain, is quite readily explained by the presence, in protein-coding genes, of a significant amount of single-strand complementarity (of the type that could be expected to give rise to secondary structure in mRNA). On this basis, it's reasonable to suppose that conserved secondary structure is actually a major driver of codon usage bias. Please show this post to your biogeek friends; thanks!
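The expected-frequency arithmetic in the example above, and the expectation adjustment, can be checked in a few lines. The base frequencies are those stated in the text; the Pearson helper is a generic implementation, not the author's script.

```python
# Check the expected-frequency arithmetic from the text, and show a
# generic Pearson correlation helper of the kind used to compare
# codon frequencies with reverse-complement codon frequencies.
import math

f_G, f_C = 0.35794, 0.37230        # stated base frequencies in F. alni genes
expected_GCC = f_G * f_C * f_C     # independence model for codon GCC
print(round(expected_GCC, 5))      # 0.04961 (observed: 0.07802)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Expectation-adjusted value: observed minus expected, per codon.
observed_GCC = 0.07802
adjusted_GCC = observed_GCC - expected_GCC   # the residual the post correlates
```

Computing such residuals for all 64 codons and correlating each codon's residual with that of its reverse complement is the adjustment described in the text: if the correlation survives (as it does, 0.888 vs. 0.889), base composition alone cannot explain the pattern.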
An annular solar eclipse will occur on February 26, 2017. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon’s apparent diameter is smaller than the Sun’s, blocking most of the Sun’s light and causing the Sun to look like an annulus (ring). On 26 February 2017, at 14:58 UT, there will be an annular solar eclipse. The annular phase will be visible in southern South America, across the Atlantic, and into southern Africa. The partial eclipse will be visible in southern South America, and south-west Africa. The eclipse is caused by the conjunction of the Sun and Moon in Aquarius. Countries and geographical locations said to be of the nature of Aquarius are Saudi Arabia, Prussia, Russia, Poland, Sweden, Lithuania, Westphalia, Wallachia, Piedmont and Abyssinia (Ethiopia). Some say there will be a possible effect on Canada and the USA due to kendra (angles). One would have to wait and see. Many give directions on what to do and not do during eclipses. As we are in a time of escalation to the 4th, 5th and higher dimensions, it is best to “consult your own Brahma” (listen to the spirit within) for the best advice with respect to this eclipse. Eclipses involve a diminution of light externally, and there is a corresponding eclipse within, for the planets also have effect within the human. The effect depends on the significance of the sign Aquarius for you, the reader. If Aquarius is important for you, i.e., your ascendant, your Sun or Atmakaraka is here, or you have a lot of planets in this sign, then the eclipse signifies that change is coming for you this year. Life is change; the word Universe in Sanskrit is translated as “coming-and-going”, meaning the Universe – and life therein – is perpetually changing.
However, the change signified by an eclipse is a change to something that our face to the world, our mask or persona, likes to regulate. We like to be settled, have things secure, fixed, and have steadiness. There will be change. The planets may raise us up, they may pull us down; it is up to us to use their energy, as everything is energy of one or another kind. And energy never dies. It may be transmuted, transformed, or take another form, but it continues. Hence, Vedic Astrology has remedies for the energy of the planets, such as gems, mantras, and sacred offerings to the nine planets. Just as signs are movable, fixed, mutable, so also energy is movable, fixed and mutable and may be transformed by the wearing of gems, chanting of mantras, and making of sacred offerings to the nine planets or one planet in particular. This eclipse is caused by the lunar node Ketu in Shatabishak nakshatra, in Aquarius. The nakshatra lord is Ketu, in temporary exchange (parivarthana) with Rahu (who is in Ketu’s nakshatra, and temporary lord there). Of the gunas (types of behaviour, groups of qualities, bundles of attributes) Shatabhishak is sattvic-tamasic-sattvic in nature, tending towards matters of a calm, peaceful nature. The shakti or energy of this nakshatra is to oversee all, to support all and to bring about a world free of calamity. The journey to this may seem like Jason and the Golden Fleece: the hero, the protagonist, has to undergo trials and darkness, entering the bowels of the Earth in order to bring back the great treasure, the wondrous boon to humanity. There may be loss of a sense of self on the journey, but the outcome is the boon for all. Life is active, joyful, embraced by all. When we look to the doshas (kapha, pitta, vata) and the nakshatras, we find that Shatabhishak is of vata, air nature. This somewhat suggests the journey from the world of the 5 senses to the world of the 5th Dimension.
This is an inner journey, of vata nature, perceived by the jnanendriyas, the inner instruments of perception. Many awoke on 21 December 2012 expecting to see a world adorned in fairy floss or glitter, having light bodies and able to float through the air and visit New York, the North Pole and Mt Everest in a second. The muscles have to be developed: the Source of All Creation, our living Universe and Mother Earth provide the energy, the galactic plane, the magnetism, the vibration of Ascension. The inner muscles of mind management, self control, stillness and steadiness – as well as inner contact with the Soul – must be developed in order to advance into Ascension and the 5th, 6th Dimensions and up. We mentioned before that this eclipse is caused by the lunar node Ketu, the one that seeks the depths of the inner space, the inner dimensions within, and is forever seeking – and bestowing – Moksha, the release from the cycle of birth-death-birth-again-and-again. This eclipse provides some opportunity via the energy and lordship of Ketu to make spiritual advance, perhaps in a mighty leap for some; for others, a steady trekking toward the mountain of Ascension. Take time; do not hasten, waste, or worry. Remember your favourite form of the divine – or your Atmakaraka – and calmly use your time well during this eclipse for your personal spiritual advance. Recall, spirituality is what works for you, what brings you serenity and peace. Remember that and travel well in this time of eclipse.
In a moment when our Asian siblings are being harassed, when people of color are disproportionately being affected by COVID-19, when white protesters are storming capitol buildings putting economy over lives, when white people instinctively call the police on people of color, and when black men are dying because of police brutality, this is the perfect time for white people to do some serious soul searching. Dismantling white supremacy is not only a social must, it is a spiritual need. Here are 6 ways white people can begin the process of dismantling white supremacy: - Do the inner work. White people are conceived, born, and raised into spaces of privilege so the work to dismantle our own perpetuations of white supremacy is the place to start. The author and activist, Layla Saad, provides white people a day to day guide on doing the inner work of dismantling white supremacy. Get the book and begin the work. - Realize that being “woke” is not a trend. It is almost the “in” thing to be “woke” and to recognize white supremacy. However, white supremacy permeates all white people through micro-aggressions, untapped spaces of subconscious racism, and so many other spaces. Recognizing your privilege is good, but surrounding it with selfish motives and the quest for the spotlight diminishes and continues to traumatize and re-traumatize our siblings of color. - Words must be backed by actions. In the world of social media consumption, the letters on the keyboard become our voice that speaks for the moment and fades in the background 2 seconds later. Our words must be moved into actions which takes creativity in this moment of COVID-19. - Call out but don’t take up excess space. Progressive white people love to call out racism and white supremacy, but it is important to be aware of the space white people take up. The craving for society to know that you “have it together” when it comes to privilege awareness can overwhelm spaces and assert power over others. 
This is still a manifestation of white supremacy because white supremacy tells white people that they are entitled to be seen. - White Supremacy = A White Problem. Organize white people. White people must begin the process of organizing one another to begin the work of dismantling white supremacy because this is a white people’s problem. White people created it and white people have to do the work to un-create it. Many people of color are tired of being re-traumatized through this work. White people, it’s time to get it together. - Empower by risking power. The work to dismantle white supremacy means that white people must risk their power to empower others. Yes, that means things will be lost, friends will walk away from you, family may disown you, and your life will be different. This also means that institutions like churches must risk as well. Resources to begin the work: - Layla Saad, Me and White Supremacy - J. Kameron Carter, Race: A Theological Account - Facing Racism: A Vision of the Intercultural Community Churchwide Antiracism Policy (Presbyterian Church USA) This piece was created by the Unbound team. All materials referenced are not owned by Unbound.
r/AnimalRescue I've had a similar experience. One day I took in this young rat terrier from someone who needed to rehome her immediately. She was scared of everyone, including me. She was especially petrified of men (my brothers). She wouldn't let me go near her for days. One day I felt so sorry for her that I wanted to hug her so badly already as I watched her. Then I started talking to her in the most loving tone possible. I was saying so many sweet things to her and suddenly she just ran up with her tail wagging so hard. She acted as if she hadn't seen me for days! Ever since that day she wouldn't leave my side and would only listen to me, no one else. She would sleep right next to me. She was an amazing little guard dog but she'd bark at my brothers in my house. It made me wonder if she was physically abused by a man with a beard and deep voice.
The COVID-19 pandemic is expected to widen the poverty gap between men and women, according to new data from UN Women and the United Nations Development Program (UNDP). The report “From Insights to Actions,” published on Sept. 2, states that the global economy is expected to contract by 5% in 2020, forcing an additional 96 million people into extreme poverty by 2021 — 47 million of whom are women and girls. In 2015, international leaders gathered at the UN headquarters to develop 17 Global Goals to end extreme poverty by 2030. While measures were taken to achieve Global Goal 5, which aims to achieve political, social, and economic equality for women, the COVID-19 pandemic has severely hindered women’s development around the world. Now, the possibility of achieving global equality within the decade seems increasingly unrealistic. Studies have shown that women often bear the economic brunt of crises — women’s employment is 19% more at risk during crises than men’s. Women also generally earn less and save less money, putting them at an immediate disadvantage. Sectors with female overrepresentation are often characterized by low pay and poor working conditions. The UN found that while globally women make up 58% of the informal workforce, this percentage is much higher in developing countries. UN Women reported that women make up 92% of the informal economy in sub-Saharan Africa, 91% in South Asia, and 54% in Latin America. Members of the informal workforce — including women, migrants, youth workers, and other vulnerable groups — are most susceptible to layoffs and job cuts. In the informal sector, workers also have far fewer social protections, including health care, paid sick leave, or time off. Women workers have generally suffered the most from the COVID-19 pandemic. The crisis devastated the hospitality and food service industry, which is dominated by women in most countries.
Women also make up 80% of domestic workers, and lockdowns have made it difficult to maintain pre-pandemic arrangements, causing many to lose their incomes. What’s more, women from marginalized groups are globally overrepresented in personal care jobs and domestic work — jobs that require closer contact with others and can lead to exposure to the virus. Women are generally more likely to serve on the front lines of the pandemic, too. Women make up 70% of the global health and social care workforce, and infection rates among female health care workers are up to three times higher than male health care workers. The effects of the pandemic will be long-lasting: UN Women and the UNDP found that a total of 247 million women and girls will be living on less than $1.90 a day in 2021. Around the world, people living in less developed countries will face the most extreme gender disparities. Of the number of women and girls expected to be pushed into poverty by the pandemic, 132 million are women in sub-Saharan Africa, the region where the majority of the world’s poorest live. Living in poverty will affect a woman’s access to health care, education, sanitation, and food — factors that will make it harder for women to achieve equality with men.
Two and a half stories underground at First Congregational Church in Kalamazoo, in a room filled with the din of air handlers, Pastor Nathan Dannison points into a recess in the wall. “On the shelf up here you can still see some resources and materials from the Office of Civil Defense. Those are water canisters that store fresh drinking water,” he says. That’s because, while it now serves as utility space, fifty years ago this room was intended to shelter people from the fallout of a nuclear attack. “There’s not a lot of room for other people to stand, though, if there were an emergency,” says WMUK listener Sarah Schneider Koning. Growing up, Sarah says, she used to see signs for fallout shelters. “And now you don’t see them as much and with current political things happening, it’s always kind of a thought, like, what would we do, how would we prepare if something were to happen?" Kalamazoo used to have a plan for sheltering from nuclear fallout. In 1972 the Kalamazoo Gazette published a list of shelters. Some 261 gathering points together had room for most, though not all, of the county’s population. Many were downtown, like First Congregational Church. City of Kalamazoo Historic Preservation Coordinator Sharon Ferraro says the city had long banned building with wood in the downtown because of the risk of fire. “So the downtown was the natural place where you were going to find buildings made of masonry,” she says. Masonry that would protect from radiation better than lighter materials. Ferraro says the Civil Defense planners wanted to make the most of existing buildings, such as schools and libraries. “Plus the fact that when the alert came, the question was, how much time would you have?” But not everyone would have gone to a public shelter. Ferraro says some people built their own.
“I’d be willing to bet that most people that have an unexplained heavy-duty room in the basement might not even realize that that’s what they’ve got.” At a house in the Vine neighborhood, Todd Urness and I walk through the basement to a low doorway in the wall. A few more steps down a narrow cement hallway and we turn into a cubbyhole. It’s got cement walls and a steel roof. “There’s a triple bunk and a small hole in the middle in the floor and there’s a bunch of old booze in here,” Urness adds. Urness says the Latvian family that built the house in the 1950s believed that Russia would not hesitate to use nuclear weapons. “People ask us if we ever hang out down here, and it’s just kind of creepy sort of,” he says. But eventually they did start showing it to friends. Not far away, in Westnedge Hill, Jake and Ashtyn Hunter also live in a house with a bomb shelter. It’s the same idea – a concrete room off the basement – but on a grander scale. The room’s got a water tank, a small cast-iron type stove and a wheel-shaped device probably involved in air circulation. Ashtyn says they were living out of state before they bought the house, so they sent friends to take a look. “And we were FaceTiming with them and they were coming downstairs to check out the basement and they were like ‘oh, what’s this little door?’ and they push it open and it’s like this whole lair, like ‘what’s this?’ And we lost the connection because there’s no service in here,” she says. Ashtyn says the shelter makes a great feature for a Halloween party. “We had probably 10 people in there at some point just because it’s got the spook factor and so everyone was so curious about it.” But she wonders if it would really protect anyone from fallout. The Federal Emergency Management Agency does have basements on its list of possibly worthy shelters. But it recommends looking for a shelter several stories underground if you’re near a target. 
Kalamazoo County Sheriff Rick Fuller says his office’s list of shelters has fallen out of date. Fuller says FEMA has the most current plans on nuclear events. But FEMA’s website says that when it comes to shelters, you should check with your local officials – or look around for shelters on your own. As the US and North Korea trade threats over nukes, Sarah says it’s understandable that people would worry at least a little. “I made a little post on Facebook the other day that I was doing this, and a lot of people were like ‘yeah, where are the shelters, what do we do? So I think a lot of people are not sure what they’re supposed to do.”
Orcas are apex ocean predators and possibly among the smartest mammals alive on Earth today, so it is difficult to imagine that some wild orca populations are literally swimming on the brink of extinction. With more than 40 populations of orcas (that we know of) found in almost every ocean and sea on the planet, they are considered to be the second-most widely ranging mammal, after humans. Despite our ever-increasing understanding of orcas and their habitats, many of these populations still face severe conservation threats. Several populations are dwindling in size and recovery looks bleak. The Southern resident orca population found in the Pacific Northwest is down to its last 80 individuals, while the Alaskan transient AT1 and British West Coast Community populations are at serious risk of disappearing from our planet altogether. The New Zealand orca population comprises fewer than 200 individuals and the Northern resident population (also in the Pacific Northwest) doesn’t have many more members than that. And for many orca populations, we don’t actually know how big (or small) they are. However, it has been estimated that there is a minimum worldwide abundance of 50,000 orcas. So it shouldn’t matter if one or two populations go extinct, right? Wrong. Each orca population is unique, differing genetically and culturally from one another. Different populations (and sometimes pods) use different hunting techniques, specialize in different prey, communicate in different dialects, and can even be morphologically diverse. Most importantly, each orca population is fundamental to the ecosystem it inhabits. Let’s take a look at five ways orcas are threatened in the wild: 1. Food Shortages In certain parts of the world, orcas are not finding enough food to eat. For example, over-fishing, habitat degradation, and diseases spreading from salmon farms are leading to a decline in fish stocks in the Pacific Northwest.
This is a serious concern for the Pacific Northwest resident orcas who specialize in foraging for salmon species. In the past, reduced prey availability has been linked with a population decline for the Southern resident orcas and is thought to have led to these orcas expanding their usual range in search of food. Food is fundamental to the survival of orcas and without it, they are at higher risk from other conservation threats. 2. Chemical Pollution The higher up the food chain an animal is, the more contaminated it will be, with pollutants bio-accumulating in fatty tissues of the body. The ocean environment is highly susceptible to chemical pollutants, acting as a final sink for compounds. Orcas are at the very top of the ocean food chain and this renders them highly vulnerable to this threat. They are among the most contaminated marine mammals in the world, revealing to us the true state of our oceans. Why are pollutants so dangerous? Firstly, some of these pollutants are persistent and never degrade. Secondly, they target immune and endocrine systems, as well as affecting reproduction and survival rates. The risk of reduced fitness in orcas is greater when they cannot find enough food. Their fat stores begin to break down, releasing the stored chemical pollutants around the rest of the body. The West Coast Community orcas consist of only nine individuals. No new calves have been sighted with this population for over two decades and it is possible pollution could be to blame. Chemical pollutants are dumped from mother to calf when nursing, lowering calf survival rates in the first year and male orcas, who have no way to offload the pollutants from their bodies, may be suffering from reduced fecundity as a result. Orcas are also vulnerable to large-scale oil spills. The 1989 Exxon Valdez oil spill resulted in the death of several Alaskan orcas by oil contamination. 
Orcas take decades to recover from such disasters and in some cases, like the AT1 transients, they never do. 3. Noise Pollution Noise in the ocean comes from recreational boating and water sport activities, whale-watching trips, fishing vessels, and cargo ships, as well as military exercises. Noise pollution can adversely affect orcas, forcing individuals to alter their behavior and how they communicate with one another. Really loud sounds, such as those emitted during navy sonar exercises, can actually cause physical damage to orcas and may even kill them. Over-crowding by boats can also elicit a change in orca behavior and vocal behavior, as well as cause injury or death through collision and propeller strike. 4. Live Captures Captures of wild orcas for the captive entertainment industry have had a lasting impact on targeted populations. The size and social structure of the Southern resident population has still not recovered following captures from the 1960s to the 1970s. With an increasing awareness of how orca welfare is severely compromised through capture and confinement, it is perhaps shocking to learn that, for the first time in decades, orcas are being taken from the wild again. In 2010, a young female Norwegian orca, known as Morgan, was found alone and emaciated off the Netherlands coast. She was captured with the intention of rehabilitation and release, but was instead sent to a captive facility in Tenerife, where she now performs tricks for fish. In 2012 and 2013, a total of eight Russian orcas were captured for the entertainment industry. Two orcas, a female known as Narnia and a young male, are being held at a facility in Moscow and two of the orcas have been transported to China. Little else is known about the location of any of these orcas. A quota for Russian orca captures in 2014 is still under debate. This conservation concern is not just for the welfare of those orcas captured for display purposes, but for members of the population who are left behind. 
Captures could not only impact the physical and emotional well-being of the larger population, but could affect population growth (or lack thereof) in the future. 5. Climate Change Climate change is affecting humans and animals alike, including orcas. A change in temperature, water level and weather is impacting ocean habitats around the world. While it is unclear how well orcas, or their prey, will respond to extreme changes in conditions, it has been suggested that certain subpopulations of orcas will bear the brunt of these environmental changes as a result of reduced prey availability. And so we have come full circle. If you want to help alleviate some of the pressures that face orcas today, here are a few suggestions: - Take the time to learn about different orca populations so that you can help to raise awareness about these magnificent creatures. If you help others to learn about orcas, they will begin to care for them, and if they care for them, they will want to help protect them. - Help to reduce ocean pollution by picking up litter at your nearest beach. - Act appropriately around orcas, whether you are swimming, water-skiing or skippering a boat, and make sure you go whale-watching with a reputable company. - Respect and abide by the protective laws that regulate conduct around orcas on (or in) the water for the country you are in. - Make a pledge – don't buy a ticket to a facility displaying captive orcas or other dolphins.
From subreddit mtgcube: This is the kind of card I do not really endorse, not because it's not good enough (debatable), but more because of the aforementioned points: its purpose is to be good versus aggro. I do not have too many of those cards and refrain from adding a bunch because they can choke out decks. Nighthawk in black is far better, as that color needs the help, with its aggro cards being painful, and deathtouch allows it to trade up. It makes sense there. I do like the spell copy portion of the card, but I doubt that it would see play if that were the card's only ability, which is why its playability is debatable. The lifelink portion is playable but not something I endorse in my list. Seeker of the Way is a much more balanced card on the lifelink front, something I am happy to play with.
Math can be a tricky business and kids often end up getting in a muddle - find out how slips and misconceptions can arise, and how you can use them as opportunities to improve your child's understanding. When it comes to a child's educational journey, a little parental support can go a long way. If you ever ask yourself, "What can I do (if anything) to help my child do better at math?", don't panic: we've devised a 3-step plan. Award winning home learning for ages 5 to 11 on tablet and laptop. Free 14 day trial on all accounts. Sometimes it's hard for kids to see the point of math, so we've put together this handy list of seven ways to help you keep your child motivated at math. Screen time - and the question of how much is too much - is a hot topic at the minute. We've broken down some of the key research and offer useful tips for parents so that you can ensure your children have a healthy relationship with their screens. Whether you're tucked up around the fire or out travelling to visit friends and family, we've got some fun, quick and easy ideas to keep math brains of all ages active throughout the festive period and the break from school.
This app was removed from the App Store. Longman Dict. Conciso Inglés - Español (sin audio) iOS Universal Reference **Perfect Companion for LDOCE users** English-Spanish, Spanish-English Dictionary For Intermediate - Advanced Students of English The best tool for learning English • Over 250,000 words, meanings, phrases and examples • 75,000 examples show English as it is really used, with Spanish translations to help students understand the meanings • Specially-written language notes help students avoid the mistakes most commonly made by Spanish learners • 6,000 synonyms, antonyms and thesaurus notes help students build their vocabulary • Covers all varieties of Latin American Spanish • Simple and fast search system • Notes that help you avoid the 100 most common mistakes in English • Comments that explain the differences between words in the same meaning area • Hundreds of colour illustrations • Full contents of the LONGMAN Conciso Dictionary • The top 3,000 most frequent words in spoken and written English are highlighted to show which are the most important to know • Search history function • Bookmark facility • Progressive look-up for quick searching • Zoomable illustrations • Unique three-way cross-referencing search • Real-time progressive look-up • Wildcard pattern searches with * and ? • Support for search from other applications • OS-independent app localization • iPod, iPhone & iPad compatible "universal app" • Full "Retina Display" support • Bookmarks with folders and editable notations • Bookmark syncing across devices via iCloud (iOS5) • Supports Facebook APIs • Twitter & Evernote posts • Full layout for definition emails and Evernote posts. Longman Diccionario Conciso © Pearson Education Limited, 2009
Nearly every major water body in Indiana has the potential to be impacted by agriculture. Nonpoint source pollution from these lands can affect our waterbodies to varying degrees as sediment, nutrients, and bacteria flow overland and through drainage tiles into ditches, streams, and rivers. As consumers, we depend upon Indiana's producers for our food. We also depend upon them to use conservation best management practices to help keep our water resources fishable, swimmable, and potable. Personnel at conservation offices like Soil and Water Conservation Districts, the Natural Resources Conservation Service (NRCS), the Farm Service Agency, Resource Conservation and Development Councils, and nonprofit watershed organizations are available to assist producers in identifying best management practices that are both environmentally and economically practical. In some cases, funds may be available for the implementation of best management practices (BMPs) from USDA Farm Bill, U.S. EPA Clean Water Act, and Indiana Department of Natural Resources Lake and River Enhancement programs. Tools for On the Farm In order to ensure that a BMP is effective at protecting and/or improving water quality, BMPs must be installed according to a recognized standard or specification. The Natural Resources Conservation Service (NRCS – a part of USDA) is one of the lead technical agencies involved in the implementation of conservation practices through the Farm Bill. Their Electronic Field Office Technical Guide (EFOTG) is a national manual of specifications for conservation BMPs, which has become the standard design reference for the planning and installation of agricultural BMPs. You can use this tool to learn about specific BMPs, or you might use it to determine specifications on a project to implement. This link will take you to the state map for Indiana, but you will need to click on the "Section IV" link after you've selected your county to find the most relevant information. 
In addition to NRCS, the U.S. EPA has developed the National Management Measures to Control Nonpoint Source Pollution from Agriculture manual outlining agricultural best management practices that reduce sources of pollution, categorized by pollutant to be reduced. This is a great resource for those folks who are choosing BMPs and developing cost-share programs. The Field Assessment for Water Resource Protection is a practical booklet by Purdue University that takes producers (and watershed coordinators!) through an assessment of current practices on a specific farm. Recommendations for water quality improvement are then presented for areas that need improvement. The Clean Water Act Section 319 Agricultural Guidance for Indiana provides general program information, suggested BMPs, funding restrictions, definitions of basic terminology, and frequently asked questions related to the distribution of cost-share and demonstration funds for BMPs implemented on agricultural land. While geared toward Section 319 grantees, others may find it useful, too. Tools for Drainage At the time of European settlement, much of the landscape in northeastern and east-central Indiana was occupied by the Great Black Swamp and in the northwest, the Grand Kankakee Marsh. In order to make the land habitable for settlers and productive for agriculture, Indiana governments and individuals were encouraged to drain swamplands across Indiana through ditching and tiling. While hydromodification provides many challenges to water quality, there are ways to marry drainage and water quality. The Indiana Drainage Handbook describes how to perform drainage operations in the most environmentally friendly manner. While the handbook is advisory in nature and does not supersede the powers granted to government agencies by Indiana statute, it provides a conversation-starter for watershed groups and their county surveyors. Two-stage ditches are an innovation in drainage management. 
These engineered channels provide an in-place floodplain to slow water velocity, process nitrogen, trap sediment and prevent flooding of the adjacent farm fields. These ditches are virtually self-maintaining – sediment build-up and erosion are prevented by the benches created within the ditch. In addition, less maintenance means less disturbance of the aquatic life inhabiting the watercourse, creating a win-win situation for producers and the ecosystem.
Stress in an Organization What you’ll learn to do: Discuss stress and the consequences of stress in an organization Stress has become an ever-increasing focal point in the world of business. As an employee, you hear about it all the time. Downsizing at a company creates stress among the remaining workers when workloads, and time at work, increase. Surveys show us that employees often struggle to find a balance between job responsibilities and family responsibilities. Companies go out of business in this competitive environment, and because of that job security is not what it once was. Understanding what stress is, where it comes from, and what it means to an organization are a manager’s first steps to alleviating some of the havoc it wreaks. Learning Outcomes • Discuss various elements and types of stress • Discuss potential sources of stress • Describe the consequences of stress and its cost to an organization What is Stress? Like motivation, stress is a very individual experience. One person can feel extreme pressure and anxiety over a task that is looming, and another might look at the same task and see it as an exciting challenge. In spite of that, we’ve seen an overall jump in the number of people that report stress on the job, and we can see how it’s taking its toll. Stress is a dynamic condition, and it exists when an individual is confronted with an opportunity, constraint or demand related to what he or she desires, and for which the outcome is perceived to be both uncertain and important. Stress isn’t necessarily bad, even though it’s usually discussed in a negative context. There’s opportunity in stress, and that’s a good thing because it offers potential gain. For instance, consider Luke Skywalker, piloting his X-Wing fighter, trying to blast his torpedo into that small, little space that was the Death Star’s only weakness. 
There was plenty of stress, provided by stormtroopers and Darth Vader himself via bullets and explosions, but Luke concentrated, used stress to his advantage, and shot that torpedo right into the exhaust port. Okay, maybe it was the Force, too. Athletes and performers use stress positively in "clutch" situations, using it to push themselves to their performance maximums. Even ordinary workers in an organization will use an increased workload and responsibilities as a challenge that increases the quality and quantity of their outputs. Stress is negative when it's associated with constraints and demands. Constraints are forces that prevent a person from doing what he or she wants. Demands represent the loss of something desired. They're the two conditions that are necessary for potential stress to become actual stress. Again, there must be uncertainty over the outcome and the outcome must be important. Kevin, a student, may feel stress when he is taking a test because he's facing an opportunity (a passing grade) that includes constraints and demands (in the form of a timed test that features tricky questions). Salomé, a full-time employee, may feel stress when she is confronted with a project because she's facing an opportunity (a chance to achieve something, make extra money and receive recognition) that includes constraints and demands (long hours, time away from family, a chance that her knowledge and skills aren't enough to complete the project correctly). Stress is highest for those who don't know if they will win or lose and lowest for those who feel that winning (or losing) is an inevitability. Even so, an individual may perceive winning (or losing) as an inevitability, but if the outcome is important, he or she is still likely to experience some level of stress. Practice Question What does stress feel like? 
The symptoms of stress for a person are as individual as the conditions that cause it. Typically, when presented with stress, the body responds with a surge of hormones and chemicals that results in a fight-or-flight response. As the name would indicate, this response allows you to either fight the stressor or run away from it. The general adaptation syndrome (GAS) describes the three stages that individuals experience when they encounter stressors, respond and try to adapt: • Alarm. The physical reaction one experiences when a stressor first presents itself. This could include an elevation of blood pressure, dilated pupils, and tensed muscles. • Resistance. If the stressor continues to be present, the person fights the threat by preparing to resist, physiologically and psychologically. At first, the stressor will be met with plenty of energy, but if the stressor persists, the individual will start to experience fatigue in fighting it and resistance will wear down. • Exhaustion. Continuous, unsuccessful resistance eventually leads to the collapse of physical and mental defenses. When stress is chronically present, it begins to do damage to a person's body and mental state. High blood pressure and a higher risk of heart attack and stroke are just some of the physical ramifications. Anxiety and depression are the hallmark psychological symptoms of stress, but they can be joined by cognitive symptoms like forgetfulness and indecisiveness. Behaviorally, a person suffering stress might be prone to sudden verbal outbursts, hostility, drug and alcohol abuse and even violence. Another result of chronic stress and overwork is burnout. The term "burnout" is tossed around by people quite a bit to describe the symptoms of their stress response, but burnout is an authentic condition marked by feelings of exhaustion and powerlessness, leading to apathy, cynicism and complete withdrawal. 
Burnout is a common condition among those who have chosen careers that serve others or interact heavily with other people—healthcare and teaching among them. This Wall Street Journal report from 2017 features an interview with a doctor who talks about the symptoms and repercussions of burnout: Stress is a significant issue for businesses. Now that we know what it is and what it looks like, let's take a look at the most common causes. Sources of Stress If you poll a group of individuals about what their biggest stressors are, they're likely to give you these four answers: • Money • Work • Family responsibilities • Health concerns In most surveys on stress and its causes, these four responses have been at the top of the list for quite a long time, and I'm sure you weren't surprised to read them. But managers should take pause when they realize that all four of these are either directly or indirectly impacted by the workplace. Still, there are so many differences among individuals and their stressors. Why is one person's mind-crippling stress another person's biggest motivation and challenge? We're going to attempt to answer this by looking at the three sources of stress—individual, organizational, and environmental—and then add in the concept of human perception in an attempt to understand this conundrum. The relationship can be mapped out as a chart: three categories of stress factors are filtered through individual differences, which determine how people experience stress and which symptoms appear. • Stress factors. Individual: family issues, financial issues, and individual personality. Organizational: task and role demands, interpersonal demands, organizational structure, leadership, and organizational life stage. Environmental: economic environment, political environment, and technology. • Individual differences. Perception, job experience, social support, belief in locus of control, self-efficacy, and hostility. • Symptoms of stress. Physiological: headaches, high blood pressure, and heart disease. Psychological: anxiety, depression, and less job satisfaction. Behavioral: loss of productivity, absenteeism, and turnover. Individual Factors Let's start at the top. The first of three sources of stress is individual. Individuals might experience stressful commutes to work, or a stressful couple of weeks helping at a work event, but those kinds of temporary, individual stresses are not what we're looking at here. We're looking for a deeper, longer-term stress. Family stress—marriages that are ending, issues with children, an ailing parent—these are stressful situations that an employee really can't leave at home when he or she comes to work. Financial stress, like the inability to pay bills or an unexpected new demand on a person's cash flow, might also be an issue that disturbs an employee's time at work. Finally, an individual's own personality might actually contribute to his or her stress. People's dispositions—how they perceive things as negative or positive—can be a factor in each person's stress as well. Organizational Factors There's a plethora of organizational sources of stress. • Task or role demands: these are factors related to a person's role at work, including the design of a person's job or working conditions. A stressful task demand might be a detailed, weekly presentation to the company's senior team. A stressful role demand might be one where a person is expected to achieve more in a set amount of time than is possible. • Interpersonal demands: these are stressors created by co-workers. 
Perhaps an employee is experiencing ongoing conflict with a co-worker he or she is expected to collaborate closely with. Or maybe employees are experiencing a lack of social support in their roles. • Organizational structure: this refers to the level of differentiation within an organization, the degree of rules and regulations, and where decisions are made. If employees are unable to participate in decisions that affect them, they may experience stress. • Organizational leadership: this refers to the organization's style of leadership, particularly the managerial style of its senior executives. Leaders can create an environment of tension, fear and anxiety and can exert unrealistic pressure and control. If employees are afraid they'll be fired for not living up to leadership's standards, this can definitely be a source of stress. • Organizational life stage: an organization goes through a cycle of stages (birth, growth, maturity, decline). For employees, the birth and decline of an organization can be particularly stressful, as those stages tend to be filled with heavy workloads and a level of uncertainty about the future. Environmental Factors Finally, there are environmental sources of stress. The economy may be in a downturn, creating uncertainty for job futures and bank accounts. There may be political unrest or change creating stress. Finally, technology can cause stress, as new developments are constantly making employee skills obsolete, and workers fear they'll be replaced by a machine that can do the same. Employees are also often expected to stay connected to the workplace 24/7 because technology allows it. Practice Question As a side note, it's important to understand that these stressors are additive. In other words, stress builds up, and new elements add to a person's stress level. 
So a single element of stress might not seem important in itself, but when added to other stresses the worker is experiencing, it can, as the old adage says, be the straw that broke the camel's back. Individual Differences Those are the sources of stress, but differences within an individual determine whether that stress will be positive or negative. Those individual differences include • Perception. This is what moderates the individual's relationship to the stressor. For instance, one person might see a potential layoff as a stressful situation, while another person might see that same layoff as an opportunity for a nice severance package and the opportunity to start a new business. • Job Experience. Because stress is associated with turnover, it would stand to reason that those employees with a long tenure are the most stress-resistant of the bunch. • Social Support. Co-workers, especially those who are caring or considered to be friends, can help protect a fellow employee against the effects of stress. • Belief in locus of control. Those who have a high internal locus of control (those that believe they are in control of their own fate) are, unsurprisingly, not as affected by stress as those who feel they are not in control. • Self-efficacy. Self-efficacy is an individual's belief that he or she can complete a task. Research shows that employees who have strong levels of self-efficacy are more resistant to the effects of stress. • Hostility. Some employees carry around a high level of hostility as a part of their personalities, and they're often suspicious and distrustful of their co-workers. These personality traits make a person more susceptible to stress. If those potential sources of stress sneak through the individual difference filters and manifest themselves as stress, they will appear in a variety of physiological, psychological and behavioral symptoms. 
We reviewed the physiological symptoms when we talked about the definition of stress. Add to that psychological symptoms, like tension and anxiety, but also job dissatisfaction and boredom, and behavioral symptoms, like turnover and absenteeism, and you can see how stress can become an organizational problem. How much of an organizational problem is stress? Well, stress can cost an organization a lot more than money. We'll take a look at that next. Consequences and Costs of Stress Today's typical workplace expects quite a bit from its employees. In a climate of layoffs and downsizing, employees are typically expected to do "more with less"—that is, additional work for the same pay, often without updated resources and in a short amount of time. Demands for increased efficiency, quality and innovation can come at quite a cost, and employees are caving under the pressure. A study conducted by Mental Health America (formerly the National Mental Health Association) suggests that stress costs US employers an estimated $500 billion in lost productivity annually. What does lost productivity mean? Let's take a look at how employees responded to that 2017 survey, and talk about how it can directly (and indirectly) impact a company's bottom line. What employees are saying:[1] • A third of employees surveyed reported staying away from work at least two or more days a month because their work environments were so stressful • Of those who responded that they missed two or more days of work: • 35% said they missed between three and five days a month • 38% said they missed six days or more According to the US Centers for Disease Control and Prevention (CDC), absenteeism alone costs US employers $225.8 billion annually, or about $1,685 per employee. 
This cost, they say, comes from[2] • Wages associated with unreported paid time off • The high cost of replacement workers • Overtime pay for employees picking up the additional work • The overall administrative costs of managing absenteeism It isn't just the loss of productivity of the absentees; their co-workers are affected by this, too. In an article, Mental Health America CEO Paul Gionfriddo said, "Overstressed and unhealthy employees contribute to unhappy workplaces. This means that the indirect effects on everyone else—the people who dread coming to work—may not show up in the calculated productivity losses, but contribute to them nevertheless."[3] Indeed, this low morale and the possible safety and quality issues that can result are uncalculated effects. Here's what employees are saying about the effects of stress on their workplaces:[4] • Two-thirds felt they worked in an unsupportive or even hostile environment • Two-thirds said they didn't often trust their coworkers to support them at work • Two-thirds said their supervisor was unsupportive • More than eight in 10 said the stress at work directly caused stress with family and friend relationships • More than seven in 10 admitted they bad-mouth their employer outside of work It's easy to see why, considering these sentiments, nearly three-quarters of the employees surveyed are either actively seeking new employment or thinking of doing so. The Work Institute's 2017 Retention Report suggested that replacing an employee costs about 33% of that employee's salary, meaning that the average worker making $45,000 a year will cost about $15,000 to replace, when you consider advertising, screening and testing applicants, training, and onboarding costs (among others). 
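To make the scale of these numbers concrete, here is a quick back-of-the-envelope sketch in Python. The 33% replacement-cost ratio, the $45,000 example salary, and the CDC's absenteeism figures come from the text above; the function name and the implied-workforce calculation are purely illustrative, not from any cited source.

```python
# Back-of-the-envelope checks on the cost figures quoted above.
# All inputs are the article's numbers; the function is just illustrative.

def replacement_cost(salary: float, ratio: float = 0.33) -> float:
    """Estimated cost to replace an employee earning `salary`,
    as a fraction of that salary (33% is the Work Institute's average)."""
    return salary * ratio

# The article's example: a $45,000 salary at 33% is roughly $15,000.
print(f"${replacement_cost(45_000):,.0f}")        # ~$14,850
print(f"${replacement_cost(45_000, 0.50):,.0f}")  # $22,500 for hard-to-fill roles

# The CDC's absenteeism figures imply the workforce size behind them:
total_cost = 225.8e9   # $225.8 billion annually
per_employee = 1_685   # about $1,685 per employee
print(f"{total_cost / per_employee:,.0f} employees")  # ~134 million
```

Note that $45,000 × 0.33 is $14,850, which the report rounds to "about $15,000."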
For some harder-to-fill positions, this cost could increase to 50% of the worker’s salary.[5] Turnover also lowers productivity: work shifts to others while the position is empty and while the new employee is learning the role, and the departing employee takes company knowledge that may not be recaptured. Sadly, the Work Institute’s 2017 Retention Report also captured data that led them to determine that roughly 75% of all turnover could be avoided. When surveying their 34,000 respondents, the top reasons cited for turnover were career development and compensation and benefits, followed by three that are directly related to stress: work-life balance, manager’s behavior and well-being.[6] Workplace Violence Workplace violence is on the rise, and it is the third leading cause of death for workers on the job. Of course, some workplace violence, like an active shooter or even an angry retail customer who takes a swing, is not due to workplace stress. Still, this kind of activity takes a toll on businesses, adding yet another layer of stress and a price tag of about $55 million in lost wages for the 1.8 million work days lost each year due to workplace violence (according to a study by Lowers & Associates, a risk management firm).[7] But workplace violence rears its ugly head on a smaller level as well. “Desk rage” is a term used to describe extreme or violent anger shown by someone in an office, especially when this is caused by worry or a difficult situation. This can manifest itself in screaming and shouting, throwing or angrily destroying office equipment, or it can be more subtle, like damaging water cooler gossip, theft or abuse of sick time.
The people who work with someone experiencing desk rage are as much victims of workplace stress as the “desk rager.” These are some of the results of stress that drive down productivity, but stress also affects the cost of health benefits and medical needs that an employer will pick up by providing health insurance. Stress factors into five of the six leading causes of death in the US, and a staggering number of medical office visits will, in part, address symptoms related to stress. It’s no surprise to hear that a company like General Motors spends more money on healthcare than it does on steel. And (surprise!) workplace stress is responsible for up to $190 billion in annual US healthcare costs. Joel Goh, Harvard Business School associate professor, tackled the subject of healthcare costs and stress in his paper, “The Relationship Between Workplace Stressors and Mortality and Health Costs in the United States,” co-authored with Stanford University professors Jeffrey Pfeffer and Stefanos Zenios. The three researchers cited ten major factors of workplace stress and then mathematically examined their occurrences (and co-occurrences), concluding that workplace stress contributes to approximately 120,000 deaths each year. That, and additional healthcare expenses related to addressing stress-related problems, accounted for $125 to $190 billion in healthcare costs, or about 5% to 8% of the nation’s total expenditure.[8] The bottom line is that stress in the workplace has a huge effect on an employer’s bottom line. 1. Hellebuyck, Michele, et al. “Mind the Workplace.” Mental Health America, 2017, the Workplace - MHA Workplace Health Survey 2017 FINAL.pdf. 2. “Worker Illness and Injury Costs U.S. Employers $225.8 Billion Annually.” CDC Foundation, 28 Jan. 2015, 4. Ibid. 5. Sears, Lindsay, et al. 2017 Retention Report. Work Institute, 2017, Retention Report Campaign/Work Institute 2017 -Retention Report.pdf. 6. Ibid. 7. Lowers & Associates.
"[Infographic] The Impact of Workplace Violence." The Risk Management Blog. May 19, 2016. Accessed April 26, 2019. 8. Goh, Joel, et al. “The Relationship Between Workplace Stressors and Mortality and Health Costs in the United States.” Management Science, vol. 62, no. 2, 2016, pp. 608–628., doi:10.1287/mnsc.2014.2115.
Tobacco plant researched as a cure for cancer http://www.dispatch.co.za/2001/08/06/features/TOBACCO.HTM Tobacco used to fight cancer COULD the much-maligned tobacco plant be used to help cancer patients? A California biotech company says it can, and it has set up shop in tobacco country to prove it. Large Scale Biology of Vacaville, California, has built a commercial "biopharmaceutical production facility" in Owensboro, Kentucky. It is one of a handful of companies harnessing plants to produce useful human proteins. Genetic engineers already use many different ploys to manufacture human proteins, such as insulin and growth hormones. Often, they isolate a human gene that carries the code for making a protein and splice it into yeast or bacteria, which multiply in fermentation vats. Other methods include putting genes into cancer cells, which grow endlessly in lab cultures, or into farm animals, which make the proteins in their milk. Now, companies are doing the same thing by the acre. They hope molecular farming, as some call it, will be cheaper and more efficient. "We borrow the plant's cellular machinery," said Barry Bratcher, Large Scale Biology's biomanufacturing director. "The plant is just a host for us." Tobacco is a big bulky plant that produces lots of greenery, and it is one scientists have already had plenty of practice genetically manipulating in the lab. Large Scale Biology has contracts with four local farmers to grow a combined 27 acres of tobacco for research. Tobacco is also grown in the company's five greenhouses in Owensboro. "It is ironic that tobacco might actually be used to create health instead of reducing health," said chief executive officer Bob Erwin. Already, the company has begun early-stage testing of a tobacco-produced vaccine intended to trigger the body's immune system to fight non-Hodgkin's lymphoma. Each dose would be a customised protein, made by mutant genes taken from the patient's own cancer cells.
In theory, the proteins should stimulate the body to turn against the cancer. If the vaccine works, the company says it will produce a plant that can make 15,000 individualised doses a year. The company also is considering human testing of a treatment for Fabry's disease. The therapy is a tobacco-made copy of a normal human enzyme, needed to break down fats, that is missing in victims of the disease. In earlier stages is a collaboration with the US Navy and the National Institutes of Health to use tobacco to make stem cells grow. The goal is to find a natural human protein that will multiply blood-forming stem cells that have been isolated from the bone marrow. Stem cells are the source of all human tissue. Those taken from early-stage embryos can grow into any cell in the body, and they will divide forever in test tubes. However, because they are derived from embryos discarded during in vitro fertilisation, many people believe their use is unethical. Adults also have stem cells. Even though they can be isolated from the brain and other organs, they are difficult to grow on demand. A team of Navy and NIH researchers, led by Dr John Chute, is attempting to produce a protein that will make blood stem cells divide repeatedly in a test tube. They already have evidence that the body makes such a protein. The collaboration with Large Scale Biology is intended to find the gene responsible so it can be manufactured in quantity. Chute said the protein could be extremely useful for conducting gene therapy to correct inherited blood diseases, such as sickle cell anaemia. The idea: isolate a few of the exceedingly rare stem cells from the victim's marrow, then use the protein to produce many more copies of them. This will leave doctors with enough stem cells to attempt gene therapy, replacing the disease-causing genes with healthy copies. The repaired stem cells would be returned to repopulate the patient's marrow. The work could also have wartime applications.
Radiation and chemical weapons can destroy the bone marrow, leaving only a few stem cells. On their own, these cells may reproduce too slowly to prevent death. But victims might be rescued by removing some of the remaining cells, building them up in a lab dish, and then returning them to restore the marrow. Dr Larry Goldstein, professor of cellular and molecular medicine at the University of California, San Diego, cautioned that such research is difficult. "I hope they'll be successful, but I think it's unrealistic to expect rapid success," Goldstein said. "Lots of companies and labs have been working on this for years and it's a painstakingly slow process to do this sort of thing." However, Erwin said his company hopes to manufacture the stem cell factor soon, using tobacco plants in Owensboro. "We would like to get the gene identified in the next year and start clinical trials with the product in two years," he said. -- Sapa-AP
Macropodus Macropodus is a genus of fish in the family Osphronemidae. Etymology The name comes from the Greek makros, "large", and podos, "foot". List of species Macropodus erythropterus Freyhof & Herder, 2002. Macropodus hongkongensis Freyhof & Herder, 2002. Macropodus ocellatus Cantor, 1842. Macropodus opercularis (Linnaeus, 1758). Macropodus spechti Schreitmüller, 1936.
Thornton (author) Thornton may refer to the following authors, each of whom has named at least one species: Ian W.B. Thornton Robert John Thornton
How To Lighten Up Large Polymer Clay Beads By Making Them Hollow: Q: I enjoy reading your comments and instructions. I have a question: how do you make a 1″ round hollow bead? ~Rezvan Kline A: One way to make a hollow bead is to roll out a flat sheet of polymer clay and use a circle cutter to make two equal sized rounds. You can then bake each of these clay pieces separately over something spherical and heat safe, like a small light bulb, glass ball ornament or marble. After both of the half spheres are baked hard, you will need to sand the edges a bit so that the two halves of the ball fit together nicely. The two halves can then be glued together using a super glue like Krazy Glue. The seam can then be disguised with a snake of clay, or it can be left alone if you like. This technique takes a little practice to get the spheres even. To make a nicely formed bead, your baking surface needs to be sufficiently round, and how well you match the two halves will determine how round your bead turns out. I have heard of people using demi-sphere (half ball) molds for making hollow beads. You can sometimes find these as chocolate molds or specialty food molds used by the catering industry. Some of those demi-sphere molds are convex and some are concave; either would work as long as they are heat safe. Large hollow beads need to be baked properly for strength. Read this article for some info about how to do that: Rules For Baking Beads It is also important to use a strong brand of clay. Here’s a link to an article that will help you pick the right clay for the job: Best Polymer Clays After the bead has been baked, glued, sanded and buffed, you can carefully drill it or glue on a bail to make your pendant or other jewelry item. Hopefully, Rezvan, this hollow bead tutorial was helpful for you. If you or anyone else has further questions about the techniques, just leave me a comment below!
Permission & Request If you need to use any article, image, figure, illustration, or other matter from our printed book(s), please fill in the following particulars: Name: Book Title: Address: Author: Affiliations: ISBN: Contact: E-mail: Please mention the detail you need: Note: The information provided should be genuine, and if we grant permission, the requested material must be used only for the legitimate purpose described. We will get back to you within 2-3 working days.
Often, employees feel powerless when faced with what they believe to be discrimination in the workplace. Out of fear of losing their jobs in the current economic climate, many people feel that they have no choice but to tolerate certain behaviors, including harassment, jokes, and coworker favoritism, among other sometimes difficult-to-prove forms of discrimination. Fortunately, however, the law is squarely on the side of the employee when it comes to workplace discrimination. Protecting the Employee Federal law prohibits discrimination of any kind based on race, genetic information, national origin, pregnancy, religion, gender, disability, age, or skin color. Some state laws expand on federal laws to include the LGBTQ community and other groups. Although the laws are clearly written and the information is accessible to everyone (federal law even requires posting the information in the workplace), that doesn’t mean that everyone abides by them. Discrimination can be subtle, and sometimes unintended, but that doesn’t make it any less damaging to the victims. Discrimination also creates a hostile work environment for everyone. But what does discrimination look like? Exactly what constitutes discrimination in the workplace? Types of Workplace Discrimination Discrimination in the workplace doesn’t have to be blatant, and often it’s not. That’s one of the characteristics that makes it so difficult to identify and prove to the outside observer. However, to those on the receiving end, discrimination is painfully obvious. Here are a few types of workplace discrimination: - If an employer denies an employee workplace accommodations that he or she needs because of a disability or religious beliefs, it is a form of workplace discrimination. - An employer or his employees are guilty of discrimination if they retaliate against an employee who complained about on-the-job discrimination or aided in a lawsuit or investigation.
- If an employee faces wrongful termination, or is denied employment or a promotion, because he or she falls under one of the federal or state protected categories (gender, race, etc.), then he or she may have a case against the employer. - Harassment, obvious or not, against any employee, whether by a coworker or a manager, is illegal if the harassment is aimed at the victim because he or she falls under a protected category. - Equal pay for equal work has been the law since the Equal Pay Act of 1963. For example, if an employer pays one person less than another because the lower-paid employee is a female, then the employer is guilty of discrimination. - An employee doesn’t have to fit into a protected category to be covered by discrimination laws. The laws also cover employees who are discriminated against because they are married to someone who fits into a protected category. What You Can Do If you believe that you are a victim of workplace discrimination, you have 180 days to file a complaint with the federal government’s Equal Employment Opportunity Commission (EEOC). If local laws have also been broken, the deadline for filing a claim extends to 300 days. However, it’s strongly advised that you file a claim as quickly as possible, while memories remain clear and the evidence is still available. For more information on deadlines and other requirements, contact either an attorney who specializes in workplace discrimination law or the EEOC. Every work environment has its share of stresses and anxieties that employees and employers alike are required to work through together. However, even amid the pressures involved with working around others, meeting deadlines, and dealing with difficult circumstances, nothing ever justifies discriminatory acts against coworkers or subordinates.
If you feel as if you have been a victim of workplace discrimination, either while applying for a job, during the course of employment, or if you believe you were unjustly terminated, understand that you do have rights. One of those rights is to file a claim, either through the EEOC, or with an attorney who will advise you of your legal standing, and will help you fight back against workplace discrimination.
Is Happy Hour Hurting Your Health? A Few Ways to Drink Smarter By Joy Stephenson-Laws, JD, Founder April is Alcohol Awareness Month. The Centers for Disease Control and Prevention (CDC) conclude that excessive alcohol use is responsible for approximately 88,000 deaths in the U.S. each year and cost $249 billion in economic expenses in 2010. According to the National Institute on Alcohol Abuse and Alcoholism (NIAAA), genetics may play a role in whether a person develops alcoholism. “Research shows that genes are responsible for about half of the risk for alcohol use disorder,” NIAAA reports. For example, some people of Asian descent carry a gene that affects the way they metabolize alcohol and can cause an unpleasant drinking experience, with nausea, flushing and rapid heartbeat. A person with this gene may be less inclined to drink, which in turn lowers the risk of developing alcoholism. There is even a gene that is associated with the tendency to consume alcohol during pregnancy. Expectant mothers who carry a variant of this gene tend to consume more alcohol and are advised to stay away from alcohol during pregnancy because of the increased risk of premature birth. Aside from genetics, there are environmental factors, including traumatic life events and depression, that may lead to alcohol abuse. How much is too much? Just as you might measure portions of food, you should measure alcohol as well. According to NIAAA, one standard drink contains about 0.6 fluid oz (14 gm) of pure alcohol. A glass of wine should be about 5 oz, and if you are drinking distilled spirits you should keep it to about 1.5 oz per drink. Recommended alcohol consumption amounts also differ for males and females. NIAAA says men should have no more than four drinks on a single day and no more than 14 drinks per week. Women should have no more than three drinks on a single day and no more than seven drinks per week.
You need to be especially careful if you are over the age of 65. Older adults should have no more than three drinks on a single day and no more than seven drinks per week, according to NIAAA. You might be telling yourself: these limits are high! I don’t drink nearly that much on a single day or in one week. But are you pouring more than a five oz glass of wine when you unwind after a long day? According to the National Institutes of Health (NIH), “[d]rinkers have difficulty defining and pouring standard drinks with over-pouring being the norm such that intake volume is typically underestimated.” Bars and restaurants tend to over-pour as well. One NIH-published study ran focus groups with bartenders in 80 different establishments in Northern California counties. The study revealed, “[t]he average wine drink was found to be 43% larger than a standard drink with no difference between red and white wine. The average draught beer was 22% larger than the standard. Spirits drinks differed by type with the average shot being equal to one standard drink while mixed drinks were 42% larger.” You are probably drinking more than you realize. We previously blogged about how excessive alcohol consumption can increase your risk of certain diseases, including cancer, but you may not know much about how alcohol depletes nutrients from your body. Did you know the process of metabolizing alcohol itself requires nutrients? An in-depth report on alcoholism from the New York Times says, “[p]eople with alcoholism should be sure to take vitamin and mineral supplements. Even apparently well-nourished people with alcoholism may be deficient in important nutrients.” Even if you are not an alcoholic, regular alcohol consumption can deplete your body of vital nutrients. On top of that, you may already have mineral deficiencies you are not aware of. The combination of alcohol and existing deficiencies could prevent you from feeling your best.
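The NIAAA figures above lend themselves to a quick back-of-the-envelope calculation. Here is a minimal sketch of converting a pour into standard drinks; the 0.6 fl oz of pure alcohol per standard drink is the NIAAA figure quoted above, while the 12% wine ABV and the helper name `standard_drinks` are illustrative assumptions, not anything from NIAAA itself:

```python
# Illustrative sketch: estimate NIAAA "standard drinks" in a given pour.
# Assumes one standard drink = 0.6 fl oz of pure alcohol (the NIAAA figure
# quoted above); 12% ABV for wine is an assumed typical value.

STANDARD_DRINK_OZ = 0.6  # fl oz of pure alcohol in one standard drink

def standard_drinks(pour_oz: float, abv: float) -> float:
    """Standard drinks in a pour of `pour_oz` fluid ounces at `abv` (0-1)."""
    return pour_oz * abv / STANDARD_DRINK_OZ

# A 5 oz glass of 12% ABV wine works out to one standard drink:
print(round(standard_drinks(5, 0.12), 2))     # 1.0
# The study's "43% larger" average wine pour (about 7.15 oz):
print(round(standard_drinks(7.15, 0.12), 2))  # 1.43
```

In other words, two of those generous pours already put you close to three standard drinks, which is exactly how intake gets underestimated.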
What are some of the minerals alcohol depletes? Alcohol depletes the body of iron, a mineral that is very important for mental health. People with low iron are at a greater risk of developing depression. Furthermore, without adequate iron your body cannot carry enough oxygen to your vital organs. Low iron can also cause you to feel fatigued. That hangover you may have after too much drinking is partly the result of alcohol depleting minerals and other nutrients! To find out just how critical these minerals are and how they can protect you from disease and help you live as your healthiest self, read Minerals - The Forgotten Nutrient: Your Secret Weapon for Getting and Staying Healthy. Enjoy your healthy life! The pH professional health care team includes recognized experts from a variety of health care and related disciplines, including physicians, health care attorneys, nutritionists, nurses and certified fitness instructors. To learn more about the pH Health Care Team, click here.
Why did Huguenots leave France? Huguenots were ordered to renounce their faith and join the Catholic Church. During the entire period between the early part of the sixteenth century and 1787, thousands of Huguenots left their homes in France for other countries because of recurring waves of persecution. When did the Huguenots come to England? Stigmatized by oppressive laws and facing severe persecution, many Huguenots (Protestants) fled France. In 1681, Charles II of England offered sanctuary to the Huguenots, and from 1670 to 1710, between 40,000 and 50,000 Huguenots from all walks of life sought refuge in England. Are there still Huguenots in France? Today, there are some Reformed communities around the world that still retain their Huguenot identity. In France, Calvinists in the United Protestant Church of France and also some in the Protestant Reformed Church of Alsace and Lorraine consider themselves Huguenots. Where did the French Huguenots settle in England? What happened to the French Huguenots? Huguenots were French Protestants in the 16th and 17th centuries who followed the teachings of theologian John Calvin. Persecuted by the French Catholic government during a violent period, Huguenots fled the country in the 17th century, creating Huguenot settlements all over Europe, in the United States and Africa. What does the Huguenot cross mean? Symbolism. The symbolism of the Huguenot cross is particularly rich. The cross, as an eminent symbol of the Christian faith, represents not only the death of Christ but also victory over death and impiety. This is represented also in the Maltese cross. Did Huguenots own slaves? When the Huguenots arrived in the Hudson River Valley in the 1660s, they entered a slave-owning society. The Huguenots did not enslave people in France or Germany, but they soon took up the practice in their new homes. Why did the Huguenots migrate to Britain?
Many Huguenots had difficult and dangerous journeys, escaping France and crossing to England by sea. They were suffering under French Catholic landlords and very poor harvests. They came because of a 1708 law, the Foreign Protestants Naturalisation Act, which invited European Protestants to come and settle in Britain. Did Huguenots settle in Scotland? In 1609 a group of Flemish Huguenots settled in Canongate, Scotland. By 1707, 400 refugee Huguenot families had settled in Scotland and helped establish the Scottish weaving trade. Is France Protestant or Catholic? In 2017, the Pew Research Center found in their Global Attitudes Survey that 54.2% of the French regarded themselves as Christians, with 47.4% belonging to the Catholic Church, 3.6% unaffiliated Christians, 2.2% Protestants and 1.0% Eastern Orthodox. What special skills did the French Huguenots have? The Huguenot refugees who left France were generally merchants, artisans, craftsmen or weavers, or were skilled in specific trades. Many were well-educated, and some were able to establish new roles as entrepreneurs or professionals where they settled. What caused a major influx of French Huguenots to North America after 1685? Huguenots were French Protestants who held to the Reformed, or Calvinist, tradition of Protestantism. Due to persecution, in 1685 about 200,000 Huguenots fled to foreign nations, including Germany, the Netherlands, England, and America. Often Huguenot families would settle in one country, then move to another. What did the Huguenots bring to Britain? In places like Canterbury and Spitalfields in East London, Huguenot entrepreneurs employed large numbers of poorer Huguenots as their weavers. They also introduced many other skills to England, such as feather and fan work, high-quality clockmaking, woodcarving, papermaking, clothing design and cutlery manufacture. What was the biggest industry the Huguenots introduced in Britain? The Huguenots had a huge economic impact on Britain.
They revitalised the silk weaving trade, kick-started various manufacturing industries, such as cutlery making in Sheffield, and invested heavily in growing businesses. Who are some famous Huguenots? Arts and entertainment: James Agee, American screenwriter, Pulitzer Prize-winning author. Earl W. Pierre Bayle, French author, philosopher. Frédéric Bazille, French Impressionist painter. Marlon Brando, American actor. Sébastien Bourdon, French painter. Hablot Knight Browne (“Phiz”), British illustrator of Charles Dickens.
Kariba Dam The Kariba Dam is a dam built in the Kariba Gorge on the middle reaches of the Zambezi River in southern Africa. The artificial lake created by the dam, Lake Kariba, is one of the largest in the world: it is about 220 km long and about 40 km wide, with a surface area of 5,580 square kilometers and a storage capacity of 185 cubic kilometers. See also Lake Kariba
“In January, 2021, the world’s overall population was 7.83 billion, with 5.22 billion using a mobile phone, 4.66 billion using the internet, and 4.02 billion actively using social media”. – We Are Social The digital era is here: from mobile and web applications to portable smart devices, technology is all around us. With the existence of such innovations, it comes as no surprise that it has become a crucial part of the human experience. Over the years, technology has had many defining moments, none more apparent than the recent Covid-19 global crisis that brought the world to a grinding halt. To stay connected, conduct business and develop the vaccine, the digital industry came together to transform life as we know it. We saw technology shift from optional to essential in the space of a year. Companies such as Zoom, Clubhouse, Hopin (a new online conferencing platform that went from zero to a $5 billion valuation in eighteen months) and other digital platforms rose rapidly. With their success in mind, we must consider the usability and accessibility of digital products. What really separates one product from another? What appeals to the end user? The development and use of digital products requires multiple stages, including discovery and market research, design, development, introduction and maintenance. Product Design – What is it? Product design entails creating a useful product that meets the needs of a group of people, identifying user problems, and providing innovative solutions to those challenges. There are two basic elements involved in achieving this: UI and UX design. These two acronyms share the common term “user” because they are essential elements in creating a successful user-friendly experience for a particular product. What is UI (User Interface) Design? User interface refers to where the human and the digital product interact.
Its purpose is to visually guide users, allowing a human to communicate easily with a device in order to complete a task. Good UI should feel instinctive, coherent and user-friendly. It is typically represented by graphic and artistic components: layouts, instructions, images, text, animations and sliders. Here’s an example of one of our digital products at Studio14. Whichride, a platform for all existing ride-hailing services, offers users a wide range of features that support seamless comparison and booking of rides from different ride-hailing service providers. What is UX (User Experience) Design? User experience design is concerned with building a relationship between the product and the user; specifically, how the user interacts with the product and the functions it is required to perform. A great example of UX design is Whichride’s ability to interact with users by providing them the exact information needed to perform the required actions. The features are clearly mapped out to enable seamless ride-booking and price comparisons. How Do UI and UX Designs Work Together in a Product? User interface and user experience are intertwined elements that together define the usability and functionality of a digital product. As a result, it has become crucial for designers to acquire the UI and UX design skills required to implement and develop products successfully. UI vs UX: What’s the Difference?

| UI Design | UX Design |
| --- | --- |
| Specialises in the presentation and appearance of a product. | Focuses specifically on the functionality of a product. |
| Attention is paid to the visual style and structure of a product. | Centers on the responsiveness of a product in relation to the needs of a user. |
| Emotionally connects a user to a product through icons, layouts, fonts, colours, and actions. | Facilitates a user in accomplishing specific actions. |

Why Is UI/UX Design Essential in Building a Successful Product?
The success of a product is measured by the value it offers and the experience of its users. Explained below are the elements by which the success of a product is measured: - Simplicity and Usability: A product’s ease of use and simple information presentation are vital determinants of its relevance. UI and UX design play a pivotal role in ensuring a product offers solutions to a user’s problems and, most importantly, provides an exciting and hassle-free experience. - Accessibility and Availability: Technology has shaped the world and provided tools that make living easier. For businesses, UI and UX design create roadmaps for product development and make products accessible to users by addressing individual needs and wants, making them a requisite for a product’s market positioning. - Responsiveness and Correctness: A product’s ability to respond swiftly to a user’s commands is critical. Hence the importance of UX design in providing users with quick and seamless navigation while interacting with a product. - Customer-centricity: UI and UX design elements are largely responsible for compelling a user to embrace a product’s offerings and values. Without UI and UX design that keeps the user at the centre, a product can quickly lose its relevance and positioning in the target market. Importance of UI/UX Design to Businesses For businesses to improve and achieve set goals, UI/UX design plays a crucial role in developing digital products that address a user’s needs directly. Improving sales and product quality is unachievable when zero attention is paid to the user. Users are a salient part of a business, with a noticeable impact on driving revenue and steering a business’s products and offerings toward success.
UI/UX design helps businesses in a number of ways: - It creates a positive impression of a product: a user’s impression and experience determine the level of acceptance a product will attain in the target market. - It attracts and sustains a user’s attention within a short period. - It reassures users and gains their trust through seamless experiences with a product. - It enables businesses to convert prospective customers into loyal customers. - Ultimately, it improves a product’s usability and responsiveness. In short, digital product success and UI/UX literacy are closely aligned. To find out more, email [email protected].
Why did US bomb Japan? Germany surrendered to Allied forces in May 1945, but World War Two continued in Asia as the Allies fought imperial Japan. The United States believed that dropping a nuclear bomb – after Tokyo rejected an earlier ultimatum for peace – would force a quick surrender without risking US casualties on the ground. How long after Pearl Harbor did the US bomb Japan? On December 21, 1941, only two weeks after Pearl Harbor, President Franklin Roosevelt, intent on bolstering America's battered morale, summoned his armed forces commanders to the White House to demand a bombing raid on Japan as soon as possible. Why did the US decide to bomb Hiroshima and Nagasaki? The Allies feared that any conventional attempt to invade the Japanese home islands would result in enormous casualties, and the bomb was seen as a way of bringing the war against Japan to a swift conclusion. In addition, it may also have been a way of demonstrating American military superiority over the Soviet Union. When did the US attack Japan? "Yesterday, December 7, 1941—a date which will live in infamy—the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan." Is Hiroshima still radioactive? Among some there is the unfounded fear that Hiroshima and Nagasaki are still radioactive; in reality, this is not true. Following a nuclear explosion, there are two forms of residual radioactivity. In fact, nearly all the induced radioactivity decayed within a few days of the explosions. Did the US have a third atomic bomb? On August 13, 1945—four days after the bombing of Nagasaki—two military officials had a phone conversation about how many more bombs to detonate over Japan and when. According to the declassified conversation, there was a third bomb set to be dropped on August 19th. Why did the US not bomb Tokyo? The U.S.
likely did not target Tokyo for the atomic bomb strikes as it was the seat of the Emperor and the location of many high-ranking military officers. The U.S. decided to drop the bombs onto military-industrial targets and centers with significant military utility, such as ports and airfields. Does US regret bombing Japan? In short, no: there is little evidence that Truman ever truly regretted his order to use the bomb. What did America do after the Pearl Harbor attack? Less than five months after the Japanese bombed Pearl Harbor, the U.S. Army Air Force launched B-25 bombers from the deck of the USS Hornet (something that was supposed to be impossible) and bombed Tokyo. The raid was more a psychological victory than a tactical one, but psychology is important in winning a war. Did the Japanese know the atomic bomb was coming? The Japanese were warned before the bomb was dropped. After the Potsdam Declaration of July 26, 1945, which called on the Japanese to surrender, leaflets warned of "prompt and utter destruction" unless Japan heeded that order. How many people died from Hiroshima? "The United States military estimated that around 70,000 people died at Hiroshima, though later independent estimates argued that the actual number was 140,000 dead. In both cases, the majority of the deaths occurred on the day of the bombing itself, with nearly all of them taking place by the end of 1945." Why wasn't the atomic bomb dropped on Germany? The only reason the US did not use the atomic bomb against Germany was that the bomb was not ready when Germany officially surrendered on May 7, 1945. The first atomic bomb test didn't happen until July 16, 1945, several weeks later. How many Japanese died in World War II?
Total deaths by country
|Country||Total population 1/1/1939||Total deaths|
|Japan||71,380,000||2,500,000 to 3,100,000|
|Korea (Japanese colony)||24,326,000||483,000 to 533,000|
|Latvia (within 1939 borders)||1,994,500||250,000|
|Lithuania (within 1939 borders)||2,575,000||370,000|
Did the US attack Japan first? On 12 December 1937 the attack on the United States gunboat USS Panay by Japanese forces in China (usually referred to as the Panay incident) could be considered the first hostile action against the United States of World War II. What if the US never entered ww2? Without the American entry into World War II, it's possible Japan would have consolidated its position of supremacy in East Asia and that the war in Europe could have dragged on for far longer than it did.
Tom Ford Velvet Orchid - To be Faithful yet Unfaithful (2014) {Perfume Review & Musings} From Black Orchid to Velvet Orchid or Black Fading into Purple The original sultry tropical signature of Black Orchid by Tom Ford (2006) is immediately recognizable in the new Velvet Orchid, which turns out to be a flanker - only with a special conceptual twist. There it is, this by now familiar accord, with its overripe pineapple and coconutty nuances. For a moment, you wonder if you are not faced with a novel paradox: a flanker so faithful to the original that it is not really a flanker anymore but a slightly rebottled affirmation of the matrix fragrance. In the land populated with forests of flankers, the reverse usually takes place: trees echo the name of a matrix-oak-perfume while letting out unrelated sillages for brand-recognition reasons. Often, no one even bothers to craft a coherent flanker. There is no philosophy of flankers that we know of, but rather plentiful marketing opportunities... As in a forest where you are standing at a bifurcation waiting for a sign telling you which way to go, the fragrance then moves away from Black Orchid while keeping some white pebbles in hand to be able to retrace its steps back towards the entrance of the familiar woods. Velvet Orchid is now entering drier territory than its predecessor. Where the film-noir orchid took you to the jungles of Malaysia and their muggy air, the orchid made of velvet takes you at first to a sandier place - a desert. It makes you think of Tom Ford Sahara Noir for this parched sensation. Black Orchid the original then becomes the oasis in which you rested before setting out into a new geography of desire. The newly drawn map of seduction is drier but also muskier. The musk surfs on the crest of a wave which smells urine-like on the other side. The range of notes explored privileges a shrill, high-pitched tone.
All the while, Black Orchid lurks underneath the new story line like a crouching black panther until the two story lines, the two eco-systems, merge. Then, the fatness and creaminess of coconut becomes counterpoised with sharp musk. This is probably what the brand calls an impression of "corporeal floral notes". The almondy sweetness of heliotrope permeates the perfume's atmosphere. It brings a pillowy, comfortable feel to the fragrance while the whole mix informs you this is no cocooning time but purring time. The velvety sensation rests on synthetic heliotropin and natural vanilla reportedly sourced from the Comoro islands. The artificial ambergris supports the sense of animalic intensity developed in the composition. Juxtaposed against the backdrop of spices and lush exotic floral notes, Velvet Orchid is a slightly disquieting brew, the way the unfamiliar can smell. Once you have familiarized yourself with the unfamiliar sensation cultivated by the fragrance creators, the opening of the fragrance is a renewed moment of surprise. You better discern the impact of the absolute of Succan - an ultra-purified extraction of rum - which together with the honey and the rest of the composition manages to recreate a vintage oakmoss-base feel. What the fragrance committee of four perfumers - Yann Vasnier, Calice Becker, Shyamala Maisondieu and Antoine Maisondieu - together with designer Tom Ford have succeeded at creating is a paradox: a perfume which smells both factually familiar and yet unfamiliar in spirit at the same time. Stylistically, the creative team have managed to give birth to this contradiction by working on a certain, well-accepted idea of what the unfamiliar can mean in olfactory terms.
If someone chisels out olfactory nuances which leave you feeling like you are stranded in a thatched house at the liminal point of entry into an obscure rain forest, then you are going to experience a sense of spatial disorientation even if the familiar effluvia of Black Orchid are recognizable on the Art Deco bamboo vanity table brought back from the city to decorate this house set in the wilderness. The sophisticated floral accord of magnolia, daffodil, hyacinth, Turkish rose, jasmine absolute, orange blossom, and Cattleya leopoldii Scent Trek, said to be accented with fleur de sel, is just a bit strange but not too much. The four perfumers have used the familiar as a scent to lure you in - in a hunting sense - and then to throw you off path with the irruption of new olfactory events, but also by blending in cues that signal subconscious prowling danger. You are now smelling a perfume which suggests an exotic potion made of danger-laden flowers and animals, filled with the experience of unpredictable women. A woman who wears such a perfume doesn't talk much. She lets her unplaceable fragrance do the talking, leaving her free to act in the space opened to expectations by her scent. It is the complex mix of emotions she arouses with her perfume which leaves you wondering what her next move will be. Velvet Orchid, like a well-scripted story, manages to successfully stage a film-noir heroine. Just don't do this at home. It's great for movies, myth-making and perfume-making but of course would be distasteful in real life. I always remain aware of the possible impact of perfume and beauty reviews on adolescents, as a former tutor, and so am quick to add a note of caution, as I feel enough damage is done by the beauty and fashion industries in their most extreme endeavors. So, the perfume smells dangerous as in intriguing, and as in abstract danger. It's playfully dangerous.
If Black Orchid created that improbable feeling of the unfamiliar followed by the extra familiar, as we said then in our review, Velvet Orchid does exactly the reverse: it starts with the extra familiar - Black Orchid uncensored - and then segues into the terrain of the unfamiliar. I wrote then about Black Orchid that, "The action of the perfume on the wearer is both dramatic and subtle. Black Orchid by Tom Ford, composed by perfumers David Apel and Pierre Negrin, initially imposes its presence with the help of what could be described as a shock-and-awe tactic. It then slowly ensnares you to the point of making you forget it just barely, a few hours ago, entered your life. This is quintessentially what a real femme fatale is supposed to do. It is also what a dream come true may feel like, one moment foreign, the next, intensely familiar." What you can appreciate in Velvet Orchid is real thinking devoted to the creation of a flanker at once faithful and unfaithful - a first - so as to maintain dramatic tension. You are left to imagine what the next flanker in this series is going to smell like as this elusive game of shadows continues to be played out over time, or not.
I'm not sure... that I can protect you. Can't... or won't? I'm gonna go get a drink. Don't go easy. Do we want to sell him or bury him? Okay, look, I've told her everything I know, but we've still got a lot of questions. Like why you're both here. It's 'cause the nano told you to come, am I right? How do you know that? I was always afraid that maybe one day they could wake up. How'd they wake up? What's the brain, besides a cloud of neurons all talking to each other? You get enough of 'em together... You cross into consciousness. There are more nanites than there are neurons in a brain... in a million brains. So, if they learn to network... That's a big-ass brain. But it still doesn't explain why us. It's the code you wrote, Aaron... The one we stole. We didn't just use it for the tower. We put it in the nano too. You what?
Common name: Jimson weed, Datura
- Origin: South America
- Height at maturity: 1 to 2 meters
- Plant type: flowering plant
- Flowering: June to frost
- Flower color: white
- Type of soil: rich in humus and draining
- Watering: normal, without excess
- Use: isolated, clump, pot or tray
- Diseases and pests: scale insects, red spider mites
- Toxicity: all parts of the plant are poisonous
- Storage of seeds: 1 year in a dry place away from light at 3-4°C (refrigerator)
Jimson weed can reach 2 meters in height under the right conditions. This plant produces superb, solitary, white flowers. They are large, 10 to 12 cm long, with corollas fused into a tube opening into a funnel with five slightly marked lobes; the calyx, also fused into a tube, is pale green, shorter than the corolla and terminated by five lobes. The flowering period extends from May-June until frost. The leaves are large (20 cm), oval, strongly sinuate, with sharp teeth. The fruits are 5 to 7 cm in diameter and covered with slender prickles. The datura loses its foliage in winter but the plant comes back the following spring. Be careful: all varieties of Datura are toxic and poisonous.
- Fish students should be four years old by August 31.
- Class meets Monday, Tuesday, Wednesday, and Thursday from 9 a.m. to 12 noon.
- Student-teacher ratio: 7:1.
Our active, energetic Fish class makes use of three big classrooms for discovery, play, cooking, and art. Designed for children 4-turning-5 years old during the school year, this is the typical class that prepares children for Kindergarten. Many children finish the year with beginning reading, writing, math, and science skills. Our curriculum uses developmentally appropriate themes to help students
- Recognize the shape and sounds of letters
- Recognize numerals and count by ones to thirty-one and by tens to one hundred
- Begin handwriting
- Develop fine motor skills with cutting, lacing, and tracing activities, including proper pencil and scissor grips
- Explore and discover God's creation
- Learn empathy for others
- Grow in self-confidence as they are encouraged to try new things
- Grow in physical strength and coordination with lots of outdoor play
- Gain self-control as they enjoy interactive large-group, small-group, and individual learning
Our Typical Schedule
9:00-9:40 Play in centers
9:40-9:50 Welcome and calendar activities
9:50-10:00 Bathroom break
10:15-10:30 Large group story and activity
10:30-11:00 Courtyard/bike time
11:00-11:30 Music (M, T, W) or Cooking (Th.)
11:55-12:00 Wrap up
The Second Palomar Observatory Sky Survey (POSS II) got underway in August 1985, with the first of the 14-inch photographic glass plates being pulled off Palomar's Samuel Oschin telescope (then called the 48-inch Schmidt Camera). Jean Mueller was hired as the 48-Inch Night Assistant in July of that year, and worked as observer and telescope operator for the duration. She took over 5500 photographic plates and had the honor of setting the telescope and removing the final plate from the historic Schmidt Camera on June 3, 2000, as well as discovering her last supernova, 2000cm, on that same night. Jean Mueller spent hundreds of hours (in her spare time) scanning POSS II plates under high magnification, looking for comets, fast-moving asteroids, and supernovae on an X/Y stage that held the 1 mm thick glass plates. Mueller would sometimes mark over a hundred galaxies recorded on a single POSS II plate to hunt for supernova candidates. She would then compare these plates with the first Palomar Sky Survey (POSS I) of similar fields. It was during the years of the POSS II project that Jean Mueller made all of her discoveries.
|4257 Ubasti||August 23, 1987|
|4558 Janesick||July 12, 1988|
|6569 Ondaatje||June 22, 1993|
|9162 Kwiila||July 29, 1987|
|(11028) 1987 UW||October 18, 1987|
|11500 Tomaiyowit||October 28, 1989|
|12711 Tukmit||January 19, 1991|
|16465 Basilrowe||March 24, 1990|
|19204 Joshuatree||June 21, 1992|
|24658 Misch||October 18, 1987|
|(360191) 1988 TA||October 5, 1988|
|(408752) 1991 TB2||October 3, 1991|
|(412976) 1987 WC||November 21, 1987|
Working at Palomar Observatory, she discovered a total of 15 comets, including 7 periodic comets - 120P/Mueller, 131P/Mueller, 136P/Mueller, 149P/Mueller, 173P/Mueller, 188P/LINEAR-Mueller, and 190P/Mueller - and 8 non-periodic comets.
She is credited by the Minor Planet Center with the discovery of 13 numbered minor planets during 1987–1993, including several near-Earth objects such as the Apollo asteroids 4257 Ubasti, 9162 Kwiila, and 12711 Tukmit and the Amor asteroid 6569 Ondaatje. The inner main-belt asteroid 4031 Mueller, of the Hungaria family, was named in honor of Jean Mueller for her astronomical discoveries. Discovered on February 12, 1985, by Carolyn Shoemaker at Palomar Observatory with the 18-inch Schmidt Camera, it was originally designated 1985 CL. The official naming citation was published by the Minor Planet Center on December 12, 1989 (M.P.C. 15576). Jean Mueller is an Advisor of the Meade 4M Community, which supports her outreach activities.
From reddit r/ACMilan Hit and miss: he oozes class but sometimes struggles with the physicality of the league. His decision making is not the best (there was a chance in the last game where he could've assisted Leao, who was open, and instead forced his way towards the CBs in front). His low centre of gravity helps him keep the ball in tight situations, but he overdoes it. He's not the finished article by any means; he just needs more game time and better decision making. (I'm sure others have a more in-depth comment)
What is a Bridge? A bridge is essentially a construction built to cross a road, valley, water body, or other natural obstacle, providing a route over the barrier. Several bridge designs are used, depending on their function and the soil conditions of the construction site. A bridge is normally described by its form of construction, like beam, truss, arch, etc. A bridge may also be characterized by its construction materials, like concrete, stone, and metal. A bridge may have different types of spans, including simple, cantilever, and continuous. Beam bridges can be simple and made of wood beams. Heavy beams are carried by a bridge crane using a beam clamp to hold the beams. Bridge load rating is performed to establish the loads that can be safely carried by beam bridges. It is essential to calculate the bending moments in a beam to establish a safe design. Characteristics of Beam Bridges - Types of Beam Bridges Beam bridges basically consist of a beam laid across piers or supports. The beam must be strong enough to bear the loads expected to be placed on it; these loads are carried down into the bridge piers. The loads compress the beam's top edge, while the lower edge is stretched and placed under tension. Modern beam bridges are formed by girders - normally box girders, trusses or I-beams - supported on strong piers. - Box girders are elongated, box-shaped elements that are better suited to bear twisting loads. - Trusses consist of one or more triangular units connected at joints or nodes. - I-beams are economical and simple to fabricate. They are simply beams with an I-shaped or H-shaped cross-section. The horizontal elements of the "I" are the flanges and the vertical element is the web. Other beam bridges may be fabricated from pre-stressed concrete beams.
These materials combine steel's ability to endure tensile loads with concrete's strength under compression. A beam bridge's strength is largely influenced by the distance between the piers. Beam bridges are therefore normally not suitable for longer crossings unless several spans are connected to each other. A beam bridge's span depends on the beam's weight and the strength of its materials. As the bridge material thickens, its capacity to hold loads increases, so the span can also be increased. However, a sturdy beam may become too heavy and sag. Beam bridges can be reinforced with trusses. Beam Bridges Materials With the advancement of technology, materials science has also advanced considerably. The materials used for beam bridges are strong, light, and durable, with good operational characteristics. Such materials include reformulated concrete, fiber-reinforced composite materials, steel, and pre-stressed materials. Pre-stressed concrete is well suited for beam bridge construction since it can endure high compressive stresses, while steel rods fixed in the concrete bear the tensile loads. Furthermore, pre-stressed concrete is cheaper. Current techniques include the use of finite element analysis to improve the design of beam bridges: the distribution of stresses on different bridge elements is analyzed to ensure the bridge can endure its loads. The beams should be supported by piers at the ends to increase load-bearing capacity. Concrete, steel, or stone is normally used for the construction of piers. Since stone and non-reinforced concrete are weak in tension, they are normally used for beam bridges designed for lighter loads.
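The bending-moment calculation mentioned above can be sketched numerically. This is a minimal illustration for a uniformly loaded, simply supported span, using the textbook formulas M = wL²/8 and σ = Mc/I; all numerical values below are hypothetical examples, not taken from any real bridge.

```python
# Illustrative check of a simply supported beam span under a
# uniformly distributed load w (N/m) over length L (m).
# All numbers are hypothetical example values.

def max_bending_moment(w, L):
    """Maximum bending moment at midspan: M = w * L**2 / 8, in N*m."""
    return w * L**2 / 8

def bending_stress(M, c, I):
    """Bending stress sigma = M * c / I, in Pa, where c is the distance
    from the neutral axis to the extreme fibre (m) and I is the second
    moment of area of the cross-section (m^4)."""
    return M * c / I

w = 10e3    # distributed load, N/m (hypothetical)
L = 20.0    # span between piers, m
c = 0.3     # half-depth of an I-beam section, m
I = 5e-3    # second moment of area, m^4

M = max_bending_moment(w, L)       # 500 kN*m
sigma = bending_stress(M, c, I)    # 30 MPa

# Compare against an allowable stress (e.g. structural steel yield
# with a safety factor) to judge whether the section is adequate.
allowable = 250e6 / 1.5  # Pa
print(f"M = {M/1e3:.0f} kN*m, sigma = {sigma/1e6:.0f} MPa, OK = {sigma < allowable}")
```

Doubling the span quadruples the maximum moment, which is why longer beam bridges need multiple piers, as the text notes.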
Image credits (https://commons.wikimedia.org): Concrete Beam Bridge; Iron Truss Beam Bridges; Niagara Falls Railway Beam Bridge.
Esther Anabo drags her bare feet, occasionally biting her lower lip as she struggles away from the new borehole, carrying a 10-litre jerrycan of water under the oven-hot midday sun. Fifteen minutes and 800 metres later, she is back in her small, grass-thatched house at Ominit village in Katine sub-county. Her 12-year-old daughter, Gladys, pours some of the clear water into a blue plastic mug. Anabo, 27, sighs with weary relief and gulps it down. 'This borehole has helped us greatly,' she says, drying her sweaty palms on her floral dress. 'We used to suffer stomach upsets all the time because we had no safe water in our village. Now we drink clean water, and it is nearer, too.' Esther and her neighbours are among the first to benefit from the Katine project. The borehole at Ominit, dug three weeks ago, is the first of eight to be delivered by the African Medical and Research Foundation (Amref) as part of a three-year project. Amref is working in five areas in this poor sub-county of 66 villages containing 25,000 people: water and sanitation, health, education, livelihoods and community empowerment. Unlike much development work, this project is integrated - with work across all five areas at once. The standard of teaching in Katine is frustrating Esther. Her eldest child, Titus Ebola, and his sister, Gladys Agemo, attend Ochuloi primary school, about 2km away. Titus, 14, is in 'primary four', while Gladys is 11 and in the class below. Their mother has grand hopes for both, dreaming of Titus becoming a doctor and Gladys a teacher - 'if they study well'. But the standards in Katine's 13 primary schools are desperately poor - as they are in most rural Ugandan schools. Last year in the sub-county 314 candidates - 205 boys and 109 girls - sat the national primary leaving exams and not one passed Grade One, seen as the minimum indicator of good performance. This has been the same for the past four years. 
When asked, in English, his name and age, 14-year-old Titus looks embarrassed. Gladys can't state her age or name in English either, but English is the language of instruction and for national examinations. Anthony Otim Onyang, Ochuloi school's head teacher, admits the results are poor, but blames a lack of supervision from parents and the school management committee. He also points to the state of the classrooms: some are simply shelters - iron roofs on brick pillars. When it rains the class is flooded and some exercise books are, inevitably, damaged or destroyed. The Katine project has started to change that. New textbooks have been bought and kits provided for digging pit latrines in schools. Training has been organised for teachers and school management committees. The list of challenges is long, but for Amref's Katine education officer, Lillian Viko, any attempt to fix schools has to start with good management. 'We hope that, if the schools are managed well, then we will see good performance,' she says. The Katine project is run in partnership with Amref and Barclays. Readers have so far donated more than £722,000 which has been matched by Barclays. The bank has also donated an additional £500,000. The livelihoods part of the project is run in partnership with Farm Africa. · Richard M Kavuma is an award-winning journalist with the Weekly Observer in Kampala. He is spending two weeks every month reporting on Amref's work in Katine sub-county.
Some new research reported last week suggests that all is not well in Low Carb Diet land. According to popular belief, low carb/high fat/high protein diets lead to faster and easier weight loss, and improvement in some health risk measures. In this study from the University of Alabama at Birmingham, obese rats were fed the typical high fat/low carb diet recommended for weight loss. The rats had more damaging and deadly heart attacks compared to rats fed a low fat control diet. The reasons for this effect appear related to increased oxidative stress on heart tissue, and an inability of heart muscle cells to adjust to using ketones for energy, instead of glucose. A high fat diet forces cells to burn more ketones, which are fat metabolism byproducts, for energy. After a heart attack, insulin and glucose would normally help heart cells recover, but the cells’ ability to respond to insulin is impaired by a high fat diet. The message: if you’ve got heart disease, a low carb/high fat, meat-heavy diet may not be your best choice, whether or not you need to lose weight. And more bad PR for low carb. An editor for MedPage Today examined evidence that low fat/high carb diets are actually superior at improving glucose tolerance for Type 2 diabetics. Dr. George Lundberg notes that the typical advice these days might be for Type 2 diabetics to avoid starch and sugar. But some studies of glucose levels dispute this conventional advice. Dr. Lundberg discusses one study from 2006, comparing a low fat/high carb vegan diet to the standard American Diabetes Association recommendations. The vegan dieters lost more weight and were more likely to lower their medications compared to the official ADA diet plan. Additionally they were more likely to stick with the diet, because they could eat as much as they wanted. This is a key point, because diets that look good on paper are worthless if people can’t stick with them. 
Speaking of fat, a study from Canada disputes the widespread belief that nutrition labeling results in healthier food choices and less obesity. Actually, I think we already knew this. The study involved asking subjects from the US, Canada and France questions about the fat content of food products. French respondents knew little compared to the very nutrition-wise Americans. Nevertheless, as the researchers point out, the French have one-third the obesity rate of Americans. Conclusion: knowledge of nutrition facts has no effect on what people eat. Well, I'm not sure that's a cause-and-effect relationship. Food and meal customs in France are dramatically different from those in the US, whether or not people know a few nutrition facts or read labels. The lack of a nonstop-snacking culture, small portions and vastly lower consumption of sugary soft drinks probably have as much to do with the lower obesity rate. Reading labels isn't going to fix those American habits. But the study does point out the futility of depending on labels to fix our eating habits. We don't eat by numbers, or at least most of us don't, the Food Police being the exception. Finally, are you lured into buying a product because "ANTIOXIDANTS" is plastered on the front of the package? One professor of nutrition has a message for you: Stop! According to Dr. Carl Keen from the University of California, Davis, the word "antioxidant" should be banned from food labels. He points out that test tube experiments that identify various antioxidants in foods don't say anything about whether those chemicals do any good once you eat them. So-called super fruits, or foods with high ORAC values, may not improve health if the antioxidants in those foods aren't used in cells for beneficial effects. And so far, there's very little evidence to back up claims of health benefits.
He thinks the money and effort spent measuring amounts of antioxidants in foods would be better spent investigating whether eating them leads to better health outcomes. The whole antioxidant craze is another example of Nutritionism. Eating by numbers — choosing foods because of some antioxidant content — might make you feel like you’re doing something healthy, but it isn’t necessarily going to make you healthier.
Patrick Wrixon, President of the European Initiative for Sustainable Development in Agriculture (EISA), gives thought to integrated farming and its positive impact on the environment. With global population growth, an increased focus on food security and concerns about environmental impact and biodiversity loss, it is essential that farming systems are sustainable. Sustainable farming delivers a site-specific farming system supporting the integration of the environment, society and farm economic viability over the long term (LEAF, 2012). Integrated farming is a whole-farm business approach that delivers sustainable farming. It uses the best of modern technology and traditional methods to deliver prosperous farming that enriches the environment and engages local communities. A farm business managed to Integrated Farm Management (IFM) principles will demonstrate site-specific and continuous improvement across the whole farm. The European Initiative for Sustainable Development in Agriculture (EISA) was formed in May 2001 with the aim of promoting sustainable farming systems, through integrated farming (IF), which are an essential element of sustainable development. EISA is an alliance of national organisations from 7 European countries (Austria, France, Germany, Luxembourg, the Netherlands, Sweden and the UK) as well as associate members from across the industry (including ECPA, FEFAC, ELO, Fertilisers Europe, GOSZ, IFAH-Europe and SAI Platform). Across the national members there is a network of demonstration farms which show integrated farming in practice. Alongside other management tools, this offers the opportunity to share IF with other farmers, to support knowledge exchange between researchers and farmers, and to discuss the system with stakeholders and policy-makers. Integrated farming promotes the efficient use of natural resources and the provision of positive environmental impacts while also meeting the needs of a growing population.
Due to the great diversity of individual farming businesses across each country, Europe and the world, sustainable farming relies on individual farmers' ability to understand the range of techniques available to them, alongside the knowledge that allows them to choose the most appropriate combination for their site-specific situation. Businesses farming to integrated farming principles use this framework to make the right decisions for productive farming while reducing the impact on the environment. Some growers use interactive management tools such as the LEAF Sustainable Farming Review. Good soil and water management are critical, and within the decision-making process and framework of IF, specific practices are increasingly adopted in the pursuit of more sustainable farming, driving attention to detail in a site-specific manner. Such practices include: precision farming; reduced and zero tillage; good scouting of insects and taking action based on threshold levels; the development of field margins, in-field habitat banks and conservation areas; new skills and a hunger for knowledge; and the protection and enhancement of valuable habitats and key resources, including biodiversity, soil and water. Increasingly, our farmers also manage conservation like a crop, meeting the requirements and needs of wildlife, namely the big three: a safe nesting habitat, summer food, and winter food. It is imperative that we improve the working relationship between scientific research and farmers, so that knowledge exchange facilitates a better understanding of the issues for which practical solutions are needed, as well as the knowledge required for optimal implementation of those solutions. As part of the development of IF in the UK, LEAF (Linking Environment and Farming) has felt it important to understand the drivers for change in encouraging more sustainable farming practices, and one critical aspect has been the marketplace.
The LEAF Marque is an assurance system recognising sustainably farmed products and is based on the principles of IF (or Integrated Farm Management, IFM, as it is known in the UK). Good food produced with care and to high environmental standards is identified in-store by the LEAF Marque logo. 22% of UK fruit and vegetables are now LEAF Marque certified, and LEAF is now working in some 33 countries across the globe, including 10 countries in the EU. Sustainable farming is a global issue, one where we share the problems as well as the solutions. What is critical is engagement among farmers, environmentalists, educationalists, industry, government and society when addressing agricultural and conservation issues. Sustainable farming is forward-looking, planned, practical and productive. Policies and practices need to integrate environment, food, education and health more closely, with clear engagement and communication among farmers and consumers. Integrated farming is one such model. ■ European Initiative for Sustainable Development in Agriculture (EISA)
Hybrid sol-gel channel waveguide patterning using photoinitiator-free materials

Fabrication of organic-inorganic sol-gel glass channel waveguides by ultraviolet (UV) photopolymerization of material synthesized from the precursor methacryloxypropyltrimethoxysilane is performed using a pulsed 248-nm excimer laser, without the need for photoinitiators. The transmission spectra of the channel waveguides are presented. Absorption bands are identified, revealing high loss (4-5 dB/cm) at 1.55 µm, mainly due to strong OH group content, and an acceptable value of 0.4 dB/cm at 1.3 µm. Heat treatment and UV irradiation showed similar effects on the transmission spectra of the channel waveguides, revealing no loss improvement in the 1.55-µm region.
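To put those loss figures in perspective: a propagation loss L (in dB/cm) over a length d (in cm) leaves a fraction 10^(-L·d/10) of the optical power. A minimal sketch of that conversion (the helper function and the 1 cm length are illustrative assumptions, not from the paper):

```python
def transmitted_fraction(loss_db_per_cm: float, length_cm: float) -> float:
    """Fraction of optical power remaining after propagating length_cm."""
    total_loss_db = loss_db_per_cm * length_cm
    return 10 ** (-total_loss_db / 10)

# Reported losses: ~0.4 dB/cm at 1.3 µm, 4-5 dB/cm at 1.55 µm.
for wavelength, loss in [("1.3 µm", 0.4), ("1.55 µm", 4.0)]:
    frac = transmitted_fraction(loss, 1.0)  # over a 1 cm waveguide
    print(f"{wavelength}: {frac:.0%} of power transmitted per cm")
```

This makes the contrast concrete: at 0.4 dB/cm roughly nine-tenths of the power survives each centimetre, while at 4 dB/cm well over half is lost.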
Some statistical indicators are more useful than others. Data on early voting, for instance, usually doesn't provide much predictive insight. Historically, the relationship between early voting in a state and the final voting totals there has been weak, and attempts to make inferences from early voting data have made fools of otherwise smart people. In the 2014 midterms, Democrats used early-vote numbers to claim that the polls were underrating their chances. Instead, it was Republicans who substantially beat the polls. None of this deterred reporters and analysts from frequently citing early vote data in the closing weeks of last year's presidential campaign, very often taking it to be a favorable indicator for Hillary Clinton. On Oct. 23, for instance, The New York Times argued that because Clinton had banked votes in North Carolina and Florida, it might already be too late for Donald Trump to come back in those states. Initially, these reports on early voting were at least consistent with the polls: Clinton had led in most polls of North Carolina and Florida in mid-October, for instance. But when the race tightened after James B. Comey's letter went to Congress on Oct. 28, early voting data was increasingly cited in opposition to the polls, with pundits and reporters criticizing sites such as FiveThirtyEight and RealClearPolitics for not incorporating early voting data into their forecasts. (It can be easy to forget now, but we spent a lot of time arguing with people who thought our forecast was too generous to Trump.) So what happened? In North Carolina, Clinton won the early vote by 2.5 percentage points, or about 78,000 votes. Furthermore, about two-thirds of votes were cast early. But Trump won the Election Day vote by almost 16 percentage points. That was enough to bring him a relatively healthy, 3.6-point margin of victory over Clinton overall.
Clinton won early voting, but Trump won North Carolina

                            Trump                Clinton
Early (mail or in-person)   1,474,296   47.1%    1,552,203   49.6%
Election day                  888,335   55.1       637,113   39.5
Total                       2,362,631   49.8     2,189,316   46.2

Election day votes include provisional ballots. Figures include third-party candidates. Source: North Carolina State Board of Elections

The Election Day surge for the GOP wasn't anything new in the Tar Heel State, however. In 2012, President Obama had built a 129,000 early vote lead over Mitt Romney — substantially larger than Clinton's over Trump — but had lost the Election Day vote by a huge margin, costing him the state:

Obama won early voting, but Romney won North Carolina

                            Romney               Obama
Early (mail or in-person)   1,297,067   47.2%    1,426,129   51.9%
Election day                  973,328   55.3       752,262   42.8
Total                       2,270,395   50.4     2,178,391   48.4

Source: North Carolina State Board of Elections

So Clinton was running behind Obama's early voting pace in North Carolina — which obviously wasn't a good sign, given that Obama had lost the state. Why, then, had people taken the North Carolina numbers as good news for her? Actually, not everybody did. A few news outlets had pointed out that Clinton was running behind Obama's pace there, and the Clinton campaign itself was worried about its North Carolina numbers.[1] Still, early voting data can be easy to misinterpret. Early voting is a relatively new innovation. Traditions and turnout patterns vary from state to state, and they can change whenever new laws are passed, or depending on how much the campaigns emphasize early voting.[2] Meanwhile, early voting numbers are reported from lots of different states at once. Many news outlets focused on a supposed turnout surge for Clinton among Hispanic voters while giving less attention to signs of decline in African-American turnout.[3] The latter was actually more important than the former because blacks are more likely than Hispanics to be concentrated in swing states.
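The margins quoted in the text can be recomputed directly from the certified 2016 North Carolina figures; a quick sketch (the variable names are mine, the counts and vote shares are the ones in the table above):

```python
# North Carolina 2016: raw counts and reported vote shares (shares
# include third-party candidates, so the two columns don't sum to 100).
trump   = {"early": 1_474_296, "eday": 888_335, "total": 2_362_631}
clinton = {"early": 1_552_203, "eday": 637_113, "total": 2_189_316}
shares_trump   = {"early": 47.1, "eday": 55.1, "total": 49.8}
shares_clinton = {"early": 49.6, "eday": 39.5, "total": 46.2}

early_lead   = clinton["early"] - trump["early"]  # Clinton's raw early-vote lead
early_margin = shares_clinton["early"] - shares_trump["early"]
eday_margin  = shares_trump["eday"] - shares_clinton["eday"]
total_margin = shares_trump["total"] - shares_clinton["total"]

print(f"Clinton early-vote lead: {early_lead:,} votes ({early_margin:.1f} pts)")
print(f"Trump Election Day margin: {eday_margin:.1f} pts")
print(f"Trump overall margin: {total_margin:.1f} pts")
```

The raw early-vote lead of 77,907 is the "about 78,000 votes" in the text, and the Election Day margin of 15.6 points is the "almost 16 percentage points" that flipped the state.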
Furthermore, early voting data doesn't necessarily provide reason to doubt the polls, because early voting is already accounted for by the polls. For instance, some North Carolina polls had shown Clinton losing the state despite winning among early voters, just as actually occurred. So there are multiple interpretations of the data, but there's not much empirical guidance on which one works best, which makes for a recipe for confirmation bias. The Times, for instance, was exceptionally confident in Clinton's chances from the start of the campaign onward, and early voting tended to reinforce its pre-existing views of the race. There's also a broader point to be made about the use and abuse of data in campaign coverage. After the election, some of the pundits who had touted Clinton's early voting numbers as an alternative to polls claimed that "the data" was wrong and had led them astray. And the Times, which had spent a lot of time reassuring its readers that Clinton would win, wrote an article entitled "How Data Failed Us in Calling an Election." Whenever I see phrasing like this, I mentally substitute the near-synonym "information" for "data" and reconsider the sentence. Would the Times have published a headline that read "How Information Failed Us in Calling an Election"? Probably not, because that sounds like the ultimate dog-ate-my-homework excuse. Isn't it the job of journalists to sort through information and uncover the real story behind it? But the thing is, blaming "the data" usually is a dog-ate-my-homework excuse. The problem is often in assuming that because you've cited a number, you've relieved yourself of the burden of interpreting the evidence. And as we've described in the first few installments of this series, news outlets referenced lots of data during the general election but often misinterpreted it, almost always reading it as good news for Clinton even when there were conflicting signals.
They touted early voting as favorable for Clinton, even though it hadn't been very predictive in the past and showed problems for her in states such as North Carolina. They asserted that the Electoral College was a boon for her, even though the data showed it was Trump's voters and not Clinton's who were overrepresented in swing states. They highlighted Clinton's numbers in Arizona, but downplayed data showing Clinton struggling in Ohio and Iowa, which had traditionally been bellwether states. They mostly ignored data showing an unusually high number of undecided voters, which made Clinton's polling lead much less secure.

I don't mean to suggest that one should have gone to the other extreme and confidently predicted a Trump victory.[4] Nor do I mean to imply that interpreting election data correctly is easy; it usually isn't. (This goes for us too: FiveThirtyEight got itself in one heck of a mess in assessing Trump's chances in the Republican primary.) But political journalism circa 2016 was in a place where there was a lot of fetishization of "data," but not a lot of experience with or appreciation for the tools needed to interpret it — namely, probability, statistics[5] and the empirical method. That made for a high risk of overconfidence in extracting meaning from the data.

1. Or at least it claimed to be, based on comments it made in December at the Harvard Institute of Politics conference.
2. Also, early voting data is incomplete: Many states report early voting turnout statistics by party before Election Day, but they don't actually count the votes until election night.
3. To its credit, the Times did publish one excellent article on declining black turnout numbers, although it didn't figure much into their final analyses of the race.
4. My view, instead — even with the benefit of hindsight — is that the preponderance of the data showed that Clinton was a favorite, just not a particularly heavy favorite.
5. The term "statistics" has two common meanings.
There's statistics as in nuggets of quantified information, e.g., "Tom Brady threw for 28 touchdowns this season" and "there were 17 unprovoked shark attacks in Australia in 2016." And then there's statistics as in a branch of science devoted to the analysis and interpretation of data, e.g., "there's no correlation between shark attacks in Australia and Tom Brady's passer rating." At FiveThirtyEight, we're mostly interested in the latter definition of statistics — that is to say, we're interested in statistical analysis — since statistical factoids cited without context are mostly just noise.

Nate Silver is the founder and editor in chief of FiveThirtyEight.
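The distinction in that final footnote can be made concrete with a few lines of code. The numbers below are invented purely for illustration (they are factoids in the first sense); estimating the correlation between them is statistics in the second sense:

```python
# Hypothetical yearly figures, invented for illustration only.
shark_attacks = [12, 17, 9, 14, 11, 16]
passer_rating = [98.7, 102.2, 87.0, 112.2, 96.2, 104.1]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson_r(shark_attacks, passer_rating):+.2f}")
```

No single entry in either list tells you anything; the correlation coefficient is a summary of the relationship across all of them, which is the kind of interpretive work the article is arguing for.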
The Force is strong with 'Clone Wars' writer

[Photo: A screen shot from "Clone Wars," the animated series that continues the "Star Wars" saga. Credit: LucasFilm]

Every job has its advantages and its disadvantages, but Christian Taylor's may have one of the best perks around: regular creative meetings with George Lucas. You may not know who Taylor is, but if you've got kids who watch TV, they almost certainly know his work. He's the head writer on "Clone Wars," the animated series that picks up the "Star Wars" universe where the movies leave off. After previous stints writing for shows as diverse as "Lost" and "Six Feet Under," Taylor began writing three years ago for "Clone Wars," which finishes up its third season on April 1. A confessed "Star Wars" obsessive, Taylor nonetheless brings a bit of sobriety to a writing role that tasks him with the responsibility of caretaking a cast of characters and a roster of storylines that millions of people take very seriously. "Clone Wars" began airing on the Cartoon Network in 2008. It follows the continued adventures of some of the most loved characters from the original and prequel "Star Wars" films and is set in the time frame between "Episode II: Attack of the Clones" and "Episode III: Revenge of the Sith." But if you're old enough to have seen any of the original three films in theaters when they first came out, you might not even know about the show. Unless you have children, that is. For many kids, "Clone Wars" was their introduction to the giant Lucas-created ecosystem--so don't tell them that there's nothing like the original films. Taylor says he doesn't get into the game so many want to play with "Star Wars"--which film is the best? Are the prequels awful, or are the first three films too slow? For him, it's just a privilege to be involved.
Yesterday, Taylor sat down for a 45 Minutes on IM interview and talked to CNET about things as diverse as writing about the Force, what it's like to meet with Lucas, what his 10-year-old self would say to him about his job, and, yes, his opinions on Jar-Jar Binks.

Q: Thank you very much for doing this. It's great to have you here.
Christian Taylor: Thank you.

Q: To start, I have to ask: I was told that you were in a notes meeting with Lucas earlier today. Now, I know you can't tell me any of the substance, but can you talk about what it's like to be in meetings with Lucas?
Taylor: Well, I've been doing this for three years now, and it never gets old. It's both surreal and completely ordinary at the same time. He's great fun and more mischievous than people give him credit for. He's also incredibly smart, so it's never boring. I loved "Star Wars" as a kid, like all my peers. I was talking to my sister over the Christmas holidays and she said, "Imagine if you could talk to your 10-year-old self and tell him what you'd be doing when you grew up!" Sometimes I pinch myself, but it's a fantastic job.

Q: What would your 10-year-old self say to you? How impressed would he be?
Taylor: He'd say, "Bitch, grab you a piece of the Death Star and frame it!" Apparently a load of that stuff ended up in a dumpster [after the filming of the original "Star Wars"]. No, seriously, he'd be impressed and excited and would say, "Don't screw it up!" That's kind of what I say to myself every day.

Q: What was your experience of seeing the original "Star Wars" the first time?
Taylor: It's vague, as I was pretty young. I saw it in the cinema, I know that much. I think it took my parents awhile. I remember the line and the ship going overhead. "Empire" was more profound for me. I went to the premiere because my sister's friend worked for Warner Bros., and I have a program signed by Dave Prowse [who played Darth Vader] and Mark Hamill. I built the snowscape from "Empire" in my living room with polystyrene.
I was obsessed.

Q: How did you end up working on "Clone Wars"?
Taylor: I had a meeting and then got a call from my agents saying they want you. They had met with a lot of people, and as a writer from "Six Feet Under" and "Lost" I didn't think I was an obvious choice. But they wanted to move away from animation writers and focus on a drama writer from TV. It was luck, I guess.

Q: As a writer for "Lost" and "Six Feet Under," do you see any similarities in the themes between those shows and "Clone Wars"?
Taylor: That's a tough one. I think "Six Feet" was a great space to explore characters, and "Lost" was the show of riddles and how to get out of them, combined with great characters. Mix those together and maybe you have "Star Wars." "Star Wars" is an incredible universe to play in because the characters are key but the situations need to be unique and fun. The first episodes I wrote were epic and dealt with the Force. You don't get to do that every day as a writer. What's amazing about writing this show is you can write it and they can build it. Literally. Most shows I've worked on are an endless game of compromise. On most TV budgets and schedules, you could never achieve what we do dramatically and physically on "Clone Wars."

Q: What prepared you to write for this universe that so many people hold so dear?
Taylor: I think one of the reasons I've succeeded is that I know "Star Wars" well but am not a fan as such. I really have no understanding or comprehension of the entire universe that is the books, comics, video games, etc. I know the movies and love the characters. That would be a lot to live up to and would confuse my brain. I write from character and always try to be sincere and true and emotional when writing them. I have no idea what droid is called what ... there are far better people to audit that. Writing in such a beloved universe is like standing on a cliff's edge. If you look down and see all the fans, you'd lose your nerve and get serious writer's block.
Q: What's your sense of whether old-school "Star Wars" fans accept "Clone Wars" as a legitimate part of the story? Old school being, you know, folks who were old enough to see the original films when they first came out.
Taylor: It's funny: when I tell the writers, directors and executives I have meetings with what I am doing, they are so jealous. I think a lot of people my age are watching it with their kids and getting to relive the world through two perspectives: their own and their kids'. Many more people are watching it than I thought. Guilty pleasure or not, it's somewhat a place to come home to. The fans love the show if for no other reason than it's "Star Wars" and they know George Lucas has been in every single story meeting. As we get better at telling stories, the drama gets better too. If they have a problem with the direction of the show, they should call George. LOL.

Q: Some have said that "Clone Wars" is more fulfilling than the prequel trilogy. Might this series become a coveted classic in 10 to 20 years?
Taylor: I think all of "Star Wars" will be that. We have to remember that a whole new generation has grown up understanding "Star Wars" through "Clone Wars" and has never seen the movies. The whole which-"Star Wars"-movies-do-you-like game is a losing battle, since each generation loves their movies. Don't forget the kids who saw the prequels find the first movies slow and boring, without enough lightsaber fights.

Q: Tell me something about the show that would surprise even the biggest fans.
Taylor: Dumbledore is gay! Sorry, let me think. I might get thrown in jail. Lucas jail. I think people don't realize how involved George Lucas really is. Also how amazing ["Clone Wars" supervising director] Dave Filoni is. I call him the James Cameron of animation. What the artists are doing on the show has never been done before. Period. The show is really groundbreaking and an honor to write for.
Rarely in my career have I handed over a script and later thought, "They really made that cool." Dave and his team really do that.

Q: At some point, is it out of the question to expect animated adaptations of the third trilogy that was never filmed?
Taylor: I've heard George say, and so have others, that there was never a third trilogy. I hope there will be another animated movie to showcase how the show is kicking ass technically and story-wise. I can't say more than that.

Q: Tell me the truth: where do you come down on the Jar-Jar Binks spectrum? Love him, hate him? Somewhere in between?
Taylor: That's a tricky one. Whether you like it or not, he is part of the "Star Wars" universe and has to be written for. Some kids love him, and he is an access point for really young children. George is a smart man, and Jar-Jar is who young children could identify with. We will see him again, but he is not a major part of the show. Personally I have never had to write him. But he has a good heart. And that is important for kids to celebrate. [Screw] the cynics.

Q: Last question, and it's my standard one for this series. I love doing these IM interviews because I get a perfect transcript, and because my interview subjects can be a little more thoughtful, and a little more articulate, than they might otherwise be. But also because IM makes multi-tasking easy. So, what else were you doing while we were doing the interview?
Taylor: I bought this new back massager and foot reflexology machine, so I basically had a spa session while making lists of all the [stuff] I have to do! No, you kept me busy and wore me out.

Q: Well, thanks so much for your time. I really appreciate it.
Reddit r/mtgfinance

I'm gonna look for some old-school cards I need for my deck. Not Reserved List, but stuff like Su-Chi, UL Bolt/Psi Blast and OG Chain Lightning. Gonna look to move some of my modern-era cards since I play the format very little these days. I'm keeping the fetches and some shocks, but basically everything else is on the table if I get decent money for them. I am not in a panic, though. I think in general MTG finance decisions have to be looked at through a player lens more than ever. If you use the cards for play then you are good, but if you don't play a card you are probably better off selling it now unless it's Premodern/93-94/RL. As long as WotC reprints cards at this rate, keeping newer cards you don't use is a bad financial decision.
Development of the olfactory structures within the nasal cavity has been described in detail in mammalian species, such as the sheep, Ovis aries (Kocianova et al., 2003), the pig, Sus scrofa (Holubcova, Kocianova & Tichy, 1997), the mouse, Mus musculus (Cuschieri & Bannister, 1975), and even in humans (Kimura et al., 2009). Despite the fact that reptiles have recently become very popular domestic pets, clinical and morphological research related to these species is rare. Olfaction itself is an important sense, related to primitive behavior patterns (Suzuki & Osumi, 2015); therefore, investigating the ontogenesis of its specific structures in reptiles has become a matter of high importance. As for the ontogenesis of individual species, no single taxon of reptiles has been widely adopted as a laboratory-amenable developmental model. Valuable morphological criteria for developmental staging are the development of the limbs and pharyngeal arches (Wise, Vickaryous & Russel, 2009). General embryonic development has been described in several reptilian species, such as Calotes versicolor (Muthukkaruppan et al., 1970), Uta stansburiana (Andrews & Greene, 2011) and Liolaemus tenuis tenuis (Lemus et al., 1981). The upper respiratory tract of reptiles includes the cavum nasi proprium and the conchae. The cavum nasi proprium occupies the area of the head cranial to the eyes (Parsons, 1970). The olfactory region of the nasal mucosa in squamate reptiles is restricted to the dorsal aspects of the nasal cavity and nasal conchae and is lined by multi-layered sensory epithelium (Rehorek, Firth & Hutchinson, 2000; Jacobson, 2007). The olfactory region is distinguished from the respiratory region by the presence of submucosal Bowman's glands and the absence of goblet cells (Rehorek, Firth & Hutchinson, 2000).
Some authors, describing the olfactory epithelium in other, non-squamate vertebrate species, divide the mature cells of the olfactory epithelium into three main groups: olfactory cells, sustentacular (supporting) cells and basal cells (Kocianova, Tichy & Gorosova, 2001; Morrison & Costanzo, 1990; Kratzing, 1975; Polyzonis et al., 1979). Similar cell populations have also been described in reptiles (Kondoh et al., 2012). The chemosensory olfactory structures of many tetrapods include the olfactory organ and the vomeronasal organ (Rehorek, Firth & Hutchinson, 2000). In reptiles, both the main olfactory organ and the vomeronasal organ are derived from the olfactory placode (Suzuki & Osumi, 2015; Rehorek, Firth & Hutchinson, 2000; Holtzman & Halpern, 1990). It is believed that the vomeronasal organ and the olfactory epithelium are homologous structures responsible for the olfactory sense. Neural tracts from both of these structures project to the olfactory bulb of the brain (Parsons, 1970). The current work presents the development of the olfactory epithelium in the green iguana (Iguana iguana). The development of nasal structures in the green iguana was described by Slaby (1982). That author presented two important developmental stages of the green iguana and mainly concentrated on comparative work between this species and other species of the class Sauropsida. The aim of this study was to investigate the morphology of the developing olfactory epithelium in the green iguana, as well as to provide more developmental stages than described by Slaby (1982).

Materials and Methods

The samples were gathered in cooperation with Brno Zoo (Brno, Czech Republic). The eggs were incubated at 27°C and a relative humidity of 90%. The embryos used were all removed from the eggs, decapitated and placed into a container with formaldehyde. Sampling of the embryos did not require special permissions according to local legislation.
Sampling began on day 67 of incubation and was repeated every four days until hatching; 20 embryos were used in total. The last embryo was removed from an egg after an incubation period of 135 days. The samples were processed by the methods of fixation and decalcification that are standard for light microscopy: after the embryos were removed from the eggs, they were immediately fixed in 4% neutral formaldehyde. Due to the ossification of the skull, most embryonic stages were decalcified in a solution of 5.5% EDTA in 4% formaldehyde. The submersion lasted from three weeks (in early stages) to nine months (in late stages). The decalcification endpoint was determined mechanically. During dehydration, a graded ethanol series was used: the first bath had an alcohol concentration of 30%, and the following baths had alcohol concentrations of 50%, 70%, 80%, 96% and 100%. The samples were then immersed in a bath of acetone and three baths of xylene. After the dehydration process, the samples were embedded in paraffin wax. Serial sections were cut transversally (in some early stages) and sagittally at a thickness of 4 µm. The sections were then dried and stained with hematoxylin and eosin. The samples were investigated by light microscopy and documented by digital photography using an Olympus BX51 light microscope and a DP70 digital camera.

Scanning electron microscopy

The samples were dehydrated in an ascending alcohol series: 30%, 50%, 70%, 80%, 90%, 96% and 100%. Critical-point drying was performed in a Bal-tec CPD 030 Critical Point Dryer (Bal-Tec, UK) using CO2. The samples were gold-coated with a Balzers SCD 040 and then examined and photographed with a TESCAN VEGA TS XM 5136 scanning electron microscope. Staging of ontogenesis was performed according to the criteria set by Wise, Vickaryous & Russel (2009) and marked Wx (where x is the number of the stage).

Results

Basal structures of the nasal cavity were visible on day 67 of incubation (stage W36); the cavity was arch-shaped.
On day 83 of incubation (W38, Fig. 1), the arch-shaped cavity seemed to have a thin channel which widened dorso-caudally and divided into several projections lined by stratified columnar epithelium. In the medial plane, the nasal cavity communicated with the oral cavity via the primary choana. The vomeronasal organ was also visible at this stage. It had the shape of a rounded fissure and was lined by stratified epithelium with hyperchromatic nuclei. The uppermost cellular layer had cilia on its apical surface. Significant changes in the shape and size of the nasal cavity began at approximately day 91 of incubation (W39). In the middle plane of the nasal cavity, a cartilaginous disc, the future turbinate, grew in from the lateral direction, dividing the nasal cavity into dorsal and ventral compartments (Fig. 2). The ventral compartment bulged out in the rostral direction. It was separated from the dentogingival lamina by a thin bony plate. Laterally, the nasal cavity formed shapes considered to be paranasal sinuses in higher vertebrates. On day 99 of incubation (end of W40), the thin bony plate transformed into the base of the palate. On day 103, a prominent mucosal fold projected into the nasal cavity from the lateral direction. In further development, several more mucosal folds projected into the cavity. The folds were more numerous in the area of interconnection with the oral cavity. Opening of the nostrils to the outer environment occurred on day 111 (stage W42). From this stage on, there were no significant changes in the shape and size of the nasal cavity. Medially, the cavity comprised most of the facial part of the skull. Laterally, it narrowed into an arch-shaped slit, which surrounded a wide glandular formation. The lining epithelium of the nasal cavity changed through the stages. On day 91 of incubation (stage W39), the differences among the individual epithelial layers lining the nasal cavity were clearly visible.
In the dorsal direction, the epithelium of the nasal cavity was columnar and stratified (Fig. 3). In the ventral plane of the cavity, the height of the epithelium decreased along a sharp line and the lining had three layers (Fig. 4). There were also differences in the characteristics of the cells: while the nasal cavity epithelium dorsally consisted of cells with distinct hyperchromatic spherical nuclei and scant cytoplasm, ventrally there were cells with small nuclei and abundant, lightly stained cytoplasm. The basal layer of these compartments of the nasal cavity was composed of cuboidal and columnar cells, with nuclei located in the apical part of the cytoplasm. In some locations, the lining of the ventral compartment of the nasal cavity was composed of a flat layer of cells; nevertheless, it was still stratified. Structures resembling the cilia of other species were apparent in some places on the surface of the epithelium in the ventral nasal compartment. On day 107 of incubation (W41), the lining of the nasal cavity could be divided into three areas. Dorsally and rostrally, the mucosa was lined by stratified columnar epithelium with a rough base; its cells were arranged into up to fifteen uniform layers (Fig. 5). Tall but sparse cilia could be found on the apical surface of the epithelium. Dorsally and caudally, the epithelium was lower, and its cells could be divided into elements with rounded hyperchromatic nuclei, found in the lower half of the epithelium, and elements with oval bright nuclei, located at the epithelial surface. Ventrally, the epithelium was composed of a uniform, bright cellular population (Fig. 6). This epithelium was lower than the lining of the dorsal surface of the nasal cavity; its cells had a columnar shape. On the surface, there were numerous but relatively short cilia. The ventral mucosa created folds that were most numerous near the transition into the oral cavity.
After the opening of the nostrils (day-111 stage, W41), the lining of the initial compartment of the respiratory ways was formed by stratified squamous epithelium, which was keratinized in its rostral parts. The structure of the mucosal epithelium remained similar to the previous stages and contained several scattered alveolar glands that drained into the nasal vestibule. The vomeronasal organ was seen on day 111 as a formation enclosed laterally by mucosa. On day 115 of incubation (end of W42), the entire nasal cavity was narrower. In the dorsal part, the mucosal surface appeared wavy due to the unequal height of the ciliated epithelium. Ventrally, the mucosal surface was arranged into mucosal folds, which were created by projections of the base of the mucosa. From this stage on, there were no changes within the epithelium, but there was increased development of the vascularization visible in the connective tissue stroma of the nasal cavity. At the day-115 stage (end of W42), the lamina propria was highly vascularized by thin-walled vessels. The vessels gradually became more numerous in the following stages. At the day-131 stage of incubation (end of W42, Fig. 7), thin-walled vessels were most numerous in the lamina propria of the rostral part of the nasal cavity and its vestibule. This vascularization somewhat resembled Kiesselbach's plexus. Up to the day-135 stage (end of W42), the only significant difference was the expansion of the venous network inside the stroma of the mucosal connective tissue. It was located medially from the glandular formation mentioned previously.

Scanning electron microscopy

At the day-87 stage (W39), a cleft interconnecting the nasal and the oral cavity was clearly visible. At the oral part, this cleft connected to an S-shaped pit which protruded into the vomeronasal organ (Fig. 8). At the day-103 stage (W41), a mucosal fold appeared on the aboral side of the entrance to the vomeronasal organ (Fig. 9).
In the following stage, the fold increased in size and covered almost the entire oral half of the cleft which connected the oral and the nasal cavities. At later stages, the fold developed into a wedge, which protruded caudally and created a fork-like branching of the cleft (Fig. 10).

Our study presents the shaping of the nasal cavity and olfactory epithelium in the green iguana embryo during the second half of embryonic development (according to the staging by Wise, Vickaryous & Russel (2009)). Sampling was also done at an earlier stage; however, the structures of our interest were not yet visible (which correlates with the staging mentioned above). It must be mentioned that no large morphological differences were seen between subsequent individual samples (consecutive incubation days). Nevertheless, it is important to keep in mind that the incubation period (and thus the staging) of embryonic development in iguanas depends on external conditions. The length of incubation cannot be used as the sole criterion for embryonic staging in tetrapods (Licht & Moberly, 1965; Van Damme et al., 1992). For this reason, it is possible to assume that incubation under different or changing conditions would cause a non-continuous development with large morphological gaps. The cells that seemed to belong to the olfactory epithelium resembled the olfactory epithelium of other species. In the mouse, the olfactory epithelium shows two types of nuclei from the 13th day of gestation onward: oval nuclei in the apical and lower parts of the epithelium and rounded nuclei in the middle cell layer (Cuschierl & Bannister, 1974). In the ovine fetus, oval nuclei are believed to belong to the sustentacular cells and spherical nuclei to the sensory cells (Kocianova et al., 2003). Kondoh et al. (2012) described similar cell populations in snakes. There is a difference in the staining intensity of the nuclei of these two cell types.
In mammals, the oval nuclei are darker and the rounded or spherical nuclei lighter; in the green iguana, the situation is the opposite. There are two possible explanations for this fact. The first could be simply circumstantial, related to the specific moment in fetal development and to the specific cut and view of the slide. The second, suggested by Rehorek, Firth & Hutchinson (2000), is that the staining might be related to species-specific granules that are present in the sustentacular cells of non-mammals. The description of Kondoh et al. (2012) mentions lipofuscin granules in sensory cells. For this reason, we believe that, despite this minor difference from mammals, we are dealing with the same type of epithelium. Moreover, the location of the olfactory epithelium is similar to that in mammalian species (Kumar, Kumar & Singh, 1993; Kumar et al., 2000) and to descriptions in other reptiles (Jacobson, 2007; Parsons, 1970). This anatomical location has also led us to believe that the dorso-caudal epithelial portion is sensory (olfactory). Rostrally, the mucosa is lined by pseudostratified epithelium with fifteen layers and a corrugated base. This appearance resembles the regio respiratoria in mammals, which is lined by pseudostratified ciliated columnar epithelium with blood vessels present beneath the surface (Dellmann & Eurell, 1998). Ventrally, the epithelium is composed of a uniform, cylindrical, bright cell population. This epithelium is lower than the lining of the dorsal surface of the nasal cavity and has three layers of cells. Cilia are numerous and low. In the oral direction, the epithelium of the nasal cavity gradually continues through the choana and changes into the epithelium covering the oral cavity. The epithelium found in the area of the oral cavity and covering the hard palate is squamous, with polyhedral cells, which would explain the decrease in height of the lining of the ventral compartment of the nasal cavity.
The cilia lead us to believe that this epithelium is also a type of respiratory epithelium, with a somewhat different shape due to its transitional location. The question is why the nasal cavity shows these lining differences already during embryonic development. We believe it is because smell is, phylogenetically, one of the oldest senses. Moreover, the young must start using their sense of smell immediately after hatching; thus, the olfactory organs have to be sufficiently developed before hatching. The vomeronasal organ of the green iguana has topographic and structural characteristics similar to those found in other tetrapods and mammals (Ciges et al., 1977; Slaby, 1982; Salazar et al., 1996). The roof and the sides of the dome of the vomeronasal organ are lined by sensory epithelium, as mentioned by Kratzing (1975). This organ is derived from the olfactory placode, as in most terrestrial vertebrates, and must be well lubricated so that odorants can dissolve and trigger the receptors (Rehorek, Firth & Hutchinson, 2000). In our observations, the organ was present already before the final shaping of the nasal cavity (from the initial stages we examined), and the mucosal folds of the nasal cavity and palate overlapped it, thus creating the vomeronasal channel. This early development might, again, suggest early phylogenetic development. In general, it is challenging to make a direct comparison between the development of the entire nasal cavity in the green iguana and in other species, since the length of gravidity or incubation varies remarkably between animal species. As mentioned earlier, ontogenesis in the green iguana also depends on external conditions such as temperature and air humidity; in mammals, such conditions would not be an influencing factor.
Nevertheless, when comparing individual structures of the nasal cavity, there is a great deal of resemblance: the sensory epithelium and related structures appear quite early in embryonic development, and the division of the nasal epithelium into regions happens before the final shaping of the entire cavity. In the last stages before hatching, the cavity does not change remarkably.
The Leading-Edge Structure Based on Geometric Bionics Affects the Transient Cavitating Flow and Vortex Evolution of Hydrofoils

A hydrofoil is a fundamental structure in fluid machinery, widely applied in propellers, axial-flow pump blades and underwater machinery. To reveal the mechanism by which the leading-edge geometry of a hydrofoil affects the transient cavitating flow, we take the fish-type leading-edge structures of three species (mackerel, sturgeon and small yellow croaker) as research objects and use high-precision non-contact 3D scanners to establish three bionic hydrofoils (Mac./Stu./Cro.). We use large eddy simulation to numerically simulate the transient cavitating flow over the hydrofoils, and we compare and analyze their lift–drag characteristics, the transient behavior of unsteady cavitation and the vortex evolution. The numerical simulation results are in good agreement with the experimental results. Warping of the leading-edge structure causes a change in lift–drag characteristics, and the Cro. hydrofoil has a good lift-to-drag ratio. When the leading-edge structure is tilted upward (Cro. hydrofoil), the position of the attached cavity moves forward, which accelerates the cavitation evolution and increases the velocity fluctuation at the trailing edge. When the leading-edge structure is tilted downward (Stu. hydrofoil), the changes in the vortex stretching and dilatation terms become more complex, and the influence area of the vortex widens.
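For readers unfamiliar with the lift–drag and cavitation terminology used above, the quantities typically compared in such studies are the nondimensional lift and drag coefficients, their ratio L/D, and the cavitation number. The Python sketch below shows only these textbook definitions; every numerical value in it is a hypothetical placeholder and is not taken from the Mac./Stu./Cro. study.

```python
# Illustrative sketch only: standard nondimensional quantities used when
# comparing cavitating hydrofoils. All numbers below are hypothetical
# placeholders, not results from the bionic-hydrofoil study.

def lift_drag_coefficients(lift, drag, rho, velocity, area):
    """Return (C_L, C_D, L/D), with C_L = L / (0.5*rho*U^2*A), C_D likewise."""
    q_times_a = 0.5 * rho * velocity ** 2 * area  # dynamic pressure * ref. area
    c_l = lift / q_times_a
    c_d = drag / q_times_a
    return c_l, c_d, c_l / c_d

def cavitation_number(p_inf, p_vapor, rho, velocity):
    """sigma = (p_inf - p_v) / (0.5*rho*U^2); lower sigma means more cavitation."""
    return (p_inf - p_vapor) / (0.5 * rho * velocity ** 2)

# Hypothetical operating point: water at roughly 25 degC, 8 m/s inflow,
# 0.07 m^2 planform area (assumed chord 0.07 m, span 1 m).
rho = 997.0  # water density, kg/m^3
u = 8.0      # freestream velocity, m/s
c_l, c_d, l_over_d = lift_drag_coefficients(
    lift=1400.0, drag=120.0, rho=rho, velocity=u, area=0.07)
sigma = cavitation_number(p_inf=101325.0, p_vapor=3170.0, rho=rho, velocity=u)
print(f"C_L={c_l:.3f}  C_D={c_d:.3f}  L/D={l_over_d:.2f}  sigma={sigma:.2f}")
```

The lift-to-drag ratio L/D is the figure of merit on which the abstract reports the Cro. hydrofoil performing well, and lowering the cavitation number (by raising speed or reducing ambient pressure) is the usual way to drive a foil into the unsteady cavitating regimes the study simulates.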
From subreddit r/bipolar: In the brief description you gave, it reminds me a lot of the mixed episode I had that put me in the hospital. I would be ready to step in front of a bus in the morning, but by seven I knew the names of everyone sitting with me at the bar right across from campus and was talkative and charismatic. Then when I wasn't around people I got delusionally paranoid. Talk to the psychiatrist. If you aren't suicidal at the actual time you go in, he can't force it but will change meds, which is all the hospital would really do more than likely.
Query: Here is an extract from a webpage: "Free Masterclass Gives Away My Proven Process to Creating Online Content That Converts to a Steady Stream of High-Quality Clients Right Now Join the FREE Content-to-Clients Masterclass so you can easily create content that fixes your client attraction issues and attracts a predictable (and consistent) flow of high-quality new leads. TUESDAY, MAY 17TH at 11:30 am PST/2:30 pm EST! (Yes, you will get sent a replay if you can't make it live.) LIMITED SPACES AVAILABLE! RESERVE YOUR SPOT NOW! Want More Details? Hosted by: Wendy "Captivating Content" Maynard what PEOPLE SAY ABOUT WENDY'S TRAININGS... " Wendy is amazing! She's definitely given me new tips on the types of content to post and is a ball of energy that is contagious. She is extremely knowledgeable and gives so much value. It has been a pleasure attending her training. Definitely recommend it to anyone! Becky Lee / Copywriter " Wendy delivers *incredible* value to her audience. She brings wonderful energy wherever she goes; her pos". Write an extensive and detailed course unit suitable for a textbook targeted at college students, related to the given extract, within the context of "Business and Management". Do not just list concepts, but develop each one in detail before moving to the next, as we prioritize depth of understanding and comprehensive exploration of the subject matter over breadth. Focus on: - Rigor: Ensure in-depth coverage of the concepts/sections. - Engagement: Write with an academic, professional and engaging tone that captivates interest. - Application: Incorporate specific, practical examples, such as proofs in calculus or critical dates and figures in history. Do not include a title or an introduction, simply write the content without headlines and introductory phrases. Do not use images. Result: In today's digital age, creating online content that converts into tangible business results is crucial for any organization looking to thrive. 
This is particularly true for businesses seeking to attract high-quality clients consistently. The process involves careful planning, execution, and optimization – all aimed at providing value to potential clients while subtly nudging them towards becoming paying customers. One person who understands this concept well is Wendy "Captivating Content" Maynard, whose masterclass focuses on turning content into a steady stream of top-notch clients. We will break down some key aspects of this approach and examine how they apply to Business and Management principles. First, let us consider the importance of identifying target audiences. When developing online content, organizations must first understand their ideal customer profile. By knowing the demographics, interests, pain points, and goals of these individuals, businesses can tailor their messaging and offerings accordingly. For instance, Becky Lee, a copywriter who attended one of Wendy's trainings, mentioned receiving valuable insights about the types of content best suited for particular audiences. Understanding and catering to target audiences enables companies to build stronger connections with potential clients, fostering trust and credibility. Second, delivering exceptional value should be at the forefront of every content strategy. Value creation manifests itself through educational materials, actionable advice, entertaining elements, or addressing pressing concerns faced by potential clients. As shared by Becky Lee, Wendy excels at offering incredible value during her sessions. Offering unique perspectives, ideas, or tools helps differentiate a company's brand from competitors and positions it as an authority in its industry. Moreover, when content resonates with users, it increases engagement levels, potentially leading to higher conversion rates. Energy and enthusiasm are also vital components in successful content strategies. 
Capturing and maintaining an audience's attention requires charisma and dynamism – traits that Wendy clearly embodies based on feedback from attendees like Becky Lee. Passionately conveying information generates excitement around products or services being offered, making people more likely to share experiences with others. Furthermore, injecting positivity throughout presentations creates memorable interactions between brands and consumers. Lastly, consistency plays a significant role in transforming content into a reliable source of quality leads. Regularly publishing relevant and informative pieces ensures continuous interaction with existing followers while simultaneously drawing in new ones. Consistency extends beyond frequency alone; it applies to style, tone, and design across various platforms too. Cohesive visual identities coupled with uniform voices enable businesses to establish recognizable personas among target markets. Over time, this builds familiarity and strengthens relationships with current and prospective clients alike. To summarize, applying Business and Management principles effectively in a digital landscape entails understanding target audiences, delivering substantial value, showcasing infectious energy, and maintaining unwavering consistency. Implementing these practices allows organizations to harness the power of online content and convert visitors into loyal clients successfully. Through Wendy "Captivating Content" Maynard's teachings, entrepreneurs and professionals gain access to proven methods capable of generating predictable flows of high-quality leads.
A building is a compromise — a bundle of risks safe enough to use, but dangerous enough to regulate. How tall is too tall, how safe is safe enough? Before September 11, Americans thought little about such questions. And then the most extraordinary buildings in New York City burned and collapsed in front of a worldwide audience. Though the Twin Towers had long been scrutinized for their vulnerabilities (especially after the 1993 bombing), no one — not even their harshest critics — spent much time contemplating whether they would collapse as a result of catastrophic fires. Once people got past the simple explanation of the disaster — “the terrorists did it” — a much more difficult truth emerged. We trust that our buildings are safe without knowing much about them. The work of establishing American building codes and safety standards is hidden in plain sight, open to public view but rarely of interest. This is curious considering how critical building safety is to everyday life. Still, the fact remains that most Americans assume safety until events show the contrary. Public outrage after disasters, from the Triangle Shirtwaist Factory Fire to Hurricane Katrina, has been a constant throughout American history. But it is a rare case in which public action motivates government-funded research and moves the national conversation on building safety in a proactive way. Skyscraper safety since September 11 is one such case. But how far can the will to investigate and learn from disaster move away from the disaster itself? Is 10 years too far away? The disaster experts investigating the collapse of the World Trade Center have found themselves in an uncomfortable position since September 11, trying to understand what went wrong, while at the same time explaining to impatient members of Congress and grieving families the ways that building and fire safety codes are created and tended in the United States. It is not a system designed for rapid response to a call for action.
And, as much as the disaster experts may wish it to be otherwise, factors of cost matter. Safety, innovation, and profitability all teeter on the wire of American risk-taking. The safest building is never the cheapest building. Developers, architects, and structural engineers are focused on construction, and safety resides within the bounds of probability — they trust in the community of experts to prescribe building codes, and they use them. The result is usually impressive and profitable, until it isn’t. Firefighters, fire protection engineers, and the people who live and work in skyscrapers usually err on the side of redundancy and protection: They want buildings that are reliable under even extreme conditions such as a plane crash or major fire. Finding consensus among these different parties is an ongoing process that leaves none of them fully satisfied, but results in the ever-changing landscape of the American skyscraper and the ever-changing codebook of skyscraper safety.

Families and Experts

A few months after September 11, Sally Regenhard attended a meeting of firefighters’ families in a Manhattan office building. Her son Christian, a probationary fireman, had been killed while responding to the World Trade Center attacks. His body remained missing. Walking across the lobby, she saw a gathering of reporters and learned from a security guard that Senator Hillary Clinton was in the building. The Senator would soon be coming downstairs to talk to the press. Regenhard’s letters and calls to Senator Clinton’s office, Senator Chuck Schumer’s office, Federal Emergency Management Agency (FEMA) headquarters — to any government official who might take an interest in investigating the deadliest, costliest building collapse in American history — had all gone unanswered. In those days, Regenhard always wore a business suit and always carried pictures of her son in his firefighter’s uniform, ready for any chance to make her case.
When Senator Clinton arrived and began the press conference, Regenhard waded into the crowd. She raised her hand at an opportune moment. Clinton acknowledged her, and Regenhard pulled out Christian’s picture and her petition. “My name is Sally Regenhard, and this is a picture of my son.” Every head turned and the television lights went on. She owned the moment. “Senator Clinton, will you support a federal investigation into the collapse of the World Trade Center?” she asked. There was a pause. “Yes, yes I will,” Clinton said. The event was briefly noted in the papers, but for Regenhard and her grassroots organization, the Skyscraper Safety Campaign, it was the first win in an emotionally charged effort to force a public debate over why the World Trade Center collapsed — and what we should be doing to make high-rise buildings safer. In the days and weeks after September 11, disaster experts tried to understand the collapse of the Twin Towers. A “Building Performance Assessment Team” was rapidly assembled, coordinated by the American Society of Civil Engineers and FEMA. This was a common practice after major structural disasters such as the Oklahoma City and 1993 World Trade Center bombings. But this time the investigators quickly ran into frustrating difficulties. Without subpoena power, the experts’ requests for building plans, 911 call records, video, and photo data were blocked. Even gaining access to the site itself was a challenge. Meanwhile the wreckage — the steel that held the secrets to the fires and the collapse — was being hastily loaded onto trucks and shipped to New Jersey scrap yards for recycling, some of it already on its way to China. The grim search for bodies ruled the front page, as did the opening act of the Bush administration’s War on Terror. The idea that the World Trade Center had collapsed due to fire could not yet penetrate the public or governmental consciousness.
Meanwhile, Sally Regenhard, Monica Gabrielle (whose husband Richard had died in the collapse), and an expanding group of victims’ families were forming around shared impatience with their inability to get even basic answers about what had gone wrong in the Towers. Looking for any information they could use, they came across John Jay College fire protection engineering professor Glenn Corbett and Berkeley structural engineering professor Abolhassan Astaneh-Asl. The two disaster experts had received attention for their criticism of the disaster investigation’s lack of progress. The Skyscraper Safety Campaign was created on this common ground, harnessing the moral authority of families of the dead and advised by experts who were focused on the fire protection defects of the Towers. The effort saw some progress. Regenhard had her serendipitous lobby meeting with Clinton. Late in 2001, a timely New York Times story noted the failures of the technical investigation. New York’s congressional delegation jumped on it, with Clinton and Schumer calling for an investigation. Two of New York’s House members, Democrat Anthony Weiner and Republican Sherwood Boehlert, took the lead. As members of the House Science Committee, Boehlert and Weiner convened two tense hearings in the spring of 2002. “The Building Performance Assessment Team is composed of an elite group of engineers and scientists,” Glenn Corbett noted in his testimony. Nevertheless, they had “allowed valuable evidence in the form of the towers’ structural steel to be destroyed.” Committee chairman Boehlert picked up on Corbett’s criticism. “We need to understand a lot more about the behavior of skyscrapers and about fire if we are going to prevent future tragedies,” he warned. In the most widely reported episode from the hearings, Representative Weiner asked a seemingly simple question. “Will the person who is in charge of the investigation raise their hand?” Two hands went up, and then a third.
“I want to know who is in charge,” Weiner demanded. “Where does the buck stop . . . on this investigation?” National Institute of Standards and Technology (NIST) director Arden L. Bement answered that it was, in fact, his investigation to run. Weiner wanted to know more about why examining the steel debris was critical. “You said that we have the capability to determine the impact of heat on structural failure in buildings. . . . Do you believe that if we had that information before September 11, some of the people that are sitting behind you would not have lost loved ones?” Weiner asked. This was the question that mattered most to the skyscraper safety advocates. Did the experts have the knowledge to prevent the worst from happening, and if so, could they have used it to anticipate the catastrophe, to prevent September 11 in New York from being more than two plane crashes into the Twin Towers? “Perhaps. Yes,” Bement replied. This was a revealing admission. It acknowledged that simply knowing how to build more resilient structures did not — does not — guarantee that such knowledge will be used. When it comes to preventing American disasters, knowledge itself is often not the problem. Surprising to many is the fact that the United States does not have a centralized process for implementing expert recommendations on fire protection and building safety. The country’s national fire protection and building safety system was created in the decades when American cities burned down in horrific conflagrations — a time when the federal government was not active in public safety, state and local governments were unwilling to inhibit construction, and insurance companies and average citizens were paying high prices for the fire dangers of the city.
In this context, organizations such as the National Fire Protection Association (NFPA) and Underwriters Laboratories — initially founded by or closely associated with fire insurers — emerged as science-based clearinghouses for risk and disaster research. Building code groups grew similarly, merging in 1994 as the International Code Council (ICC), responsible for the widely used International Building Code. With input from the fire service, insurance companies, architects and engineers, industry and building trades, and government, the experts developed what they call “consensus codes” — standards of safety and performance that are codified into law in most states and cities. The process is deliberative, slow, and decentralized, much like democracy itself. After scolding the disaster experts for not yet having the answers, the House Science Committee moved to continue the investigation, pushing forward a $16 million appropriation to support a long-term NIST study. Formerly known as the National Bureau of Standards, NIST is the only American national laboratory focused on structural and fire safety, maintaining a state-of-the-art research complex in the suburbs of Washington, D.C. Congress also passed the National Construction Safety Team Act, a bill that granted subpoena powers to NIST, putting the organization clearly in control of building collapse investigations. An enthusiastic Boehlert summed up Congress’ actions as “in many ways, a memorial to those who lost their lives on September 11 and a tribute to their families who have joined together to advocate for this measure in the Campaign for Skyscraper Safety.” NIST now had the money and the political backing to dig deep into the disaster. But time was critical. The first anniversary of September 11 passed with politicians vowing reconstruction as a symbolic and patriotic act.
The Lower Manhattan Development Corporation, a special body created to oversee the redesign and reconstruction of the World Trade Center site, started reviewing design proposals for the next World Trade Center before the disaster experts could decisively say why the buildings had fallen.

An Unprecedented Study

After three years of research, NIST released its World Trade Center investigation in 2005. Here was the first in-depth look at how the buildings performed on September 11. The investigation focused on three main areas: the collapse of the Twin Towers, the difficulties in evacuating the buildings, and a comprehensive evaluation of the “design, construction, operation, and maintenance” of the Towers. The collapse issue was relatively straightforward. Both Towers withstood the aircraft impacts very well, with load being efficiently redistributed to undamaged columns. The redundancy of the external framing system and the sheer scale of the buildings had allowed each to take the hit and stand. This point captured the fascination of many engineers, especially Leslie Robertson, structural engineer for the Twin Towers. Robertson took pride in the “stalwart” character of the Towers, noting in a PBS interview after September 11 that he had “designed the project for the impact of the largest airplane of its time.” Indeed, wind tunnel testing and crash modeling had been inventive aspects of the World Trade Center’s design process. To Robertson, the idea was to imagine a Boeing 707 crashing into one of the Towers, and have it withstand the collision. But though the Towers proved “stalwart” in the face of a crash, the NIST report went on to detail their startling vulnerabilities to fire. While most of the jet fuel in the planes had burned up at once, massive fires were ignited in each Tower, challenging every aspect of the fire suppression and evacuation systems. Both systems — the so-called “life safety” systems — failed.
Firefighters on the way up could not adequately communicate with their commanders or each other, and had to run up toward the fire while occupants on the way down clambered past them. Under these conditions, it is unlikely that firefighters could have put out the fire. In such an instance, the issue becomes one of time — will the buildings stand long enough for the occupants to escape? Unfortunately, closely grouped stairwells and thin stairwell walls made escape nearly impossible for the thousands trapped above the impact zones. And here the structural system — with lightweight floors designed to enable the soaring heights of the Towers — revealed the designed-in horror of the collapse. Fireproofing material applied to the structural steel had been blown off by the airplane impact. The steel was exposed, the floors sagged and buckled, initiating a “progressive collapse.” As devastating as it was — with 2,753 lives taken and $100 billion in direct economic impact — there was in the NIST report a chilling side note. Given the difficulties in evacuating the Towers, if they had been fully occupied the morning of the attack, 14,000 people might have died. Leslie Robertson admitted that the Towers’ design stage had not considered fuel load in the airplane crash scenarios. “Indeed,” he told an interviewer, “I don’t know how it could have been considered… There was no fire suppression system that could even begin to deal with that event. Nothing. Nothing.” Were the Twin Towers so unique — tall, imaginative, and ultimately so deadly — that the lessons of their collapse were inapplicable to other buildings? Here was the key question. By Robertson’s logic, there are just some contingencies for which we cannot prepare. If we want to build tall, then we must take some risks that have no remedies. By extension, if there is no way to build skyscrapers that can withstand terrorist attacks of the magnitude of September 11, then our best bet is to prevent the attacks.
Fire protection experts reject Leslie Robertson’s logic. NIST researchers also studied a third building that had collapsed on September 11: World Trade Center 7. This much more modest building had been hit by debris, burned for hours, and collapsed late in the day. WTC 7 was never hit by a jetliner, and its collapse suggested that the dangers of skyscraper fires were not limited to terrorism targets alone. At the conclusion of their research, NIST put forward 30 detailed recommendations — from communications to fireproofing, evacuation to design principles for skyscraper safety — aimed at translating the lessons of the collapse into concrete changes for the nation’s buildings. It was the culmination of the most intense structural and fire protection investigation in American history, pushing fire testing and computer modeling of fires to the limit. The House Science Committee convened a third hearing. Now appearing as a witness, Sally Regenhard took aim at NIST’s work. “The recommendations,” she argued, were “vague,” and “the vagueness of the language was influenced by a need for political correctness and a general reluctance or an inability to investigate, use subpoena power, lay blame, or even point out the deadly mistakes of 9/11 in the World Trade Center.” Regenhard was particularly alarmed that the responsibility of the Port Authority of New York and New Jersey had not been adequately investigated. As owner of the land on which the Twin Towers had stood, the Port Authority enjoyed immunity from New York City building codes, and was ultimately responsible for signing off on the experimental design of the Twin Towers. Considering that the 1993 bombing had demonstrated many of the same evacuation problems that were so devastating on September 11, and that fire experts had long questioned the fireproofing, Regenhard was angered by NIST’s seeming unwillingness to pursue a more aggressive investigation of negligence on the Port Authority’s part.
Experts in Action

From the September 11 attacks and collapse of the Twin Towers to the NIST study and recommendations, four years elapsed, an exceptionally short timeline from a fire safety research point of view. But this is where the pace slowed considerably, as the recommendations entered the American consensus code system. Tangible action on NIST’s recommendations is evident, in some instances action that will redefine skyscraper design for the next generation of buildings. Fire experts agree that the most impactful change has been in design features for building evacuation. The International Building Code now requires a third exit stairway for buildings over 420 feet in height; substantial luminescent markings are now required for exit pathways in tall buildings; and stairwells must be no less than 30 feet apart, avoiding (it is hoped) the destruction of all escape pathways due to one type of impact, as happened in the Twin Towers. Most important, skyscrapers must now also equip elevators for use in occupant evacuation. Building evacuation expert and Skyscraper Safety Campaign technical adviser Jake Pauls points out that the use of the elevator for evacuation is a much-needed “paradigm shift,” the most important change he has seen thus far in a career that stretches back to research and critique of the Towers’ evacuation design from their construction through 1993 and to the scene of the 2001 disaster. Using elevators for evacuation also opens the idea of using elevators more effectively for firefighting on high floors. On the frontier are “smart elevators” that may be programmed with building occupant information, ready to prioritize evacuation according to the special needs of occupants with disabilities, for example. Such technology would likely have saved lives on September 11. To date, NIST reports action on 18 of its 30 recommendation areas. Clearly much remains to be done.
Specifications for emergency communications systems in buildings, for example, and requirements for the adhesion of fireproofing to steel supports remain incomplete. Concerns over jurisdiction — the issue raised by Sally Regenhard regarding the Port Authority’s ability to unilaterally adopt an unproven design — remain unaddressed. But 10 years after September 11, experts at the ICC and NFPA remain positive that we will continue to see important changes flowing from the consensus code system. The process takes time, and is moving much faster than usual, a fact that representatives from every expert institution involved report is directly attributable to the lobbying of the Skyscraper Safety Campaign.

Tall Buildings Are Safe!?

“Tall Buildings are Safe!” proclaims Leslie Robertson in a recent article marking the 10th anniversary of the World Trade Center collapse. “From a structural point of view,” Robertson argues, “it isn’t realistic to think there is much you can do against large airplanes flying into tall buildings.” Fire protection experts such as Jake Pauls worry that such a mindset is pervasive and dangerous. Over the past decade, Pauls has watched expert groups such as the ICC and NFPA “bend over backwards” in their work to learn from September 11: “What they didn’t deal with was a change of mindset — right from the beginning there was a leitmotif that what happened was not to be used for design changes. It was a special situation.” Pauls is referring to a battle among the disaster experts over the right way to assess and control building safety going forward — not an academic issue considering we are now entering the age of the “supertall building,” with skyscrapers that dwarf the Twin Towers rising in the Middle East, across Asia, and of course on the World Trade Center site itself in the form of the planned 1,776-foot Freedom Tower. One of NIST’s recommendations called for the adoption of “performance-based” standards for fire and building codes.
Performance-based methods would require that every building be evaluated individually not only for its structural abilities, but also for its life safety readiness. Rather than using one-size-fits-all standards, advocates of performance-based design are pushing for a close look at the unique aspects of every skyscraper before it is approved for construction. In some cases this will reveal vulnerabilities like those seen in the Twin Towers, vulnerabilities that could and should be remedied before a disaster strikes. In this view, one shared broadly throughout the fire protection engineering community, lowest common denominator “prescriptive codes” do not adequately prepare buildings as unusual as the Twin Towers for the complex dynamics of a fire. Still, prescriptive codes today remain the norm. The NIST study demonstrated what is already possible. By using detailed structural analysis and fire protection modeling, engineers can design buildings that will withstand both major catastrophes such as a plane crash and the major fires that might result from such an incident. Fire protection engineer Jose Torero explains that performance-based codes are well understood, but engineers like him are a minority in the larger engineering community. Currently, they have a weak voice, even with September 11 such a recent memory. “The Leslie Robertsons don’t understand the magnitude of the problem,” Torero worries. Structural engineers have always been focused on building skyscrapers that wouldn’t fall down, but the lesson of September 11 is that engineers also need to focus on helping first responders get to the site of a fire in a building and allowing occupants to evacuate. Further, the lesson of World Trade Center 7 is that high-rise fires can bring down buildings, whether they are attacked by planes or not. But how uniquely tailored should our procedures of design evaluation be, and how much cost will this add to a building’s construction?
Again we ask: How safe is safe enough? In the opinion of engineers such as Jose Torero, skyscrapers will never be safe until the available technology of fire safety is used to its full extent. He contends that this can be done in a cost-effective way, and that performance-based design may actually reveal cost savings in many instances, cases where necessary fireproofing or structural redundancy might on close analysis be less than the prescriptive code requires. In 2011, and for the foreseeable future, this is the most divisive issue in the realm of skyscraper design. Tall buildings are safe, but not always. This is one of the legacies of September 11 — an uncertainty on our national skyline. Looking ahead, we can expect NIST’s recommendations to drive life-safety focused upgrades to standards and building codes. In 2008, the Christian Regenhard Center for Emergency Response Studies — a center for emergency management and disaster research launched in part through a contribution from Hillary Clinton — opened at John Jay College in New York City. The work of the Skyscraper Safety Campaign goes on, as does the larger debate over safety standards in the supertall buildings of the 21st century. Whether or not the next generation of skyscrapers embraces safety along the lines advocated by Sally Regenhard, Jake Pauls, and Jose Torero remains to be seen. With the 10th anniversary of September 11 upon us, they (and we) are all hoping it won’t take another disaster to find out. • 2 September 2011
Question: Here is an extract from a webpage: "low-key or low key or lowkey [ loh-kee ] What does low-key mean? Low-key can variously mean “quiet,” “restrained,” “moderate,” or “easygoing.” It can also behave as an adverb meaning “of low or moderate intensity.” Like doing something, but in a “chill” way. For instance: We’re having a party at my place but keeping it low-key so the neighbors don’t complain. Where does low-key come from? Low-key would appear to have musical origins, characterizing something as having a deeper, more muted, or darker tonal register. We can find low-key for “of a low pitch” in the early 19th century. Charles Dickens, for instance, wrote of it that way in his 1844 novel Martin Chuzzlewit: She continued to sidle at Mr. Chuffey with looks of sharp hostility, and to defy him with many other ironical remarks, uttered in that low key which commonly denotes suppressed indignation. In 1857, the reading primer Introductory Lessons in Reading and Elocution used low-key for the tone of voice that a person uses whe". Write an extensive and detailed course unit suitable for a textbook targeted at college students, related to the given extract within the context of "Psychology". This unit should explore all pertinent concepts with in-depth explanations and technical detail. Focus on: - Rigor: Ensure in-depth coverage of the concepts. - Engagement: Write with an academic, professional and engaging tone that captivates interest. - Application: Incorporate specific, practical examples, such as proofs in calculus or critical dates and figures in history. Do not use images.

Answer: Course Unit: Low-Key Phenomenon in Psychology: Understanding Quietness, Restraint, Moderation, and Easygoing Behavior

I. Introduction

The term "low-key" has been widely used in everyday language to describe quiet, restrained, moderate, or easygoing behavior.
Although seemingly simple, this concept encompasses several psychological constructs that are crucial in understanding human behavior, emotions, and social interactions. In this course unit, we will delve into the depths of the low-key phenomenon through the lens of psychology, highlighting its implications in personality development, emotional regulation, interpersonal relationships, and mental health.

II. Historical Origins and Musical Connections

Before diving into the psychological aspects of the low-key phenomenon, let us first revisit the historical origins and musical connections of the term, as discussed in the provided extract. The word "low-key" initially referred to music pieces performed at a lower pitch or volume. Its usage expanded beyond the realm of music to signify moderation, calmness, and subtlety in different aspects of life, including speech tones (as seen in Charles Dickens' quote), lighting conditions, and overall demeanor. These variations suggest that low-key behavior may be associated with reduced physiological activation and increased self-regulation, both of which are essential elements of emotion regulation strategies studied in modern psychology.

III. Personality Development and Low-Key Traits

A considerable body of research supports the idea that certain individuals tend to exhibit low-key traits consistently across time and situations. Accordingly, these individuals often score high on measures assessing agreeableness and conscientiousness, and low on neuroticism—three of the Big Five factors underlying most individual differences in personality. Specifically, people who identify as introverted, laid back, or emotionally stable might display low-key behaviors regularly due to their temperamental predispositions.
Moreover, longitudinal studies reveal that children exhibiting low-key characteristics grow up to become well-adjusted adults capable of forming healthy relationships, managing stress effectively, and maintaining equilibrium amidst challenging circumstances. Consequently, educators, counselors, and psychologists encourage parents to nurture and reinforce low-key behaviors during childhood to foster adaptive coping mechanisms and resilience later in life.

IV. Emotional Regulation Strategies and Low-Key Responses

Emotion regulation refers to the processes by which humans manage their affective states to achieve desired outcomes while minimizing potential harm. Individuals employing low-key responses typically engage in proactive regulatory efforts, such as cognitive reframing, deep breathing exercises, mindfulness meditation, and seeking support from trusted others. By deploying these techniques systematically, they maintain composure under pressure, avoid impulsiveness, and preserve long-term goals despite momentary setbacks. For example, consider two coworkers facing similar deadlines; one becomes highly agitated when encountering unexpected obstacles, whereas the other remains relatively unflustered throughout the process. While both experience frustration, the latter responds differently due to superior emotion regulation skills rooted in low-key traits. Over time, consistent utilization of such abilities contributes significantly to career advancement, personal growth, and subjective wellbeing.

V. Interpersonal Relationships and Social Dynamics

Interacting with low-key individuals generally proves pleasant and rewarding because they create positive vibes, communicate respectfully, listen attentively, and empathize genuinely. Furthermore, low-key individuals rarely initiate conflicts intentionally, making them desirable companions in both platonic friendships and romantic partnerships.
Nevertheless, some challenges arise when pairing low-key individuals with those possessing contrasting dispositions (e.g., extroverts). To ensure mutual satisfaction and harmonious coexistence, each must learn how to accommodate the needs, preferences, and expectations of the other without compromising core values or jeopardizing authenticity.

VI. Mental Health Considerations and Therapeutic Applications

Given the numerous benefits linked to low-key behaviors, practitioners increasingly incorporate relevant components into evidence-based therapeutic approaches targeting anxiety disorders, mood disturbances, substance abuse issues, and relational problems. Examples include dialectical behavior therapy (DBT), acceptance and commitment therapy (ACT), mindfulness-based stress reduction (MBSR), and cognitive-behavioral therapy (CBT)—all of which emphasize developing self-awareness, regulating emotions, tolerating distress, improving communication patterns, and fostering compassionate relationships. To illustrate, DBT combines Zen principles with Western psychology to help clients accept reality nonjudgmentally, focus on present-moment experiences, and develop wisdom regarding the impermanence of thoughts, feelings, and external events. Similarly, ACT encourages participants to embrace discomfort, practice radical acceptance, and commit to actions aligned with deeply held values rather than fleeting desires. Both modalities align closely with low-key tenets, suggesting that embracing such attitudes could enhance treatment efficacy and promote lasting change among diverse clinical populations.

VII. Conclusion

This course unit elucidated the multifaceted nature of the low-key phenomenon and highlighted its significance within contemporary psychological discourse.
Through comprehensive discussions centered around personality development, emotional regulation, interpersonal dynamics, and mental health applications, readers gained valuable insights into the theoretical foundations, empirical findings, and practical ramifications surrounding low-key behavior. Ultimately, cultivating low-key traits promises substantial rewards for those willing to embark upon introspective journeys aimed at achieving balance, harmony, and fulfillment in today's fast-paced world.
From r/titanfall >Hmm 11:15 on a school/work night. I wonder why the population is low atm... it's 10pm pst and there are *currently* 346,439 people playing dota2; 60,117 playing cs:go; 41,606 playing tf2; 23,439 playing payday2; 21,086 playing civ V; 18,930 playing civ: beyond earth; 15,195 playing skyrim; 13,984 playing warframe; 13,045 playing garry's mod; 9,946 playing borderlands, and this is only through steam. homework and school have nothing to do with it... as sad as this is to admit, there are actually more people playing farming simulator 15... currently... than there are playing titanfall... and if anyone has to get up early, it's a farmer!
Salah Uddin Shoaib Choudhury

Following the sudden increase in the number of coronavirus affected people in India, Tablighi Jamaat (TJ) and its congregation in the Indian capital Delhi have come under criticism, as some of the top brasses of this notorious group had asked the followers to consider coronavirus as a blessing from Allah to destroy the Hindus. The TJ leaders called upon their people to wage #coronajihad with the goal of spreading this amongst hundreds of millions of Hindus. But, immediately after such notorious activities of the TJ members were exposed by anti-radical Islam media in India and Bangladesh, several newspapers in India, as well as a number of influential Western media, have started syndicated propaganda clearly with the agenda of saving Tablighi Jamaat from a possible legal consequence. It may be mentioned here that, during the challenging situation of the coronavirus pandemic, when the Indian government has been working hard to save its huge population from being hit by Covid-19, a notorious nexus of jihadists, under the patronization of Pakistani spy agency Inter Service Intelligence (ISI), has declared #coronajihad, with the notorious agenda of infecting and killing hundreds of millions of Hindus and thus finally bringing India under the flag of radical Islam. Defying the country-wide lockdown in India, more than two thousand members of the radical Tablighi Jamaat attended its congregation from March 13-15 at Markaz in the Nizamuddin area of the Indian capital New Delhi. During this congregation, leaders of Tablighi Jamaat had called upon the coronavirus infected members of the group to consider coronavirus as a blessing from Allah and use it as a weapon for spreading the virus amongst the ‘infidels’.
Intelligence agencies suspect some of those who had attended the Tablighi Jamaat event had ISIS connections and that the open defiance of the lockdown was a result of the terror organization’s or other Islamic terror groups’ instructions to them to turn into virtual human bombs, which they could do even as carriers of coronavirus. Security experts say these missing TJ men are most likely engaged in spreading Covid-19 amongst the non-Muslim populace in India. However, #coronajihad started trending on Twitter soon after the death of patients who attended the Tablighi Jamaat congregation. Twitter users said that the congregation was organized, flouting the Delhi government’s orders against gatherings. According to a Twitter user, Corona Jihad is “Infected Muslims want to go and spread Corona to Kafirs (Infidels) so they can die in hundreds of thousands. Their Idea: A few hundred Muslims will die, but they can make up for it by making more children. Use the Corona to kill more Kafirs”. Unfortunately, a segment of the mainstream media in India is either unwilling or failing to understand the gravity of the threat posed by Tablighi Jamaat. Some of the Indian media are even making frantic bids at portraying Tablighi Jamaat as an innocent organization that counters jihad. What those Indian media pundits do not understand is that Tablighi Jamaat has been proved to have links with jihadist outfits in a number of countries. Coronavirus infected Muslims in India are spreading the virus by licking currency notes as well as by wiping their noses on them, at the clear instruction of a few top brasses in the organization. Coronavirus contaminated currency notes are being circulated into a number of Western nations as well as Israel. Pakistani spy agency Inter Service Intelligence (ISI) has been smuggling coronavirus contaminated currency notes into India through a number of cross-border smuggling rackets.
A Twitter message from the Nashik Rural Police in India claimed it has taken action against a Muslim man whose video of licking a bunch of currency notes, wiping his nose with them, and calling coronavirus a punishment from Allah was doing the rounds on social media. The video showed a Muslim man taking a bunch of currency notes and licking them with his tongue and wiping his nose with them. He says that the coronavirus has no treatment because it is sent by ‘Allah’, implying that he is going to pass those currency notes around, to spread the disease. The Muslim man in the video uses a few 500-denomination notes to wipe his nose and mouth as he looks into the camera and says: “There is no cure for a disease like corona because it is the punishment of Allah, for you people.” According to a credible intelligence source, attendees at the Tablighi Jamaat in India were asked by some of the leaders to spread coronavirus amongst the non-Muslim society through currency notes by licking them and wiping the noses of the Covid-19 patients. A similar message was passed on to radical Muslims and members of Tablighi Jamaat in the United States, Canada, the United Kingdom, France, Italy, Germany, Spain, Sweden, Denmark, the Netherlands, Australia, Russia, and Israel. Muslims are reportedly contaminating those currency notes with the Covid-19 virus by sitting at their homes or the community mosques and later circulating them amongst non-Muslims. This trend is at an alarming level in the United Kingdom, in particular, where this “coronajihad” tactic is being followed by the members and supporters of the Muslim Brotherhood, Hezbollah, Hamas, and other jihadist outfits. Although Pakistani spy agency ISI has been circulating counterfeit Indian currency notes for decades, in recent times it has started circulating real currency notes, which are being contaminated with coronavirus. During the past few weeks, the Pakistani spy agency reportedly has circulated more than four billion Indian rupees of various denominations.
Media’s enthusiasm in saving TJ

In a report dated June 24, 2019, titled ‘Qatar spending millions of dollars in buying western media’, Martha Lee, a research fellow at the Middle East Forum, and Sam Westrop, director of the Forum’s Islamist Watch Project, wrote, “On June 4, the Washington Times published a “special section” of articles lavishing praise on Qatar, its institutions, and its global influence. Each of these articles was labeled as “sponsored,” although the Times neglects to say by whom. At first glance, this is a surprising insertion in a conservative paper whose editorial board has previously been critical of the Middle Eastern state. “While Qatari money is everywhere, in previous years its influence had been perceived mostly on the American left. Qatar’s media empire Al Jazeera, for example, operates a social media platform named AJ+, which has partnered with hard left-leaning American outlets such as the Young Turks. “Meanwhile, prominent think tanks such as the Brookings Institution have received tens of millions from Doha. Brookings got $15 million in 2013, and at least $2 million in just the past year — perhaps much more. Such generosity has afforded Brookings a plush center in Doha. Meanwhile, the Qatari regime enjoys a steady flow of academic papers downplaying the kingdom’s patronage of violent Islamism and painting its ties to designated terror groups as nothing more than earnest attempts at dialogue, carried out in an attempt to acquire influence for the sake of benevolence”. On June 27, 2019, in another report titled ‘Qatar desperately spending money in buying politicians and media’, Anita Mathur, Senior Correspondent of Blitz, wrote, “In recent years, Qatar has become aggressive in establishing its influence over international media and politicians. For the past few years, Qatari money has been circulating almost everywhere – from media to politics.
In the United States, Qatar’s media empire Al Jazeera, for example, operates a social media platform named AJ+, which has partnered with hard left-leaning American outlets such as the Young Turks. The American left is under formidable control of Doha, while there are indications of Qatar investing millions of dollars during the 2020 presidential election in the US with the clear mission of defeating President Donald Trump. A number of the Democratic Party’s possible candidates, such as Joe Biden, Bernie Sanders and Kamala Harris, have already been approached by Qatari agents offering a huge amount of money for election expenses as well as for their personal spending. Opposition leader Nancy Pelosi also has deep-covered connections with Qatar through one of her family members. The Clintons had earlier received a significant amount of “donation” from their Qatari “friends”. In the lengthy report, Ms. Mathur has elaborately described how Qatar and other Muslim nations are spending lavishly in buying the voice of the media in the world. It may be mentioned here that Qatar is known for its policy of patronizing radical Islam, antisemitism and jihad.

Pro-TJ media propaganda

Immediately after Tablighi Jamaat’s notorious agenda of waging #coronajihad was exposed by a section of Indian and international media, Qatar’s pro-jihadist broadcast network Al Jazeera immediately jumped in and started propaganda in favor of TJ. In those reports, Al Jazeera accused the Indian government and the ruling Bharatiya Janata Party (BJP) of an “anti-Muslim repressive policy”. One of the leading English dailies, the Times of India, wrote: “Here in India, no other group has been demonized more than the country’s 200 million Muslims, minorities in a Hindu-dominated land of 1.3 billion people”.
Another English daily, the Midday, in a report titled ‘Burden of being a Muslim’ wrote: “The burden of the Muslim identity has felt particularly unbearable during the 21-day national lockdown ordered to check the spread of Coronavirus. With social transactions reduced substantially, and everyone face-to-face with death, it was presumed that the politics of hate would hit the pause button. We failed to reckon that the vocation of TV anchors and the avocation of Hindu radicals are to demonize Muslims. Both quickly latched on to the criminally irresponsible behaviour of the Tablighi Jamaat to read sinister meanings into their decision to continue with the three-day congregation marked in their annual calendar months earlier”. The Wire, in an op-ed titled ‘The Coronavirus Has Morphed Into an Anti-Muslim Virus’ wrote: “The pandemic and severe lockdowns are a time of immense dislocation and dread for all Indians, as the country fights a malign mutating virus with no cure. But it is particularly a time of despair and desolation if you are an Indian Muslim”. The Washington Post in an editorial titled ‘The world’s largest democracy should set a pandemic-response example. So far, it hasn’t’ wrote: “Mr. Modi is not the only one resorting to heavy-handed measures in the name of defeating the virus. Venezuela’s autocratic regime has deployed the same security forces that violently put down protest marches to force people into their homes. In Kenya, riot police used tear gas and beatings to enforce a dusk-to-dawn curfew. But as the world’s largest democracy, India ought to be setting a standard for how the emergency can be met without resort to repression or censorship. So far, it has not done so”.
The Guardian in a report titled ‘Coronavirus conspiracy theories targeting Muslims spread in India’ wrote: “The situation got so bad last week that it prompted Equality Labs, a US-based south Asian human rights organisation researching Islamophobic hate speech, to release a statement urging the World Health Organization to “issue further guidelines against Covid-19 hate speech and disconnect it to religious communities”. The Gulf News in its report titled ‘COVID-19: And now it is ‘corona jihad’ in Narendra Modi’s hate-filled new India’ wrote: “The Modi [Indian Prime Minister Narendra Modi] government’s calculation seems to be to turn pandemic panic into electoral gold”. TIME in a report titled ‘It Was Already Dangerous to Be Muslim in India. Then Came the Coronavirus’ wrote: “Coming just weeks after religious pogroms conducted by Hindu nationalists left 36 Muslims dead in Delhi, the surge in hateful tweets demonstrates how anxieties over the coronavirus have merged with longstanding Islamophobia in India, at a time when the Muslim minority — 200 million people in a nation of 1.3 billion — feels increasingly targeted by the ruling Hindu nationalists”. Clearly, India is under massive media assault. Most of the mainstream media in the world, including the New York Times, Huffington Post, Vice News, BBC, Anadolu Agency, Foreign Policy, Al Jazeera, Straits Times, South China Morning Post and many other newspapers and broadcast networks, are engaged in demonizing India and the ruling Bharatiya Janata Party for the “crime” of unearthing #coronajihad and initiating legal action against Tablighi Jamaat. A syndicated propaganda campaign is currently continuing in the world with the agenda of making India the worst villain in the eyes of the international community, and also branding Hindus as religious bigots.

Is Tablighi Jamaat a terrorist outfit?

Before giving a reply to this question, let me make a point very clear.
Amid the wildly fluctuating numbers of coronavirus infected people that are floating around in the Indian media, one thing is certain. Around four thousand Islamist preachers from India and around the world gathered in New Delhi for the event in mid-March, and after an outbreak emerged in the cluster, the preachers fanned out across India, carrying out “chilla” (proselytising activity) and spreading the virus in the nooks and corners of India, spanning 20 states. According to a New England Journal of Medicine study of the outbreak in Wuhan, China, each infected patient would spread the virus to at least 2.2 individuals, meaning those Tablighi Jamaat men might have infected above ten thousand people already. The figure could have been severalfold higher since, as we already know, these TJ men were purposely infecting their targets as part of coronajihad. The situation in India due to the Tablighi Jamaat spreaders is not unlike that in South Korea, where more than half of the cases were linked to a secretive religious sect called the Shincheonji Church of Jesus. One super spreader had managed to infect 37 individuals in a week — an unusually high number of people — eventually leading to a surge in cases. As the New York Times points out, members of the proselytizing faith eventually accounted for a large majority of the country’s more than 7,500 coronavirus patients. They were linked to Shincheonji Church members in Daegu, a city in the southeast, or people who had come into contact with them. Now the leaders and members of Tablighi Jamaat are trying to avoid responsibility for spreading coronavirus throughout India, but their patrons in the Middle East, the Muslim nations and some anti-India forces in the West have already formed a conglomerate with the mission of misleading the world, possibly with the final agenda of isolating India from the rest of the world.
Now, let me give some solid evidence about Tablighi Jamaat, proving its direct affiliations with Al Qaeda and other jihadist outfits such as Hamas. On February 8, 2011, British newspaper the Guardian in a report said, “An Islamic group once described as an “antechamber of fundamentalism” is attempting to extend temporary planning permission on its mosque near the Olympic site in east London. “The group has been connected to Kafeel Ahmed, one of the Indian suspects arrested for the failed attack on Glasgow airport. He died from his injuries. Two of the 7/7 bombers, Shehzad Tanveer and Mohammed Siddique Khan, are said to have prayed at a Tablighi mosque in Dewsbury, Yorkshire. “French intelligence officials are widely reported as once describing Tablighi Jamaat as an “antechamber of fundamentalism” while Michael Heimbach, the deputy chief of the FBI’s international terrorism section, was once quoted as saying that al-Qaida used the movement for recruitment purposes”. The Times in a report dated June 13, 2017 said: “The youngest of the London Bridge attackers was associated with an Islamic group with links to a series of terrorist plots. “Youssef Zaghba, 22, attended meetings of Tablighi Jamaat while studying in Morocco before moving to London two years ago. “The group, which has a strong presence in Britain, has been accused by foreign intelligence agencies of becoming a recruitment ground for foreign extremist organisations”. British newspaper The Telegraph on July 11, 2007 in a report titled ‘The peaceful group linked to radical Muslims’ had also elaborately mentioned Tablighi Jamaat’s militancy connections. On October 25, 2015, Business Insider had re-published a report from the Telegraph titled ‘Islamic group blocked from building Britain’s biggest mosque in London’, which wrote: “The government has blocked an Islamic group with alleged links to fundamentalism from building Britain’s biggest mosque, putting a final end to a 16-year battle.
“The highly controversial plans by the Tablighi Jamaat sect – accused by some of being a gateway to terror – would have created a so-called “megamosque” with 190-foot minarets and three times the floorspace of St Paul’s Cathedral. The 290,000 square foot mosque, near the Olympic Park in east London, would have accommodated up to 9,300 worshippers in two main gender-segregated prayer halls and a further 2,000 in a separate hall”. If one understands the key goal of Tablighi Jamaat, it is easy to realize that there really is very little difference between the ideology of TJ and that of Al Qaeda, ISIS, or any other radical Islamic group around the world. Tablighis aspire to an Islamist conquest around the world, bringing the whole world under sharia rule, or a caliphate.

Salah Uddin Shoaib Choudhury is an internationally acclaimed multi-award-winning anti-jihadist journalist, counter-terrorism specialist and editor of Blitz.
India, according to Dr. V. Raghavan, retired head of the Sanskrit department of India's prestigious University of Madras, was alone in playing host to extraterrestrials in prehistory. Dr. Raghavan contends that centuries-old documents in Sanskrit (the classical language of India and Hinduism) prove that aliens from outer space visited his nation. "Fifty years of researching these ancient works convinces me that there are living beings on other planets, and that they visited earth as far back as 4,000 B.C.," the scholar says. "There is just a mass of fascinating information about flying machines, even fantastic science fiction weapons, that can be found in translations of the Vedas (scriptures), Indian epics, and other ancient Sanskrit texts. "In the Mahabharata (writings), there is mention of divine lightning and ray weapons, even a kind of hypnotic weapon. And in the Ramayana (writings), there is a description of Vimanas, or flying machines, that navigated at great heights with the aid of quicksilver and a great propulsive wind. "These were space vehicles similar to the so-called flying saucers reported throughout the world today. The Ramayana even describes a beautiful chariot which 'arrived shining, a wonderful divine car that sped through the air'. In another passage, there is mention of a chariot being seen 'sailing overhead like a moon.' "The references in the Mahabharata are no less astounding: At Rama's behest, the magnificent chariot rose up to a mountain of cloud with a tremendous din. Another passage reads: "Bhima flew with his Vimana on an enormous ray which was as brilliant as the sun and made a noise like the thunder of a storm." In the ancient Vymanka-Shastra (science of aeronautics), there is a description of a Vimana: "An apparatus which can go by its own force, from one place to place or globe to globe." Dr. Raghavan points out, "The text's revelations become even more astounding.
Thirty-one parts, of which the machine consists, are described, including a photographing mirror underneath. The text also enumerates 16 kinds of metal that are needed to construct the flying vehicle: "But only three of them are known to us today. The rest remain untranslatable." Another authority who agrees with Dr. Raghavan's interpretations is Dr. A.V. Krishna Murty, professor of aeronautics at the Indian Institute of Science in Bangalore. "It is true," Dr. Krishna Murty says, "that the ancient Indian Vedas and other texts refer to aeronautics, spaceships, flying machines and ancient astronauts. A study of the Sanskrit texts has convinced me that ancient India did know the secret of building flying machines, and that those machines were patterned after spaceships coming from other planets." The Vedic traditions of India tell us that we are now in the Fourth Age of mankind. The Vedas call the first three "The Golden Age", "The Silver Age" and "The Bronze Age", and we are now, according to their scriptures, in "The Iron Age". As we approach the end of the 20th century, Native American, Mayan and Incan prophecies alike claim that we are coming to the end of an age. Sanskrit texts are filled with references to gods who fought battles in the sky using Vimanas equipped with weapons as deadly as any we can deploy in these more enlightened times. For example, there is a passage in the Ramayana which reads: "The Puspaka car that resembles the Sun and belongs to my brother was brought by the powerful Ravan; that aerial and excellent car going everywhere at will… that car resembling a bright cloud in the sky… and the King [Rama] got in, and the excellent car, at the command of the Raghira, rose up into the higher atmosphere." In the Mahabharata, an ancient Indian poem of enormous length, we learn that an individual named Asura Maya had a Vimana measuring twelve cubits in circumference, with four strong wheels.
The poem is a veritable gold mine of information relating to conflicts between gods who settled their differences apparently using weapons as lethal as the ones we are capable of deploying. Apart from 'blazing missiles', the poem records the use of other deadly weapons. 'Indra's Dart' operated via a circular 'reflector'. When switched on, it produced a 'shaft of light' which, when focused on any target, immediately 'consumed it with its power'. In one particular exchange, the hero, Krishna, is pursuing his enemy, Salva, in the sky, when Salva's Vimana, the Saubha, is made invisible in some way. Undeterred, Krishna immediately fires off a special weapon: 'I quickly laid on an arrow, which killed by seeking out sound'. Many other terrible weapons are described, quite matter-of-factly, in the Mahabharata, but the most fearsome of all is the one used against the Vrishnis. The narrative records: Gurkha, flying in his swift and powerful Vimana, hurled against the three cities of the Vrishnis and Andhakas a single projectile charged with all the power of the Universe. An incandescent column of smoke and fire, as brilliant as ten thousand suns, rose in all its splendor. It was the unknown weapon, the Iron Thunderbolt, a gigantic messenger of death which reduced to ashes the entire race of the Vrishnis and Andhakas. It is important to note that these kinds of records are not isolated. They can be cross-correlated with similar reports in other ancient civilizations. The after-effects of this Iron Thunderbolt have an ominously recognizable ring. Apparently, those killed by it were so burnt that their corpses were unidentifiable. The survivors fared little better, as it caused their hair and nails to fall out. Perhaps the most disturbing and challenging information about these allegedly mythical Vimanas in the ancient records is that there are some matter-of-fact records describing how to build one. In their way, the instructions are quite precise.
In the Sanskrit Samaraanganasutraadhaara it is written: Strong and durable must the body of the Vimana be made, like a great flying bird of light material. Inside one must put the mercury engine with its iron heating apparatus underneath. By means of the power latent in the mercury which sets the driving whirlwind in motion, a man sitting inside may travel a great distance in the sky. The movements of the Vimana are such that it can vertically ascend, vertically descend, move slanting forwards and backwards. With the help of the machines human beings can fly in the air and heavenly beings can come down to earth. The Hakatha (Laws of the Babylonians) states quite unambiguously: The privilege of operating a flying machine is great. The knowledge of flight is among the most ancient of our inheritances. A gift from 'those from upon high'. We received it from them as a means of saving many lives.
From Reddit r/NoMansSkyTheGame The number 16. That's how many people work at Hello Games. I've been wondering what the connection is with the NMS ARG and the number 16 that keeps cropping up. I just googled the people that work at Hello Games and there is a list of 16 people that work there. In the ARG it talks about a simulation and facing the sixteen. If Hello Games created the game then surely they are the sixteen that created the simulation that the ARG keeps referring to. What are your thoughts on this theory?
Here’s something positive on the health news front. Researchers at the Pain Research Center at the University of Utah have discovered that music could help reduce pain symptoms. Music, it seems, really can create a distraction that diverts our mind from focusing on pain. For their study, the U.S. researchers recruited 153 healthy, normal volunteers. These volunteers participated in a test session in which pain responses were measured while listening to music. The researchers found that music reduced pain responses in the volunteers, but with a caveat: personality factors like anxiety and the ability to become absorbed in music increased the pain-reducing effect. In other words, music listening can reduce your responses to pain, depending on your personality. The researchers have this health advice: doctors should consider patients’ personality characteristics when recommending music listening for pain relief. Now, here are five other natural pain relievers you can try:
1. Acupressure: Also known as contact healing, acupressure uses finger and hand pressure to promote energy flow. When pressure is applied to your body, neurotransmitters that help to inhibit pain are released.
2. Arnica: This herbal cream could help reduce inflammation associated with pain when applied topically.
3. Bromelain: Fresh papaya or pineapple juice contains special enzymes that could help to reduce pain symptoms.
4. Curcumin: Found in turmeric, this nutrient scavenges free radicals and helps to reduce inflammation.
5. Capsaicin: Found in cayenne pepper, capsaicin could actually block pain signals from being transmitted to your brain.
Brutal police murder of a Naxalite guerrilla. Photos of the murder of comrade Shruthi. Comrade Shruthi, only 23 years old, was brutally murdered by the Indian police in an alleged armed encounter on Wednesday, September 16. She was captured alive and brutally tortured: her elbow was twisted 180 degrees and torn, police poured acid into her stomach, and she was raped by the police before finally being killed. This happened when two guerrillas, Sruthi alias Mahitha (23 years old) and Vidyasagar Reddy alias Sagar (32 years old), were abducted from their homes and their murder was presented as an armed encounter between the Maoist guerrilla and the Indian security forces in the Karimnagar-Khammam region. The two comrades were tortured and subsequently murdered. Comrade Sruthi was treated with particular brutality. Honor and glory to comrade Shruthi!!! Support the People's War in India!!! Related news: Tension at MGM Hospital Amid heavy police bandobust, post mortem was conducted on the slain naxals at the MGM Hospital here on Wednesday. Tension prevailed for some time as the police did not allow the parents of the deceased naxals to have a glimpse. They, along with other relatives and activists of different organisations, staged a demonstration at the hospital mortuary. Following court intervention, parents were allowed to see the bodies before the post mortem was conducted. Police cordoned off the area around the mortuary. Despite prohibitory orders, a large number of people gathered at the mortuary. Civil Liberties’ activists, members of the Revolutionary Poets Association, the Naxal martyrs’ relatives association and others arrived at the hospital raising slogans and dubbing the encounter ‘fake’. Two naxals, Sruthi alias Mahitha and Vidyasagar Reddy alias Sagar, were killed in the exchange of fire with the police on Tuesday at Rangapur forest area in Govindaraopet mandal.
The bodies were shifted to MGM Hospital late in the night. After post mortem, they were handed over to the relatives. Those who gathered at the mortuary took out a procession for a while near the hospital. Revolutionary poet Varavara Rao, Madiga Reservation Porata Samithi (MRPS) leader Manda Krishna Madiga and others visited the hospital. Politicos, Activists Dispute Maoist Encounter Theory WARANGAL: A day after the police gunned down two Maoists in the forests of Tadwai mandal in the district, civil society groups and political parties lambasted the government, alleging that the encounter was fake and the killings state-sponsored murder. Virasam leader P Varavara Rao, who visited the MGM hospital, alleged that T Shruthi, the woman Maoist, was sexually abused before being killed. The parents of the two victims also alleged that their children were detained by the police and later bumped off. Expressing doubts over the authenticity of the police story, TTDP leader E Dayakar Rao demanded a probe by a sitting judge. Congress Jagitial MLA T Jeevan Reddy too termed the encounter ‘fake’ and held the government responsible for the trend of youngsters joining the Maoist outfit. Director General of Police Anurag Sharma, meanwhile, contended that the police personnel were combing the forest area when they spotted the Maoists. “Our men came face to face with the Maoists and the latter opened fire on the police. In the retaliatory fire, the two were killed while some others fled the spot,” he claimed. Police across the districts bordering Chhattisgarh have intensified combing operations and are maintaining vigil at the check posts. Vara Vara Rao alleges rape of slain Maoist, seeks probe WARANGAL: Noisy scenes were witnessed at the MGM Hospital here on Wednesday where doctors performed post-mortem on the bodies of two Maoists killed in an encounter with the police in the district on Tuesday.
Protesters including families of the two slain Maoists, led by revolutionary writer and Virasam founder Vara Vara Rao, gathered at the hospital and accused the Telangana government of killing Shruti alias Maisakka and Vidyasagar Reddy alias Gopanna in cold blood in the forests of Eturunagarm. Vara Vara Rao levelled a serious allegation that prior to killing Shruti, the police personnel raped her. He further alleged that the police poured acid on different parts of Shruti’s body in a bid to destroy the evidence of their ‘misdeeds’. Calling the encounter fake and the version of the incident provided by the police a ‘cock-and-bull’ story, Vara Vara Rao demanded a thorough enquiry into the encounter. The well-known writer also accused chief minister K Chandrasekhar Rao of conspiring to eliminate Naxalites by any means. He also said it was a direct result of the government’s failure to provide employment to the youth in the state. In a testimony to the growing desperation among jobless youth, as many as 36 highly educated young men and women went underground to join the Maoist movement in the last two months alone, he said. Members of the Amaraveerula Bandhumitrula Committee, meanwhile, staged a dharna outside the hospital demanding that the authorities allow the parents into the mortuary to witness the post-mortem. Anticipating trouble, a large number of personnel from the Mattewada, Intezargunj and Hanamkonda police stations and senior police officers were present at the hospital. The bodies were handed over to the families after a three-member team of forensic doctors completed the post-mortem, which was also videographed. The reports and the video were submitted to the court. Meanwhile, the Telangana units of the Congress and TDP in Hyderabad alleged that the state government killed the two Maoists in a fake encounter and demanded a judicial probe into the incident.
FROM SUBREDDIT r/BestBitcoinNews: Crypto Bulls and Bears Wrestle on What Comes Next for the Bitcoin Price By CCN: Many in the crypto community remained calm and banded together despite the steep drop in the bitcoin price on the heels of an otherwise incredibly bullish month. Bitcoin’s value was slashed by approximately $21 billion in the last 24 hours, with the BTC price currently holding above the $7,000 threshold. The declines were traced back to a mega sell order on Bitstamp exchange, either creating an opportunity for investors who missed the previous run or providing a warning before the other shoe drops. Of course, crypto bulls and bears disagree about what comes next. [Full Article](https://www.ccn.com/crypto-bulls-and-bears-wrestle-on-what-comes-next-for-the-bitcoin-price)
A move away from vulcanization may be the key to much easier tire repairs. Researchers in Germany recently demonstrated a new way to make rubber capable of handling the demanding duties of tires. Unlike all prior methods, this one replaces the vulcanization process, which uses heat and sulfur: the German researchers instead use imidazolium bromide to modify bromobutyl rubber. The upshot of all this bromine talk is that the cross-linked rubber molecules in vulcanized rubber cannot self-heal, whereas the new rubber can. The researchers say that tears and holes in the new type of rubber would seal up. It does this at room temperature, and IFL Science reports that the process can be sped up by using heat. Therefore, your tire, which is actually a structure containing rubber, nylon cords, and steel, could possibly be repaired without a plug or patch in the future by a shop using just heat, or perhaps with a kit in the trunk. Better yet, the research indicates that the repaired rubber can withstand significant stress. The method by which tire rubber is made was developed by Charles Goodyear in 1839. A better method is way overdue. Read the researchers’ abstract yourself at this link.
Professor Christopher Saint will discuss cutting-edge research on the future of Australia’s water supply at a public lecture on Tuesday 17 April. The lecture, part of the University of South Australia’s popular Knowledge Works series, will reveal that water quality, not just quantity, is important in ensuring our future generations have adequate water supply. “Water is a key commodity that we should not take for granted. There is much emphasis on quantity but quality is also of paramount importance. The relatively new science of DNA technology can make an important contribution to measuring and ensuring quality,” Professor Saint said. According to Professor Saint, blue-green algae are a primary concern for South Australia’s water quality. These organisms can produce toxins or compounds that ‘taint’ our drinking water, giving it an unpleasant earthy taste and smell. “It seems that in Australia and around the world the incidences of blooms of these organisms is increasing and this could be related to increased water temperatures associated with climate change,” he noted. Professor Saint, Director at the SA Water Centre for Water Management and Reuse, will outline the centre’s innovative research, which is developing solutions for water quality issues, including combating blue-green algae using DNA technology. “DNA tests can provide an on-site early warning of the presence of organisms such as blue-green algae so that management options can be put in place in a timely manner. “They are also highly specific and provide the opportunity to definitively identify organisms. This is important as only certain species are harmful but they are difficult to distinguish using a microscope. 
“The really exciting aspect of these new technologies is that the environment will become the laboratory of the future as field detection and online monitoring capabilities are developed and deployed.” The research on water quality comes at a time when Australia is facing an uncertain future in terms of water supply. As one of the world’s largest water users, Australia must find ways to adapt to an increasing population and the threat of climate change. “We are one of the most highly urbanised populations in the world and providing this population with a consistent supply of good quality water is a challenge,” said Professor Saint. “We are going to see more and more extreme weather events: drought and floods that make water storage, treatment and distribution difficult.” The lecture ‘Water quality management - it can be in the genes’ will be held at 6 pm on Tuesday 17 April at the Bradley Forum, Level 5, Hawke Building, UniSA City West Campus. To register for the lecture and to subscribe to the Knowledge Works series, visit the UniSA website.
Title: messages larger than 256 sent but error reported
user0: Hi, great work. I am trying to send a status message that is larger than 256 bytes, so I increased #define MQTT_MAX_PACKET_SIZE 512, but the method PubSubClient::write reports an error when sending messages larger than 256 bytes, even though the messages are sent fine and arrive at the broker complete. I found that in line 412 the type of rc is uint8_t, which is too small for the length. Changing that to uint16_t fixes the problem. Please verify and change. Thanks, Ralf
user1: Thanks for reporting it. [issue closed]
The concept of an Ombuds(man) (or woman) is a fairly recent development that has gone hand in hand with larger entities. You can read about its history at https://en.wikipedia.org/wiki/Ombudsman “The typical duties of an ombudsman are to investigate complaints and attempt to resolve them, usually through recommendations (binding or not) or mediation. Ombudsmen sometimes also aim to identify systematic issues leading to poor service or breaches of people’s rights” “The major advantage of an ombudsman is that he or she examines complaints from outside the offending state institution, thus avoiding the conflicts of interest inherent in self-policing.” “Many private companies, universities, non-profit organizations and government agencies also have an ombudsman (or an ombuds office) to serve internal employees, and managers and/or other constituencies … Organizational ombudsmen often receive more complaints than alternative procedures such as anonymous hot-lines.” The essence of an Ombuds is that they give a voice to the powerless — especially those outside of a power or social structure. If you’ve ever been at a stake conference where every talk has five to ten minutes of the speaker talking about his or her connection to the other speakers and leaders and five minutes of actual talk, you have been in a place where the power structure and social order have squeezed out almost everything else. An Ombuds counteracts the feeling of dispossession and not-belonging that often causes organizations to shed members and become nothing more than an expanded social club, often combined with being large enough that it feels faceless. While there were prototypes in many older societies, the essence of an Ombuds is that the society has outgrown the other forms of formal and informal appeal. For example, in the early church people were known to buttonhole Joseph Smith on the street and anyone could schedule a meeting with Brigham Young.
Later in the Church, all you had to do in order to talk to an Apostle was to vote no at Conference, something anyone could walk in and do. Not to mention, many people lived near general authorities. When the church had only a couple hundred thousand members (and a 5-1 child to adult ratio) that wasn’t too hard. Anyone had a voice. Now this did lead to some headaches. We have comments and complaints from a number of church leaders about the types of problems they had to deal with, including the confessions of things that leaders did not feel bore even remembering, much less confessing (a complaint Brigham Young had). Others have tired of “drama llamas” and those on both sides of issues constantly pestering them (Bruce R. McConkie bore a special animus for people who kept pestering him to speak out against chocolate and white bread). Some have attributed Elder McConkie’s tireless work ethic to a desire to avoid all the people who were buttonholing him and trying to get him to endorse their special cause or perspective. If you read about the trials Moses had, where he was overwhelmed by the people (and then appointed captains of thousands, hundreds and fifties), you can see this goes back a long way in time. Leaders can get overwhelmed, especially by minutiae. [http://biblehub.com/exodus/18-25.htm]. The bigger the organization, the more complaints they hear, the more overwhelming it can get. Now, in the present church, all complaints are referred back to Stake Presidents, unopened or un-reviewed except to figure out where the complaint is coming from so it can be sent back. The only way complaints make it up the chain is by surveys, personal contacts (much rarer) and when a problem is so common that Stake Presidents bringing it up with Area Authorities moves it up the chain. The question that I’ve had looking at this is what method is available to allow for something more than what we do and do we need something more? That is two questions.
“Is there a solution?” is the question everyone asks, but the other question, “do we need a solution?”, is also valid. The third question is: “What other venues for appeal do we have in the church as it is that I don’t know about, and that exist in the church as it could be?” Again, like my last post, I expect that the question will turn out to be more complicated than my essay and have more facets and things I did not know about. There are problems that seem to be rooted in local authorities basically being unchallenged. For example, those who have moved their funerals outside of LDS Chapels because they were told that one parent could not participate if they had left the Church. I don’t know enough to state who was right or wrong in the case I blogged about, but I do know that the parents felt they had no venue for an appeal and the process caused some significant ill-will for the Church in the local community. I’ve also heard from people who have cancelled baptisms rather than expose children with anxiety disorders to large numbers of people, or who have been told that their child’s non-member friends could not attend the baptism because baptisms were only for members and those committed to baptism themselves. They were told that a baptism was not a missionary event for exposing people to the Church, and the matter stopped with the Stake President, who did not believe in anxiety disorders (including those with formal diagnoses and treatment plans). In the LDS Church the process of no appeal beyond a stake president is referred to as “leadership roulette”, though often an Area Authority will involve themselves if asked (and they seem to be the natural person to serve as an Ombuds in the current system). In dealing with problems like this, some churches have experimented with Ombuds (the sex-neutral term).
Some have had Ombuds imposed from outside, especially in cases involving sexual abuse of minors; some have worked to consider them from the inside (the Church of England’s 2013 plan). Orthodox congregations have considered having ombuds who serve for a twelve month period and who would report to leaders things that they were missing. The Catholic Church has had some difficult times with some systems: you can read about the Catholic experience at http://digitalcommons.pepperdine.edu/cgi/viewcontent.cgi?article=1113&context=drlj (Yes, while I do not know everything, I do know that some solutions cause problems as well as solve them). Even non-church and non-government entities have had ombuds, including some that would surprise you. Other churches have tried other alternative dispute resolution systems. Other churches have a fully democratic form of common consent — except when they do not. For example, in the Anglican Communion the majority of votes is held by African churches that oppose gay marriage, yet the member churches that support gay marriage have grown quite adept at subverting the system regardless of where common consent would lead them. Now, I don’t have the answers. I can’t even spell Ombuds in a way that avoids my spell check and I’m not sure if it should be capitalized or not. I’m aware of lots of problems with such systems and the fact that such a system is no guarantee of uniformity or happier outcomes. But it is something I’d like to talk about. So, without my having the answers, I’d like to ask some questions:
- What method(s) are available to allow for something more in the way of an appeal from a local leader than just sending it to the local Stake President, unread? Does the current Church system have structures I don’t know about that work and are available to most members?
- Do we need something more to do with complaints than just sending them to local leaders, or is that really the amount of appeal that is realistic, given the size and complexity of the Church?
Should “leadership roulette” be treated as a feature, not a bug?
- Is there a system (Ombuds or otherwise) that would work with a church as large as the LDS Church currently is? You can tell I’m at a loss to even give much of a suggestion of a solution.
- What procedures would you suggest, and what are the positives and negatives you see in them?
All images are from Wikipedia Commons.
FROM REDDIT r/relationships Hey man, sorry about your loss. I'll do my best to give you some good advice, which I hope you take into consideration. You have to cut contact as best as possible for some amount of time at the least. Since she is your first "real" girlfriend, it is going to take a bit longer for you to get over this. I recommend at least 6 months, but the longer the better. The reason you MUST go no contact is because when your relationship is over, it is simply the removal of a title in your mind. If you were to see her tomorrow, chances are you wouldn't suddenly be out of love or unattracted. You need to give your mind valuable time to get used to being by yourself again. If you decide to ignore no contact you will most likely enter a world of pain, jealousy, resentment, and self-destruction. From your perspective of essentially being on a quest to find yourself, this is the most important time for you to learn to get back up on your feet. The journey of trying to find yourself is often marked with disappointment, mistakes, trial, error and outright failure. But it is all made worthwhile by the shining light of success that you find at the end. During this time you will need to be as strong as possible, so you must become independent and back away from a friendship with your ex. In a lot of cases one person ends up waiting for things to work out again with the now ex. Unbeknownst to them, the ex-partner moves on, leaving the other in the dust. When this happens it hurts much more than the initial breakup. Now for some advice on what to do. It's time to start replenishing that energy independently. Write a list of all your hobbies and make a commitment to spend more time doing those hobbies. Go out with friends, family and whoever. When you go out, try new places, avoiding the places you always went with your ex. Breathe in the new atmosphere, experiences, and laughter as a proud testament to your strength.
Prove to yourself that you CAN have fun, you CAN have laughter, and you CAN do better without her. If there are new activities you wanted to do but couldn't because you were with her, research those activities now. After you're done researching, make them happen! Be the force of change and DO NOT fall back to her. Good luck OP.
Evolutionary transitions towards eusociality in snapping shrimps This is the story behind the paper "Evolutionary transitions towards eusociality in snapping shrimps" in Nature Ecology & Evolution. I began studying the sponge-dwelling snapping shrimps in the genus Synalpheus in 2011 when I joined Dr. J. Emmett Duffy’s lab at the Virginia Institute of Marine Science for my Ph.D. My goal was to study the sexual biology of these shrimps and their associations with sponges. Synalpheus shrimps occur globally, but the ones we focus on occur primarily in the Caribbean. This group of sponge-dwelling snapping shrimp is of interest to behavioral ecologists like us because it contains the only eusocial species known to occur in the sea. In fact, this group of about 45 species includes nearly all the forms of animal social organization that exist on earth. Yet, compared to nearly every other group of social animals, we know very little about sociality in shrimps. During one of our weekly meetings in the summer of 2014, Emmett handed me a project that he had been working on for quite some time. “We have a lot of unpublished data on shrimp colonies and I won’t have time to finish this,” he said. “Perhaps you could do this as a side project.” Emmett gave me a draft of an R-script to decipher a giant Excel file that records the details of all the Synalpheus specimens that he and his collaborators (Drs. Kristin Hultgren, Tripp Macdonald, Rubén Ríos, and others) had collected over the past 20+ years. The goal that Emmett and our collaborator, Dr. Dustin Rubenstein, had in mind was to use the demographic characteristics of each Synalpheus species—things like colony size, the number of breeding females per colony, and more—to delineate the types of society in which they lived. Fig 1. Field notes from early collections (1989) by J.E. Duffy in the Bahamas that form part of the many field notebooks transcribed into a master Excel database of shrimp colony structure and distribution.
It took me nearly a year to clean up the data, get the code straight, and use the appropriate approach to cluster species based on their demographic characteristics. We regrouped in the summer of 2015 to talk about where this project might go. Initially, we thought that we should just publish the data so that these social categories could be used for other research that we were working on. However, we quickly realized that with this classification of social categories at hand, and a molecular phylogeny of Synalpheus from our collaborator Kristin Hultgren, we could test a long-standing question among evolutionary biologists: How did eusocial species arise? Fig 2. Part of a colony of the eusocial shrimp Synalpheus filidigitus including the queen (with yellow eggs and ovaries) and six non-breeding individuals. Although the evolutionary transition towards eusociality has been studied intensively in insects, it has not received as much attention in other taxonomic groups. The presence of the diverse forms of social organization in Synalpheus provided us a unique opportunity to test how eusociality evolved in a group that is phylogenetically and ecologically distinct from insects. “This could be a Nature paper if you can do this,” said Dustin on the phone. “Understanding not only how eusociality arose, but also why we see such social diversity in these shrimps is one of our major research goals,” he added. With renewed enthusiasm for this project, I further developed the R-script, taking advantage of R packages like ape and geiger that can perform phylogenetic analyses, to test different models of transition between social categories in Synalpheus. After this long road, what did we find? 
It turns out that Synalpheus species do cluster naturally into three forms of social organization that we refer to as pair-forming (a pair of breeding male and female living together), communal breeding (multiple breeding males and females living together), and eusociality (a single breeding male and female together with a variable number of non-breeding individuals). To our surprise, further analyses suggested that eusocial and communal shrimp species evolved from pair-forming species independently along different evolutionary paths. This finding corroborates the “subsocial route” towards eusociality that Charles Duncan Michener proposed for insects nearly 50 years ago. Perhaps more importantly, we were especially excited that our results further affirm the importance of kin selection—a backbone of social evolution theory for the last half century—in driving social evolution since eusociality only seems to have evolved in species that form family groups. In addition to the specific results of our study, we believe that this work illustrates the importance of natural history collections and careful taxonomy. Our results are built upon years of field collection, painstaking taxonomic work and meticulous data curation that the co-authors, as well as many others, have shared. As one illustration, the number of recognized West Atlantic species of Synalpheus is now nearly double what it was when Emmett started to work on these obscure shrimp in 1988, when nothing was known of their social organization, and the new taxonomy results in strikingly different patterns of host specificity. Without the more than 60,000 specimens in our collection and the detailed notes on habitat and host associations, both resulting from numerous sampling trips dating back to 1988, these shrimp would be little more than obscure tropical cryptofauna and the expanding story of the ocean’s only eusocial animals would never have come to light. 
And now that we have a better understanding of how the social diversity in Synalpheus shrimps evolved, our next step is to find more species. After all, we’ve sampled only a small portion of the Caribbean. Many more fascinating species remain to be discovered under the sea. Who knows what wonders still await, hiding in the crevices of the world’s reefs? The Nature Ecology & Evolution paper is here: http://go.nature.com/2n0Y6QG
What is the 21st century skills movement and where did it come from?
The 21st century skills movement developed in response to an increasingly global economy powered by rapidly changing technology. The world is changing with unprecedented speed. The last 60 years have seen the development of the computer, the internet, satellite imaging from space, and the mapping of the human genome. There has been a transformation in the ways that we communicate and work. What will the next 60 years bring? Children entering kindergarten this year will spend their educational and professional careers in an exciting period of complex and exponential change. Tomorrow’s workforce needs to evolve in order to respond and thrive in the 21st century world. The movement for 21st century skills has risen to advocate for changes in education to meet the needs of a rapidly shifting, competitive, and connected world.
What are 21st century skills?
There are six major educational frameworks designed to improve the development of 21st century skills (the Partnership for 21st Century Skills, Tony Wagner’s Seven Survival Skills, the Metiri Group’s enGauge framework, the Iowa Core 21st Century Skills, the Connecticut State Department of Education, and Assessment and Teaching of 21st Century Skills). While each framework has a slightly different focus for 21st century skill sets, all concur on four critical goals for learning: critical thinking and problem solving, communication, collaboration, and creativity. These skills are sometimes referred to as the 4 Cs. In addition to the 4 Cs, 21st century skills groups advocate for fluency in information, media, and technology skills, including the ability to analyze and think critically about the information that bombards us from so many media sources every day. Skills in new and changing technologies and their best use are part of most frameworks.
In addition, important life and career skills such as flexibility, adaptability, initiative, and self-direction are identified as important for success in the 21st century. Educators must utilize strategies that support the development of these skills. There are effective tools that support content learning while supporting 21st century skills development. Look for more blog posts that give specific strategies for promoting creativity, collaboration, critical thinking, and communication in your classrooms. 21st century skills development needs to be a priority for today’s students. In order to be competitive in a changing workforce, students need to build the skills that businesses are asking for – 21st century skills.
Having a healthy, well-balanced lifestyle can make a huge difference to all aspects of the body, including inflammation. Inflammation refers to the body’s defense mechanism against pathogens, injury and the effects of certain chemicals or radiation. While having our very own army of cells to protect us is vital, inflammation isn’t always a positive. A host of issues, including skin conditions such as psoriasis, can arise when the immune system fights against the body’s own cells by mistake. Our resident Medical Advisor, Dr. Tiffany Lester, answered our questions on how to reduce inflammation, the repercussions of prescription skin drugs on the gut and the most effective skincare ingredients (that just so happen to be included in our new topical skincare supplement).
What causes skin inflammation?
It is typically due to a trigger of your immune system by an allergic reaction, an infection, or an imbalance.
What are the symptoms?
Acne, rash, burning/itching, redness, hyperpigmentation, dry skin, oiliness and puffiness are the most common.
Why are some people more prone to inflammation than others?
I typically find this is based on the status of their immune system and stress levels. When these systems are not in a healthy place and they get exposed to triggers, the reactions are more severe.
When are antibiotics necessary to treat skin conditions?
If you have a skin infection like impetigo or an abscess, antibiotics are absolutely necessary. Most of the time, though, they are just a bandaid for a deeper issue. For example, antibiotics are often prescribed for acne and may clear the skin, but they do so by killing ALL the bacteria in the gut, good and bad. This significantly alters the microbiome, leading to long-term negative health effects like intestinal permeability.
What can you do to reduce inflammation?
There are certain foods that are known to combat inflammation, like turmeric, fatty fish, olive oil, nuts and green leafy vegetables.
Eating a combination of these foods on a weekly basis will decrease levels over time. Foods high in Omega-6 fatty acids (red meat, cheese, corn, and safflower and sunflower oils) should be eaten in moderation, as they can contribute to higher levels of inflammation.
What exacerbates inflammation?
Lack of sleep, dehydration, processed and high-sugar foods, lack of exercise and smoking.
Can skin inflammation be hereditary?
Not that I know of.
How does CBD work on skin?
The endocannabinoid system—which works to create harmony in response to changes in the environment, regulating many functions including mood, sleep and immune function—is found all over the body, even the skin! We have receptors in the epidermis, in the primary skin cells called keratinocytes, and in immune cells, hair follicles, and sebaceous glands (which secrete oil for hair and skin). CBD applied topically may help to regulate sebum production to reduce oiliness, increase cell turnover, and reduce inflammation. Some early research has also shown that it can prolong the regression phase of hair growth, which is especially helpful for individuals who want to lessen facial hair.
What topical ingredients do you think have the biggest impact on skin?
CBD and Hyaluronic Acid.
Alfred Herman
Alfred Herman (born July 1922 in Tirlemont; died February 2018 in Lutry, at age 95) was a pharmacist, writer and poet from the canton of Vaud.
Biography
A pharmacist from 1947, Alfred Herman began writing poems and short stories at the age of 16. Encouraged by friends and fellow poets, he published his first collection, Cascades, in 1961. Settled in Switzerland, in Pully from 1970, Alfred Herman wrote mainly collections of poems, some fifteen of which were published. He received numerous distinctions, among them the Silver Medal with mention of the Académie internationale de Lutèce in 1997 for L'aube se lève.
Sources: Journal de Pully, 1988/11/11; Académie européenne des arts.
Dr Kipkirui Langat
As the world converges in Glasgow, Scotland for COP26 (the Conference of the Parties to the UN Framework Convention on Climate Change), some pertinent issues relating to climate change are being discussed with the aim of achieving net-zero emissions by 2050. COP26 is the biggest climate summit since the Paris Agreement was inked in 2015. It comes at a pivotal moment for the planet as countries and companies hope to hit the gas on the transition to a fossil-fuel-free economy. This year’s summit is very special as it marks the first five-year interval after Paris, a deadline for countries to demonstrate tangible progress and ratchet up their climate ambitions. For the first time, in 2015, developed and developing countries representing more than 95% of global greenhouse gas emissions committed to adhering to a common framework to slow global warming. The Paris Agreement, decades in the making, was built on painstaking compromises defining the rules of the global response to climate change. At the moment, the focus is on countries to deliver on their promises to keep temperature increases well below 2 degrees Celsius compared to pre-industrial levels. This would require going beyond removing fossil fuels from investment portfolios to actively investing in climate solutions. This would have been possible if developed countries had honoured a promise they made back in 2009 to mobilize USD 100 billion per year by 2020 to support climate action in developing countries. At the moment, the world has already warmed to 1.2 degrees Celsius above pre-industrial levels, close to the 1.5-degree-Celsius limit agreed under the Paris Agreement. There is a massive gap between what is needed to attain the 1.5-degree-Celsius limit and countries’ stated emission-reduction targets, particularly for the world’s largest polluters. Kenya is participating in the conference, where the country profile was presented as an evidence base to inform the production of Net Zero Future visions.
In addressing the conference, our President, HE Uhuru Kenyatta, vowed to make Africa’s voice on climate finance heard. The President cautioned that climate impacts are a growing security concern and said that he would champion the African cause at the United Nations Security Council, where Kenya is currently seated as a non-permanent member. The conference was also addressed by Kenyan activist Elizabeth Wathuti, who made a compelling appeal for action based on the disproportionate impacts and climate challenges that the African people already experience. Climate change is a critical global issue as its impact has far-reaching consequences at the environmental, societal and economic levels, and demands activism in all sectors of work and life. In the education and training sector, UNESCO-UNEVOC has been advocating for the effective transmission of the knowledge and skills needed to respond to climate change through the mainstreaming of climate response. To achieve this, UNEVOC is assisting TVET agencies and institutions in the development and implementation of green strategies to transform their learning and training environments, in fulfilment of their role in skilling learners, upskilling professionals in green job sectors, and re-skilling those affected by job losses due to the green transition and the recent COVID-19 pandemic. TVET can address knowledge and skills challenges to achieve the SDGs, as well as transmit the right mindset and attitude among trainees and the future workforce, through well-designed education and training systems. As we participate in the global debate, the emphasis should now be on the greening of occupations, with a focus on high-skilled, well-paid jobs such as those in the renewable energy industries, energy efficiency and mass transit, as well as other skills in agriculture, eco-tourism and waste management. This requires repositioning our TVET institutions to be able to supply the experts and technologies required to operate a green economy.
This also calls for putting in place policies and strategies that would encourage sectors of the economy to shift their operations towards greening practices. More fundamentally, the quality assurance regime that we have put in place as a country will encourage institutions to participate in research, innovation, entrepreneurship and the greening of TVET, alongside community engagement. This will ensure that TVET institutions go beyond training and translate the knowledge and skills learnt into products and services that directly and sustainably benefit their immediate communities, banking on the prospects of a better Kenya built on hope and innovation, and contributing to the future we want. The author is the Director-General of TVETA and a transport emissions expert.